Feb 9 19:28:13.004630 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024
Feb 9 19:28:13.004688 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:28:13.004710 kernel: BIOS-provided physical RAM map:
Feb 9 19:28:13.004723 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 9 19:28:13.004736 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 9 19:28:13.004748 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 9 19:28:13.004763 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Feb 9 19:28:13.004777 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Feb 9 19:28:13.004792 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 9 19:28:13.004805 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 9 19:28:13.004817 kernel: NX (Execute Disable) protection: active
Feb 9 19:28:13.004829 kernel: SMBIOS 2.8 present.
Feb 9 19:28:13.004841 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Feb 9 19:28:13.004854 kernel: Hypervisor detected: KVM
Feb 9 19:28:13.004869 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 9 19:28:13.004886 kernel: kvm-clock: cpu 0, msr 3afaa001, primary cpu clock
Feb 9 19:28:13.004899 kernel: kvm-clock: using sched offset of 6984123139 cycles
Feb 9 19:28:13.004913 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 9 19:28:13.007959 kernel: tsc: Detected 1996.249 MHz processor
Feb 9 19:28:13.007974 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 9 19:28:13.007982 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 9 19:28:13.007990 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Feb 9 19:28:13.007997 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 9 19:28:13.008008 kernel: ACPI: Early table checksum verification disabled
Feb 9 19:28:13.008016 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS )
Feb 9 19:28:13.008023 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 19:28:13.008031 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 19:28:13.008038 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 19:28:13.008046 kernel: ACPI: FACS 0x000000007FFE0000 000040
Feb 9 19:28:13.008053 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 19:28:13.008061 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 19:28:13.008068 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f]
Feb 9 19:28:13.008078 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b]
Feb 9 19:28:13.008085 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Feb 9 19:28:13.008093 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f]
Feb 9 19:28:13.008100 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847]
Feb 9 19:28:13.008107 kernel: No NUMA configuration found
Feb 9 19:28:13.008115 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff]
Feb 9 19:28:13.008122 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff]
Feb 9 19:28:13.008130 kernel: Zone ranges:
Feb 9 19:28:13.008142 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 9 19:28:13.008149 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff]
Feb 9 19:28:13.008157 kernel: Normal empty
Feb 9 19:28:13.008164 kernel: Movable zone start for each node
Feb 9 19:28:13.008172 kernel: Early memory node ranges
Feb 9 19:28:13.008179 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 9 19:28:13.008190 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Feb 9 19:28:13.008197 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff]
Feb 9 19:28:13.008205 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 19:28:13.008213 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 9 19:28:13.008220 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges
Feb 9 19:28:13.008227 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 9 19:28:13.008235 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 9 19:28:13.008243 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 9 19:28:13.008250 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 9 19:28:13.008259 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 9 19:28:13.008267 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 9 19:28:13.008274 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 9 19:28:13.008282 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 9 19:28:13.008289 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 9 19:28:13.008297 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 9 19:28:13.008305 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Feb 9 19:28:13.008312 kernel: Booting paravirtualized kernel on KVM
Feb 9 19:28:13.008320 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 9 19:28:13.008328 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 9 19:28:13.008337 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 9 19:28:13.008345 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 9 19:28:13.008352 kernel: pcpu-alloc: [0] 0 1
Feb 9 19:28:13.008360 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
Feb 9 19:28:13.008367 kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 9 19:28:13.008375 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805
Feb 9 19:28:13.008382 kernel: Policy zone: DMA32
Feb 9 19:28:13.008391 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:28:13.008401 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 19:28:13.008409 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 19:28:13.008417 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 9 19:28:13.008425 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 19:28:13.008432 kernel: Memory: 1975340K/2096620K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 121020K reserved, 0K cma-reserved)
Feb 9 19:28:13.008440 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 9 19:28:13.008448 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 9 19:28:13.008455 kernel: ftrace: allocated 135 pages with 4 groups
Feb 9 19:28:13.008464 kernel: rcu: Hierarchical RCU implementation.
Feb 9 19:28:13.008473 kernel: rcu: RCU event tracing is enabled.
Feb 9 19:28:13.008480 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 9 19:28:13.008488 kernel: Rude variant of Tasks RCU enabled.
Feb 9 19:28:13.008496 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 19:28:13.008504 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 19:28:13.008511 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 9 19:28:13.008519 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 9 19:28:13.008526 kernel: Console: colour VGA+ 80x25
Feb 9 19:28:13.008536 kernel: printk: console [tty0] enabled
Feb 9 19:28:13.008544 kernel: printk: console [ttyS0] enabled
Feb 9 19:28:13.008552 kernel: ACPI: Core revision 20210730
Feb 9 19:28:13.008559 kernel: APIC: Switch to symmetric I/O mode setup
Feb 9 19:28:13.008567 kernel: x2apic enabled
Feb 9 19:28:13.008574 kernel: Switched APIC routing to physical x2apic.
Feb 9 19:28:13.008582 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 9 19:28:13.008590 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 9 19:28:13.008597 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Feb 9 19:28:13.008605 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Feb 9 19:28:13.008615 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Feb 9 19:28:13.008623 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 9 19:28:13.008630 kernel: Spectre V2 : Mitigation: Retpolines
Feb 9 19:28:13.008638 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 9 19:28:13.008646 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 9 19:28:13.008662 kernel: Speculative Store Bypass: Vulnerable
Feb 9 19:28:13.008670 kernel: x86/fpu: x87 FPU will use FXSAVE
Feb 9 19:28:13.008677 kernel: Freeing SMP alternatives memory: 32K
Feb 9 19:28:13.008685 kernel: pid_max: default: 32768 minimum: 301
Feb 9 19:28:13.008695 kernel: LSM: Security Framework initializing
Feb 9 19:28:13.008702 kernel: SELinux: Initializing.
Feb 9 19:28:13.008710 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 9 19:28:13.008718 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 9 19:28:13.008726 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Feb 9 19:28:13.008733 kernel: Performance Events: AMD PMU driver.
Feb 9 19:28:13.008741 kernel: ... version: 0
Feb 9 19:28:13.008748 kernel: ... bit width: 48
Feb 9 19:28:13.008756 kernel: ... generic registers: 4
Feb 9 19:28:13.008773 kernel: ... value mask: 0000ffffffffffff
Feb 9 19:28:13.008781 kernel: ... max period: 00007fffffffffff
Feb 9 19:28:13.008790 kernel: ... fixed-purpose events: 0
Feb 9 19:28:13.008798 kernel: ... event mask: 000000000000000f
Feb 9 19:28:13.008806 kernel: signal: max sigframe size: 1440
Feb 9 19:28:13.008814 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 19:28:13.008822 kernel: smp: Bringing up secondary CPUs ...
Feb 9 19:28:13.008830 kernel: x86: Booting SMP configuration:
Feb 9 19:28:13.008839 kernel: .... node #0, CPUs: #1
Feb 9 19:28:13.008847 kernel: kvm-clock: cpu 1, msr 3afaa041, secondary cpu clock
Feb 9 19:28:13.008855 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
Feb 9 19:28:13.008863 kernel: smp: Brought up 1 node, 2 CPUs
Feb 9 19:28:13.008870 kernel: smpboot: Max logical packages: 2
Feb 9 19:28:13.008878 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Feb 9 19:28:13.008886 kernel: devtmpfs: initialized
Feb 9 19:28:13.008894 kernel: x86/mm: Memory block size: 128MB
Feb 9 19:28:13.008902 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 19:28:13.008912 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 9 19:28:13.008920 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 19:28:13.008946 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 19:28:13.008954 kernel: audit: initializing netlink subsys (disabled)
Feb 9 19:28:13.008962 kernel: audit: type=2000 audit(1707506892.279:1): state=initialized audit_enabled=0 res=1
Feb 9 19:28:13.008970 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 19:28:13.008978 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 9 19:28:13.008985 kernel: cpuidle: using governor menu
Feb 9 19:28:13.008993 kernel: ACPI: bus type PCI registered
Feb 9 19:28:13.009004 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 19:28:13.009012 kernel: dca service started, version 1.12.1
Feb 9 19:28:13.009020 kernel: PCI: Using configuration type 1 for base access
Feb 9 19:28:13.009028 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 9 19:28:13.009036 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 19:28:13.009044 kernel: ACPI: Added _OSI(Module Device)
Feb 9 19:28:13.009052 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 19:28:13.009060 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 19:28:13.009067 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 19:28:13.009077 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 19:28:13.009084 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 19:28:13.009092 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 19:28:13.009100 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 19:28:13.009108 kernel: ACPI: Interpreter enabled
Feb 9 19:28:13.009116 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 9 19:28:13.009124 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 9 19:28:13.009132 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 9 19:28:13.009139 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 9 19:28:13.009149 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 9 19:28:13.009312 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 19:28:13.009410 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Feb 9 19:28:13.009424 kernel: acpiphp: Slot [3] registered
Feb 9 19:28:13.009433 kernel: acpiphp: Slot [4] registered
Feb 9 19:28:13.009441 kernel: acpiphp: Slot [5] registered
Feb 9 19:28:13.009449 kernel: acpiphp: Slot [6] registered
Feb 9 19:28:13.009461 kernel: acpiphp: Slot [7] registered
Feb 9 19:28:13.009469 kernel: acpiphp: Slot [8] registered
Feb 9 19:28:13.009477 kernel: acpiphp: Slot [9] registered
Feb 9 19:28:13.009486 kernel: acpiphp: Slot [10] registered
Feb 9 19:28:13.009494 kernel: acpiphp: Slot [11] registered
Feb 9 19:28:13.009502 kernel: acpiphp: Slot [12] registered
Feb 9 19:28:13.009511 kernel: acpiphp: Slot [13] registered
Feb 9 19:28:13.009519 kernel: acpiphp: Slot [14] registered
Feb 9 19:28:13.009527 kernel: acpiphp: Slot [15] registered
Feb 9 19:28:13.009536 kernel: acpiphp: Slot [16] registered
Feb 9 19:28:13.009546 kernel: acpiphp: Slot [17] registered
Feb 9 19:28:13.009554 kernel: acpiphp: Slot [18] registered
Feb 9 19:28:13.009562 kernel: acpiphp: Slot [19] registered
Feb 9 19:28:13.009571 kernel: acpiphp: Slot [20] registered
Feb 9 19:28:13.009579 kernel: acpiphp: Slot [21] registered
Feb 9 19:28:13.009587 kernel: acpiphp: Slot [22] registered
Feb 9 19:28:13.009595 kernel: acpiphp: Slot [23] registered
Feb 9 19:28:13.009604 kernel: acpiphp: Slot [24] registered
Feb 9 19:28:13.009612 kernel: acpiphp: Slot [25] registered
Feb 9 19:28:13.009622 kernel: acpiphp: Slot [26] registered
Feb 9 19:28:13.009630 kernel: acpiphp: Slot [27] registered
Feb 9 19:28:13.009639 kernel: acpiphp: Slot [28] registered
Feb 9 19:28:13.009647 kernel: acpiphp: Slot [29] registered
Feb 9 19:28:13.009655 kernel: acpiphp: Slot [30] registered
Feb 9 19:28:13.009663 kernel: acpiphp: Slot [31] registered
Feb 9 19:28:13.009672 kernel: PCI host bridge to bus 0000:00
Feb 9 19:28:13.009773 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 9 19:28:13.009853 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 9 19:28:13.014029 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 9 19:28:13.014163 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 9 19:28:13.014243 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Feb 9 19:28:13.014321 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 9 19:28:13.014438 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 9 19:28:13.014545 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 9 19:28:13.014659 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 9 19:28:13.014751 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Feb 9 19:28:13.014840 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 9 19:28:13.014938 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 9 19:28:13.015028 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 9 19:28:13.015110 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 9 19:28:13.015198 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 9 19:28:13.015284 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Feb 9 19:28:13.015365 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Feb 9 19:28:13.015480 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Feb 9 19:28:13.015565 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Feb 9 19:28:13.015651 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Feb 9 19:28:13.015734 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Feb 9 19:28:13.015819 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Feb 9 19:28:13.015901 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 9 19:28:13.016010 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Feb 9 19:28:13.016099 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Feb 9 19:28:13.016182 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Feb 9 19:28:13.016264 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Feb 9 19:28:13.016345 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Feb 9 19:28:13.016438 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 9 19:28:13.016521 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 9 19:28:13.016611 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Feb 9 19:28:13.016707 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb 9 19:28:13.016803 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Feb 9 19:28:13.016891 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Feb 9 19:28:13.022090 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb 9 19:28:13.022208 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Feb 9 19:28:13.022300 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Feb 9 19:28:13.022382 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Feb 9 19:28:13.022394 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 9 19:28:13.022403 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 9 19:28:13.022411 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 9 19:28:13.022420 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 9 19:28:13.022428 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 9 19:28:13.022439 kernel: iommu: Default domain type: Translated
Feb 9 19:28:13.022447 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 9 19:28:13.022545 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 9 19:28:13.022630 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 9 19:28:13.022712 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 9 19:28:13.022724 kernel: vgaarb: loaded
Feb 9 19:28:13.022732 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 19:28:13.022741 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 19:28:13.022749 kernel: PTP clock support registered
Feb 9 19:28:13.022760 kernel: PCI: Using ACPI for IRQ routing
Feb 9 19:28:13.022768 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 9 19:28:13.022776 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 9 19:28:13.022784 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Feb 9 19:28:13.022792 kernel: clocksource: Switched to clocksource kvm-clock
Feb 9 19:28:13.022800 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 19:28:13.022808 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 19:28:13.022816 kernel: pnp: PnP ACPI init
Feb 9 19:28:13.022918 kernel: pnp 00:03: [dma 2]
Feb 9 19:28:13.022949 kernel: pnp: PnP ACPI: found 5 devices
Feb 9 19:28:13.022957 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 9 19:28:13.022965 kernel: NET: Registered PF_INET protocol family
Feb 9 19:28:13.022973 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 19:28:13.022982 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 9 19:28:13.022990 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 19:28:13.022998 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 9 19:28:13.023006 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Feb 9 19:28:13.023022 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 9 19:28:13.023030 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 9 19:28:13.023038 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 9 19:28:13.023046 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 19:28:13.023054 kernel: NET: Registered PF_XDP protocol family
Feb 9 19:28:13.023138 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 9 19:28:13.023211 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 9 19:28:13.023284 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 9 19:28:13.023355 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 9 19:28:13.023432 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Feb 9 19:28:13.023513 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 9 19:28:13.023594 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 9 19:28:13.023676 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb 9 19:28:13.023687 kernel: PCI: CLS 0 bytes, default 64
Feb 9 19:28:13.023695 kernel: Initialise system trusted keyrings
Feb 9 19:28:13.023704 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 9 19:28:13.023716 kernel: Key type asymmetric registered
Feb 9 19:28:13.023724 kernel: Asymmetric key parser 'x509' registered
Feb 9 19:28:13.023732 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 19:28:13.023740 kernel: io scheduler mq-deadline registered
Feb 9 19:28:13.023748 kernel: io scheduler kyber registered
Feb 9 19:28:13.023756 kernel: io scheduler bfq registered
Feb 9 19:28:13.023764 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 9 19:28:13.023772 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb 9 19:28:13.023781 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 9 19:28:13.023788 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 9 19:28:13.023798 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 9 19:28:13.023806 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 19:28:13.023814 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 9 19:28:13.023822 kernel: random: crng init done
Feb 9 19:28:13.023830 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 9 19:28:13.023838 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 9 19:28:13.023846 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 9 19:28:13.023854 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 9 19:28:13.023990 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 9 19:28:13.024076 kernel: rtc_cmos 00:04: registered as rtc0
Feb 9 19:28:13.024150 kernel: rtc_cmos 00:04: setting system clock to 2024-02-09T19:28:12 UTC (1707506892)
Feb 9 19:28:13.024223 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Feb 9 19:28:13.024235 kernel: NET: Registered PF_INET6 protocol family
Feb 9 19:28:13.024243 kernel: Segment Routing with IPv6
Feb 9 19:28:13.024251 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 19:28:13.024259 kernel: NET: Registered PF_PACKET protocol family
Feb 9 19:28:13.024267 kernel: Key type dns_resolver registered
Feb 9 19:28:13.024277 kernel: IPI shorthand broadcast: enabled
Feb 9 19:28:13.024285 kernel: sched_clock: Marking stable (716084331, 120524822)->(898550091, -61940938)
Feb 9 19:28:13.024294 kernel: registered taskstats version 1
Feb 9 19:28:13.024302 kernel: Loading compiled-in X.509 certificates
Feb 9 19:28:13.024310 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a'
Feb 9 19:28:13.024318 kernel: Key type .fscrypt registered
Feb 9 19:28:13.024326 kernel: Key type fscrypt-provisioning registered
Feb 9 19:28:13.024334 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 19:28:13.024344 kernel: ima: Allocated hash algorithm: sha1
Feb 9 19:28:13.024352 kernel: ima: No architecture policies found
Feb 9 19:28:13.024360 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 9 19:28:13.024368 kernel: Write protecting the kernel read-only data: 28672k
Feb 9 19:28:13.024376 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 9 19:28:13.024384 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 9 19:28:13.024392 kernel: Run /init as init process
Feb 9 19:28:13.024400 kernel: with arguments:
Feb 9 19:28:13.024408 kernel: /init
Feb 9 19:28:13.024417 kernel: with environment:
Feb 9 19:28:13.024424 kernel: HOME=/
Feb 9 19:28:13.024432 kernel: TERM=linux
Feb 9 19:28:13.024440 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 19:28:13.024451 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 19:28:13.024462 systemd[1]: Detected virtualization kvm.
Feb 9 19:28:13.024471 systemd[1]: Detected architecture x86-64.
Feb 9 19:28:13.024480 systemd[1]: Running in initrd.
Feb 9 19:28:13.024490 systemd[1]: No hostname configured, using default hostname.
Feb 9 19:28:13.024499 systemd[1]: Hostname set to .
Feb 9 19:28:13.024508 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 19:28:13.024516 systemd[1]: Queued start job for default target initrd.target.
Feb 9 19:28:13.024525 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 19:28:13.024533 systemd[1]: Reached target cryptsetup.target.
Feb 9 19:28:13.024542 systemd[1]: Reached target paths.target.
Feb 9 19:28:13.024550 systemd[1]: Reached target slices.target.
Feb 9 19:28:13.024560 systemd[1]: Reached target swap.target.
Feb 9 19:28:13.024569 systemd[1]: Reached target timers.target.
Feb 9 19:28:13.024578 systemd[1]: Listening on iscsid.socket.
Feb 9 19:28:13.024586 systemd[1]: Listening on iscsiuio.socket.
Feb 9 19:28:13.024595 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 19:28:13.024603 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 19:28:13.024612 systemd[1]: Listening on systemd-journald.socket.
Feb 9 19:28:13.024622 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 19:28:13.024631 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 19:28:13.024639 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 19:28:13.024648 systemd[1]: Reached target sockets.target.
Feb 9 19:28:13.024667 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 19:28:13.024687 systemd[1]: Finished network-cleanup.service.
Feb 9 19:28:13.024698 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 19:28:13.024708 systemd[1]: Starting systemd-journald.service...
Feb 9 19:28:13.024717 systemd[1]: Starting systemd-modules-load.service...
Feb 9 19:28:13.024726 systemd[1]: Starting systemd-resolved.service...
Feb 9 19:28:13.024737 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 19:28:13.024746 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 19:28:13.024755 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 19:28:13.024769 systemd-journald[184]: Journal started
Feb 9 19:28:13.024827 systemd-journald[184]: Runtime Journal (/run/log/journal/06641ebdb265489da0004e3ea24610a3) is 4.9M, max 39.5M, 34.5M free.
Feb 9 19:28:12.982024 systemd-modules-load[185]: Inserted module 'overlay'
Feb 9 19:28:13.051291 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 19:28:13.051318 kernel: Bridge firewalling registered
Feb 9 19:28:13.051344 systemd[1]: Started systemd-journald.service.
Feb 9 19:28:13.051359 kernel: audit: type=1130 audit(1707506893.039:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:13.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:13.032344 systemd-resolved[186]: Positive Trust Anchors:
Feb 9 19:28:13.055958 kernel: audit: type=1130 audit(1707506893.050:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:13.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:13.032356 systemd-resolved[186]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 19:28:13.060593 kernel: audit: type=1130 audit(1707506893.055:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:13.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:13.032396 systemd-resolved[186]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 19:28:13.067604 kernel: audit: type=1130 audit(1707506893.060:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:13.067622 kernel: SCSI subsystem initialized
Feb 9 19:28:13.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:13.033426 systemd-modules-load[185]: Inserted module 'br_netfilter'
Feb 9 19:28:13.035627 systemd-resolved[186]: Defaulting to hostname 'linux'.
Feb 9 19:28:13.051765 systemd[1]: Started systemd-resolved.service.
Feb 9 19:28:13.056609 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 19:28:13.089952 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 19:28:13.089978 kernel: device-mapper: uevent: version 1.0.3
Feb 9 19:28:13.089995 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 19:28:13.090007 kernel: audit: type=1130 audit(1707506893.085:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:13.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:13.061250 systemd[1]: Reached target nss-lookup.target.
Feb 9 19:28:13.068817 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 19:28:13.069996 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 19:28:13.082589 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 19:28:13.092404 systemd-modules-load[185]: Inserted module 'dm_multipath'
Feb 9 19:28:13.097909 kernel: audit: type=1130 audit(1707506893.092:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:13.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:13.093359 systemd[1]: Finished systemd-modules-load.service.
Feb 9 19:28:13.094172 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 19:28:13.102885 kernel: audit: type=1130 audit(1707506893.097:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:13.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:13.099188 systemd[1]: Starting dracut-cmdline.service...
Feb 9 19:28:13.104666 systemd[1]: Starting systemd-sysctl.service...
Feb 9 19:28:13.110591 dracut-cmdline[203]: dracut-dracut-053
Feb 9 19:28:13.113443 dracut-cmdline[203]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:28:13.113562 systemd[1]: Finished systemd-sysctl.service.
Feb 9 19:28:13.120257 kernel: audit: type=1130 audit(1707506893.115:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:13.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:13.181036 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 19:28:13.194981 kernel: iscsi: registered transport (tcp)
Feb 9 19:28:13.219730 kernel: iscsi: registered transport (qla4xxx)
Feb 9 19:28:13.219822 kernel: QLogic iSCSI HBA Driver
Feb 9 19:28:13.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:13.267788 systemd[1]: Finished dracut-cmdline.service.
Feb 9 19:28:13.278022 kernel: audit: type=1130 audit(1707506893.267:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:13.278260 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 19:28:13.364251 kernel: raid6: sse2x4 gen() 6739 MB/s
Feb 9 19:28:13.381017 kernel: raid6: sse2x4 xor() 4356 MB/s
Feb 9 19:28:13.398009 kernel: raid6: sse2x2 gen() 12921 MB/s
Feb 9 19:28:13.415080 kernel: raid6: sse2x2 xor() 8176 MB/s
Feb 9 19:28:13.432041 kernel: raid6: sse2x1 gen() 10737 MB/s
Feb 9 19:28:13.449857 kernel: raid6: sse2x1 xor() 6626 MB/s
Feb 9 19:28:13.449999 kernel: raid6: using algorithm sse2x2 gen() 12921 MB/s
Feb 9 19:28:13.450028 kernel: raid6: .... xor() 8176 MB/s, rmw enabled
Feb 9 19:28:13.450904 kernel: raid6: using ssse3x2 recovery algorithm
Feb 9 19:28:13.467712 kernel: xor: measuring software checksum speed
Feb 9 19:28:13.467817 kernel: prefetch64-sse : 18464 MB/sec
Feb 9 19:28:13.473905 kernel: generic_sse : 15658 MB/sec
Feb 9 19:28:13.474009 kernel: xor: using function: prefetch64-sse (18464 MB/sec)
Feb 9 19:28:13.592006 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Feb 9 19:28:13.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:13.608526 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 19:28:13.609000 audit: BPF prog-id=7 op=LOAD
Feb 9 19:28:13.610000 audit: BPF prog-id=8 op=LOAD
Feb 9 19:28:13.612861 systemd[1]: Starting systemd-udevd.service...
Feb 9 19:28:13.627244 systemd-udevd[383]: Using default interface naming scheme 'v252'.
Feb 9 19:28:13.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:13.632369 systemd[1]: Started systemd-udevd.service.
Feb 9 19:28:13.639905 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 19:28:13.658330 dracut-pre-trigger[400]: rd.md=0: removing MD RAID activation
Feb 9 19:28:13.711413 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 19:28:13.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:13.714741 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 19:28:13.779052 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 19:28:13.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:13.846957 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB)
Feb 9 19:28:13.864103 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 9 19:28:13.864151 kernel: GPT:17805311 != 41943039
Feb 9 19:28:13.864164 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 9 19:28:13.864175 kernel: GPT:17805311 != 41943039
Feb 9 19:28:13.864186 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 9 19:28:13.864197 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 19:28:13.878960 kernel: libata version 3.00 loaded.
Feb 9 19:28:13.886066 kernel: ata_piix 0000:00:01.1: version 2.13
Feb 9 19:28:13.899957 kernel: scsi host0: ata_piix
Feb 9 19:28:13.901090 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 9 19:28:13.947425 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (436)
Feb 9 19:28:13.947448 kernel: scsi host1: ata_piix
Feb 9 19:28:13.947603 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Feb 9 19:28:13.947615 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Feb 9 19:28:13.959467 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 9 19:28:13.962902 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 9 19:28:13.964142 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 9 19:28:13.968466 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 19:28:13.970161 systemd[1]: Starting disk-uuid.service...
Feb 9 19:28:13.981195 disk-uuid[460]: Primary Header is updated.
Feb 9 19:28:13.981195 disk-uuid[460]: Secondary Entries is updated.
Feb 9 19:28:13.981195 disk-uuid[460]: Secondary Header is updated.
Feb 9 19:28:13.988967 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 19:28:13.997986 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 19:28:15.030003 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 19:28:15.031604 disk-uuid[461]: The operation has completed successfully.
Feb 9 19:28:15.106691 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 9 19:28:15.107708 systemd[1]: Finished disk-uuid.service.
Feb 9 19:28:15.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:15.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:15.109878 systemd[1]: Starting verity-setup.service...
Feb 9 19:28:15.153005 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Feb 9 19:28:15.270036 systemd[1]: Found device dev-mapper-usr.device.
Feb 9 19:28:15.273368 systemd[1]: Mounting sysusr-usr.mount...
Feb 9 19:28:15.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:15.275117 systemd[1]: Finished verity-setup.service.
Feb 9 19:28:15.482952 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 9 19:28:15.484072 systemd[1]: Mounted sysusr-usr.mount.
Feb 9 19:28:15.486097 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 9 19:28:15.487807 systemd[1]: Starting ignition-setup.service...
Feb 9 19:28:15.490507 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 9 19:28:15.611408 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 9 19:28:15.611469 kernel: BTRFS info (device vda6): using free space tree
Feb 9 19:28:15.611482 kernel: BTRFS info (device vda6): has skinny extents
Feb 9 19:28:15.620336 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 9 19:28:15.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:15.622000 audit: BPF prog-id=9 op=LOAD
Feb 9 19:28:15.625395 systemd[1]: Starting systemd-networkd.service...
Feb 9 19:28:15.653422 systemd-networkd[622]: lo: Link UP
Feb 9 19:28:15.653438 systemd-networkd[622]: lo: Gained carrier
Feb 9 19:28:15.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:15.654275 systemd-networkd[622]: Enumeration completed
Feb 9 19:28:15.654817 systemd[1]: Started systemd-networkd.service.
Feb 9 19:28:15.655203 systemd-networkd[622]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 19:28:15.655667 systemd[1]: Reached target network.target.
Feb 9 19:28:15.658876 systemd[1]: Starting iscsiuio.service...
Feb 9 19:28:15.659325 systemd-networkd[622]: eth0: Link UP
Feb 9 19:28:15.659330 systemd-networkd[622]: eth0: Gained carrier
Feb 9 19:28:15.670319 systemd[1]: Started iscsiuio.service.
Feb 9 19:28:15.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:15.673256 systemd[1]: Starting iscsid.service...
Feb 9 19:28:15.677658 iscsid[627]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 19:28:15.677658 iscsid[627]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Feb 9 19:28:15.677658 iscsid[627]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 9 19:28:15.677658 iscsid[627]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 9 19:28:15.677658 iscsid[627]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 19:28:15.677658 iscsid[627]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 9 19:28:15.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:15.682085 systemd-networkd[622]: eth0: DHCPv4 address 172.24.4.205/24, gateway 172.24.4.1 acquired from 172.24.4.1
Feb 9 19:28:15.684191 systemd[1]: Started iscsid.service.
Feb 9 19:28:15.687358 systemd[1]: Starting dracut-initqueue.service...
Feb 9 19:28:15.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:15.713528 systemd[1]: Finished dracut-initqueue.service.
Feb 9 19:28:15.714176 systemd[1]: Reached target remote-fs-pre.target.
Feb 9 19:28:15.714626 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 19:28:15.715113 systemd[1]: Reached target remote-fs.target.
Feb 9 19:28:15.716500 systemd[1]: Starting dracut-pre-mount.service...
Feb 9 19:28:15.738157 systemd[1]: Finished dracut-pre-mount.service.
Feb 9 19:28:15.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:16.083619 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 9 19:28:16.240240 systemd[1]: Finished ignition-setup.service.
Feb 9 19:28:16.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:16.244424 systemd[1]: Starting ignition-fetch-offline.service...
Feb 9 19:28:16.644462 ignition[649]: Ignition 2.14.0
Feb 9 19:28:16.644482 ignition[649]: Stage: fetch-offline
Feb 9 19:28:16.644625 ignition[649]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:28:16.644701 ignition[649]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Feb 9 19:28:16.649236 systemd[1]: Finished ignition-fetch-offline.service.
Feb 9 19:28:16.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:16.646854 ignition[649]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 9 19:28:16.647121 ignition[649]: parsed url from cmdline: ""
Feb 9 19:28:16.652496 systemd[1]: Starting ignition-fetch.service...
Feb 9 19:28:16.647130 ignition[649]: no config URL provided
Feb 9 19:28:16.647144 ignition[649]: reading system config file "/usr/lib/ignition/user.ign"
Feb 9 19:28:16.647164 ignition[649]: no config at "/usr/lib/ignition/user.ign"
Feb 9 19:28:16.647178 ignition[649]: failed to fetch config: resource requires networking
Feb 9 19:28:16.647383 ignition[649]: Ignition finished successfully
Feb 9 19:28:16.669185 ignition[655]: Ignition 2.14.0
Feb 9 19:28:16.669212 ignition[655]: Stage: fetch
Feb 9 19:28:16.669444 ignition[655]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:28:16.669485 ignition[655]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Feb 9 19:28:16.671723 ignition[655]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 9 19:28:16.672013 ignition[655]: parsed url from cmdline: ""
Feb 9 19:28:16.672023 ignition[655]: no config URL provided
Feb 9 19:28:16.672037 ignition[655]: reading system config file "/usr/lib/ignition/user.ign"
Feb 9 19:28:16.672057 ignition[655]: no config at "/usr/lib/ignition/user.ign"
Feb 9 19:28:16.678285 ignition[655]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Feb 9 19:28:16.678319 ignition[655]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Feb 9 19:28:16.678325 ignition[655]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Feb 9 19:28:17.018716 ignition[655]: GET result: OK
Feb 9 19:28:17.018999 ignition[655]: parsing config with SHA512: c19acf5b295f14675ea8b785bc2d36b1de5fb157cf9787fa264beb5488844c38a90c3dc36b9f01b7346bd10fdb644c6dc37d4096975e67823b4207c18c863c87
Feb 9 19:28:17.048258 systemd-networkd[622]: eth0: Gained IPv6LL
Feb 9 19:28:17.090190 unknown[655]: fetched base config from "system"
Feb 9 19:28:17.091033 unknown[655]: fetched base config from "system"
Feb 9 19:28:17.091050 unknown[655]: fetched user config from "openstack"
Feb 9 19:28:17.094429 ignition[655]: fetch: fetch complete
Feb 9 19:28:17.094457 ignition[655]: fetch: fetch passed
Feb 9 19:28:17.094586 ignition[655]: Ignition finished successfully
Feb 9 19:28:17.099349 systemd[1]: Finished ignition-fetch.service.
Feb 9 19:28:17.121819 kernel: kauditd_printk_skb: 18 callbacks suppressed
Feb 9 19:28:17.121862 kernel: audit: type=1130 audit(1707506897.099:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:17.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:17.102135 systemd[1]: Starting ignition-kargs.service...
Feb 9 19:28:17.138227 ignition[661]: Ignition 2.14.0
Feb 9 19:28:17.139770 ignition[661]: Stage: kargs
Feb 9 19:28:17.141135 ignition[661]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:28:17.142475 ignition[661]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Feb 9 19:28:17.144149 ignition[661]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 9 19:28:17.146444 ignition[661]: kargs: kargs passed
Feb 9 19:28:17.146997 ignition[661]: Ignition finished successfully
Feb 9 19:28:17.148849 systemd[1]: Finished ignition-kargs.service.
Feb 9 19:28:17.158397 kernel: audit: type=1130 audit(1707506897.148:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:17.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:17.150224 systemd[1]: Starting ignition-disks.service...
Feb 9 19:28:17.157783 ignition[666]: Ignition 2.14.0
Feb 9 19:28:17.157791 ignition[666]: Stage: disks
Feb 9 19:28:17.157907 ignition[666]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:28:17.157945 ignition[666]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Feb 9 19:28:17.162724 systemd[1]: Finished ignition-disks.service.
Feb 9 19:28:17.168257 kernel: audit: type=1130 audit(1707506897.163:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:17.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:17.159736 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 9 19:28:17.164771 systemd[1]: Reached target initrd-root-device.target.
Feb 9 19:28:17.160998 ignition[666]: disks: disks passed
Feb 9 19:28:17.169413 systemd[1]: Reached target local-fs-pre.target.
Feb 9 19:28:17.161042 ignition[666]: Ignition finished successfully
Feb 9 19:28:17.171177 systemd[1]: Reached target local-fs.target.
Feb 9 19:28:17.172695 systemd[1]: Reached target sysinit.target.
Feb 9 19:28:17.174169 systemd[1]: Reached target basic.target.
Feb 9 19:28:17.177406 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 19:28:17.276717 systemd-fsck[674]: ROOT: clean, 602/1628000 files, 124051/1617920 blocks
Feb 9 19:28:17.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:17.287391 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 19:28:17.300726 kernel: audit: type=1130 audit(1707506897.287:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:17.300371 systemd[1]: Mounting sysroot.mount...
Feb 9 19:28:17.319968 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 9 19:28:17.321629 systemd[1]: Mounted sysroot.mount.
Feb 9 19:28:17.323078 systemd[1]: Reached target initrd-root-fs.target.
Feb 9 19:28:17.328081 systemd[1]: Mounting sysroot-usr.mount...
Feb 9 19:28:17.330164 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 9 19:28:17.333242 systemd[1]: Starting flatcar-openstack-hostname.service...
Feb 9 19:28:17.338246 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 19:28:17.338312 systemd[1]: Reached target ignition-diskful.target.
Feb 9 19:28:17.345487 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 19:28:17.353255 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 19:28:17.356777 systemd[1]: Starting initrd-setup-root.service...
Feb 9 19:28:17.372417 initrd-setup-root[686]: cut: /sysroot/etc/passwd: No such file or directory
Feb 9 19:28:17.382965 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (681)
Feb 9 19:28:17.390028 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 9 19:28:17.390091 kernel: BTRFS info (device vda6): using free space tree
Feb 9 19:28:17.390106 kernel: BTRFS info (device vda6): has skinny extents
Feb 9 19:28:17.395228 initrd-setup-root[710]: cut: /sysroot/etc/group: No such file or directory
Feb 9 19:28:17.402890 initrd-setup-root[718]: cut: /sysroot/etc/shadow: No such file or directory
Feb 9 19:28:17.411654 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 19:28:17.413008 initrd-setup-root[728]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 9 19:28:17.593363 systemd[1]: Finished initrd-setup-root.service.
Feb 9 19:28:17.605583 kernel: audit: type=1130 audit(1707506897.593:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:17.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:17.596185 systemd[1]: Starting ignition-mount.service...
Feb 9 19:28:17.609489 systemd[1]: Starting sysroot-boot.service...
Feb 9 19:28:17.630346 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 9 19:28:17.630657 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb 9 19:28:17.666899 ignition[749]: INFO : Ignition 2.14.0
Feb 9 19:28:17.666899 ignition[749]: INFO : Stage: mount
Feb 9 19:28:17.669533 ignition[749]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:28:17.669533 ignition[749]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Feb 9 19:28:17.671598 ignition[749]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 9 19:28:17.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:17.677878 ignition[749]: INFO : mount: mount passed
Feb 9 19:28:17.677878 ignition[749]: INFO : Ignition finished successfully
Feb 9 19:28:17.678902 kernel: audit: type=1130 audit(1707506897.673:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:17.673542 systemd[1]: Finished ignition-mount.service.
Feb 9 19:28:17.685801 systemd[1]: Finished sysroot-boot.service.
Feb 9 19:28:17.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:17.690958 kernel: audit: type=1130 audit(1707506897.685:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:17.697288 coreos-metadata[680]: Feb 09 19:28:17.697 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Feb 9 19:28:17.717461 coreos-metadata[680]: Feb 09 19:28:17.717 INFO Fetch successful
Feb 9 19:28:17.718343 coreos-metadata[680]: Feb 09 19:28:17.717 INFO wrote hostname ci-3510-3-2-0-71772ab2c7.novalocal to /sysroot/etc/hostname
Feb 9 19:28:17.721448 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Feb 9 19:28:17.721558 systemd[1]: Finished flatcar-openstack-hostname.service.
Feb 9 19:28:17.730639 kernel: audit: type=1130 audit(1707506897.721:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:17.730675 kernel: audit: type=1131 audit(1707506897.721:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:17.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:17.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:17.723677 systemd[1]: Starting ignition-files.service...
Feb 9 19:28:17.734620 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 19:28:17.814033 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (758)
Feb 9 19:28:17.827494 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 9 19:28:17.827565 kernel: BTRFS info (device vda6): using free space tree
Feb 9 19:28:17.827593 kernel: BTRFS info (device vda6): has skinny extents
Feb 9 19:28:18.165894 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 19:28:18.189887 ignition[777]: INFO : Ignition 2.14.0
Feb 9 19:28:18.191981 ignition[777]: INFO : Stage: files
Feb 9 19:28:18.194129 ignition[777]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:28:18.196853 ignition[777]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Feb 9 19:28:18.204139 ignition[777]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 9 19:28:18.275887 ignition[777]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 19:28:18.293464 ignition[777]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 19:28:18.293464 ignition[777]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 19:28:18.364001 ignition[777]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 19:28:18.365914 ignition[777]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 19:28:18.369492 unknown[777]: wrote ssh authorized keys file for user: core
Feb 9 19:28:18.371151 ignition[777]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 19:28:18.375009 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 9 19:28:18.375009 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1
Feb 9 19:28:18.836338 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 9 19:28:19.709683 ignition[777]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540
Feb 9 19:28:19.714489 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 9 19:28:19.714489 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 9 19:28:19.714489 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1
Feb 9 19:28:20.041815 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 9 19:28:20.542365 ignition[777]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a
Feb 9 19:28:20.546306 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 9 19:28:20.556638 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 19:28:20.556638 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubeadm: attempt #1
Feb 9 19:28:20.703701 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 9 19:28:21.678869 ignition[777]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: f4daad200c8378dfdc6cb69af28eaca4215f2b4a2dbdf75f29f9210171cb5683bc873fc000319022e6b3ad61175475d77190734713ba9136644394e8a8faafa1
Feb 9 19:28:21.678869 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 19:28:21.678869 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 19:28:21.686859 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet: attempt #1
Feb 9 19:28:21.788092 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 9 19:28:23.967611 ignition[777]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: ce6ba764274162d38ac1c44e1fb1f0f835346f3afc5b508bb755b1b7d7170910f5812b0a1941b32e29d950e905bbd08ae761c87befad921db4d44969c8562e75
Feb 9 19:28:23.969505 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 19:28:23.970483 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh"
Feb 9 19:28:23.971619 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh"
Feb 9 19:28:23.972816 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 19:28:23.972816 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 19:28:24.377870 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 19:28:24.380082 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 19:28:24.418989 ignition[777]: INFO : files: op(a): [started] processing unit "coreos-metadata-sshkeys@.service"
Feb 9 19:28:24.421865 ignition[777]: INFO : files: op(a): op(b): [started] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata-sshkeys@.service.d/20-clct-provider-override.conf"
Feb 9 19:28:24.421865 ignition[777]: INFO : files: op(a): op(b): [finished] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata-sshkeys@.service.d/20-clct-provider-override.conf"
Feb 9 19:28:24.421865 ignition[777]: INFO : files: op(a): [finished] processing unit "coreos-metadata-sshkeys@.service"
Feb 9 19:28:24.421865 ignition[777]: INFO : files: op(c): [started] processing unit "coreos-metadata.service"
Feb 9 19:28:24.421865 ignition[777]: INFO : files: op(c): op(d): [started] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf"
Feb 9 19:28:24.421865 ignition[777]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf"
Feb 9 19:28:24.421865 ignition[777]: INFO : files: op(c): [finished] processing unit "coreos-metadata.service"
Feb 9 19:28:24.421865 ignition[777]: INFO : files: op(e): [started] processing unit "prepare-cni-plugins.service"
Feb 9 19:28:24.421865 ignition[777]: INFO : files: op(e): op(f): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 19:28:24.421865 ignition[777]: INFO : files: op(e): op(f): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 19:28:24.421865 ignition[777]: INFO : files: op(e): [finished] processing unit "prepare-cni-plugins.service"
Feb 9 19:28:24.421865 ignition[777]: INFO : files: op(10): [started] processing unit "prepare-critools.service"
Feb 9 19:28:24.421865 ignition[777]: INFO : files: op(10): op(11): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 19:28:24.421865 ignition[777]: INFO : files: op(10): op(11): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 19:28:24.421865 ignition[777]: INFO : files: op(10): [finished] processing unit "prepare-critools.service"
Feb 9 19:28:24.421865 ignition[777]: INFO : files: op(12): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 9 19:28:24.421865 ignition[777]: INFO : files: op(12): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 9 19:28:24.421865 ignition[777]: INFO : files: op(13): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 19:28:24.478860 kernel: audit: type=1130 audit(1707506904.442:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.438796 systemd[1]: Finished ignition-files.service.
Feb 9 19:28:24.480593 ignition[777]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 19:28:24.480593 ignition[777]: INFO : files: op(14): [started] setting preset to enabled for "prepare-critools.service"
Feb 9 19:28:24.480593 ignition[777]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-critools.service"
Feb 9 19:28:24.480593 ignition[777]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 19:28:24.480593 ignition[777]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 19:28:24.480593 ignition[777]: INFO : files: files passed
Feb 9 19:28:24.480593 ignition[777]: INFO : Ignition finished successfully
Feb 9 19:28:24.521603 kernel: audit: type=1130 audit(1707506904.482:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.521633 kernel: audit: type=1131 audit(1707506904.482:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.521646 kernel: audit: type=1130 audit(1707506904.503:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.448443 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 9 19:28:24.457887 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 9 19:28:24.523450 initrd-setup-root-after-ignition[802]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 9 19:28:24.459523 systemd[1]: Starting ignition-quench.service...
Feb 9 19:28:24.481259 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 9 19:28:24.481471 systemd[1]: Finished ignition-quench.service.
Feb 9 19:28:24.503345 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 9 19:28:24.504981 systemd[1]: Reached target ignition-complete.target.
Feb 9 19:28:24.518272 systemd[1]: Starting initrd-parse-etc.service...
Feb 9 19:28:24.540515 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 9 19:28:24.542163 systemd[1]: Finished initrd-parse-etc.service.
Feb 9 19:28:24.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.550074 systemd[1]: Reached target initrd-fs.target.
Feb 9 19:28:24.557151 kernel: audit: type=1130 audit(1707506904.544:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.557195 kernel: audit: type=1131 audit(1707506904.549:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.556032 systemd[1]: Reached target initrd.target.
Feb 9 19:28:24.557565 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 9 19:28:24.558309 systemd[1]: Starting dracut-pre-pivot.service...
Feb 9 19:28:24.573170 systemd[1]: Finished dracut-pre-pivot.service.
Feb 9 19:28:24.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.584875 kernel: audit: type=1130 audit(1707506904.573:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.584162 systemd[1]: Starting initrd-cleanup.service...
Feb 9 19:28:24.598665 systemd[1]: Stopped target nss-lookup.target.
Feb 9 19:28:24.600429 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 9 19:28:24.602271 systemd[1]: Stopped target timers.target.
Feb 9 19:28:24.603962 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 9 19:28:24.604236 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 9 19:28:24.609618 kernel: audit: type=1131 audit(1707506904.604:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.605911 systemd[1]: Stopped target initrd.target.
Feb 9 19:28:24.610129 systemd[1]: Stopped target basic.target.
Feb 9 19:28:24.611029 systemd[1]: Stopped target ignition-complete.target.
Feb 9 19:28:24.612068 systemd[1]: Stopped target ignition-diskful.target.
Feb 9 19:28:24.613052 systemd[1]: Stopped target initrd-root-device.target.
Feb 9 19:28:24.614032 systemd[1]: Stopped target remote-fs.target.
Feb 9 19:28:24.614980 systemd[1]: Stopped target remote-fs-pre.target.
Feb 9 19:28:24.615903 systemd[1]: Stopped target sysinit.target.
Feb 9 19:28:24.617102 systemd[1]: Stopped target local-fs.target.
Feb 9 19:28:24.617943 systemd[1]: Stopped target local-fs-pre.target.
Feb 9 19:28:24.618746 systemd[1]: Stopped target swap.target.
Feb 9 19:28:24.619574 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 9 19:28:24.624306 kernel: audit: type=1131 audit(1707506904.619:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.619729 systemd[1]: Stopped dracut-pre-mount.service.
Feb 9 19:28:24.620584 systemd[1]: Stopped target cryptsetup.target.
Feb 9 19:28:24.629382 kernel: audit: type=1131 audit(1707506904.624:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.624780 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 9 19:28:24.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.624941 systemd[1]: Stopped dracut-initqueue.service.
Feb 9 19:28:24.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.625771 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 9 19:28:24.625946 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 9 19:28:24.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.629979 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 9 19:28:24.647241 iscsid[627]: iscsid shutting down.
Feb 9 19:28:24.630117 systemd[1]: Stopped ignition-files.service.
Feb 9 19:28:24.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.649419 ignition[815]: INFO : Ignition 2.14.0
Feb 9 19:28:24.649419 ignition[815]: INFO : Stage: umount
Feb 9 19:28:24.649419 ignition[815]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:28:24.649419 ignition[815]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Feb 9 19:28:24.649419 ignition[815]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 9 19:28:24.649419 ignition[815]: INFO : umount: umount passed
Feb 9 19:28:24.649419 ignition[815]: INFO : Ignition finished successfully
Feb 9 19:28:24.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.631659 systemd[1]: Stopping ignition-mount.service...
Feb 9 19:28:24.632626 systemd[1]: Stopping iscsid.service...
Feb 9 19:28:24.637660 systemd[1]: Stopping sysroot-boot.service...
Feb 9 19:28:24.638171 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 9 19:28:24.638336 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 9 19:28:24.638916 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 9 19:28:24.639058 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 9 19:28:24.646205 systemd[1]: iscsid.service: Deactivated successfully.
Feb 9 19:28:24.646312 systemd[1]: Stopped iscsid.service.
Feb 9 19:28:24.648809 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 9 19:28:24.648889 systemd[1]: Stopped ignition-mount.service.
Feb 9 19:28:24.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.650413 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 9 19:28:24.650524 systemd[1]: Stopped ignition-disks.service.
Feb 9 19:28:24.651541 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 9 19:28:24.651577 systemd[1]: Stopped ignition-kargs.service.
Feb 9 19:28:24.652064 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 9 19:28:24.652101 systemd[1]: Stopped ignition-fetch.service.
Feb 9 19:28:24.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.654420 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 9 19:28:24.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.654464 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 9 19:28:24.655569 systemd[1]: Stopped target paths.target.
Feb 9 19:28:24.656366 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 9 19:28:24.659972 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 9 19:28:24.660540 systemd[1]: Stopped target slices.target.
Feb 9 19:28:24.661391 systemd[1]: Stopped target sockets.target.
Feb 9 19:28:24.662231 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 9 19:28:24.662260 systemd[1]: Closed iscsid.socket.
Feb 9 19:28:24.662977 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 9 19:28:24.663013 systemd[1]: Stopped ignition-setup.service.
Feb 9 19:28:24.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.666075 systemd[1]: Stopping iscsiuio.service...
Feb 9 19:28:24.667147 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 9 19:28:24.667245 systemd[1]: Stopped iscsiuio.service.
Feb 9 19:28:24.667885 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 9 19:28:24.667980 systemd[1]: Finished initrd-cleanup.service.
Feb 9 19:28:24.669210 systemd[1]: Stopped target network.target.
Feb 9 19:28:24.670028 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 9 19:28:24.670056 systemd[1]: Closed iscsiuio.socket.
Feb 9 19:28:24.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.671055 systemd[1]: Stopping systemd-networkd.service...
Feb 9 19:28:24.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.671880 systemd[1]: Stopping systemd-resolved.service...
Feb 9 19:28:24.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.673960 systemd-networkd[622]: eth0: DHCPv6 lease lost
Feb 9 19:28:24.684000 audit: BPF prog-id=9 op=UNLOAD
Feb 9 19:28:24.674827 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 9 19:28:24.674904 systemd[1]: Stopped systemd-networkd.service.
Feb 9 19:28:24.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.677567 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 9 19:28:24.677599 systemd[1]: Closed systemd-networkd.socket.
Feb 9 19:28:24.680526 systemd[1]: Stopping network-cleanup.service...
Feb 9 19:28:24.694000 audit: BPF prog-id=6 op=UNLOAD
Feb 9 19:28:24.681357 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 9 19:28:24.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.681411 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 9 19:28:24.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.683458 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 19:28:24.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.683508 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 19:28:24.684694 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 9 19:28:24.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.684747 systemd[1]: Stopped systemd-modules-load.service.
Feb 9 19:28:24.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.685610 systemd[1]: Stopping systemd-udevd.service...
Feb 9 19:28:24.688274 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 9 19:28:24.688850 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 9 19:28:24.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.688964 systemd[1]: Stopped systemd-resolved.service.
Feb 9 19:28:24.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.691518 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 9 19:28:24.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.691678 systemd[1]: Stopped systemd-udevd.service.
Feb 9 19:28:24.694156 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 9 19:28:24.694193 systemd[1]: Closed systemd-udevd-control.socket.
Feb 9 19:28:24.694919 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 9 19:28:24.694960 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 9 19:28:24.695899 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 9 19:28:24.696001 systemd[1]: Stopped dracut-pre-udev.service.
Feb 9 19:28:24.696716 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 9 19:28:24.696763 systemd[1]: Stopped dracut-cmdline.service.
Feb 9 19:28:24.697751 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 9 19:28:24.697792 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 9 19:28:24.699382 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 9 19:28:24.700163 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 9 19:28:24.700220 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Feb 9 19:28:24.703808 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 9 19:28:24.703858 systemd[1]: Stopped kmod-static-nodes.service.
Feb 9 19:28:24.707274 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 9 19:28:24.707318 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 9 19:28:24.709327 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 9 19:28:24.709849 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 9 19:28:24.709940 systemd[1]: Stopped network-cleanup.service.
Feb 9 19:28:24.712464 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 9 19:28:24.712539 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 9 19:28:24.732076 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 9 19:28:24.782912 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 9 19:28:24.783163 systemd[1]: Stopped sysroot-boot.service.
Feb 9 19:28:24.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.785041 systemd[1]: Reached target initrd-switch-root.target.
Feb 9 19:28:24.786192 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 9 19:28:24.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:28:24.786287 systemd[1]: Stopped initrd-setup-root.service.
Feb 9 19:28:24.789288 systemd[1]: Starting initrd-switch-root.service...
Feb 9 19:28:24.825667 systemd[1]: Switching root.
Feb 9 19:28:24.856407 systemd-journald[184]: Journal stopped
Feb 9 19:28:30.593555 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Feb 9 19:28:30.593610 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 9 19:28:30.593626 kernel: SELinux: Class anon_inode not defined in policy.
Feb 9 19:28:30.593643 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 9 19:28:30.593659 kernel: SELinux: policy capability network_peer_controls=1
Feb 9 19:28:30.593675 kernel: SELinux: policy capability open_perms=1
Feb 9 19:28:30.593688 kernel: SELinux: policy capability extended_socket_class=1
Feb 9 19:28:30.593701 kernel: SELinux: policy capability always_check_network=0
Feb 9 19:28:30.593714 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 9 19:28:30.593726 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 9 19:28:30.593739 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 9 19:28:30.593751 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 9 19:28:30.593765 systemd[1]: Successfully loaded SELinux policy in 97.356ms.
Feb 9 19:28:30.593783 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.753ms.
Feb 9 19:28:30.593799 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 19:28:30.593813 systemd[1]: Detected virtualization kvm.
Feb 9 19:28:30.593826 systemd[1]: Detected architecture x86-64.
Feb 9 19:28:30.593841 systemd[1]: Detected first boot.
Feb 9 19:28:30.593854 systemd[1]: Hostname set to .
Feb 9 19:28:30.593868 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 19:28:30.593883 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 9 19:28:30.593897 systemd[1]: Populated /etc with preset unit settings.
Feb 9 19:28:30.593911 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 19:28:30.593945 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 19:28:30.594133 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 19:28:30.594155 kernel: kauditd_printk_skb: 49 callbacks suppressed
Feb 9 19:28:30.594171 kernel: audit: type=1334 audit(1707506910.363:90): prog-id=12 op=LOAD
Feb 9 19:28:30.594184 kernel: audit: type=1334 audit(1707506910.363:91): prog-id=3 op=UNLOAD
Feb 9 19:28:30.594199 kernel: audit: type=1334 audit(1707506910.366:92): prog-id=13 op=LOAD
Feb 9 19:28:30.594211 kernel: audit: type=1334 audit(1707506910.369:93): prog-id=14 op=LOAD
Feb 9 19:28:30.594223 kernel: audit: type=1334 audit(1707506910.369:94): prog-id=4 op=UNLOAD
Feb 9 19:28:30.594236 kernel: audit: type=1334 audit(1707506910.369:95): prog-id=5 op=UNLOAD
Feb 9 19:28:30.594248 kernel: audit: type=1334 audit(1707506910.371:96): prog-id=15 op=LOAD
Feb 9 19:28:30.594261 kernel: audit: type=1334 audit(1707506910.371:97): prog-id=12 op=UNLOAD
Feb 9 19:28:30.594273 kernel: audit: type=1334 audit(1707506910.374:98): prog-id=16 op=LOAD
Feb 9 19:28:30.594286 kernel: audit: type=1334 audit(1707506910.377:99): prog-id=17 op=LOAD
Feb 9 19:28:30.594301 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 9 19:28:30.594316 systemd[1]: Stopped initrd-switch-root.service.
Feb 9 19:28:30.594335 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 9 19:28:30.594349 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 9 19:28:30.594363 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 9 19:28:30.594376 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Feb 9 19:28:30.594392 systemd[1]: Created slice system-getty.slice.
Feb 9 19:28:30.594405 systemd[1]: Created slice system-modprobe.slice.
Feb 9 19:28:30.594419 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 9 19:28:30.594433 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 9 19:28:30.594446 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 9 19:28:30.594460 systemd[1]: Created slice user.slice.
Feb 9 19:28:30.594474 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 19:28:30.594488 systemd[1]: Started systemd-ask-password-wall.path.
Feb 9 19:28:30.594501 systemd[1]: Set up automount boot.automount.
Feb 9 19:28:30.594517 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 9 19:28:30.594531 systemd[1]: Stopped target initrd-switch-root.target.
Feb 9 19:28:30.594545 systemd[1]: Stopped target initrd-fs.target.
Feb 9 19:28:30.594558 systemd[1]: Stopped target initrd-root-fs.target.
Feb 9 19:28:30.594572 systemd[1]: Reached target integritysetup.target.
Feb 9 19:28:30.594585 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 19:28:30.594599 systemd[1]: Reached target remote-fs.target.
Feb 9 19:28:30.594612 systemd[1]: Reached target slices.target.
Feb 9 19:28:30.594626 systemd[1]: Reached target swap.target.
Feb 9 19:28:30.594640 systemd[1]: Reached target torcx.target.
Feb 9 19:28:30.594655 systemd[1]: Reached target veritysetup.target.
Feb 9 19:28:30.594669 systemd[1]: Listening on systemd-coredump.socket.
Feb 9 19:28:30.594682 systemd[1]: Listening on systemd-initctl.socket.
Feb 9 19:28:30.594695 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 19:28:30.594709 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 19:28:30.594723 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 19:28:30.594736 systemd[1]: Listening on systemd-userdbd.socket.
Feb 9 19:28:30.594751 systemd[1]: Mounting dev-hugepages.mount...
Feb 9 19:28:30.594765 systemd[1]: Mounting dev-mqueue.mount...
Feb 9 19:28:30.594779 systemd[1]: Mounting media.mount...
Feb 9 19:28:30.594792 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:28:30.594805 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 19:28:30.594818 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 19:28:30.594830 systemd[1]: Mounting tmp.mount... Feb 9 19:28:30.594843 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 19:28:30.594856 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 19:28:30.594868 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:28:30.594881 systemd[1]: Starting modprobe@configfs.service... Feb 9 19:28:30.594895 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 19:28:30.594909 systemd[1]: Starting modprobe@drm.service... Feb 9 19:28:30.594921 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 19:28:30.594950 systemd[1]: Starting modprobe@fuse.service... Feb 9 19:28:30.594963 systemd[1]: Starting modprobe@loop.service... Feb 9 19:28:30.594977 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 19:28:30.594989 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 19:28:30.595002 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 19:28:30.595015 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 19:28:30.595030 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 19:28:30.595043 systemd[1]: Stopped systemd-journald.service. Feb 9 19:28:30.595056 systemd[1]: Starting systemd-journald.service... Feb 9 19:28:30.595068 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:28:30.595081 systemd[1]: Starting systemd-network-generator.service... Feb 9 19:28:30.597224 systemd[1]: Starting systemd-remount-fs.service... Feb 9 19:28:30.597245 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:28:30.597259 systemd[1]: verity-setup.service: Deactivated successfully. 
Feb 9 19:28:30.597273 systemd[1]: Stopped verity-setup.service. Feb 9 19:28:30.597291 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:28:30.597321 systemd[1]: Mounted dev-hugepages.mount. Feb 9 19:28:30.597337 systemd[1]: Mounted dev-mqueue.mount. Feb 9 19:28:30.597350 kernel: loop: module loaded Feb 9 19:28:30.597364 systemd[1]: Mounted media.mount. Feb 9 19:28:30.597377 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 19:28:30.597391 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 19:28:30.597404 systemd[1]: Mounted tmp.mount. Feb 9 19:28:30.597418 kernel: fuse: init (API version 7.34) Feb 9 19:28:30.597434 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:28:30.597447 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 19:28:30.597461 systemd[1]: Finished modprobe@configfs.service. Feb 9 19:28:30.597474 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 19:28:30.597487 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 19:28:30.597501 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 19:28:30.597517 systemd[1]: Finished modprobe@drm.service. Feb 9 19:28:30.597532 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 19:28:30.597545 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 19:28:30.597562 systemd-journald[917]: Journal started Feb 9 19:28:30.597616 systemd-journald[917]: Runtime Journal (/run/log/journal/06641ebdb265489da0004e3ea24610a3) is 4.9M, max 39.5M, 34.5M free. 
Feb 9 19:28:26.001000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 19:28:26.155000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:28:26.155000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:28:26.156000 audit: BPF prog-id=10 op=LOAD Feb 9 19:28:26.156000 audit: BPF prog-id=10 op=UNLOAD Feb 9 19:28:26.156000 audit: BPF prog-id=11 op=LOAD Feb 9 19:28:26.156000 audit: BPF prog-id=11 op=UNLOAD Feb 9 19:28:26.368000 audit[848]: AVC avc: denied { associate } for pid=848 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 19:28:26.368000 audit[848]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8a2 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=831 pid=848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:26.368000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 19:28:26.371000 audit[848]: AVC avc: denied { associate } for pid=848 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 19:28:26.371000 audit[848]: SYSCALL arch=c000003e 
syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000024105 a2=1ed a3=0 items=2 ppid=831 pid=848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:26.371000 audit: CWD cwd="/" Feb 9 19:28:26.371000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:26.371000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:26.371000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 19:28:30.363000 audit: BPF prog-id=12 op=LOAD Feb 9 19:28:30.363000 audit: BPF prog-id=3 op=UNLOAD Feb 9 19:28:30.366000 audit: BPF prog-id=13 op=LOAD Feb 9 19:28:30.369000 audit: BPF prog-id=14 op=LOAD Feb 9 19:28:30.369000 audit: BPF prog-id=4 op=UNLOAD Feb 9 19:28:30.369000 audit: BPF prog-id=5 op=UNLOAD Feb 9 19:28:30.371000 audit: BPF prog-id=15 op=LOAD Feb 9 19:28:30.371000 audit: BPF prog-id=12 op=UNLOAD Feb 9 19:28:30.374000 audit: BPF prog-id=16 op=LOAD Feb 9 19:28:30.377000 audit: BPF prog-id=17 op=LOAD Feb 9 19:28:30.377000 audit: BPF prog-id=13 op=UNLOAD Feb 9 19:28:30.377000 audit: BPF prog-id=14 op=UNLOAD Feb 9 19:28:30.379000 audit: BPF prog-id=18 op=LOAD Feb 9 19:28:30.380000 audit: BPF prog-id=15 op=UNLOAD Feb 9 19:28:30.382000 audit: BPF prog-id=19 op=LOAD Feb 9 19:28:30.385000 audit: BPF prog-id=20 op=LOAD Feb 9 19:28:30.385000 audit: BPF prog-id=16 op=UNLOAD Feb 
9 19:28:30.385000 audit: BPF prog-id=17 op=UNLOAD Feb 9 19:28:30.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:30.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:30.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:30.399000 audit: BPF prog-id=18 op=UNLOAD Feb 9 19:28:30.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:30.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:30.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:30.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:28:30.518000 audit: BPF prog-id=21 op=LOAD Feb 9 19:28:30.518000 audit: BPF prog-id=22 op=LOAD Feb 9 19:28:30.518000 audit: BPF prog-id=23 op=LOAD Feb 9 19:28:30.518000 audit: BPF prog-id=19 op=UNLOAD Feb 9 19:28:30.518000 audit: BPF prog-id=20 op=UNLOAD Feb 9 19:28:30.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:30.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:30.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:30.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:30.590000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 19:28:30.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:30.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:28:30.590000 audit[917]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffd8dddea80 a2=4000 a3=7ffd8dddeb1c items=0 ppid=1 pid=917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:30.590000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 19:28:30.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:30.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:26.365548 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-09T19:28:26Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:28:30.363244 systemd[1]: Queued start job for default target multi-user.target. Feb 9 19:28:26.366735 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-09T19:28:26Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 19:28:30.363255 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 9 19:28:26.366765 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-09T19:28:26Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 19:28:30.386472 systemd[1]: systemd-journald.service: Deactivated successfully. 
Feb 9 19:28:26.366800 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-09T19:28:26Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 9 19:28:26.366813 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-09T19:28:26Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 9 19:28:30.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:30.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:26.366847 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-09T19:28:26Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 9 19:28:26.366864 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-09T19:28:26Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 9 19:28:26.367105 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-09T19:28:26Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 9 19:28:26.367151 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-09T19:28:26Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 19:28:26.367167 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-09T19:28:26Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 19:28:26.368019 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-09T19:28:26Z" level=debug msg="new archive/reference added 
to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 9 19:28:26.368060 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-09T19:28:26Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 9 19:28:26.368082 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-09T19:28:26Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 9 19:28:26.368100 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-09T19:28:26Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 9 19:28:30.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:26.368120 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-09T19:28:26Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 9 19:28:26.368137 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-09T19:28:26Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 9 19:28:29.976825 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-09T19:28:29Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:28:30.601953 systemd[1]: Started systemd-journald.service. 
Feb 9 19:28:29.977108 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-09T19:28:29Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:28:29.977227 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-09T19:28:29Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:28:30.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:30.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:30.602064 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 19:28:29.977419 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-09T19:28:29Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:28:30.602170 systemd[1]: Finished modprobe@fuse.service. 
Feb 9 19:28:29.977479 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-09T19:28:29Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 9 19:28:30.602808 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 19:28:29.977546 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-09T19:28:29Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 9 19:28:30.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:30.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:30.603881 systemd[1]: Finished modprobe@loop.service. Feb 9 19:28:30.604689 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:28:30.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:30.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:30.606091 systemd[1]: Finished systemd-network-generator.service. 
Feb 9 19:28:30.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:30.608113 systemd[1]: Finished systemd-remount-fs.service. Feb 9 19:28:30.609005 systemd[1]: Reached target network-pre.target. Feb 9 19:28:30.610675 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 19:28:30.612080 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 19:28:30.614653 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 19:28:30.617496 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 19:28:30.623729 systemd[1]: Starting systemd-journal-flush.service... Feb 9 19:28:30.624263 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 19:28:30.627650 systemd[1]: Starting systemd-random-seed.service... Feb 9 19:28:30.628206 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 19:28:30.629337 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:28:30.631162 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 19:28:30.631694 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 19:28:30.635724 systemd-journald[917]: Time spent on flushing to /var/log/journal/06641ebdb265489da0004e3ea24610a3 is 47.444ms for 1139 entries. Feb 9 19:28:30.635724 systemd-journald[917]: System Journal (/var/log/journal/06641ebdb265489da0004e3ea24610a3) is 8.0M, max 584.8M, 576.8M free. Feb 9 19:28:30.700475 systemd-journald[917]: Received client request to flush runtime journal. Feb 9 19:28:30.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:28:30.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:30.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:30.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:30.648957 systemd[1]: Finished systemd-random-seed.service. Feb 9 19:28:30.649589 systemd[1]: Reached target first-boot-complete.target. Feb 9 19:28:30.701665 udevadm[958]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 9 19:28:30.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:30.658396 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:28:30.667279 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 19:28:30.669070 systemd[1]: Starting systemd-sysusers.service... Feb 9 19:28:30.679266 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:28:30.681076 systemd[1]: Starting systemd-udev-settle.service... Feb 9 19:28:30.701372 systemd[1]: Finished systemd-journal-flush.service. Feb 9 19:28:30.709727 systemd[1]: Finished systemd-sysusers.service. Feb 9 19:28:30.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 9 19:28:30.711373 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 19:28:30.745908 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 19:28:30.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:31.403669 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 19:28:31.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:31.406000 audit: BPF prog-id=24 op=LOAD Feb 9 19:28:31.406000 audit: BPF prog-id=25 op=LOAD Feb 9 19:28:31.406000 audit: BPF prog-id=7 op=UNLOAD Feb 9 19:28:31.406000 audit: BPF prog-id=8 op=UNLOAD Feb 9 19:28:31.409590 systemd[1]: Starting systemd-udevd.service... Feb 9 19:28:31.452616 systemd-udevd[962]: Using default interface naming scheme 'v252'. Feb 9 19:28:31.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:31.591420 systemd[1]: Started systemd-udevd.service. Feb 9 19:28:31.599000 audit: BPF prog-id=26 op=LOAD Feb 9 19:28:31.603310 systemd[1]: Starting systemd-networkd.service... Feb 9 19:28:31.635000 audit: BPF prog-id=27 op=LOAD Feb 9 19:28:31.636000 audit: BPF prog-id=28 op=LOAD Feb 9 19:28:31.636000 audit: BPF prog-id=29 op=LOAD Feb 9 19:28:31.639523 systemd[1]: Starting systemd-userdbd.service... Feb 9 19:28:31.693001 systemd[1]: Started systemd-userdbd.service. 
Feb 9 19:28:31.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:31.695075 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 9 19:28:31.746989 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 9 19:28:31.765058 kernel: ACPI: button: Power Button [PWRF] Feb 9 19:28:31.814055 systemd-networkd[973]: lo: Link UP Feb 9 19:28:31.814073 systemd-networkd[973]: lo: Gained carrier Feb 9 19:28:31.815534 systemd-networkd[973]: Enumeration completed Feb 9 19:28:31.815647 systemd[1]: Started systemd-networkd.service. Feb 9 19:28:31.815698 systemd-networkd[973]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:28:31.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:28:31.818095 systemd-networkd[973]: eth0: Link UP Feb 9 19:28:31.818103 systemd-networkd[973]: eth0: Gained carrier Feb 9 19:28:31.831110 systemd-networkd[973]: eth0: DHCPv4 address 172.24.4.205/24, gateway 172.24.4.1 acquired from 172.24.4.1 Feb 9 19:28:31.806000 audit[982]: AVC avc: denied { confidentiality } for pid=982 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 19:28:31.806000 audit[982]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55ebaf748dd0 a1=32194 a2=7f9483055bc5 a3=5 items=108 ppid=962 pid=982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:31.806000 audit: CWD cwd="/" Feb 9 19:28:31.806000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=1 name=(null) inode=13822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=2 name=(null) inode=13822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=3 name=(null) inode=13823 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=4 name=(null) inode=13822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: 
PATH item=5 name=(null) inode=13824 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=6 name=(null) inode=13822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=7 name=(null) inode=13825 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=8 name=(null) inode=13825 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=9 name=(null) inode=13826 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=10 name=(null) inode=13825 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=11 name=(null) inode=13827 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=12 name=(null) inode=13825 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=13 name=(null) inode=13828 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=14 name=(null) inode=13825 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=15 name=(null) inode=13829 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=16 name=(null) inode=13825 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=17 name=(null) inode=13830 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=18 name=(null) inode=13822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=19 name=(null) inode=13831 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=20 name=(null) inode=13831 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=21 name=(null) inode=13832 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=22 name=(null) inode=13831 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=23 name=(null) inode=13833 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=24 name=(null) inode=13831 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=25 name=(null) inode=13834 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=26 name=(null) inode=13831 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=27 name=(null) inode=13835 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=28 name=(null) inode=13831 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=29 name=(null) inode=13836 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=30 name=(null) inode=13822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=31 name=(null) inode=13837 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=32 name=(null) inode=13837 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=33 name=(null) inode=13838 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=34 name=(null) inode=13837 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=35 name=(null) inode=13839 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=36 name=(null) inode=13837 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=37 name=(null) inode=13840 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=38 name=(null) inode=13837 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=39 name=(null) inode=13841 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=40 name=(null) inode=13837 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=41 name=(null) inode=13842 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=42 name=(null) inode=13822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=43 name=(null) inode=13843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=44 name=(null) inode=13843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=45 name=(null) inode=13844 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=46 name=(null) inode=13843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=47 name=(null) inode=13845 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=48 name=(null) inode=13843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=49 name=(null) inode=13846 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=50 name=(null) inode=13843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH 
item=51 name=(null) inode=13847 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=52 name=(null) inode=13843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=53 name=(null) inode=13848 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=55 name=(null) inode=13849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=56 name=(null) inode=13849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=57 name=(null) inode=13850 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=58 name=(null) inode=13849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=59 name=(null) inode=13851 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=60 name=(null) inode=13849 dev=00:0b mode=040750 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=61 name=(null) inode=13852 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=62 name=(null) inode=13852 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=63 name=(null) inode=13853 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=64 name=(null) inode=13852 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=65 name=(null) inode=13854 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=66 name=(null) inode=13852 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=67 name=(null) inode=13855 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=68 name=(null) inode=13852 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=69 name=(null) inode=13856 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=70 name=(null) inode=13852 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=71 name=(null) inode=13857 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=72 name=(null) inode=13849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=73 name=(null) inode=13858 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=74 name=(null) inode=13858 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=75 name=(null) inode=13859 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=76 name=(null) inode=13858 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=77 name=(null) inode=13860 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=78 name=(null) inode=13858 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=79 name=(null) inode=13861 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=80 name=(null) inode=13858 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=81 name=(null) inode=13862 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=82 name=(null) inode=13858 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=83 name=(null) inode=13863 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=84 name=(null) inode=13849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=85 name=(null) inode=13864 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=86 name=(null) inode=13864 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=87 name=(null) inode=13865 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Feb 9 19:28:31.806000 audit: PATH item=88 name=(null) inode=13864 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=89 name=(null) inode=13866 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=90 name=(null) inode=13864 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=91 name=(null) inode=13867 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=92 name=(null) inode=13864 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=93 name=(null) inode=13868 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=94 name=(null) inode=13864 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=95 name=(null) inode=13869 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=96 name=(null) inode=13849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=97 
name=(null) inode=13870 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=98 name=(null) inode=13870 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=99 name=(null) inode=13871 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=100 name=(null) inode=13870 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=101 name=(null) inode=13872 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=102 name=(null) inode=13870 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=103 name=(null) inode=13873 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=104 name=(null) inode=13870 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=105 name=(null) inode=13874 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=106 name=(null) inode=13870 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PATH item=107 name=(null) inode=13875 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:31.806000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 19:28:31.849614 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:28:31.893958 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 9 19:28:31.924971 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Feb 9 19:28:31.937011 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 19:28:31.990739 systemd[1]: Finished systemd-udev-settle.service. Feb 9 19:28:31.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:31.994635 systemd[1]: Starting lvm2-activation-early.service... Feb 9 19:28:32.070296 lvm[991]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:28:32.113756 systemd[1]: Finished lvm2-activation-early.service. Feb 9 19:28:32.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:32.115228 systemd[1]: Reached target cryptsetup.target. Feb 9 19:28:32.118743 systemd[1]: Starting lvm2-activation.service... Feb 9 19:28:32.127831 lvm[992]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:28:32.165905 systemd[1]: Finished lvm2-activation.service. Feb 9 19:28:32.167331 systemd[1]: Reached target local-fs-pre.target. 
Feb 9 19:28:32.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:32.168486 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 19:28:32.168547 systemd[1]: Reached target local-fs.target. Feb 9 19:28:32.169629 systemd[1]: Reached target machines.target. Feb 9 19:28:32.173080 systemd[1]: Starting ldconfig.service... Feb 9 19:28:32.175691 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 19:28:32.175793 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:28:32.178324 systemd[1]: Starting systemd-boot-update.service... Feb 9 19:28:32.182382 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 19:28:32.191017 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 19:28:32.192495 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:28:32.192589 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:28:32.198552 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 19:28:32.222515 systemd[1]: boot.automount: Got automount request for /boot, triggered by 994 (bootctl) Feb 9 19:28:32.223970 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 19:28:32.224897 systemd-tmpfiles[997]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 19:28:32.244474 systemd-tmpfiles[997]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Feb 9 19:28:32.252096 systemd-tmpfiles[997]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 19:28:32.259564 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 19:28:32.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:32.655164 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 19:28:32.656555 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 19:28:32.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:32.831874 systemd-fsck[1003]: fsck.fat 4.2 (2021-01-31) Feb 9 19:28:32.831874 systemd-fsck[1003]: /dev/vda1: 789 files, 115339/258078 clusters Feb 9 19:28:32.836362 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 19:28:32.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:32.840437 systemd[1]: Mounting boot.mount... Feb 9 19:28:32.876179 systemd[1]: Mounted boot.mount. Feb 9 19:28:32.924266 systemd[1]: Finished systemd-boot-update.service. Feb 9 19:28:32.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:32.998112 systemd[1]: Finished systemd-tmpfiles-setup.service. 
Feb 9 19:28:32.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:33.001898 systemd[1]: Starting audit-rules.service... Feb 9 19:28:33.005872 systemd[1]: Starting clean-ca-certificates.service... Feb 9 19:28:33.012229 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 19:28:33.017000 audit: BPF prog-id=30 op=LOAD Feb 9 19:28:33.022000 audit: BPF prog-id=31 op=LOAD Feb 9 19:28:33.022104 systemd[1]: Starting systemd-resolved.service... Feb 9 19:28:33.025912 systemd[1]: Starting systemd-timesyncd.service... Feb 9 19:28:33.028760 systemd[1]: Starting systemd-update-utmp.service... Feb 9 19:28:33.043000 audit[1013]: SYSTEM_BOOT pid=1013 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 19:28:33.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:33.044823 systemd[1]: Finished clean-ca-certificates.service. Feb 9 19:28:33.047253 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 19:28:33.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:33.050795 systemd[1]: Finished systemd-update-utmp.service. 
Feb 9 19:28:33.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:33.083448 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 19:28:33.115000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 19:28:33.115000 audit[1027]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffac5cbcc0 a2=420 a3=0 items=0 ppid=1006 pid=1027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:33.115000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 19:28:33.117203 augenrules[1027]: No rules Feb 9 19:28:33.117709 systemd[1]: Finished audit-rules.service. Feb 9 19:28:33.142326 systemd-resolved[1010]: Positive Trust Anchors: Feb 9 19:28:33.142728 systemd-resolved[1010]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:28:33.142844 systemd-resolved[1010]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:28:33.150117 systemd[1]: Started systemd-timesyncd.service. Feb 9 19:28:33.150842 systemd[1]: Reached target time-set.target. Feb 9 19:28:33.154794 systemd-resolved[1010]: Using system hostname 'ci-3510-3-2-0-71772ab2c7.novalocal'. 
Feb 9 19:28:33.157736 systemd[1]: Started systemd-resolved.service. Feb 9 19:28:33.158736 systemd[1]: Reached target network.target. Feb 9 19:28:33.159603 systemd[1]: Reached target nss-lookup.target. Feb 9 19:28:33.639540 systemd-timesyncd[1011]: Contacted time server 95.81.173.8:123 (0.flatcar.pool.ntp.org). Feb 9 19:28:33.639897 systemd-timesyncd[1011]: Initial clock synchronization to Fri 2024-02-09 19:28:33.639456 UTC. Feb 9 19:28:33.639985 systemd-resolved[1010]: Clock change detected. Flushing caches. Feb 9 19:28:33.753855 systemd-networkd[973]: eth0: Gained IPv6LL Feb 9 19:28:33.900773 ldconfig[993]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 19:28:33.918429 systemd[1]: Finished ldconfig.service. Feb 9 19:28:33.922014 systemd[1]: Starting systemd-update-done.service... Feb 9 19:28:33.935202 systemd[1]: Finished systemd-update-done.service. Feb 9 19:28:33.936462 systemd[1]: Reached target sysinit.target. Feb 9 19:28:33.937630 systemd[1]: Started motdgen.path. Feb 9 19:28:33.938679 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 19:28:33.940359 systemd[1]: Started logrotate.timer. Feb 9 19:28:33.941508 systemd[1]: Started mdadm.timer. Feb 9 19:28:33.942501 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 19:28:33.943562 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 19:28:33.943634 systemd[1]: Reached target paths.target. Feb 9 19:28:33.944643 systemd[1]: Reached target timers.target. Feb 9 19:28:33.946561 systemd[1]: Listening on dbus.socket. Feb 9 19:28:33.949574 systemd[1]: Starting docker.socket... Feb 9 19:28:33.956967 systemd[1]: Listening on sshd.socket. Feb 9 19:28:33.958211 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Feb 9 19:28:33.959130 systemd[1]: Listening on docker.socket. Feb 9 19:28:33.960281 systemd[1]: Reached target sockets.target. Feb 9 19:28:33.961311 systemd[1]: Reached target basic.target. Feb 9 19:28:33.962506 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:28:33.962571 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:28:33.964504 systemd[1]: Starting containerd.service... Feb 9 19:28:33.967440 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 9 19:28:33.970829 systemd[1]: Starting dbus.service... Feb 9 19:28:33.974802 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 19:28:33.983630 systemd[1]: Starting extend-filesystems.service... Feb 9 19:28:33.985136 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 19:28:33.993115 systemd[1]: Starting motdgen.service... Feb 9 19:28:34.000203 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 19:28:34.003877 systemd[1]: Starting prepare-critools.service... Feb 9 19:28:34.006383 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 19:28:34.008789 systemd[1]: Starting sshd-keygen.service... Feb 9 19:28:34.014882 systemd[1]: Starting systemd-logind.service... Feb 9 19:28:34.015377 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:28:34.015442 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 19:28:34.017963 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 19:28:34.018801 systemd[1]: Starting update-engine.service... 
Feb 9 19:28:34.020471 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 19:28:34.031849 systemd[1]: Created slice system-sshd.slice. Feb 9 19:28:34.049600 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 19:28:34.049793 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 19:28:34.054772 jq[1052]: true Feb 9 19:28:34.061593 jq[1041]: false Feb 9 19:28:34.063352 extend-filesystems[1042]: Found vda Feb 9 19:28:34.064705 tar[1054]: ./ Feb 9 19:28:34.064705 tar[1054]: ./loopback Feb 9 19:28:34.065016 extend-filesystems[1042]: Found vda1 Feb 9 19:28:34.065586 extend-filesystems[1042]: Found vda2 Feb 9 19:28:34.066472 extend-filesystems[1042]: Found vda3 Feb 9 19:28:34.067393 extend-filesystems[1042]: Found usr Feb 9 19:28:34.068074 extend-filesystems[1042]: Found vda4 Feb 9 19:28:34.069271 tar[1055]: crictl Feb 9 19:28:34.069297 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 19:28:34.069478 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 19:28:34.069818 extend-filesystems[1042]: Found vda6 Feb 9 19:28:34.070850 dbus-daemon[1038]: [system] SELinux support is enabled Feb 9 19:28:34.071000 systemd[1]: Started dbus.service. Feb 9 19:28:34.077411 extend-filesystems[1042]: Found vda7 Feb 9 19:28:34.077411 extend-filesystems[1042]: Found vda9 Feb 9 19:28:34.077411 extend-filesystems[1042]: Checking size of /dev/vda9 Feb 9 19:28:34.079341 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 19:28:34.079365 systemd[1]: Reached target system-config.target. Feb 9 19:28:34.081278 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 19:28:34.081303 systemd[1]: Reached target user-config.target. 
Feb 9 19:28:34.093190 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 19:28:34.093357 systemd[1]: Finished motdgen.service. Feb 9 19:28:34.097498 jq[1068]: true Feb 9 19:28:34.106048 extend-filesystems[1042]: Resized partition /dev/vda9 Feb 9 19:28:34.116410 extend-filesystems[1086]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 19:28:34.151763 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks Feb 9 19:28:34.219967 update_engine[1051]: I0209 19:28:34.218914 1051 main.cc:92] Flatcar Update Engine starting Feb 9 19:28:34.262084 update_engine[1051]: I0209 19:28:34.230645 1051 update_check_scheduler.cc:74] Next update check in 8m47s Feb 9 19:28:34.262215 coreos-metadata[1037]: Feb 09 19:28:34.227 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Feb 9 19:28:34.224606 systemd[1]: Started update-engine.service. Feb 9 19:28:34.227687 systemd[1]: Started locksmithd.service. Feb 9 19:28:34.263545 systemd-logind[1050]: Watching system buttons on /dev/input/event1 (Power Button) Feb 9 19:28:34.263567 systemd-logind[1050]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 9 19:28:34.265455 systemd-logind[1050]: New seat seat0. Feb 9 19:28:34.268684 systemd[1]: Started systemd-logind.service. Feb 9 19:28:34.276272 env[1057]: time="2024-02-09T19:28:34.276178847Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 19:28:34.300746 kernel: EXT4-fs (vda9): resized filesystem to 4635643 Feb 9 19:28:34.414388 env[1057]: time="2024-02-09T19:28:34.309693213Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 19:28:34.414256 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Feb 9 19:28:34.417392 extend-filesystems[1086]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 9 19:28:34.417392 extend-filesystems[1086]: old_desc_blocks = 1, new_desc_blocks = 3 Feb 9 19:28:34.417392 extend-filesystems[1086]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long. Feb 9 19:28:34.437585 bash[1095]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:28:34.414431 systemd[1]: Finished extend-filesystems.service. Feb 9 19:28:34.438136 tar[1054]: ./bandwidth Feb 9 19:28:34.438229 env[1057]: time="2024-02-09T19:28:34.424422427Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:28:34.438229 env[1057]: time="2024-02-09T19:28:34.428125232Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:28:34.438229 env[1057]: time="2024-02-09T19:28:34.428234968Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:28:34.438229 env[1057]: time="2024-02-09T19:28:34.435130550Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:28:34.438229 env[1057]: time="2024-02-09T19:28:34.435227142Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Feb 9 19:28:34.438229 env[1057]: time="2024-02-09T19:28:34.435276414Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 19:28:34.438229 env[1057]: time="2024-02-09T19:28:34.435340093Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 19:28:34.438229 env[1057]: time="2024-02-09T19:28:34.435642270Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:28:34.438229 env[1057]: time="2024-02-09T19:28:34.436514516Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:28:34.438690 extend-filesystems[1042]: Resized filesystem in /dev/vda9 Feb 9 19:28:34.417334 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 19:28:34.443345 env[1057]: time="2024-02-09T19:28:34.442274579Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:28:34.443345 env[1057]: time="2024-02-09T19:28:34.442330073Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 19:28:34.443345 env[1057]: time="2024-02-09T19:28:34.442466028Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 19:28:34.443345 env[1057]: time="2024-02-09T19:28:34.442502777Z" level=info msg="metadata content store policy set" policy=shared Feb 9 19:28:34.454863 env[1057]: time="2024-02-09T19:28:34.452562524Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Feb 9 19:28:34.454863 env[1057]: time="2024-02-09T19:28:34.452633377Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 19:28:34.454863 env[1057]: time="2024-02-09T19:28:34.452669815Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 19:28:34.454863 env[1057]: time="2024-02-09T19:28:34.452808385Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 19:28:34.454863 env[1057]: time="2024-02-09T19:28:34.452929573Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 19:28:34.454863 env[1057]: time="2024-02-09T19:28:34.452971792Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 19:28:34.454863 env[1057]: time="2024-02-09T19:28:34.453007098Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 19:28:34.454863 env[1057]: time="2024-02-09T19:28:34.453041172Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 19:28:34.454863 env[1057]: time="2024-02-09T19:28:34.453074144Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 19:28:34.454863 env[1057]: time="2024-02-09T19:28:34.453107396Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 19:28:34.454863 env[1057]: time="2024-02-09T19:28:34.453139967Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 19:28:34.454863 env[1057]: time="2024-02-09T19:28:34.453179040Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Feb 9 19:28:34.454863 env[1057]: time="2024-02-09T19:28:34.453402309Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 19:28:34.454863 env[1057]: time="2024-02-09T19:28:34.453656245Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 19:28:34.455708 env[1057]: time="2024-02-09T19:28:34.454619602Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 19:28:34.455708 env[1057]: time="2024-02-09T19:28:34.454685486Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 19:28:34.455708 env[1057]: time="2024-02-09T19:28:34.454756088Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 19:28:34.457161 env[1057]: time="2024-02-09T19:28:34.455824592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 19:28:34.457161 env[1057]: time="2024-02-09T19:28:34.455978871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 19:28:34.457161 env[1057]: time="2024-02-09T19:28:34.456015771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 19:28:34.457161 env[1057]: time="2024-02-09T19:28:34.456048542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 19:28:34.457161 env[1057]: time="2024-02-09T19:28:34.456079610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 19:28:34.457161 env[1057]: time="2024-02-09T19:28:34.456110709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Feb 9 19:28:34.457161 env[1057]: time="2024-02-09T19:28:34.456140735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 19:28:34.457161 env[1057]: time="2024-02-09T19:28:34.456169699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 19:28:34.457161 env[1057]: time="2024-02-09T19:28:34.456207771Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 19:28:34.457161 env[1057]: time="2024-02-09T19:28:34.456501101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 19:28:34.457161 env[1057]: time="2024-02-09T19:28:34.456546406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 19:28:34.457161 env[1057]: time="2024-02-09T19:28:34.456578536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 19:28:34.457161 env[1057]: time="2024-02-09T19:28:34.456610887Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 19:28:34.457161 env[1057]: time="2024-02-09T19:28:34.456652996Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 19:28:34.459614 env[1057]: time="2024-02-09T19:28:34.456683463Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 19:28:34.459614 env[1057]: time="2024-02-09T19:28:34.457799777Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 19:28:34.459614 env[1057]: time="2024-02-09T19:28:34.457907589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 9 19:28:34.459860 env[1057]: time="2024-02-09T19:28:34.458387158Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 19:28:34.459860 env[1057]: time="2024-02-09T19:28:34.458540205Z" level=info msg="Connect containerd service" Feb 9 19:28:34.459860 env[1057]: time="2024-02-09T19:28:34.458603454Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 19:28:34.466498 env[1057]: time="2024-02-09T19:28:34.461103312Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:28:34.466498 env[1057]: time="2024-02-09T19:28:34.462867120Z" level=info msg="Start subscribing containerd event" Feb 9 19:28:34.466498 env[1057]: time="2024-02-09T19:28:34.463107141Z" level=info msg="Start recovering state" Feb 9 19:28:34.466498 env[1057]: time="2024-02-09T19:28:34.464110883Z" level=info msg="Start event monitor" Feb 9 19:28:34.466498 env[1057]: time="2024-02-09T19:28:34.464164874Z" level=info msg="Start snapshots syncer" Feb 9 19:28:34.466498 env[1057]: time="2024-02-09T19:28:34.464196724Z" level=info msg="Start cni network conf syncer for default" Feb 9 19:28:34.466498 env[1057]: time="2024-02-09T19:28:34.464216070Z" level=info msg="Start streaming server" Feb 9 19:28:34.467788 env[1057]: time="2024-02-09T19:28:34.467706316Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 19:28:34.468193 env[1057]: time="2024-02-09T19:28:34.468153836Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 19:28:34.468545 env[1057]: time="2024-02-09T19:28:34.468507679Z" level=info msg="containerd successfully booted in 0.236937s" Feb 9 19:28:34.468576 systemd[1]: Started containerd.service. 
Feb 9 19:28:34.521922 tar[1054]: ./ptp Feb 9 19:28:34.554623 coreos-metadata[1037]: Feb 09 19:28:34.554 INFO Fetch successful Feb 9 19:28:34.554623 coreos-metadata[1037]: Feb 09 19:28:34.554 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 9 19:28:34.572285 coreos-metadata[1037]: Feb 09 19:28:34.572 INFO Fetch successful Feb 9 19:28:34.602620 unknown[1037]: wrote ssh authorized keys file for user: core Feb 9 19:28:34.628147 tar[1054]: ./vlan Feb 9 19:28:34.671439 update-ssh-keys[1105]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:28:34.671820 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 9 19:28:34.720848 tar[1054]: ./host-device Feb 9 19:28:34.797889 tar[1054]: ./tuning Feb 9 19:28:34.862616 tar[1054]: ./vrf Feb 9 19:28:34.924361 tar[1054]: ./sbr Feb 9 19:28:34.963663 tar[1054]: ./tap Feb 9 19:28:35.045849 tar[1054]: ./dhcp Feb 9 19:28:35.179851 tar[1054]: ./static Feb 9 19:28:35.196510 systemd[1]: Finished prepare-critools.service. Feb 9 19:28:35.215458 tar[1054]: ./firewall Feb 9 19:28:35.261089 tar[1054]: ./macvlan Feb 9 19:28:35.302620 tar[1054]: ./dummy Feb 9 19:28:35.342955 tar[1054]: ./bridge Feb 9 19:28:35.387063 tar[1054]: ./ipvlan Feb 9 19:28:35.388385 sshd_keygen[1077]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 19:28:35.410798 locksmithd[1097]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 19:28:35.411593 systemd[1]: Finished sshd-keygen.service. Feb 9 19:28:35.413522 systemd[1]: Starting issuegen.service... Feb 9 19:28:35.414946 systemd[1]: Started sshd@0-172.24.4.205:22-172.24.4.1:46046.service. Feb 9 19:28:35.422511 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 19:28:35.422667 systemd[1]: Finished issuegen.service. Feb 9 19:28:35.424439 systemd[1]: Starting systemd-user-sessions.service... Feb 9 19:28:35.432175 systemd[1]: Finished systemd-user-sessions.service. 
Feb 9 19:28:35.434117 systemd[1]: Started getty@tty1.service. Feb 9 19:28:35.435782 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 19:28:35.436403 systemd[1]: Reached target getty.target. Feb 9 19:28:35.444618 tar[1054]: ./portmap Feb 9 19:28:35.482846 tar[1054]: ./host-local Feb 9 19:28:35.531340 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 19:28:35.533557 systemd[1]: Reached target multi-user.target. Feb 9 19:28:35.538874 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 19:28:35.550413 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 19:28:35.550865 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 19:28:35.553313 systemd[1]: Startup finished in 948ms (kernel) + 13.133s (initrd) + 9.230s (userspace) = 23.311s. Feb 9 19:28:36.644954 sshd[1117]: Accepted publickey for core from 172.24.4.1 port 46046 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:28:36.649700 sshd[1117]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:28:36.676568 systemd-logind[1050]: New session 1 of user core. Feb 9 19:28:36.680019 systemd[1]: Created slice user-500.slice. Feb 9 19:28:36.682445 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 19:28:36.702839 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 19:28:36.706054 systemd[1]: Starting user@500.service... Feb 9 19:28:36.713203 (systemd)[1128]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:28:36.840067 systemd[1128]: Queued start job for default target default.target. Feb 9 19:28:36.840627 systemd[1128]: Reached target paths.target. Feb 9 19:28:36.840647 systemd[1128]: Reached target sockets.target. Feb 9 19:28:36.840665 systemd[1128]: Reached target timers.target. Feb 9 19:28:36.840679 systemd[1128]: Reached target basic.target. Feb 9 19:28:36.840748 systemd[1128]: Reached target default.target. 
Feb 9 19:28:36.840777 systemd[1128]: Startup finished in 115ms. Feb 9 19:28:36.840898 systemd[1]: Started user@500.service. Feb 9 19:28:36.842012 systemd[1]: Started session-1.scope. Feb 9 19:28:37.280482 systemd[1]: Started sshd@1-172.24.4.205:22-172.24.4.1:42736.service. Feb 9 19:28:39.047221 sshd[1137]: Accepted publickey for core from 172.24.4.1 port 42736 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:28:39.049997 sshd[1137]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:28:39.060796 systemd-logind[1050]: New session 2 of user core. Feb 9 19:28:39.062265 systemd[1]: Started session-2.scope. Feb 9 19:28:39.841626 systemd[1]: Started sshd@2-172.24.4.205:22-172.24.4.1:42748.service. Feb 9 19:28:40.186818 sshd[1137]: pam_unix(sshd:session): session closed for user core Feb 9 19:28:40.196217 systemd[1]: sshd@1-172.24.4.205:22-172.24.4.1:42736.service: Deactivated successfully. Feb 9 19:28:40.198387 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 19:28:40.199654 systemd-logind[1050]: Session 2 logged out. Waiting for processes to exit. Feb 9 19:28:40.203148 systemd-logind[1050]: Removed session 2. Feb 9 19:28:41.154958 sshd[1142]: Accepted publickey for core from 172.24.4.1 port 42748 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:28:41.157634 sshd[1142]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:28:41.167842 systemd-logind[1050]: New session 3 of user core. Feb 9 19:28:41.168692 systemd[1]: Started session-3.scope. Feb 9 19:28:41.834335 sshd[1142]: pam_unix(sshd:session): session closed for user core Feb 9 19:28:41.837340 systemd[1]: Started sshd@3-172.24.4.205:22-172.24.4.1:42764.service. Feb 9 19:28:41.842391 systemd[1]: sshd@2-172.24.4.205:22-172.24.4.1:42748.service: Deactivated successfully. Feb 9 19:28:41.843663 systemd[1]: session-3.scope: Deactivated successfully. 
Feb 9 19:28:41.846706 systemd-logind[1050]: Session 3 logged out. Waiting for processes to exit. Feb 9 19:28:41.848596 systemd-logind[1050]: Removed session 3. Feb 9 19:28:43.323046 sshd[1148]: Accepted publickey for core from 172.24.4.1 port 42764 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:28:43.327512 sshd[1148]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:28:43.341211 systemd-logind[1050]: New session 4 of user core. Feb 9 19:28:43.342364 systemd[1]: Started session-4.scope. Feb 9 19:28:43.987860 sshd[1148]: pam_unix(sshd:session): session closed for user core Feb 9 19:28:43.994509 systemd[1]: Started sshd@4-172.24.4.205:22-172.24.4.1:42778.service. Feb 9 19:28:43.995617 systemd[1]: sshd@3-172.24.4.205:22-172.24.4.1:42764.service: Deactivated successfully. Feb 9 19:28:43.997265 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 19:28:43.999554 systemd-logind[1050]: Session 4 logged out. Waiting for processes to exit. Feb 9 19:28:44.002471 systemd-logind[1050]: Removed session 4. Feb 9 19:28:45.257514 sshd[1154]: Accepted publickey for core from 172.24.4.1 port 42778 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:28:45.260238 sshd[1154]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:28:45.270879 systemd-logind[1050]: New session 5 of user core. Feb 9 19:28:45.271699 systemd[1]: Started session-5.scope. Feb 9 19:28:45.744343 sudo[1158]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 19:28:45.745517 sudo[1158]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:28:46.399643 systemd[1]: Reloading. 
Feb 9 19:28:46.531065 /usr/lib/systemd/system-generators/torcx-generator[1188]: time="2024-02-09T19:28:46Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:28:46.531097 /usr/lib/systemd/system-generators/torcx-generator[1188]: time="2024-02-09T19:28:46Z" level=info msg="torcx already run" Feb 9 19:28:46.627500 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:28:46.627670 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:28:46.652947 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:28:46.738112 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 19:28:46.750590 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 19:28:46.751359 systemd[1]: Reached target network-online.target. Feb 9 19:28:46.753019 systemd[1]: Started kubelet.service. Feb 9 19:28:46.765257 systemd[1]: Starting coreos-metadata.service... 
Feb 9 19:28:46.810890 coreos-metadata[1243]: Feb 09 19:28:46.810 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 9 19:28:46.829415 coreos-metadata[1243]: Feb 09 19:28:46.829 INFO Fetch successful Feb 9 19:28:46.829415 coreos-metadata[1243]: Feb 09 19:28:46.829 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Feb 9 19:28:46.837563 kubelet[1235]: E0209 19:28:46.837511 1235 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 9 19:28:46.839619 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:28:46.839777 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:28:46.846429 coreos-metadata[1243]: Feb 09 19:28:46.846 INFO Fetch successful Feb 9 19:28:46.846429 coreos-metadata[1243]: Feb 09 19:28:46.846 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Feb 9 19:28:46.860869 coreos-metadata[1243]: Feb 09 19:28:46.860 INFO Fetch successful Feb 9 19:28:46.860869 coreos-metadata[1243]: Feb 09 19:28:46.860 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Feb 9 19:28:46.875883 coreos-metadata[1243]: Feb 09 19:28:46.875 INFO Fetch successful Feb 9 19:28:46.875949 coreos-metadata[1243]: Feb 09 19:28:46.875 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Feb 9 19:28:46.890845 coreos-metadata[1243]: Feb 09 19:28:46.890 INFO Fetch successful Feb 9 19:28:46.905950 systemd[1]: Finished coreos-metadata.service. Feb 9 19:28:47.585405 systemd[1]: Stopped kubelet.service. Feb 9 19:28:47.624904 systemd[1]: Reloading. 
Feb 9 19:28:47.753005 /usr/lib/systemd/system-generators/torcx-generator[1298]: time="2024-02-09T19:28:47Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:28:47.753336 /usr/lib/systemd/system-generators/torcx-generator[1298]: time="2024-02-09T19:28:47Z" level=info msg="torcx already run" Feb 9 19:28:47.826407 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:28:47.826569 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:28:47.853891 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:28:47.953252 systemd[1]: Started kubelet.service. Feb 9 19:28:48.019252 kubelet[1345]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:28:48.019252 kubelet[1345]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 19:28:48.019252 kubelet[1345]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 9 19:28:48.019650 kubelet[1345]: I0209 19:28:48.019310 1345 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:28:48.713226 kubelet[1345]: I0209 19:28:48.713175 1345 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Feb 9 19:28:48.713524 kubelet[1345]: I0209 19:28:48.713497 1345 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:28:48.714233 kubelet[1345]: I0209 19:28:48.714198 1345 server.go:895] "Client rotation is on, will bootstrap in background" Feb 9 19:28:48.718950 kubelet[1345]: I0209 19:28:48.718911 1345 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:28:48.744621 kubelet[1345]: I0209 19:28:48.744572 1345 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 19:28:48.745188 kubelet[1345]: I0209 19:28:48.745149 1345 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:28:48.746353 kubelet[1345]: I0209 19:28:48.746312 1345 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" 
nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 9 19:28:48.746462 kubelet[1345]: I0209 19:28:48.746378 1345 topology_manager.go:138] "Creating topology manager with none policy" Feb 9 19:28:48.746462 kubelet[1345]: I0209 19:28:48.746404 1345 container_manager_linux.go:301] "Creating device plugin manager" Feb 9 19:28:48.746675 kubelet[1345]: I0209 19:28:48.746640 1345 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:28:48.746959 kubelet[1345]: I0209 19:28:48.746934 1345 kubelet.go:393] "Attempting to sync node with API server" Feb 9 19:28:48.747024 kubelet[1345]: I0209 19:28:48.746978 1345 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 
19:28:48.747061 kubelet[1345]: I0209 19:28:48.747024 1345 kubelet.go:309] "Adding apiserver pod source" Feb 9 19:28:48.747061 kubelet[1345]: I0209 19:28:48.747057 1345 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:28:48.747546 kubelet[1345]: E0209 19:28:48.747510 1345 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:48.747622 kubelet[1345]: E0209 19:28:48.747577 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:48.748591 kubelet[1345]: I0209 19:28:48.748577 1345 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:28:48.749003 kubelet[1345]: W0209 19:28:48.748989 1345 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 19:28:48.749714 kubelet[1345]: I0209 19:28:48.749699 1345 server.go:1232] "Started kubelet" Feb 9 19:28:48.749992 kubelet[1345]: I0209 19:28:48.749952 1345 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:28:48.750127 kubelet[1345]: I0209 19:28:48.750114 1345 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 19:28:48.750477 kubelet[1345]: I0209 19:28:48.750462 1345 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 9 19:28:48.752316 kubelet[1345]: I0209 19:28:48.752276 1345 server.go:462] "Adding debug handlers to kubelet server" Feb 9 19:28:48.753958 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 9 19:28:48.754120 kubelet[1345]: I0209 19:28:48.754105 1345 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:28:48.757171 kubelet[1345]: E0209 19:28:48.757133 1345 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:28:48.757238 kubelet[1345]: E0209 19:28:48.757192 1345 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:28:48.764400 kubelet[1345]: E0209 19:28:48.764364 1345 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.205\" not found" Feb 9 19:28:48.764571 kubelet[1345]: I0209 19:28:48.764559 1345 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 9 19:28:48.764804 kubelet[1345]: I0209 19:28:48.764788 1345 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 19:28:48.764917 kubelet[1345]: I0209 19:28:48.764907 1345 reconciler_new.go:29] "Reconciler: start to sync state" Feb 9 19:28:48.784402 kubelet[1345]: W0209 19:28:48.784380 1345 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.24.4.205" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:28:48.784558 kubelet[1345]: E0209 19:28:48.784546 1345 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.205" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:28:48.784662 kubelet[1345]: W0209 19:28:48.784649 1345 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API 
group "" at the cluster scope Feb 9 19:28:48.784768 kubelet[1345]: E0209 19:28:48.784742 1345 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:28:48.784881 kubelet[1345]: W0209 19:28:48.784867 1345 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:28:48.784967 kubelet[1345]: E0209 19:28:48.784956 1345 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:28:48.785117 kubelet[1345]: E0209 19:28:48.785103 1345 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.24.4.205\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 9 19:28:48.785304 kubelet[1345]: E0209 19:28:48.785213 1345 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.205.17b2488143a49050", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", 
Namespace:"", Name:"172.24.4.205", UID:"172.24.4.205", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.205"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 28, 48, 749678672, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 28, 48, 749678672, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.205"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:28:48.789690 kubelet[1345]: E0209 19:28:48.789416 1345 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.205.17b248814416de93", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.205", UID:"172.24.4.205", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.205"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 28, 48, 757169811, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 28, 48, 757169811, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.205"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:28:48.798810 kubelet[1345]: I0209 19:28:48.798037 1345 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:28:48.798810 kubelet[1345]: I0209 19:28:48.798056 1345 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:28:48.798810 kubelet[1345]: I0209 19:28:48.798070 1345 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:28:48.800357 kubelet[1345]: E0209 19:28:48.798835 1345 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.205.17b24881467888ef", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.205", UID:"172.24.4.205", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.205 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.205"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 28, 48, 797124847, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 28, 48, 797124847, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.205"}': 'events is forbidden: User "system:anonymous" 
cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:28:48.800450 kubelet[1345]: E0209 19:28:48.800024 1345 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.205.17b2488146789f48", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.205", UID:"172.24.4.205", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.205 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.205"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 28, 48, 797130568, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 28, 48, 797130568, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.205"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:28:48.801623 kubelet[1345]: E0209 19:28:48.801035 1345 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.205.17b248814678ab9c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.205", UID:"172.24.4.205", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.205 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.205"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 28, 48, 797133724, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 28, 48, 797133724, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.205"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:28:48.803165 kubelet[1345]: I0209 19:28:48.802015 1345 policy_none.go:49] "None policy: Start" Feb 9 19:28:48.803165 kubelet[1345]: I0209 19:28:48.802581 1345 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:28:48.803165 kubelet[1345]: I0209 19:28:48.802619 1345 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:28:48.810055 systemd[1]: Created slice kubepods.slice. Feb 9 19:28:48.817716 systemd[1]: Created slice kubepods-burstable.slice. 
Feb 9 19:28:48.822026 systemd[1]: Created slice kubepods-besteffort.slice. Feb 9 19:28:48.829820 kubelet[1345]: I0209 19:28:48.829791 1345 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:28:48.830479 kubelet[1345]: I0209 19:28:48.830465 1345 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:28:48.831842 kubelet[1345]: E0209 19:28:48.831799 1345 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.205\" not found" Feb 9 19:28:48.836762 kubelet[1345]: E0209 19:28:48.836601 1345 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.205.17b2488148add379", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.205", UID:"172.24.4.205", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.205"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 28, 48, 834171769, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 28, 48, 834171769, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.205"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in 
API group "" in the namespace "default"' (will not retry!) Feb 9 19:28:48.866383 kubelet[1345]: I0209 19:28:48.866325 1345 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.205" Feb 9 19:28:48.868656 kubelet[1345]: E0209 19:28:48.868625 1345 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.205" Feb 9 19:28:48.869674 kubelet[1345]: E0209 19:28:48.869578 1345 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.205.17b24881467888ef", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.205", UID:"172.24.4.205", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.205 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.205"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 28, 48, 797124847, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 28, 48, 866248759, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.205"}': 'events "172.24.4.205.17b24881467888ef" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:28:48.871233 kubelet[1345]: E0209 19:28:48.871147 1345 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.205.17b2488146789f48", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.205", UID:"172.24.4.205", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.205 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.205"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 28, 48, 797130568, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 28, 48, 866270309, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.205"}': 'events "172.24.4.205.17b2488146789f48" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:28:48.872960 kubelet[1345]: E0209 19:28:48.872881 1345 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.205.17b248814678ab9c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.205", UID:"172.24.4.205", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.205 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.205"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 28, 48, 797133724, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 28, 48, 866275679, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.205"}': 'events "172.24.4.205.17b248814678ab9c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:28:48.896997 kubelet[1345]: I0209 19:28:48.896971 1345 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 9 19:28:48.898299 kubelet[1345]: I0209 19:28:48.898241 1345 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 9 19:28:48.898354 kubelet[1345]: I0209 19:28:48.898315 1345 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 9 19:28:48.898354 kubelet[1345]: I0209 19:28:48.898345 1345 kubelet.go:2303] "Starting kubelet main sync loop" Feb 9 19:28:48.898442 kubelet[1345]: E0209 19:28:48.898424 1345 kubelet.go:2327] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 19:28:48.900790 kubelet[1345]: W0209 19:28:48.900758 1345 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:28:48.900863 kubelet[1345]: E0209 19:28:48.900802 1345 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:28:48.988370 kubelet[1345]: E0209 19:28:48.988181 1345 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.24.4.205\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Feb 9 19:28:49.070243 kubelet[1345]: I0209 19:28:49.070174 1345 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.205" Feb 9 19:28:49.072190 kubelet[1345]: E0209 19:28:49.072149 1345 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.205" Feb 9 19:28:49.072799 kubelet[1345]: E0209 19:28:49.072597 1345 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", 
APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.205.17b24881467888ef", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.205", UID:"172.24.4.205", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.205 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.205"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 28, 48, 797124847, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 28, 49, 70091706, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.205"}': 'events "172.24.4.205.17b24881467888ef" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:28:49.074461 kubelet[1345]: E0209 19:28:49.074312 1345 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.205.17b2488146789f48", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.205", UID:"172.24.4.205", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.205 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.205"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 28, 48, 797130568, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 28, 49, 70109159, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.205"}': 'events "172.24.4.205.17b2488146789f48" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:28:49.076537 kubelet[1345]: E0209 19:28:49.076407 1345 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.205.17b248814678ab9c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.205", UID:"172.24.4.205", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.205 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.205"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 28, 48, 797133724, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 28, 49, 70115561, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.205"}': 'events "172.24.4.205.17b248814678ab9c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:28:49.393208 kubelet[1345]: E0209 19:28:49.393152 1345 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.24.4.205\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Feb 9 19:28:49.474384 kubelet[1345]: I0209 19:28:49.474156 1345 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.205" Feb 9 19:28:49.478634 kubelet[1345]: E0209 19:28:49.478043 1345 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.205" Feb 9 19:28:49.478970 kubelet[1345]: E0209 19:28:49.478368 1345 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.205.17b24881467888ef", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.205", UID:"172.24.4.205", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.205 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.205"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 28, 48, 797124847, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 28, 49, 474081840, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.205"}': 'events "172.24.4.205.17b24881467888ef" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:28:49.481317 kubelet[1345]: E0209 19:28:49.481188 1345 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.205.17b2488146789f48", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.205", UID:"172.24.4.205", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.205 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.205"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 28, 48, 797130568, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 28, 49, 474092820, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.205"}': 'events "172.24.4.205.17b2488146789f48" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:28:49.483294 kubelet[1345]: E0209 19:28:49.483185 1345 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.205.17b248814678ab9c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.205", UID:"172.24.4.205", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.205 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.205"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 28, 48, 797133724, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 28, 49, 474099373, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.205"}': 'events "172.24.4.205.17b248814678ab9c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:28:49.718524 kubelet[1345]: I0209 19:28:49.718292 1345 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 9 19:28:49.748496 kubelet[1345]: I0209 19:28:49.748452 1345 apiserver.go:52] "Watching apiserver" Feb 9 19:28:49.749146 kubelet[1345]: E0209 19:28:49.749116 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:49.765271 kubelet[1345]: I0209 19:28:49.765224 1345 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:28:50.164885 kubelet[1345]: E0209 19:28:50.164795 1345 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.24.4.205" not found Feb 9 19:28:50.202590 kubelet[1345]: E0209 19:28:50.202516 1345 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.24.4.205\" not found" node="172.24.4.205" Feb 9 19:28:50.280914 kubelet[1345]: I0209 19:28:50.280870 1345 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.205" Feb 9 19:28:50.290572 kubelet[1345]: I0209 19:28:50.290518 1345 kubelet_node_status.go:73] "Successfully registered node" node="172.24.4.205" Feb 9 19:28:50.318610 kubelet[1345]: I0209 19:28:50.318575 1345 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 9 19:28:50.320411 env[1057]: time="2024-02-09T19:28:50.320174206Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 9 19:28:50.321547 kubelet[1345]: I0209 19:28:50.321504 1345 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 9 19:28:50.344547 kubelet[1345]: I0209 19:28:50.344491 1345 topology_manager.go:215] "Topology Admit Handler" podUID="1ca32581-b64e-4c0a-9723-abe2027d4f8e" podNamespace="kube-system" podName="kube-proxy-785kf" Feb 9 19:28:50.357213 systemd[1]: Created slice kubepods-besteffort-pod1ca32581_b64e_4c0a_9723_abe2027d4f8e.slice. Feb 9 19:28:50.358881 kubelet[1345]: I0209 19:28:50.358806 1345 topology_manager.go:215] "Topology Admit Handler" podUID="f2100bdf-e802-49ac-980d-bfb2abbe7f32" podNamespace="kube-system" podName="cilium-t55vh" Feb 9 19:28:50.375001 kubelet[1345]: I0209 19:28:50.374964 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1ca32581-b64e-4c0a-9723-abe2027d4f8e-kube-proxy\") pod \"kube-proxy-785kf\" (UID: \"1ca32581-b64e-4c0a-9723-abe2027d4f8e\") " pod="kube-system/kube-proxy-785kf" Feb 9 19:28:50.375349 kubelet[1345]: I0209 19:28:50.375323 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f2100bdf-e802-49ac-980d-bfb2abbe7f32-lib-modules\") pod \"cilium-t55vh\" (UID: \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\") " pod="kube-system/cilium-t55vh" Feb 9 19:28:50.375649 kubelet[1345]: I0209 19:28:50.375625 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2100bdf-e802-49ac-980d-bfb2abbe7f32-xtables-lock\") pod \"cilium-t55vh\" (UID: \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\") " pod="kube-system/cilium-t55vh" Feb 9 19:28:50.377087 kubelet[1345]: I0209 19:28:50.377025 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/1ca32581-b64e-4c0a-9723-abe2027d4f8e-lib-modules\") pod \"kube-proxy-785kf\" (UID: \"1ca32581-b64e-4c0a-9723-abe2027d4f8e\") " pod="kube-system/kube-proxy-785kf" Feb 9 19:28:50.377425 kubelet[1345]: I0209 19:28:50.377401 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f2100bdf-e802-49ac-980d-bfb2abbe7f32-cilium-run\") pod \"cilium-t55vh\" (UID: \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\") " pod="kube-system/cilium-t55vh" Feb 9 19:28:50.377825 kubelet[1345]: I0209 19:28:50.377770 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f2100bdf-e802-49ac-980d-bfb2abbe7f32-bpf-maps\") pod \"cilium-t55vh\" (UID: \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\") " pod="kube-system/cilium-t55vh" Feb 9 19:28:50.378642 systemd[1]: Created slice kubepods-burstable-podf2100bdf_e802_49ac_980d_bfb2abbe7f32.slice. 
Feb 9 19:28:50.381138 kubelet[1345]: I0209 19:28:50.381093 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f2100bdf-e802-49ac-980d-bfb2abbe7f32-cilium-cgroup\") pod \"cilium-t55vh\" (UID: \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\") " pod="kube-system/cilium-t55vh" Feb 9 19:28:50.381342 kubelet[1345]: I0209 19:28:50.381202 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f2100bdf-e802-49ac-980d-bfb2abbe7f32-etc-cni-netd\") pod \"cilium-t55vh\" (UID: \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\") " pod="kube-system/cilium-t55vh" Feb 9 19:28:50.381342 kubelet[1345]: I0209 19:28:50.381289 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f2100bdf-e802-49ac-980d-bfb2abbe7f32-cilium-config-path\") pod \"cilium-t55vh\" (UID: \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\") " pod="kube-system/cilium-t55vh" Feb 9 19:28:50.381817 kubelet[1345]: I0209 19:28:50.381374 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clxt5\" (UniqueName: \"kubernetes.io/projected/f2100bdf-e802-49ac-980d-bfb2abbe7f32-kube-api-access-clxt5\") pod \"cilium-t55vh\" (UID: \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\") " pod="kube-system/cilium-t55vh" Feb 9 19:28:50.381817 kubelet[1345]: I0209 19:28:50.381591 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ca32581-b64e-4c0a-9723-abe2027d4f8e-xtables-lock\") pod \"kube-proxy-785kf\" (UID: \"1ca32581-b64e-4c0a-9723-abe2027d4f8e\") " pod="kube-system/kube-proxy-785kf" Feb 9 19:28:50.381817 kubelet[1345]: I0209 19:28:50.381660 1345 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f2100bdf-e802-49ac-980d-bfb2abbe7f32-hostproc\") pod \"cilium-t55vh\" (UID: \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\") " pod="kube-system/cilium-t55vh" Feb 9 19:28:50.381817 kubelet[1345]: I0209 19:28:50.381719 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f2100bdf-e802-49ac-980d-bfb2abbe7f32-cni-path\") pod \"cilium-t55vh\" (UID: \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\") " pod="kube-system/cilium-t55vh" Feb 9 19:28:50.382126 kubelet[1345]: I0209 19:28:50.381844 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqfj9\" (UniqueName: \"kubernetes.io/projected/1ca32581-b64e-4c0a-9723-abe2027d4f8e-kube-api-access-zqfj9\") pod \"kube-proxy-785kf\" (UID: \"1ca32581-b64e-4c0a-9723-abe2027d4f8e\") " pod="kube-system/kube-proxy-785kf" Feb 9 19:28:50.382126 kubelet[1345]: I0209 19:28:50.381905 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f2100bdf-e802-49ac-980d-bfb2abbe7f32-clustermesh-secrets\") pod \"cilium-t55vh\" (UID: \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\") " pod="kube-system/cilium-t55vh" Feb 9 19:28:50.382126 kubelet[1345]: I0209 19:28:50.381966 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f2100bdf-e802-49ac-980d-bfb2abbe7f32-host-proc-sys-net\") pod \"cilium-t55vh\" (UID: \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\") " pod="kube-system/cilium-t55vh" Feb 9 19:28:50.382126 kubelet[1345]: I0209 19:28:50.382024 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/f2100bdf-e802-49ac-980d-bfb2abbe7f32-host-proc-sys-kernel\") pod \"cilium-t55vh\" (UID: \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\") " pod="kube-system/cilium-t55vh" Feb 9 19:28:50.382126 kubelet[1345]: I0209 19:28:50.382081 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f2100bdf-e802-49ac-980d-bfb2abbe7f32-hubble-tls\") pod \"cilium-t55vh\" (UID: \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\") " pod="kube-system/cilium-t55vh" Feb 9 19:28:50.480585 sudo[1158]: pam_unix(sudo:session): session closed for user root Feb 9 19:28:50.677128 env[1057]: time="2024-02-09T19:28:50.676183391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-785kf,Uid:1ca32581-b64e-4c0a-9723-abe2027d4f8e,Namespace:kube-system,Attempt:0,}" Feb 9 19:28:50.691216 env[1057]: time="2024-02-09T19:28:50.691147536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t55vh,Uid:f2100bdf-e802-49ac-980d-bfb2abbe7f32,Namespace:kube-system,Attempt:0,}" Feb 9 19:28:50.703172 sshd[1154]: pam_unix(sshd:session): session closed for user core Feb 9 19:28:50.709860 systemd[1]: sshd@4-172.24.4.205:22-172.24.4.1:42778.service: Deactivated successfully. Feb 9 19:28:50.711677 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 19:28:50.713471 systemd-logind[1050]: Session 5 logged out. Waiting for processes to exit. Feb 9 19:28:50.716701 systemd-logind[1050]: Removed session 5. Feb 9 19:28:50.750340 kubelet[1345]: E0209 19:28:50.750169 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:51.549790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount562513439.mount: Deactivated successfully. 
Feb 9 19:28:51.564548 env[1057]: time="2024-02-09T19:28:51.564478117Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:51.570513 env[1057]: time="2024-02-09T19:28:51.570356222Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:51.574908 env[1057]: time="2024-02-09T19:28:51.574845572Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:51.587379 env[1057]: time="2024-02-09T19:28:51.587249776Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:51.596611 env[1057]: time="2024-02-09T19:28:51.596530472Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:51.604227 env[1057]: time="2024-02-09T19:28:51.604144091Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:51.608033 env[1057]: time="2024-02-09T19:28:51.607914272Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:51.612764 env[1057]: time="2024-02-09T19:28:51.612591855Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:51.683809 env[1057]: time="2024-02-09T19:28:51.682632035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:28:51.683809 env[1057]: time="2024-02-09T19:28:51.682736992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:28:51.683809 env[1057]: time="2024-02-09T19:28:51.682772278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:28:51.683809 env[1057]: time="2024-02-09T19:28:51.682944782Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/555286a0f31737db907ef0d259226aa0a7a1fcec1e95d65536f7e0cd657a62d6 pid=1399 runtime=io.containerd.runc.v2 Feb 9 19:28:51.698801 env[1057]: time="2024-02-09T19:28:51.698613318Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:28:51.698801 env[1057]: time="2024-02-09T19:28:51.698660887Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:28:51.699090 env[1057]: time="2024-02-09T19:28:51.698683359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:28:51.699302 env[1057]: time="2024-02-09T19:28:51.699149263Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/df519ebe444a2580cd6e5ed7db818e66a86dab1dd05f0e2076cf84f1222817f4 pid=1417 runtime=io.containerd.runc.v2 Feb 9 19:28:51.715582 systemd[1]: Started cri-containerd-555286a0f31737db907ef0d259226aa0a7a1fcec1e95d65536f7e0cd657a62d6.scope. Feb 9 19:28:51.745016 systemd[1]: Started cri-containerd-df519ebe444a2580cd6e5ed7db818e66a86dab1dd05f0e2076cf84f1222817f4.scope. Feb 9 19:28:51.750764 kubelet[1345]: E0209 19:28:51.750408 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:51.773529 env[1057]: time="2024-02-09T19:28:51.773463069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t55vh,Uid:f2100bdf-e802-49ac-980d-bfb2abbe7f32,Namespace:kube-system,Attempt:0,} returns sandbox id \"555286a0f31737db907ef0d259226aa0a7a1fcec1e95d65536f7e0cd657a62d6\"" Feb 9 19:28:51.777143 env[1057]: time="2024-02-09T19:28:51.777112042Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 19:28:51.786172 env[1057]: time="2024-02-09T19:28:51.786050727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-785kf,Uid:1ca32581-b64e-4c0a-9723-abe2027d4f8e,Namespace:kube-system,Attempt:0,} returns sandbox id \"df519ebe444a2580cd6e5ed7db818e66a86dab1dd05f0e2076cf84f1222817f4\"" Feb 9 19:28:52.751337 kubelet[1345]: E0209 19:28:52.751292 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:53.752558 kubelet[1345]: E0209 19:28:53.752500 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:54.753139 
kubelet[1345]: E0209 19:28:54.753035 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:55.754065 kubelet[1345]: E0209 19:28:55.754008 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:56.754971 kubelet[1345]: E0209 19:28:56.754907 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:57.755547 kubelet[1345]: E0209 19:28:57.755485 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:58.755910 kubelet[1345]: E0209 19:28:58.755820 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:59.110382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4228923585.mount: Deactivated successfully. 
Feb 9 19:28:59.756445 kubelet[1345]: E0209 19:28:59.756403 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:00.756821 kubelet[1345]: E0209 19:29:00.756765 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:01.756994 kubelet[1345]: E0209 19:29:01.756891 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:02.757504 kubelet[1345]: E0209 19:29:02.757390 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:03.758514 kubelet[1345]: E0209 19:29:03.758386 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:04.020150 env[1057]: time="2024-02-09T19:29:04.019556554Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:04.025044 env[1057]: time="2024-02-09T19:29:04.024987329Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:04.028791 env[1057]: time="2024-02-09T19:29:04.028661210Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:04.030713 env[1057]: time="2024-02-09T19:29:04.030650871Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image 
reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 9 19:29:04.033312 env[1057]: time="2024-02-09T19:29:04.033238324Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\"" Feb 9 19:29:04.037796 env[1057]: time="2024-02-09T19:29:04.037634850Z" level=info msg="CreateContainer within sandbox \"555286a0f31737db907ef0d259226aa0a7a1fcec1e95d65536f7e0cd657a62d6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:29:04.071531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2254211522.mount: Deactivated successfully. Feb 9 19:29:04.083243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1649424080.mount: Deactivated successfully. Feb 9 19:29:04.099693 env[1057]: time="2024-02-09T19:29:04.099612178Z" level=info msg="CreateContainer within sandbox \"555286a0f31737db907ef0d259226aa0a7a1fcec1e95d65536f7e0cd657a62d6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"94efa8a68a28b8ef5523e78a9053328c203a79c2554db45f373d95b3a845bd21\"" Feb 9 19:29:04.101533 env[1057]: time="2024-02-09T19:29:04.101397076Z" level=info msg="StartContainer for \"94efa8a68a28b8ef5523e78a9053328c203a79c2554db45f373d95b3a845bd21\"" Feb 9 19:29:04.133117 systemd[1]: Started cri-containerd-94efa8a68a28b8ef5523e78a9053328c203a79c2554db45f373d95b3a845bd21.scope. Feb 9 19:29:04.171642 env[1057]: time="2024-02-09T19:29:04.171580034Z" level=info msg="StartContainer for \"94efa8a68a28b8ef5523e78a9053328c203a79c2554db45f373d95b3a845bd21\" returns successfully" Feb 9 19:29:04.178917 systemd[1]: cri-containerd-94efa8a68a28b8ef5523e78a9053328c203a79c2554db45f373d95b3a845bd21.scope: Deactivated successfully. 
Feb 9 19:29:04.755915 env[1057]: time="2024-02-09T19:29:04.755800950Z" level=info msg="shim disconnected" id=94efa8a68a28b8ef5523e78a9053328c203a79c2554db45f373d95b3a845bd21 Feb 9 19:29:04.756372 env[1057]: time="2024-02-09T19:29:04.756321556Z" level=warning msg="cleaning up after shim disconnected" id=94efa8a68a28b8ef5523e78a9053328c203a79c2554db45f373d95b3a845bd21 namespace=k8s.io Feb 9 19:29:04.756544 env[1057]: time="2024-02-09T19:29:04.756507515Z" level=info msg="cleaning up dead shim" Feb 9 19:29:04.759531 kubelet[1345]: E0209 19:29:04.759453 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:04.772781 env[1057]: time="2024-02-09T19:29:04.772656453Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:29:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1525 runtime=io.containerd.runc.v2\n" Feb 9 19:29:04.965168 env[1057]: time="2024-02-09T19:29:04.965093137Z" level=info msg="CreateContainer within sandbox \"555286a0f31737db907ef0d259226aa0a7a1fcec1e95d65536f7e0cd657a62d6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:29:05.002326 env[1057]: time="2024-02-09T19:29:05.002245206Z" level=info msg="CreateContainer within sandbox \"555286a0f31737db907ef0d259226aa0a7a1fcec1e95d65536f7e0cd657a62d6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2a9528b9d605751632a241531b8373945b3a8236299a5c425f5be39b4d052a15\"" Feb 9 19:29:05.004195 env[1057]: time="2024-02-09T19:29:05.004141673Z" level=info msg="StartContainer for \"2a9528b9d605751632a241531b8373945b3a8236299a5c425f5be39b4d052a15\"" Feb 9 19:29:05.045518 systemd[1]: Started cri-containerd-2a9528b9d605751632a241531b8373945b3a8236299a5c425f5be39b4d052a15.scope. 
Feb 9 19:29:05.067890 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94efa8a68a28b8ef5523e78a9053328c203a79c2554db45f373d95b3a845bd21-rootfs.mount: Deactivated successfully. Feb 9 19:29:05.111563 env[1057]: time="2024-02-09T19:29:05.111517927Z" level=info msg="StartContainer for \"2a9528b9d605751632a241531b8373945b3a8236299a5c425f5be39b4d052a15\" returns successfully" Feb 9 19:29:05.115354 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:29:05.115588 systemd[1]: Stopped systemd-sysctl.service. Feb 9 19:29:05.115789 systemd[1]: Stopping systemd-sysctl.service... Feb 9 19:29:05.117477 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:29:05.119925 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 19:29:05.130808 systemd[1]: cri-containerd-2a9528b9d605751632a241531b8373945b3a8236299a5c425f5be39b4d052a15.scope: Deactivated successfully. Feb 9 19:29:05.131833 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:29:05.150964 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a9528b9d605751632a241531b8373945b3a8236299a5c425f5be39b4d052a15-rootfs.mount: Deactivated successfully. 
Feb 9 19:29:05.173943 env[1057]: time="2024-02-09T19:29:05.173892992Z" level=info msg="shim disconnected" id=2a9528b9d605751632a241531b8373945b3a8236299a5c425f5be39b4d052a15 Feb 9 19:29:05.174104 env[1057]: time="2024-02-09T19:29:05.173944578Z" level=warning msg="cleaning up after shim disconnected" id=2a9528b9d605751632a241531b8373945b3a8236299a5c425f5be39b4d052a15 namespace=k8s.io Feb 9 19:29:05.174104 env[1057]: time="2024-02-09T19:29:05.173955529Z" level=info msg="cleaning up dead shim" Feb 9 19:29:05.182505 env[1057]: time="2024-02-09T19:29:05.182447596Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:29:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1593 runtime=io.containerd.runc.v2\n" Feb 9 19:29:05.760399 kubelet[1345]: E0209 19:29:05.760359 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:05.968679 env[1057]: time="2024-02-09T19:29:05.968612127Z" level=info msg="CreateContainer within sandbox \"555286a0f31737db907ef0d259226aa0a7a1fcec1e95d65536f7e0cd657a62d6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 19:29:06.006370 env[1057]: time="2024-02-09T19:29:06.006320734Z" level=info msg="CreateContainer within sandbox \"555286a0f31737db907ef0d259226aa0a7a1fcec1e95d65536f7e0cd657a62d6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8626fc8b9c7c1c9484c37cae441def3c69eb14e7586e9214198d6d35a7f7c6e5\"" Feb 9 19:29:06.007383 env[1057]: time="2024-02-09T19:29:06.007355938Z" level=info msg="StartContainer for \"8626fc8b9c7c1c9484c37cae441def3c69eb14e7586e9214198d6d35a7f7c6e5\"" Feb 9 19:29:06.048945 systemd[1]: Started cri-containerd-8626fc8b9c7c1c9484c37cae441def3c69eb14e7586e9214198d6d35a7f7c6e5.scope. Feb 9 19:29:06.103029 systemd[1]: cri-containerd-8626fc8b9c7c1c9484c37cae441def3c69eb14e7586e9214198d6d35a7f7c6e5.scope: Deactivated successfully. 
Feb 9 19:29:06.111548 env[1057]: time="2024-02-09T19:29:06.111443463Z" level=info msg="StartContainer for \"8626fc8b9c7c1c9484c37cae441def3c69eb14e7586e9214198d6d35a7f7c6e5\" returns successfully" Feb 9 19:29:06.142998 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8626fc8b9c7c1c9484c37cae441def3c69eb14e7586e9214198d6d35a7f7c6e5-rootfs.mount: Deactivated successfully. Feb 9 19:29:06.331656 env[1057]: time="2024-02-09T19:29:06.331459907Z" level=info msg="shim disconnected" id=8626fc8b9c7c1c9484c37cae441def3c69eb14e7586e9214198d6d35a7f7c6e5 Feb 9 19:29:06.332464 env[1057]: time="2024-02-09T19:29:06.332417429Z" level=warning msg="cleaning up after shim disconnected" id=8626fc8b9c7c1c9484c37cae441def3c69eb14e7586e9214198d6d35a7f7c6e5 namespace=k8s.io Feb 9 19:29:06.332714 env[1057]: time="2024-02-09T19:29:06.332673215Z" level=info msg="cleaning up dead shim" Feb 9 19:29:06.359499 env[1057]: time="2024-02-09T19:29:06.359412451Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:29:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1648 runtime=io.containerd.runc.v2\n" Feb 9 19:29:06.444846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1164907200.mount: Deactivated successfully. Feb 9 19:29:06.760848 kubelet[1345]: E0209 19:29:06.760790 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:06.975927 env[1057]: time="2024-02-09T19:29:06.975867783Z" level=info msg="CreateContainer within sandbox \"555286a0f31737db907ef0d259226aa0a7a1fcec1e95d65536f7e0cd657a62d6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 19:29:07.001294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3886115490.mount: Deactivated successfully. 
Feb 9 19:29:07.027742 env[1057]: time="2024-02-09T19:29:07.027458815Z" level=info msg="CreateContainer within sandbox \"555286a0f31737db907ef0d259226aa0a7a1fcec1e95d65536f7e0cd657a62d6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1916217bc6acdc477a0e61df394184ddc915324df85a41c1090bd1c41af46882\"" Feb 9 19:29:07.028992 env[1057]: time="2024-02-09T19:29:07.028944343Z" level=info msg="StartContainer for \"1916217bc6acdc477a0e61df394184ddc915324df85a41c1090bd1c41af46882\"" Feb 9 19:29:07.059166 systemd[1]: Started cri-containerd-1916217bc6acdc477a0e61df394184ddc915324df85a41c1090bd1c41af46882.scope. Feb 9 19:29:07.102912 systemd[1]: cri-containerd-1916217bc6acdc477a0e61df394184ddc915324df85a41c1090bd1c41af46882.scope: Deactivated successfully. Feb 9 19:29:07.104962 env[1057]: time="2024-02-09T19:29:07.104900637Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2100bdf_e802_49ac_980d_bfb2abbe7f32.slice/cri-containerd-1916217bc6acdc477a0e61df394184ddc915324df85a41c1090bd1c41af46882.scope/memory.events\": no such file or directory" Feb 9 19:29:07.119632 env[1057]: time="2024-02-09T19:29:07.119536886Z" level=info msg="StartContainer for \"1916217bc6acdc477a0e61df394184ddc915324df85a41c1090bd1c41af46882\" returns successfully" Feb 9 19:29:07.136615 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1916217bc6acdc477a0e61df394184ddc915324df85a41c1090bd1c41af46882-rootfs.mount: Deactivated successfully. 
Feb 9 19:29:07.415258 env[1057]: time="2024-02-09T19:29:07.415164067Z" level=info msg="shim disconnected" id=1916217bc6acdc477a0e61df394184ddc915324df85a41c1090bd1c41af46882 Feb 9 19:29:07.415621 env[1057]: time="2024-02-09T19:29:07.415262797Z" level=warning msg="cleaning up after shim disconnected" id=1916217bc6acdc477a0e61df394184ddc915324df85a41c1090bd1c41af46882 namespace=k8s.io Feb 9 19:29:07.415621 env[1057]: time="2024-02-09T19:29:07.415289575Z" level=info msg="cleaning up dead shim" Feb 9 19:29:07.437438 env[1057]: time="2024-02-09T19:29:07.437314201Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:29:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1706 runtime=io.containerd.runc.v2\n" Feb 9 19:29:07.679591 env[1057]: time="2024-02-09T19:29:07.679294344Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:07.682014 env[1057]: time="2024-02-09T19:29:07.681931972Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:07.684086 env[1057]: time="2024-02-09T19:29:07.684034374Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:07.686019 env[1057]: time="2024-02-09T19:29:07.685954834Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:3898a1671ae42be1cd3c2e777549bc7b5b306b8da3a224b747365f6679fb902a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:07.686798 env[1057]: time="2024-02-09T19:29:07.686766525Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\" returns image reference 
\"sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f\"" Feb 9 19:29:07.689835 env[1057]: time="2024-02-09T19:29:07.689799152Z" level=info msg="CreateContainer within sandbox \"df519ebe444a2580cd6e5ed7db818e66a86dab1dd05f0e2076cf84f1222817f4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 19:29:07.707783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1713232236.mount: Deactivated successfully. Feb 9 19:29:07.723892 env[1057]: time="2024-02-09T19:29:07.723856472Z" level=info msg="CreateContainer within sandbox \"df519ebe444a2580cd6e5ed7db818e66a86dab1dd05f0e2076cf84f1222817f4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0a2c1dad41bd411c5b382b991d4603ea9edcb7023c9b8cfb95a191172e15fee1\"" Feb 9 19:29:07.725049 env[1057]: time="2024-02-09T19:29:07.725028940Z" level=info msg="StartContainer for \"0a2c1dad41bd411c5b382b991d4603ea9edcb7023c9b8cfb95a191172e15fee1\"" Feb 9 19:29:07.753550 systemd[1]: Started cri-containerd-0a2c1dad41bd411c5b382b991d4603ea9edcb7023c9b8cfb95a191172e15fee1.scope. 
Feb 9 19:29:07.761931 kubelet[1345]: E0209 19:29:07.761900 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:07.810259 env[1057]: time="2024-02-09T19:29:07.810217311Z" level=info msg="StartContainer for \"0a2c1dad41bd411c5b382b991d4603ea9edcb7023c9b8cfb95a191172e15fee1\" returns successfully" Feb 9 19:29:08.004608 env[1057]: time="2024-02-09T19:29:08.003482815Z" level=info msg="CreateContainer within sandbox \"555286a0f31737db907ef0d259226aa0a7a1fcec1e95d65536f7e0cd657a62d6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 19:29:08.058316 env[1057]: time="2024-02-09T19:29:08.058235720Z" level=info msg="CreateContainer within sandbox \"555286a0f31737db907ef0d259226aa0a7a1fcec1e95d65536f7e0cd657a62d6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f32920c5b107f387ed0a678d5d5ab4575e2d660272d64034610aad40be740dad\"" Feb 9 19:29:08.060342 env[1057]: time="2024-02-09T19:29:08.060223068Z" level=info msg="StartContainer for \"f32920c5b107f387ed0a678d5d5ab4575e2d660272d64034610aad40be740dad\"" Feb 9 19:29:08.067958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3110659958.mount: Deactivated successfully. Feb 9 19:29:08.105286 systemd[1]: run-containerd-runc-k8s.io-f32920c5b107f387ed0a678d5d5ab4575e2d660272d64034610aad40be740dad-runc.z8ms0r.mount: Deactivated successfully. Feb 9 19:29:08.112449 systemd[1]: Started cri-containerd-f32920c5b107f387ed0a678d5d5ab4575e2d660272d64034610aad40be740dad.scope. 
Feb 9 19:29:08.164993 env[1057]: time="2024-02-09T19:29:08.164906711Z" level=info msg="StartContainer for \"f32920c5b107f387ed0a678d5d5ab4575e2d660272d64034610aad40be740dad\" returns successfully" Feb 9 19:29:08.283315 kubelet[1345]: I0209 19:29:08.283185 1345 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 19:29:08.713918 kernel: Initializing XFRM netlink socket Feb 9 19:29:08.748910 kubelet[1345]: E0209 19:29:08.748700 1345 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:08.763466 kubelet[1345]: E0209 19:29:08.763319 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:09.068414 kubelet[1345]: I0209 19:29:09.067870 1345 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-785kf" podStartSLOduration=3.168066573 podCreationTimestamp="2024-02-09 19:28:50 +0000 UTC" firstStartedPulling="2024-02-09 19:28:51.787402953 +0000 UTC m=+3.830445811" lastFinishedPulling="2024-02-09 19:29:07.687118055 +0000 UTC m=+19.730160963" observedRunningTime="2024-02-09 19:29:08.085454373 +0000 UTC m=+20.128497241" watchObservedRunningTime="2024-02-09 19:29:09.067781725 +0000 UTC m=+21.110824633" Feb 9 19:29:09.764156 kubelet[1345]: E0209 19:29:09.764101 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:10.698907 systemd-networkd[973]: cilium_host: Link UP Feb 9 19:29:10.701028 systemd-networkd[973]: cilium_net: Link UP Feb 9 19:29:10.703130 systemd-networkd[973]: cilium_net: Gained carrier Feb 9 19:29:10.705600 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 9 19:29:10.705711 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 19:29:10.708318 systemd-networkd[973]: cilium_host: Gained carrier Feb 9 19:29:10.767156 
kubelet[1345]: E0209 19:29:10.766141 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:10.833302 systemd-networkd[973]: cilium_vxlan: Link UP Feb 9 19:29:10.833316 systemd-networkd[973]: cilium_vxlan: Gained carrier Feb 9 19:29:11.111759 kernel: NET: Registered PF_ALG protocol family Feb 9 19:29:11.112965 systemd-networkd[973]: cilium_host: Gained IPv6LL Feb 9 19:29:11.385040 systemd-networkd[973]: cilium_net: Gained IPv6LL Feb 9 19:29:11.766473 kubelet[1345]: E0209 19:29:11.766396 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:11.896869 systemd-networkd[973]: cilium_vxlan: Gained IPv6LL Feb 9 19:29:11.962616 systemd-networkd[973]: lxc_health: Link UP Feb 9 19:29:11.975508 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 19:29:11.976086 systemd-networkd[973]: lxc_health: Gained carrier Feb 9 19:29:12.311915 kubelet[1345]: I0209 19:29:12.311851 1345 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-t55vh" podStartSLOduration=10.056346134 podCreationTimestamp="2024-02-09 19:28:50 +0000 UTC" firstStartedPulling="2024-02-09 19:28:51.776379779 +0000 UTC m=+3.819422647" lastFinishedPulling="2024-02-09 19:29:04.031690521 +0000 UTC m=+16.074733429" observedRunningTime="2024-02-09 19:29:09.069297668 +0000 UTC m=+21.112340566" watchObservedRunningTime="2024-02-09 19:29:12.311656916 +0000 UTC m=+24.354699824" Feb 9 19:29:12.767136 kubelet[1345]: E0209 19:29:12.767089 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:13.177140 systemd-networkd[973]: lxc_health: Gained IPv6LL Feb 9 19:29:13.770296 kubelet[1345]: E0209 19:29:13.770229 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 
19:29:14.770822 kubelet[1345]: E0209 19:29:14.770782 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:15.772091 kubelet[1345]: E0209 19:29:15.772029 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:16.772703 kubelet[1345]: E0209 19:29:16.772656 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:17.773910 kubelet[1345]: E0209 19:29:17.773763 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:18.127910 kubelet[1345]: I0209 19:29:18.127855 1345 topology_manager.go:215] "Topology Admit Handler" podUID="320d3ce1-4c3d-4ea6-9cf5-0de77144a65a" podNamespace="default" podName="nginx-deployment-6d5f899847-ph2bp" Feb 9 19:29:18.139622 systemd[1]: Created slice kubepods-besteffort-pod320d3ce1_4c3d_4ea6_9cf5_0de77144a65a.slice. 
Feb 9 19:29:18.288372 kubelet[1345]: I0209 19:29:18.288324 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q9jp\" (UniqueName: \"kubernetes.io/projected/320d3ce1-4c3d-4ea6-9cf5-0de77144a65a-kube-api-access-7q9jp\") pod \"nginx-deployment-6d5f899847-ph2bp\" (UID: \"320d3ce1-4c3d-4ea6-9cf5-0de77144a65a\") " pod="default/nginx-deployment-6d5f899847-ph2bp" Feb 9 19:29:18.448521 env[1057]: time="2024-02-09T19:29:18.447792549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-ph2bp,Uid:320d3ce1-4c3d-4ea6-9cf5-0de77144a65a,Namespace:default,Attempt:0,}" Feb 9 19:29:18.533787 systemd-networkd[973]: lxcccfa91ef05aa: Link UP Feb 9 19:29:18.541796 kernel: eth0: renamed from tmpe64a2 Feb 9 19:29:18.551927 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:29:18.552086 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcccfa91ef05aa: link becomes ready Feb 9 19:29:18.552225 systemd-networkd[973]: lxcccfa91ef05aa: Gained carrier Feb 9 19:29:18.774902 kubelet[1345]: E0209 19:29:18.774707 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:18.867952 env[1057]: time="2024-02-09T19:29:18.867812985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:29:18.868298 env[1057]: time="2024-02-09T19:29:18.868211592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:29:18.868531 env[1057]: time="2024-02-09T19:29:18.868467596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:29:18.869253 env[1057]: time="2024-02-09T19:29:18.869202055Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e64a249d540a2f7648a9c82a12db16813567922613862fb3782e5e58733dbeae pid=2389 runtime=io.containerd.runc.v2 Feb 9 19:29:18.906395 systemd[1]: Started cri-containerd-e64a249d540a2f7648a9c82a12db16813567922613862fb3782e5e58733dbeae.scope. Feb 9 19:29:18.969982 env[1057]: time="2024-02-09T19:29:18.969931963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-ph2bp,Uid:320d3ce1-4c3d-4ea6-9cf5-0de77144a65a,Namespace:default,Attempt:0,} returns sandbox id \"e64a249d540a2f7648a9c82a12db16813567922613862fb3782e5e58733dbeae\"" Feb 9 19:29:18.972288 env[1057]: time="2024-02-09T19:29:18.972237618Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 19:29:19.243788 update_engine[1051]: I0209 19:29:19.242813 1051 update_attempter.cc:509] Updating boot flags... 
Feb 9 19:29:19.775393 kubelet[1345]: E0209 19:29:19.775282 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:19.833464 systemd-networkd[973]: lxcccfa91ef05aa: Gained IPv6LL Feb 9 19:29:20.242826 kubelet[1345]: I0209 19:29:20.242531 1345 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 9 19:29:20.776203 kubelet[1345]: E0209 19:29:20.776145 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:21.777346 kubelet[1345]: E0209 19:29:21.777274 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:22.778429 kubelet[1345]: E0209 19:29:22.778383 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:23.262327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3087201698.mount: Deactivated successfully. 
Feb 9 19:29:23.779318 kubelet[1345]: E0209 19:29:23.779241 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:24.737447 env[1057]: time="2024-02-09T19:29:24.737368080Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:24.740714 env[1057]: time="2024-02-09T19:29:24.740660076Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:24.745583 env[1057]: time="2024-02-09T19:29:24.745532839Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:24.749760 env[1057]: time="2024-02-09T19:29:24.749609753Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:24.751967 env[1057]: time="2024-02-09T19:29:24.751860595Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 9 19:29:24.757708 env[1057]: time="2024-02-09T19:29:24.757646655Z" level=info msg="CreateContainer within sandbox \"e64a249d540a2f7648a9c82a12db16813567922613862fb3782e5e58733dbeae\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 9 19:29:24.787420 kubelet[1345]: E0209 19:29:24.787295 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:24.790256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2953336089.mount: Deactivated 
successfully. Feb 9 19:29:24.804936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2912718189.mount: Deactivated successfully. Feb 9 19:29:24.809134 env[1057]: time="2024-02-09T19:29:24.809057251Z" level=info msg="CreateContainer within sandbox \"e64a249d540a2f7648a9c82a12db16813567922613862fb3782e5e58733dbeae\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"0ab0a28107a930eaa9e285e6aaad1df219596709b6158d47cda2673f77c67254\"" Feb 9 19:29:24.810817 env[1057]: time="2024-02-09T19:29:24.810701396Z" level=info msg="StartContainer for \"0ab0a28107a930eaa9e285e6aaad1df219596709b6158d47cda2673f77c67254\"" Feb 9 19:29:24.861007 systemd[1]: Started cri-containerd-0ab0a28107a930eaa9e285e6aaad1df219596709b6158d47cda2673f77c67254.scope. Feb 9 19:29:25.157257 env[1057]: time="2024-02-09T19:29:25.157102294Z" level=info msg="StartContainer for \"0ab0a28107a930eaa9e285e6aaad1df219596709b6158d47cda2673f77c67254\" returns successfully" Feb 9 19:29:25.787968 kubelet[1345]: E0209 19:29:25.787909 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:26.186396 kubelet[1345]: I0209 19:29:26.186331 1345 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-ph2bp" podStartSLOduration=2.405340191 podCreationTimestamp="2024-02-09 19:29:18 +0000 UTC" firstStartedPulling="2024-02-09 19:29:18.971549185 +0000 UTC m=+31.014592043" lastFinishedPulling="2024-02-09 19:29:24.752461632 +0000 UTC m=+36.795504540" observedRunningTime="2024-02-09 19:29:26.185213476 +0000 UTC m=+38.228256565" watchObservedRunningTime="2024-02-09 19:29:26.186252688 +0000 UTC m=+38.229295597" Feb 9 19:29:26.789558 kubelet[1345]: E0209 19:29:26.789499 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:27.791227 kubelet[1345]: E0209 19:29:27.791154 1345 file_linux.go:61] "Unable 
to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:28.747607 kubelet[1345]: E0209 19:29:28.747548 1345 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:28.792434 kubelet[1345]: E0209 19:29:28.792356 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:29.793611 kubelet[1345]: E0209 19:29:29.793547 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:30.794778 kubelet[1345]: E0209 19:29:30.794676 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:31.795130 kubelet[1345]: E0209 19:29:31.795058 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:32.795934 kubelet[1345]: E0209 19:29:32.795815 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:33.455260 kubelet[1345]: I0209 19:29:33.455132 1345 topology_manager.go:215] "Topology Admit Handler" podUID="10f4e623-4ac9-4f93-bfee-322c4908e163" podNamespace="default" podName="nfs-server-provisioner-0" Feb 9 19:29:33.467006 systemd[1]: Created slice kubepods-besteffort-pod10f4e623_4ac9_4f93_bfee_322c4908e163.slice. 
Feb 9 19:29:33.495748 kubelet[1345]: I0209 19:29:33.495694 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/10f4e623-4ac9-4f93-bfee-322c4908e163-data\") pod \"nfs-server-provisioner-0\" (UID: \"10f4e623-4ac9-4f93-bfee-322c4908e163\") " pod="default/nfs-server-provisioner-0" Feb 9 19:29:33.495960 kubelet[1345]: I0209 19:29:33.495948 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb46k\" (UniqueName: \"kubernetes.io/projected/10f4e623-4ac9-4f93-bfee-322c4908e163-kube-api-access-tb46k\") pod \"nfs-server-provisioner-0\" (UID: \"10f4e623-4ac9-4f93-bfee-322c4908e163\") " pod="default/nfs-server-provisioner-0" Feb 9 19:29:33.774351 env[1057]: time="2024-02-09T19:29:33.773648751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:10f4e623-4ac9-4f93-bfee-322c4908e163,Namespace:default,Attempt:0,}" Feb 9 19:29:33.796947 kubelet[1345]: E0209 19:29:33.796902 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:33.858218 systemd-networkd[973]: lxc468e039b78a7: Link UP Feb 9 19:29:33.875813 kernel: eth0: renamed from tmp52b46 Feb 9 19:29:33.883276 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:29:33.888807 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc468e039b78a7: link becomes ready Feb 9 19:29:33.889264 systemd-networkd[973]: lxc468e039b78a7: Gained carrier Feb 9 19:29:34.242069 env[1057]: time="2024-02-09T19:29:34.241926596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:29:34.242426 env[1057]: time="2024-02-09T19:29:34.242086464Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:29:34.242426 env[1057]: time="2024-02-09T19:29:34.242122943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:29:34.242790 env[1057]: time="2024-02-09T19:29:34.242684150Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/52b46b935e7ba4762b0e019f473a824fb3fa982ec4daf25104872a461328eadd pid=2522 runtime=io.containerd.runc.v2 Feb 9 19:29:34.271716 systemd[1]: Started cri-containerd-52b46b935e7ba4762b0e019f473a824fb3fa982ec4daf25104872a461328eadd.scope. Feb 9 19:29:34.323810 env[1057]: time="2024-02-09T19:29:34.323751634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:10f4e623-4ac9-4f93-bfee-322c4908e163,Namespace:default,Attempt:0,} returns sandbox id \"52b46b935e7ba4762b0e019f473a824fb3fa982ec4daf25104872a461328eadd\"" Feb 9 19:29:34.325479 env[1057]: time="2024-02-09T19:29:34.325455915Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 9 19:29:34.622957 systemd[1]: run-containerd-runc-k8s.io-52b46b935e7ba4762b0e019f473a824fb3fa982ec4daf25104872a461328eadd-runc.m9lRsR.mount: Deactivated successfully. 
Feb 9 19:29:34.798702 kubelet[1345]: E0209 19:29:34.798603 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:35.799351 kubelet[1345]: E0209 19:29:35.799308 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:35.898010 systemd-networkd[973]: lxc468e039b78a7: Gained IPv6LL Feb 9 19:29:36.800392 kubelet[1345]: E0209 19:29:36.800277 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:37.800872 kubelet[1345]: E0209 19:29:37.800828 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:38.252510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1921227827.mount: Deactivated successfully. Feb 9 19:29:38.801807 kubelet[1345]: E0209 19:29:38.801759 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:39.802224 kubelet[1345]: E0209 19:29:39.802177 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:40.803107 kubelet[1345]: E0209 19:29:40.803048 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:41.188963 env[1057]: time="2024-02-09T19:29:41.188814789Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:41.193436 env[1057]: time="2024-02-09T19:29:41.193366589Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
Feb 9 19:29:41.197452 env[1057]: time="2024-02-09T19:29:41.197382929Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:41.201821 env[1057]: time="2024-02-09T19:29:41.201702424Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:41.204159 env[1057]: time="2024-02-09T19:29:41.204059601Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 9 19:29:41.211218 env[1057]: time="2024-02-09T19:29:41.211134519Z" level=info msg="CreateContainer within sandbox \"52b46b935e7ba4762b0e019f473a824fb3fa982ec4daf25104872a461328eadd\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 9 19:29:41.232691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4170106888.mount: Deactivated successfully. Feb 9 19:29:41.249315 env[1057]: time="2024-02-09T19:29:41.249223141Z" level=info msg="CreateContainer within sandbox \"52b46b935e7ba4762b0e019f473a824fb3fa982ec4daf25104872a461328eadd\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"a8bfd338708a468cd4f8265e5ff2f42630d7e4347886c681c194664ff0ea96a1\"" Feb 9 19:29:41.250788 env[1057]: time="2024-02-09T19:29:41.250680575Z" level=info msg="StartContainer for \"a8bfd338708a468cd4f8265e5ff2f42630d7e4347886c681c194664ff0ea96a1\"" Feb 9 19:29:41.297798 systemd[1]: Started cri-containerd-a8bfd338708a468cd4f8265e5ff2f42630d7e4347886c681c194664ff0ea96a1.scope. 
Feb 9 19:29:41.342244 env[1057]: time="2024-02-09T19:29:41.342172832Z" level=info msg="StartContainer for \"a8bfd338708a468cd4f8265e5ff2f42630d7e4347886c681c194664ff0ea96a1\" returns successfully" Feb 9 19:29:41.803958 kubelet[1345]: E0209 19:29:41.803874 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:42.804453 kubelet[1345]: E0209 19:29:42.804281 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:43.805104 kubelet[1345]: E0209 19:29:43.805039 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:44.806596 kubelet[1345]: E0209 19:29:44.806514 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:45.808081 kubelet[1345]: E0209 19:29:45.808016 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:46.810019 kubelet[1345]: E0209 19:29:46.809941 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:47.811186 kubelet[1345]: E0209 19:29:47.811131 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:48.748240 kubelet[1345]: E0209 19:29:48.748147 1345 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:48.812287 kubelet[1345]: E0209 19:29:48.812228 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:49.813418 kubelet[1345]: E0209 19:29:49.813389 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 
19:29:50.814327 kubelet[1345]: E0209 19:29:50.814276 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:51.348302 kubelet[1345]: I0209 19:29:51.348155 1345 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=11.468720197 podCreationTimestamp="2024-02-09 19:29:33 +0000 UTC" firstStartedPulling="2024-02-09 19:29:34.325232067 +0000 UTC m=+46.368274935" lastFinishedPulling="2024-02-09 19:29:41.20455634 +0000 UTC m=+53.247599248" observedRunningTime="2024-02-09 19:29:42.333292737 +0000 UTC m=+54.376335645" watchObservedRunningTime="2024-02-09 19:29:51.34804451 +0000 UTC m=+63.391087418" Feb 9 19:29:51.348938 kubelet[1345]: I0209 19:29:51.348526 1345 topology_manager.go:215] "Topology Admit Handler" podUID="ababd878-b06b-4859-ae79-8d2ebe95bb6b" podNamespace="default" podName="test-pod-1" Feb 9 19:29:51.364129 systemd[1]: Created slice kubepods-besteffort-podababd878_b06b_4859_ae79_8d2ebe95bb6b.slice. Feb 9 19:29:51.516042 kubelet[1345]: I0209 19:29:51.515973 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs9l2\" (UniqueName: \"kubernetes.io/projected/ababd878-b06b-4859-ae79-8d2ebe95bb6b-kube-api-access-qs9l2\") pod \"test-pod-1\" (UID: \"ababd878-b06b-4859-ae79-8d2ebe95bb6b\") " pod="default/test-pod-1" Feb 9 19:29:51.516508 kubelet[1345]: I0209 19:29:51.516454 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6ec82d9f-d315-40c3-b5d7-987b807209eb\" (UniqueName: \"kubernetes.io/nfs/ababd878-b06b-4859-ae79-8d2ebe95bb6b-pvc-6ec82d9f-d315-40c3-b5d7-987b807209eb\") pod \"test-pod-1\" (UID: \"ababd878-b06b-4859-ae79-8d2ebe95bb6b\") " pod="default/test-pod-1" Feb 9 19:29:51.707232 kernel: FS-Cache: Loaded Feb 9 19:29:51.767273 kernel: RPC: Registered named UNIX socket transport module. 
Feb 9 19:29:51.767554 kernel: RPC: Registered udp transport module. Feb 9 19:29:51.767612 kernel: RPC: Registered tcp transport module. Feb 9 19:29:51.767958 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 9 19:29:51.815826 kubelet[1345]: E0209 19:29:51.815761 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:51.830766 kernel: FS-Cache: Netfs 'nfs' registered for caching Feb 9 19:29:52.069299 kernel: NFS: Registering the id_resolver key type Feb 9 19:29:52.069987 kernel: Key type id_resolver registered Feb 9 19:29:52.070039 kernel: Key type id_legacy registered Feb 9 19:29:52.131831 nfsidmap[2696]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' Feb 9 19:29:52.140014 nfsidmap[2697]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' Feb 9 19:29:52.273220 env[1057]: time="2024-02-09T19:29:52.272440598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:ababd878-b06b-4859-ae79-8d2ebe95bb6b,Namespace:default,Attempt:0,}" Feb 9 19:29:52.341640 systemd-networkd[973]: lxc3230afafcc9e: Link UP Feb 9 19:29:52.351044 kernel: eth0: renamed from tmp3bebe Feb 9 19:29:52.364796 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:29:52.364929 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc3230afafcc9e: link becomes ready Feb 9 19:29:52.365202 systemd-networkd[973]: lxc3230afafcc9e: Gained carrier Feb 9 19:29:52.685576 env[1057]: time="2024-02-09T19:29:52.685279215Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:29:52.685576 env[1057]: time="2024-02-09T19:29:52.685343064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:29:52.685576 env[1057]: time="2024-02-09T19:29:52.685368352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:29:52.686060 env[1057]: time="2024-02-09T19:29:52.686007268Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3bebe51b1399f62b6aa1d3104035b5c34df9928fa20b4012621b392c3d3f2311 pid=2721 runtime=io.containerd.runc.v2 Feb 9 19:29:52.719060 systemd[1]: Started cri-containerd-3bebe51b1399f62b6aa1d3104035b5c34df9928fa20b4012621b392c3d3f2311.scope. Feb 9 19:29:52.767377 env[1057]: time="2024-02-09T19:29:52.767328792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:ababd878-b06b-4859-ae79-8d2ebe95bb6b,Namespace:default,Attempt:0,} returns sandbox id \"3bebe51b1399f62b6aa1d3104035b5c34df9928fa20b4012621b392c3d3f2311\"" Feb 9 19:29:52.769824 env[1057]: time="2024-02-09T19:29:52.769702667Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 19:29:52.818042 kubelet[1345]: E0209 19:29:52.817894 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:53.209422 env[1057]: time="2024-02-09T19:29:53.209275645Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:53.212687 env[1057]: time="2024-02-09T19:29:53.212616431Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:53.216973 env[1057]: time="2024-02-09T19:29:53.216910884Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:53.221329 env[1057]: time="2024-02-09T19:29:53.221271290Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:53.224868 env[1057]: time="2024-02-09T19:29:53.224789679Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 9 19:29:53.230571 env[1057]: time="2024-02-09T19:29:53.230498661Z" level=info msg="CreateContainer within sandbox \"3bebe51b1399f62b6aa1d3104035b5c34df9928fa20b4012621b392c3d3f2311\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 9 19:29:53.258117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1873859118.mount: Deactivated successfully. Feb 9 19:29:53.277910 env[1057]: time="2024-02-09T19:29:53.277700761Z" level=info msg="CreateContainer within sandbox \"3bebe51b1399f62b6aa1d3104035b5c34df9928fa20b4012621b392c3d3f2311\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"2f13163912af25c814497a29de49d9510ee6405c069bb6a458f8e9ee5d7adfb9\"" Feb 9 19:29:53.279487 env[1057]: time="2024-02-09T19:29:53.279367412Z" level=info msg="StartContainer for \"2f13163912af25c814497a29de49d9510ee6405c069bb6a458f8e9ee5d7adfb9\"" Feb 9 19:29:53.316043 systemd[1]: Started cri-containerd-2f13163912af25c814497a29de49d9510ee6405c069bb6a458f8e9ee5d7adfb9.scope. Feb 9 19:29:53.368890 env[1057]: time="2024-02-09T19:29:53.368841689Z" level=info msg="StartContainer for \"2f13163912af25c814497a29de49d9510ee6405c069bb6a458f8e9ee5d7adfb9\" returns successfully" Feb 9 19:29:53.696699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount626407954.mount: Deactivated successfully. 
Feb 9 19:29:53.818617 kubelet[1345]: E0209 19:29:53.818563 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:29:53.945392 systemd-networkd[973]: lxc3230afafcc9e: Gained IPv6LL
Feb 9 19:29:54.368670 kubelet[1345]: I0209 19:29:54.368582 1345 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.912281 podCreationTimestamp="2024-02-09 19:29:36 +0000 UTC" firstStartedPulling="2024-02-09 19:29:52.769073609 +0000 UTC m=+64.812116477" lastFinishedPulling="2024-02-09 19:29:53.225224363 +0000 UTC m=+65.268267272" observedRunningTime="2024-02-09 19:29:54.365863044 +0000 UTC m=+66.408905992" watchObservedRunningTime="2024-02-09 19:29:54.368431795 +0000 UTC m=+66.411474704"
Feb 9 19:29:54.820260 kubelet[1345]: E0209 19:29:54.819622 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:29:55.822078 kubelet[1345]: E0209 19:29:55.822013 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:29:56.823643 kubelet[1345]: E0209 19:29:56.823531 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:29:57.824804 kubelet[1345]: E0209 19:29:57.824704 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:29:58.825538 kubelet[1345]: E0209 19:29:58.825484 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:29:59.826368 kubelet[1345]: E0209 19:29:59.826299 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:30:00.828251 kubelet[1345]: E0209 19:30:00.828187 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:30:01.829183 kubelet[1345]: E0209 19:30:01.829137 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:30:02.830689 kubelet[1345]: E0209 19:30:02.830649 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:30:03.832129 kubelet[1345]: E0209 19:30:03.832093 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:30:04.779525 systemd[1]: run-containerd-runc-k8s.io-f32920c5b107f387ed0a678d5d5ab4575e2d660272d64034610aad40be740dad-runc.kopOHm.mount: Deactivated successfully.
Feb 9 19:30:04.817045 env[1057]: time="2024-02-09T19:30:04.816931527Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 19:30:04.827466 env[1057]: time="2024-02-09T19:30:04.827395578Z" level=info msg="StopContainer for \"f32920c5b107f387ed0a678d5d5ab4575e2d660272d64034610aad40be740dad\" with timeout 2 (s)"
Feb 9 19:30:04.828069 env[1057]: time="2024-02-09T19:30:04.828020499Z" level=info msg="Stop container \"f32920c5b107f387ed0a678d5d5ab4575e2d660272d64034610aad40be740dad\" with signal terminated"
Feb 9 19:30:04.833270 kubelet[1345]: E0209 19:30:04.833209 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:30:04.840804 systemd-networkd[973]: lxc_health: Link DOWN
Feb 9 19:30:04.840823 systemd-networkd[973]: lxc_health: Lost carrier
Feb 9 19:30:04.894235 systemd[1]: cri-containerd-f32920c5b107f387ed0a678d5d5ab4575e2d660272d64034610aad40be740dad.scope: Deactivated successfully.
Feb 9 19:30:04.894487 systemd[1]: cri-containerd-f32920c5b107f387ed0a678d5d5ab4575e2d660272d64034610aad40be740dad.scope: Consumed 8.864s CPU time.
Feb 9 19:30:04.916678 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f32920c5b107f387ed0a678d5d5ab4575e2d660272d64034610aad40be740dad-rootfs.mount: Deactivated successfully.
Feb 9 19:30:04.926084 env[1057]: time="2024-02-09T19:30:04.926036367Z" level=info msg="shim disconnected" id=f32920c5b107f387ed0a678d5d5ab4575e2d660272d64034610aad40be740dad
Feb 9 19:30:04.926278 env[1057]: time="2024-02-09T19:30:04.926257983Z" level=warning msg="cleaning up after shim disconnected" id=f32920c5b107f387ed0a678d5d5ab4575e2d660272d64034610aad40be740dad namespace=k8s.io
Feb 9 19:30:04.926371 env[1057]: time="2024-02-09T19:30:04.926355646Z" level=info msg="cleaning up dead shim"
Feb 9 19:30:04.936833 env[1057]: time="2024-02-09T19:30:04.936802415Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:30:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2856 runtime=io.containerd.runc.v2\n"
Feb 9 19:30:04.938631 env[1057]: time="2024-02-09T19:30:04.938603771Z" level=info msg="StopContainer for \"f32920c5b107f387ed0a678d5d5ab4575e2d660272d64034610aad40be740dad\" returns successfully"
Feb 9 19:30:04.939449 env[1057]: time="2024-02-09T19:30:04.939424719Z" level=info msg="StopPodSandbox for \"555286a0f31737db907ef0d259226aa0a7a1fcec1e95d65536f7e0cd657a62d6\""
Feb 9 19:30:04.939702 env[1057]: time="2024-02-09T19:30:04.939650142Z" level=info msg="Container to stop \"f32920c5b107f387ed0a678d5d5ab4575e2d660272d64034610aad40be740dad\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:30:04.939828 env[1057]: time="2024-02-09T19:30:04.939807396Z" level=info msg="Container to stop \"94efa8a68a28b8ef5523e78a9053328c203a79c2554db45f373d95b3a845bd21\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:30:04.939899 env[1057]: time="2024-02-09T19:30:04.939881886Z" level=info msg="Container to stop \"2a9528b9d605751632a241531b8373945b3a8236299a5c425f5be39b4d052a15\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:30:04.939970 env[1057]: time="2024-02-09T19:30:04.939952559Z" level=info msg="Container to stop \"8626fc8b9c7c1c9484c37cae441def3c69eb14e7586e9214198d6d35a7f7c6e5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:30:04.940036 env[1057]: time="2024-02-09T19:30:04.940019314Z" level=info msg="Container to stop \"1916217bc6acdc477a0e61df394184ddc915324df85a41c1090bd1c41af46882\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:30:04.943626 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-555286a0f31737db907ef0d259226aa0a7a1fcec1e95d65536f7e0cd657a62d6-shm.mount: Deactivated successfully.
Feb 9 19:30:04.950829 systemd[1]: cri-containerd-555286a0f31737db907ef0d259226aa0a7a1fcec1e95d65536f7e0cd657a62d6.scope: Deactivated successfully.
Feb 9 19:30:04.979404 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-555286a0f31737db907ef0d259226aa0a7a1fcec1e95d65536f7e0cd657a62d6-rootfs.mount: Deactivated successfully.
Feb 9 19:30:04.987268 env[1057]: time="2024-02-09T19:30:04.987215037Z" level=info msg="shim disconnected" id=555286a0f31737db907ef0d259226aa0a7a1fcec1e95d65536f7e0cd657a62d6
Feb 9 19:30:04.987458 env[1057]: time="2024-02-09T19:30:04.987437473Z" level=warning msg="cleaning up after shim disconnected" id=555286a0f31737db907ef0d259226aa0a7a1fcec1e95d65536f7e0cd657a62d6 namespace=k8s.io
Feb 9 19:30:04.987556 env[1057]: time="2024-02-09T19:30:04.987539274Z" level=info msg="cleaning up dead shim"
Feb 9 19:30:04.998244 env[1057]: time="2024-02-09T19:30:04.998212588Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:30:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2890 runtime=io.containerd.runc.v2\n"
Feb 9 19:30:04.998674 env[1057]: time="2024-02-09T19:30:04.998646752Z" level=info msg="TearDown network for sandbox \"555286a0f31737db907ef0d259226aa0a7a1fcec1e95d65536f7e0cd657a62d6\" successfully"
Feb 9 19:30:04.998793 env[1057]: time="2024-02-09T19:30:04.998772016Z" level=info msg="StopPodSandbox for \"555286a0f31737db907ef0d259226aa0a7a1fcec1e95d65536f7e0cd657a62d6\" returns successfully"
Feb 9 19:30:05.117924 kubelet[1345]: I0209 19:30:05.117877 1345 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f2100bdf-e802-49ac-980d-bfb2abbe7f32-hostproc\") pod \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\" (UID: \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\") "
Feb 9 19:30:05.118189 kubelet[1345]: I0209 19:30:05.117954 1345 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f2100bdf-e802-49ac-980d-bfb2abbe7f32-cni-path\") pod \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\" (UID: \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\") "
Feb 9 19:30:05.118189 kubelet[1345]: I0209 19:30:05.118011 1345 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f2100bdf-e802-49ac-980d-bfb2abbe7f32-host-proc-sys-kernel\") pod \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\" (UID: \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\") "
Feb 9 19:30:05.118189 kubelet[1345]: I0209 19:30:05.118057 1345 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f2100bdf-e802-49ac-980d-bfb2abbe7f32-bpf-maps\") pod \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\" (UID: \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\") "
Feb 9 19:30:05.118189 kubelet[1345]: I0209 19:30:05.118101 1345 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f2100bdf-e802-49ac-980d-bfb2abbe7f32-cilium-cgroup\") pod \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\" (UID: \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\") "
Feb 9 19:30:05.118189 kubelet[1345]: I0209 19:30:05.118157 1345 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f2100bdf-e802-49ac-980d-bfb2abbe7f32-cilium-config-path\") pod \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\" (UID: \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\") "
Feb 9 19:30:05.118535 kubelet[1345]: I0209 19:30:05.118207 1345 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f2100bdf-e802-49ac-980d-bfb2abbe7f32-etc-cni-netd\") pod \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\" (UID: \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\") "
Feb 9 19:30:05.118535 kubelet[1345]: I0209 19:30:05.118257 1345 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f2100bdf-e802-49ac-980d-bfb2abbe7f32-host-proc-sys-net\") pod \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\" (UID: \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\") "
Feb 9 19:30:05.118535 kubelet[1345]: I0209 19:30:05.118303 1345 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f2100bdf-e802-49ac-980d-bfb2abbe7f32-lib-modules\") pod \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\" (UID: \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\") "
Feb 9 19:30:05.118535 kubelet[1345]: I0209 19:30:05.118355 1345 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f2100bdf-e802-49ac-980d-bfb2abbe7f32-clustermesh-secrets\") pod \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\" (UID: \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\") "
Feb 9 19:30:05.118535 kubelet[1345]: I0209 19:30:05.118407 1345 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clxt5\" (UniqueName: \"kubernetes.io/projected/f2100bdf-e802-49ac-980d-bfb2abbe7f32-kube-api-access-clxt5\") pod \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\" (UID: \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\") "
Feb 9 19:30:05.118535 kubelet[1345]: I0209 19:30:05.118454 1345 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2100bdf-e802-49ac-980d-bfb2abbe7f32-xtables-lock\") pod \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\" (UID: \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\") "
Feb 9 19:30:05.119105 kubelet[1345]: I0209 19:30:05.118498 1345 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f2100bdf-e802-49ac-980d-bfb2abbe7f32-cilium-run\") pod \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\" (UID: \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\") "
Feb 9 19:30:05.119105 kubelet[1345]: I0209 19:30:05.118545 1345 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f2100bdf-e802-49ac-980d-bfb2abbe7f32-hubble-tls\") pod \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\" (UID: \"f2100bdf-e802-49ac-980d-bfb2abbe7f32\") "
Feb 9 19:30:05.119794 kubelet[1345]: I0209 19:30:05.119424 1345 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2100bdf-e802-49ac-980d-bfb2abbe7f32-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f2100bdf-e802-49ac-980d-bfb2abbe7f32" (UID: "f2100bdf-e802-49ac-980d-bfb2abbe7f32"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:30:05.119794 kubelet[1345]: I0209 19:30:05.119532 1345 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2100bdf-e802-49ac-980d-bfb2abbe7f32-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f2100bdf-e802-49ac-980d-bfb2abbe7f32" (UID: "f2100bdf-e802-49ac-980d-bfb2abbe7f32"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:30:05.120139 kubelet[1345]: I0209 19:30:05.120095 1345 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2100bdf-e802-49ac-980d-bfb2abbe7f32-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f2100bdf-e802-49ac-980d-bfb2abbe7f32" (UID: "f2100bdf-e802-49ac-980d-bfb2abbe7f32"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:30:05.124500 kubelet[1345]: I0209 19:30:05.121109 1345 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2100bdf-e802-49ac-980d-bfb2abbe7f32-hostproc" (OuterVolumeSpecName: "hostproc") pod "f2100bdf-e802-49ac-980d-bfb2abbe7f32" (UID: "f2100bdf-e802-49ac-980d-bfb2abbe7f32"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:30:05.128314 kubelet[1345]: I0209 19:30:05.121312 1345 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2100bdf-e802-49ac-980d-bfb2abbe7f32-cni-path" (OuterVolumeSpecName: "cni-path") pod "f2100bdf-e802-49ac-980d-bfb2abbe7f32" (UID: "f2100bdf-e802-49ac-980d-bfb2abbe7f32"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:30:05.128460 kubelet[1345]: I0209 19:30:05.121504 1345 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2100bdf-e802-49ac-980d-bfb2abbe7f32-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f2100bdf-e802-49ac-980d-bfb2abbe7f32" (UID: "f2100bdf-e802-49ac-980d-bfb2abbe7f32"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:30:05.128955 kubelet[1345]: I0209 19:30:05.121538 1345 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2100bdf-e802-49ac-980d-bfb2abbe7f32-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f2100bdf-e802-49ac-980d-bfb2abbe7f32" (UID: "f2100bdf-e802-49ac-980d-bfb2abbe7f32"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:30:05.129107 kubelet[1345]: I0209 19:30:05.124417 1345 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2100bdf-e802-49ac-980d-bfb2abbe7f32-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f2100bdf-e802-49ac-980d-bfb2abbe7f32" (UID: "f2100bdf-e802-49ac-980d-bfb2abbe7f32"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:30:05.129244 kubelet[1345]: I0209 19:30:05.126131 1345 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2100bdf-e802-49ac-980d-bfb2abbe7f32-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f2100bdf-e802-49ac-980d-bfb2abbe7f32" (UID: "f2100bdf-e802-49ac-980d-bfb2abbe7f32"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:30:05.129369 kubelet[1345]: I0209 19:30:05.126172 1345 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2100bdf-e802-49ac-980d-bfb2abbe7f32-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f2100bdf-e802-49ac-980d-bfb2abbe7f32" (UID: "f2100bdf-e802-49ac-980d-bfb2abbe7f32"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:30:05.129491 kubelet[1345]: I0209 19:30:05.127643 1345 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2100bdf-e802-49ac-980d-bfb2abbe7f32-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f2100bdf-e802-49ac-980d-bfb2abbe7f32" (UID: "f2100bdf-e802-49ac-980d-bfb2abbe7f32"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 19:30:05.130961 kubelet[1345]: I0209 19:30:05.130922 1345 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2100bdf-e802-49ac-980d-bfb2abbe7f32-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f2100bdf-e802-49ac-980d-bfb2abbe7f32" (UID: "f2100bdf-e802-49ac-980d-bfb2abbe7f32"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 19:30:05.132359 kubelet[1345]: I0209 19:30:05.132291 1345 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2100bdf-e802-49ac-980d-bfb2abbe7f32-kube-api-access-clxt5" (OuterVolumeSpecName: "kube-api-access-clxt5") pod "f2100bdf-e802-49ac-980d-bfb2abbe7f32" (UID: "f2100bdf-e802-49ac-980d-bfb2abbe7f32"). InnerVolumeSpecName "kube-api-access-clxt5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 19:30:05.134130 kubelet[1345]: I0209 19:30:05.134056 1345 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2100bdf-e802-49ac-980d-bfb2abbe7f32-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f2100bdf-e802-49ac-980d-bfb2abbe7f32" (UID: "f2100bdf-e802-49ac-980d-bfb2abbe7f32"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 19:30:05.219695 kubelet[1345]: I0209 19:30:05.219591 1345 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2100bdf-e802-49ac-980d-bfb2abbe7f32-xtables-lock\") on node \"172.24.4.205\" DevicePath \"\""
Feb 9 19:30:05.219695 kubelet[1345]: I0209 19:30:05.219652 1345 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f2100bdf-e802-49ac-980d-bfb2abbe7f32-cilium-run\") on node \"172.24.4.205\" DevicePath \"\""
Feb 9 19:30:05.219695 kubelet[1345]: I0209 19:30:05.219682 1345 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f2100bdf-e802-49ac-980d-bfb2abbe7f32-hubble-tls\") on node \"172.24.4.205\" DevicePath \"\""
Feb 9 19:30:05.219695 kubelet[1345]: I0209 19:30:05.219714 1345 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-clxt5\" (UniqueName: \"kubernetes.io/projected/f2100bdf-e802-49ac-980d-bfb2abbe7f32-kube-api-access-clxt5\") on node \"172.24.4.205\" DevicePath \"\""
Feb 9 19:30:05.220217 kubelet[1345]: I0209 19:30:05.219783 1345 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f2100bdf-e802-49ac-980d-bfb2abbe7f32-cni-path\") on node \"172.24.4.205\" DevicePath \"\""
Feb 9 19:30:05.220217 kubelet[1345]: I0209 19:30:05.219819 1345 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f2100bdf-e802-49ac-980d-bfb2abbe7f32-host-proc-sys-kernel\") on node \"172.24.4.205\" DevicePath \"\""
Feb 9 19:30:05.220217 kubelet[1345]: I0209 19:30:05.219846 1345 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f2100bdf-e802-49ac-980d-bfb2abbe7f32-bpf-maps\") on node \"172.24.4.205\" DevicePath \"\""
Feb 9 19:30:05.220217 kubelet[1345]: I0209 19:30:05.219874 1345 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f2100bdf-e802-49ac-980d-bfb2abbe7f32-cilium-cgroup\") on node \"172.24.4.205\" DevicePath \"\""
Feb 9 19:30:05.220217 kubelet[1345]: I0209 19:30:05.219901 1345 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f2100bdf-e802-49ac-980d-bfb2abbe7f32-cilium-config-path\") on node \"172.24.4.205\" DevicePath \"\""
Feb 9 19:30:05.220217 kubelet[1345]: I0209 19:30:05.219930 1345 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f2100bdf-e802-49ac-980d-bfb2abbe7f32-hostproc\") on node \"172.24.4.205\" DevicePath \"\""
Feb 9 19:30:05.220217 kubelet[1345]: I0209 19:30:05.219957 1345 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f2100bdf-e802-49ac-980d-bfb2abbe7f32-etc-cni-netd\") on node \"172.24.4.205\" DevicePath \"\""
Feb 9 19:30:05.220217 kubelet[1345]: I0209 19:30:05.219986 1345 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f2100bdf-e802-49ac-980d-bfb2abbe7f32-host-proc-sys-net\") on node \"172.24.4.205\" DevicePath \"\""
Feb 9 19:30:05.220806 kubelet[1345]: I0209 19:30:05.220013 1345 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f2100bdf-e802-49ac-980d-bfb2abbe7f32-lib-modules\") on node \"172.24.4.205\" DevicePath \"\""
Feb 9 19:30:05.220806 kubelet[1345]: I0209 19:30:05.220043 1345 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f2100bdf-e802-49ac-980d-bfb2abbe7f32-clustermesh-secrets\") on node \"172.24.4.205\" DevicePath \"\""
Feb 9 19:30:05.393423 kubelet[1345]: I0209 19:30:05.389483 1345 scope.go:117] "RemoveContainer" containerID="f32920c5b107f387ed0a678d5d5ab4575e2d660272d64034610aad40be740dad"
Feb 9 19:30:05.397659 env[1057]: time="2024-02-09T19:30:05.397589731Z" level=info msg="RemoveContainer for \"f32920c5b107f387ed0a678d5d5ab4575e2d660272d64034610aad40be740dad\""
Feb 9 19:30:05.403228 env[1057]: time="2024-02-09T19:30:05.403158128Z" level=info msg="RemoveContainer for \"f32920c5b107f387ed0a678d5d5ab4575e2d660272d64034610aad40be740dad\" returns successfully"
Feb 9 19:30:05.403787 kubelet[1345]: I0209 19:30:05.403702 1345 scope.go:117] "RemoveContainer" containerID="1916217bc6acdc477a0e61df394184ddc915324df85a41c1090bd1c41af46882"
Feb 9 19:30:05.408402 systemd[1]: Removed slice kubepods-burstable-podf2100bdf_e802_49ac_980d_bfb2abbe7f32.slice.
Feb 9 19:30:05.408653 systemd[1]: kubepods-burstable-podf2100bdf_e802_49ac_980d_bfb2abbe7f32.slice: Consumed 9.001s CPU time.
Feb 9 19:30:05.411359 env[1057]: time="2024-02-09T19:30:05.410495750Z" level=info msg="RemoveContainer for \"1916217bc6acdc477a0e61df394184ddc915324df85a41c1090bd1c41af46882\""
Feb 9 19:30:05.416744 env[1057]: time="2024-02-09T19:30:05.416630719Z" level=info msg="RemoveContainer for \"1916217bc6acdc477a0e61df394184ddc915324df85a41c1090bd1c41af46882\" returns successfully"
Feb 9 19:30:05.417555 kubelet[1345]: I0209 19:30:05.417500 1345 scope.go:117] "RemoveContainer" containerID="8626fc8b9c7c1c9484c37cae441def3c69eb14e7586e9214198d6d35a7f7c6e5"
Feb 9 19:30:05.421608 env[1057]: time="2024-02-09T19:30:05.421523920Z" level=info msg="RemoveContainer for \"8626fc8b9c7c1c9484c37cae441def3c69eb14e7586e9214198d6d35a7f7c6e5\""
Feb 9 19:30:05.427466 env[1057]: time="2024-02-09T19:30:05.427377692Z" level=info msg="RemoveContainer for \"8626fc8b9c7c1c9484c37cae441def3c69eb14e7586e9214198d6d35a7f7c6e5\" returns successfully"
Feb 9 19:30:05.427789 kubelet[1345]: I0209 19:30:05.427708 1345 scope.go:117] "RemoveContainer" containerID="2a9528b9d605751632a241531b8373945b3a8236299a5c425f5be39b4d052a15"
Feb 9 19:30:05.434103 env[1057]: time="2024-02-09T19:30:05.432625156Z" level=info msg="RemoveContainer for \"2a9528b9d605751632a241531b8373945b3a8236299a5c425f5be39b4d052a15\""
Feb 9 19:30:05.441699 env[1057]: time="2024-02-09T19:30:05.441635424Z" level=info msg="RemoveContainer for \"2a9528b9d605751632a241531b8373945b3a8236299a5c425f5be39b4d052a15\" returns successfully"
Feb 9 19:30:05.442458 kubelet[1345]: I0209 19:30:05.442359 1345 scope.go:117] "RemoveContainer" containerID="94efa8a68a28b8ef5523e78a9053328c203a79c2554db45f373d95b3a845bd21"
Feb 9 19:30:05.445796 env[1057]: time="2024-02-09T19:30:05.445019317Z" level=info msg="RemoveContainer for \"94efa8a68a28b8ef5523e78a9053328c203a79c2554db45f373d95b3a845bd21\""
Feb 9 19:30:05.453682 env[1057]: time="2024-02-09T19:30:05.453615247Z" level=info msg="RemoveContainer for \"94efa8a68a28b8ef5523e78a9053328c203a79c2554db45f373d95b3a845bd21\" returns successfully"
Feb 9 19:30:05.454282 kubelet[1345]: I0209 19:30:05.454248 1345 scope.go:117] "RemoveContainer" containerID="f32920c5b107f387ed0a678d5d5ab4575e2d660272d64034610aad40be740dad"
Feb 9 19:30:05.455631 env[1057]: time="2024-02-09T19:30:05.455383972Z" level=error msg="ContainerStatus for \"f32920c5b107f387ed0a678d5d5ab4575e2d660272d64034610aad40be740dad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f32920c5b107f387ed0a678d5d5ab4575e2d660272d64034610aad40be740dad\": not found"
Feb 9 19:30:05.456020 kubelet[1345]: E0209 19:30:05.455943 1345 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f32920c5b107f387ed0a678d5d5ab4575e2d660272d64034610aad40be740dad\": not found" containerID="f32920c5b107f387ed0a678d5d5ab4575e2d660272d64034610aad40be740dad"
Feb 9 19:30:05.456182 kubelet[1345]: I0209 19:30:05.456138 1345 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f32920c5b107f387ed0a678d5d5ab4575e2d660272d64034610aad40be740dad"} err="failed to get container status \"f32920c5b107f387ed0a678d5d5ab4575e2d660272d64034610aad40be740dad\": rpc error: code = NotFound desc = an error occurred when try to find container \"f32920c5b107f387ed0a678d5d5ab4575e2d660272d64034610aad40be740dad\": not found"
Feb 9 19:30:05.456182 kubelet[1345]: I0209 19:30:05.456178 1345 scope.go:117] "RemoveContainer" containerID="1916217bc6acdc477a0e61df394184ddc915324df85a41c1090bd1c41af46882"
Feb 9 19:30:05.456684 env[1057]: time="2024-02-09T19:30:05.456509041Z" level=error msg="ContainerStatus for \"1916217bc6acdc477a0e61df394184ddc915324df85a41c1090bd1c41af46882\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1916217bc6acdc477a0e61df394184ddc915324df85a41c1090bd1c41af46882\": not found"
Feb 9 19:30:05.457168 kubelet[1345]: E0209 19:30:05.457140 1345 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1916217bc6acdc477a0e61df394184ddc915324df85a41c1090bd1c41af46882\": not found" containerID="1916217bc6acdc477a0e61df394184ddc915324df85a41c1090bd1c41af46882"
Feb 9 19:30:05.457468 kubelet[1345]: I0209 19:30:05.457406 1345 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1916217bc6acdc477a0e61df394184ddc915324df85a41c1090bd1c41af46882"} err="failed to get container status \"1916217bc6acdc477a0e61df394184ddc915324df85a41c1090bd1c41af46882\": rpc error: code = NotFound desc = an error occurred when try to find container \"1916217bc6acdc477a0e61df394184ddc915324df85a41c1090bd1c41af46882\": not found"
Feb 9 19:30:05.457675 kubelet[1345]: I0209 19:30:05.457652 1345 scope.go:117] "RemoveContainer" containerID="8626fc8b9c7c1c9484c37cae441def3c69eb14e7586e9214198d6d35a7f7c6e5"
Feb 9 19:30:05.458425 env[1057]: time="2024-02-09T19:30:05.458311820Z" level=error msg="ContainerStatus for \"8626fc8b9c7c1c9484c37cae441def3c69eb14e7586e9214198d6d35a7f7c6e5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8626fc8b9c7c1c9484c37cae441def3c69eb14e7586e9214198d6d35a7f7c6e5\": not found"
Feb 9 19:30:05.458648 kubelet[1345]: E0209 19:30:05.458613 1345 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8626fc8b9c7c1c9484c37cae441def3c69eb14e7586e9214198d6d35a7f7c6e5\": not found" containerID="8626fc8b9c7c1c9484c37cae441def3c69eb14e7586e9214198d6d35a7f7c6e5"
Feb 9 19:30:05.458818 kubelet[1345]: I0209 19:30:05.458678 1345 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8626fc8b9c7c1c9484c37cae441def3c69eb14e7586e9214198d6d35a7f7c6e5"} err="failed to get container status \"8626fc8b9c7c1c9484c37cae441def3c69eb14e7586e9214198d6d35a7f7c6e5\": rpc error: code = NotFound desc = an error occurred when try to find container \"8626fc8b9c7c1c9484c37cae441def3c69eb14e7586e9214198d6d35a7f7c6e5\": not found"
Feb 9 19:30:05.458818 kubelet[1345]: I0209 19:30:05.458700 1345 scope.go:117] "RemoveContainer" containerID="2a9528b9d605751632a241531b8373945b3a8236299a5c425f5be39b4d052a15"
Feb 9 19:30:05.459369 env[1057]: time="2024-02-09T19:30:05.459258244Z" level=error msg="ContainerStatus for \"2a9528b9d605751632a241531b8373945b3a8236299a5c425f5be39b4d052a15\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2a9528b9d605751632a241531b8373945b3a8236299a5c425f5be39b4d052a15\": not found"
Feb 9 19:30:05.459586 kubelet[1345]: E0209 19:30:05.459548 1345 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2a9528b9d605751632a241531b8373945b3a8236299a5c425f5be39b4d052a15\": not found" containerID="2a9528b9d605751632a241531b8373945b3a8236299a5c425f5be39b4d052a15"
Feb 9 19:30:05.459717 kubelet[1345]: I0209 19:30:05.459619 1345 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2a9528b9d605751632a241531b8373945b3a8236299a5c425f5be39b4d052a15"} err="failed to get container status \"2a9528b9d605751632a241531b8373945b3a8236299a5c425f5be39b4d052a15\": rpc error: code = NotFound desc = an error occurred when try to find container \"2a9528b9d605751632a241531b8373945b3a8236299a5c425f5be39b4d052a15\": not found"
Feb 9 19:30:05.459717 kubelet[1345]: I0209 19:30:05.459641 1345 scope.go:117] "RemoveContainer" containerID="94efa8a68a28b8ef5523e78a9053328c203a79c2554db45f373d95b3a845bd21"
Feb 9 19:30:05.460172 env[1057]: time="2024-02-09T19:30:05.460036322Z" level=error msg="ContainerStatus for \"94efa8a68a28b8ef5523e78a9053328c203a79c2554db45f373d95b3a845bd21\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"94efa8a68a28b8ef5523e78a9053328c203a79c2554db45f373d95b3a845bd21\": not found"
Feb 9 19:30:05.460770 kubelet[1345]: E0209 19:30:05.460675 1345 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"94efa8a68a28b8ef5523e78a9053328c203a79c2554db45f373d95b3a845bd21\": not found" containerID="94efa8a68a28b8ef5523e78a9053328c203a79c2554db45f373d95b3a845bd21"
Feb 9 19:30:05.461048 kubelet[1345]: I0209 19:30:05.460985 1345 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"94efa8a68a28b8ef5523e78a9053328c203a79c2554db45f373d95b3a845bd21"} err="failed to get container status \"94efa8a68a28b8ef5523e78a9053328c203a79c2554db45f373d95b3a845bd21\": rpc error: code = NotFound desc = an error occurred when try to find container \"94efa8a68a28b8ef5523e78a9053328c203a79c2554db45f373d95b3a845bd21\": not found"
Feb 9 19:30:05.769885 systemd[1]: var-lib-kubelet-pods-f2100bdf\x2de802\x2d49ac\x2d980d\x2dbfb2abbe7f32-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 9 19:30:05.770861 systemd[1]: var-lib-kubelet-pods-f2100bdf\x2de802\x2d49ac\x2d980d\x2dbfb2abbe7f32-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dclxt5.mount: Deactivated successfully.
Feb 9 19:30:05.770965 systemd[1]: var-lib-kubelet-pods-f2100bdf\x2de802\x2d49ac\x2d980d\x2dbfb2abbe7f32-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 9 19:30:05.834075 kubelet[1345]: E0209 19:30:05.834034 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:30:06.835770 kubelet[1345]: E0209 19:30:06.835680 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:30:06.903557 kubelet[1345]: I0209 19:30:06.903527 1345 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f2100bdf-e802-49ac-980d-bfb2abbe7f32" path="/var/lib/kubelet/pods/f2100bdf-e802-49ac-980d-bfb2abbe7f32/volumes"
Feb 9 19:30:07.836844 kubelet[1345]: E0209 19:30:07.836809 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:30:08.748113 kubelet[1345]: E0209 19:30:08.748062 1345 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:30:08.837850 kubelet[1345]: E0209 19:30:08.837814 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:30:08.901310 kubelet[1345]: E0209 19:30:08.901257 1345 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 19:30:09.815806 kubelet[1345]: I0209 19:30:09.815771 1345 topology_manager.go:215] "Topology Admit Handler" podUID="8dc29eab-f7e7-49bb-bf59-9ab865897143" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-qghhw"
Feb 9 19:30:09.816072 kubelet[1345]: E0209 19:30:09.816053 1345 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f2100bdf-e802-49ac-980d-bfb2abbe7f32" containerName="apply-sysctl-overwrites"
Feb 9 19:30:09.816207 kubelet[1345]: E0209 19:30:09.816193 1345 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f2100bdf-e802-49ac-980d-bfb2abbe7f32" containerName="clean-cilium-state"
Feb 9 19:30:09.816321 kubelet[1345]: E0209 19:30:09.816307 1345 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f2100bdf-e802-49ac-980d-bfb2abbe7f32" containerName="cilium-agent"
Feb 9 19:30:09.816437 kubelet[1345]: E0209 19:30:09.816423 1345 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f2100bdf-e802-49ac-980d-bfb2abbe7f32" containerName="mount-cgroup"
Feb 9 19:30:09.816573 kubelet[1345]: E0209 19:30:09.816559 1345 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f2100bdf-e802-49ac-980d-bfb2abbe7f32" containerName="mount-bpf-fs"
Feb 9 19:30:09.816706 kubelet[1345]: I0209 19:30:09.816693 1345 memory_manager.go:346] "RemoveStaleState removing state" podUID="f2100bdf-e802-49ac-980d-bfb2abbe7f32" containerName="cilium-agent"
Feb 9 19:30:09.824087 systemd[1]: Created slice kubepods-besteffort-pod8dc29eab_f7e7_49bb_bf59_9ab865897143.slice.
Feb 9 19:30:09.838625 kubelet[1345]: E0209 19:30:09.838593 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:30:09.900763 kubelet[1345]: I0209 19:30:09.900702 1345 topology_manager.go:215] "Topology Admit Handler" podUID="8116088d-0ebe-44a0-bf70-25f2950dc162" podNamespace="kube-system" podName="cilium-hsk4n"
Feb 9 19:30:09.918405 systemd[1]: Created slice kubepods-burstable-pod8116088d_0ebe_44a0_bf70_25f2950dc162.slice.
Feb 9 19:30:09.957833 kubelet[1345]: I0209 19:30:09.957792 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8dc29eab-f7e7-49bb-bf59-9ab865897143-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-qghhw\" (UID: \"8dc29eab-f7e7-49bb-bf59-9ab865897143\") " pod="kube-system/cilium-operator-6bc8ccdb58-qghhw" Feb 9 19:30:09.957989 kubelet[1345]: I0209 19:30:09.957889 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr72t\" (UniqueName: \"kubernetes.io/projected/8dc29eab-f7e7-49bb-bf59-9ab865897143-kube-api-access-nr72t\") pod \"cilium-operator-6bc8ccdb58-qghhw\" (UID: \"8dc29eab-f7e7-49bb-bf59-9ab865897143\") " pod="kube-system/cilium-operator-6bc8ccdb58-qghhw" Feb 9 19:30:10.058334 kubelet[1345]: I0209 19:30:10.058287 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8116088d-0ebe-44a0-bf70-25f2950dc162-cilium-run\") pod \"cilium-hsk4n\" (UID: \"8116088d-0ebe-44a0-bf70-25f2950dc162\") " pod="kube-system/cilium-hsk4n" Feb 9 19:30:10.060453 kubelet[1345]: I0209 19:30:10.058879 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8116088d-0ebe-44a0-bf70-25f2950dc162-bpf-maps\") pod \"cilium-hsk4n\" (UID: \"8116088d-0ebe-44a0-bf70-25f2950dc162\") " pod="kube-system/cilium-hsk4n" Feb 9 19:30:10.060453 kubelet[1345]: I0209 19:30:10.058974 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8116088d-0ebe-44a0-bf70-25f2950dc162-cilium-cgroup\") pod \"cilium-hsk4n\" (UID: \"8116088d-0ebe-44a0-bf70-25f2950dc162\") " pod="kube-system/cilium-hsk4n" Feb 9 19:30:10.060453 kubelet[1345]: I0209 
19:30:10.059055 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8116088d-0ebe-44a0-bf70-25f2950dc162-cni-path\") pod \"cilium-hsk4n\" (UID: \"8116088d-0ebe-44a0-bf70-25f2950dc162\") " pod="kube-system/cilium-hsk4n" Feb 9 19:30:10.060453 kubelet[1345]: I0209 19:30:10.059133 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8116088d-0ebe-44a0-bf70-25f2950dc162-clustermesh-secrets\") pod \"cilium-hsk4n\" (UID: \"8116088d-0ebe-44a0-bf70-25f2950dc162\") " pod="kube-system/cilium-hsk4n" Feb 9 19:30:10.060453 kubelet[1345]: I0209 19:30:10.059542 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8116088d-0ebe-44a0-bf70-25f2950dc162-cilium-config-path\") pod \"cilium-hsk4n\" (UID: \"8116088d-0ebe-44a0-bf70-25f2950dc162\") " pod="kube-system/cilium-hsk4n" Feb 9 19:30:10.060453 kubelet[1345]: I0209 19:30:10.059632 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8116088d-0ebe-44a0-bf70-25f2950dc162-xtables-lock\") pod \"cilium-hsk4n\" (UID: \"8116088d-0ebe-44a0-bf70-25f2950dc162\") " pod="kube-system/cilium-hsk4n" Feb 9 19:30:10.060787 kubelet[1345]: I0209 19:30:10.059880 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8116088d-0ebe-44a0-bf70-25f2950dc162-host-proc-sys-kernel\") pod \"cilium-hsk4n\" (UID: \"8116088d-0ebe-44a0-bf70-25f2950dc162\") " pod="kube-system/cilium-hsk4n" Feb 9 19:30:10.060787 kubelet[1345]: I0209 19:30:10.059917 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8116088d-0ebe-44a0-bf70-25f2950dc162-hubble-tls\") pod \"cilium-hsk4n\" (UID: \"8116088d-0ebe-44a0-bf70-25f2950dc162\") " pod="kube-system/cilium-hsk4n" Feb 9 19:30:10.060787 kubelet[1345]: I0209 19:30:10.059948 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8116088d-0ebe-44a0-bf70-25f2950dc162-lib-modules\") pod \"cilium-hsk4n\" (UID: \"8116088d-0ebe-44a0-bf70-25f2950dc162\") " pod="kube-system/cilium-hsk4n" Feb 9 19:30:10.060787 kubelet[1345]: I0209 19:30:10.059980 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8116088d-0ebe-44a0-bf70-25f2950dc162-etc-cni-netd\") pod \"cilium-hsk4n\" (UID: \"8116088d-0ebe-44a0-bf70-25f2950dc162\") " pod="kube-system/cilium-hsk4n" Feb 9 19:30:10.060787 kubelet[1345]: I0209 19:30:10.060122 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8116088d-0ebe-44a0-bf70-25f2950dc162-cilium-ipsec-secrets\") pod \"cilium-hsk4n\" (UID: \"8116088d-0ebe-44a0-bf70-25f2950dc162\") " pod="kube-system/cilium-hsk4n" Feb 9 19:30:10.060787 kubelet[1345]: I0209 19:30:10.060199 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8116088d-0ebe-44a0-bf70-25f2950dc162-host-proc-sys-net\") pod \"cilium-hsk4n\" (UID: \"8116088d-0ebe-44a0-bf70-25f2950dc162\") " pod="kube-system/cilium-hsk4n" Feb 9 19:30:10.061021 kubelet[1345]: I0209 19:30:10.060269 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwvqw\" (UniqueName: \"kubernetes.io/projected/8116088d-0ebe-44a0-bf70-25f2950dc162-kube-api-access-fwvqw\") pod 
\"cilium-hsk4n\" (UID: \"8116088d-0ebe-44a0-bf70-25f2950dc162\") " pod="kube-system/cilium-hsk4n" Feb 9 19:30:10.061021 kubelet[1345]: I0209 19:30:10.060379 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8116088d-0ebe-44a0-bf70-25f2950dc162-hostproc\") pod \"cilium-hsk4n\" (UID: \"8116088d-0ebe-44a0-bf70-25f2950dc162\") " pod="kube-system/cilium-hsk4n" Feb 9 19:30:10.130134 env[1057]: time="2024-02-09T19:30:10.130030223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-qghhw,Uid:8dc29eab-f7e7-49bb-bf59-9ab865897143,Namespace:kube-system,Attempt:0,}" Feb 9 19:30:10.188181 env[1057]: time="2024-02-09T19:30:10.187935708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:30:10.192823 env[1057]: time="2024-02-09T19:30:10.188553896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:30:10.193366 env[1057]: time="2024-02-09T19:30:10.193296036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:30:10.194148 env[1057]: time="2024-02-09T19:30:10.194066630Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cc8bb84b553f70e09f30fcb6ddafe5a6c6acebd665bcc731b118c9a591613916 pid=2918 runtime=io.containerd.runc.v2 Feb 9 19:30:10.226812 systemd[1]: Started cri-containerd-cc8bb84b553f70e09f30fcb6ddafe5a6c6acebd665bcc731b118c9a591613916.scope. 
Feb 9 19:30:10.230651 env[1057]: time="2024-02-09T19:30:10.230601001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hsk4n,Uid:8116088d-0ebe-44a0-bf70-25f2950dc162,Namespace:kube-system,Attempt:0,}" Feb 9 19:30:10.261792 env[1057]: time="2024-02-09T19:30:10.261700032Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:30:10.262146 env[1057]: time="2024-02-09T19:30:10.262105582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:30:10.262275 env[1057]: time="2024-02-09T19:30:10.262250994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:30:10.263808 env[1057]: time="2024-02-09T19:30:10.262674178Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5081fccd6c2aefd93b535817106353366239a0e9407d7196648258cd80c72dc1 pid=2956 runtime=io.containerd.runc.v2 Feb 9 19:30:10.290945 systemd[1]: Started cri-containerd-5081fccd6c2aefd93b535817106353366239a0e9407d7196648258cd80c72dc1.scope. 
Feb 9 19:30:10.298498 env[1057]: time="2024-02-09T19:30:10.298424338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-qghhw,Uid:8dc29eab-f7e7-49bb-bf59-9ab865897143,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc8bb84b553f70e09f30fcb6ddafe5a6c6acebd665bcc731b118c9a591613916\"" Feb 9 19:30:10.300619 env[1057]: time="2024-02-09T19:30:10.300590459Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 19:30:10.327650 env[1057]: time="2024-02-09T19:30:10.327587339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hsk4n,Uid:8116088d-0ebe-44a0-bf70-25f2950dc162,Namespace:kube-system,Attempt:0,} returns sandbox id \"5081fccd6c2aefd93b535817106353366239a0e9407d7196648258cd80c72dc1\"" Feb 9 19:30:10.330331 env[1057]: time="2024-02-09T19:30:10.330297590Z" level=info msg="CreateContainer within sandbox \"5081fccd6c2aefd93b535817106353366239a0e9407d7196648258cd80c72dc1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:30:10.361321 env[1057]: time="2024-02-09T19:30:10.361259904Z" level=info msg="CreateContainer within sandbox \"5081fccd6c2aefd93b535817106353366239a0e9407d7196648258cd80c72dc1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0a7460acac41deb266c390bd0014173d9e2e2145e89e8ae99e5f2f5d7bbf9c47\"" Feb 9 19:30:10.362153 env[1057]: time="2024-02-09T19:30:10.362129145Z" level=info msg="StartContainer for \"0a7460acac41deb266c390bd0014173d9e2e2145e89e8ae99e5f2f5d7bbf9c47\"" Feb 9 19:30:10.385148 systemd[1]: Started cri-containerd-0a7460acac41deb266c390bd0014173d9e2e2145e89e8ae99e5f2f5d7bbf9c47.scope. Feb 9 19:30:10.410092 systemd[1]: cri-containerd-0a7460acac41deb266c390bd0014173d9e2e2145e89e8ae99e5f2f5d7bbf9c47.scope: Deactivated successfully. 
Feb 9 19:30:10.453140 env[1057]: time="2024-02-09T19:30:10.453017341Z" level=info msg="shim disconnected" id=0a7460acac41deb266c390bd0014173d9e2e2145e89e8ae99e5f2f5d7bbf9c47 Feb 9 19:30:10.453140 env[1057]: time="2024-02-09T19:30:10.453126796Z" level=warning msg="cleaning up after shim disconnected" id=0a7460acac41deb266c390bd0014173d9e2e2145e89e8ae99e5f2f5d7bbf9c47 namespace=k8s.io Feb 9 19:30:10.453613 env[1057]: time="2024-02-09T19:30:10.453150320Z" level=info msg="cleaning up dead shim" Feb 9 19:30:10.471509 env[1057]: time="2024-02-09T19:30:10.471415982Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:30:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3021 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T19:30:10Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/0a7460acac41deb266c390bd0014173d9e2e2145e89e8ae99e5f2f5d7bbf9c47/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 19:30:10.472459 env[1057]: time="2024-02-09T19:30:10.472256237Z" level=error msg="copy shim log" error="read /proc/self/fd/66: file already closed" Feb 9 19:30:10.474049 env[1057]: time="2024-02-09T19:30:10.473934904Z" level=error msg="Failed to pipe stdout of container \"0a7460acac41deb266c390bd0014173d9e2e2145e89e8ae99e5f2f5d7bbf9c47\"" error="reading from a closed fifo" Feb 9 19:30:10.474369 env[1057]: time="2024-02-09T19:30:10.474289840Z" level=error msg="Failed to pipe stderr of container \"0a7460acac41deb266c390bd0014173d9e2e2145e89e8ae99e5f2f5d7bbf9c47\"" error="reading from a closed fifo" Feb 9 19:30:10.478160 env[1057]: time="2024-02-09T19:30:10.478040702Z" level=error msg="StartContainer for \"0a7460acac41deb266c390bd0014173d9e2e2145e89e8ae99e5f2f5d7bbf9c47\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Feb 9 19:30:10.478634 kubelet[1345]: E0209 19:30:10.478562 1345 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="0a7460acac41deb266c390bd0014173d9e2e2145e89e8ae99e5f2f5d7bbf9c47" Feb 9 19:30:10.482301 kubelet[1345]: E0209 19:30:10.482227 1345 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 19:30:10.482301 kubelet[1345]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 19:30:10.482301 kubelet[1345]: rm /hostbin/cilium-mount Feb 9 19:30:10.482572 kubelet[1345]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-fwvqw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-hsk4n_kube-system(8116088d-0ebe-44a0-bf70-25f2950dc162): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 19:30:10.482572 kubelet[1345]: E0209 19:30:10.482345 1345 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-hsk4n" podUID="8116088d-0ebe-44a0-bf70-25f2950dc162" Feb 9 19:30:10.841811 kubelet[1345]: E0209 19:30:10.839797 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:30:11.422677 env[1057]: time="2024-02-09T19:30:11.422556203Z" level=info msg="CreateContainer within sandbox \"5081fccd6c2aefd93b535817106353366239a0e9407d7196648258cd80c72dc1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Feb 9 19:30:11.449766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4211856863.mount: Deactivated successfully. 
Feb 9 19:30:11.467406 env[1057]: time="2024-02-09T19:30:11.467278103Z" level=info msg="CreateContainer within sandbox \"5081fccd6c2aefd93b535817106353366239a0e9407d7196648258cd80c72dc1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"6062af9758856c8204edce9151897dc2d26124365c1d5edf3b6f9eb86608be00\"" Feb 9 19:30:11.468780 env[1057]: time="2024-02-09T19:30:11.468567432Z" level=info msg="StartContainer for \"6062af9758856c8204edce9151897dc2d26124365c1d5edf3b6f9eb86608be00\"" Feb 9 19:30:11.515322 systemd[1]: Started cri-containerd-6062af9758856c8204edce9151897dc2d26124365c1d5edf3b6f9eb86608be00.scope. Feb 9 19:30:11.542000 systemd[1]: cri-containerd-6062af9758856c8204edce9151897dc2d26124365c1d5edf3b6f9eb86608be00.scope: Deactivated successfully. Feb 9 19:30:11.558858 env[1057]: time="2024-02-09T19:30:11.558812435Z" level=info msg="shim disconnected" id=6062af9758856c8204edce9151897dc2d26124365c1d5edf3b6f9eb86608be00 Feb 9 19:30:11.559045 env[1057]: time="2024-02-09T19:30:11.559025745Z" level=warning msg="cleaning up after shim disconnected" id=6062af9758856c8204edce9151897dc2d26124365c1d5edf3b6f9eb86608be00 namespace=k8s.io Feb 9 19:30:11.559113 env[1057]: time="2024-02-09T19:30:11.559099113Z" level=info msg="cleaning up dead shim" Feb 9 19:30:11.567080 env[1057]: time="2024-02-09T19:30:11.567039000Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:30:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3059 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T19:30:11Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/6062af9758856c8204edce9151897dc2d26124365c1d5edf3b6f9eb86608be00/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 19:30:11.567456 env[1057]: time="2024-02-09T19:30:11.567406280Z" level=error msg="copy shim log" error="read /proc/self/fd/75: file already closed" Feb 9 19:30:11.571563 env[1057]: 
time="2024-02-09T19:30:11.571505511Z" level=error msg="Failed to pipe stdout of container \"6062af9758856c8204edce9151897dc2d26124365c1d5edf3b6f9eb86608be00\"" error="reading from a closed fifo" Feb 9 19:30:11.571827 env[1057]: time="2024-02-09T19:30:11.571790656Z" level=error msg="Failed to pipe stderr of container \"6062af9758856c8204edce9151897dc2d26124365c1d5edf3b6f9eb86608be00\"" error="reading from a closed fifo" Feb 9 19:30:11.575968 env[1057]: time="2024-02-09T19:30:11.575918731Z" level=error msg="StartContainer for \"6062af9758856c8204edce9151897dc2d26124365c1d5edf3b6f9eb86608be00\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 9 19:30:11.576158 kubelet[1345]: E0209 19:30:11.576143 1345 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="6062af9758856c8204edce9151897dc2d26124365c1d5edf3b6f9eb86608be00" Feb 9 19:30:11.576351 kubelet[1345]: E0209 19:30:11.576338 1345 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 19:30:11.576351 kubelet[1345]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 19:30:11.576351 kubelet[1345]: rm /hostbin/cilium-mount Feb 9 19:30:11.576351 kubelet[1345]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-fwvqw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-hsk4n_kube-system(8116088d-0ebe-44a0-bf70-25f2950dc162): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 19:30:11.576801 kubelet[1345]: E0209 19:30:11.576787 1345 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc 
create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-hsk4n" podUID="8116088d-0ebe-44a0-bf70-25f2950dc162" Feb 9 19:30:11.842872 kubelet[1345]: E0209 19:30:11.840381 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:30:12.080096 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6062af9758856c8204edce9151897dc2d26124365c1d5edf3b6f9eb86608be00-rootfs.mount: Deactivated successfully. Feb 9 19:30:12.274861 kubelet[1345]: I0209 19:30:12.274816 1345 setters.go:552] "Node became not ready" node="172.24.4.205" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-02-09T19:30:12Z","lastTransitionTime":"2024-02-09T19:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 9 19:30:12.425366 kubelet[1345]: I0209 19:30:12.425266 1345 scope.go:117] "RemoveContainer" containerID="0a7460acac41deb266c390bd0014173d9e2e2145e89e8ae99e5f2f5d7bbf9c47" Feb 9 19:30:12.425995 kubelet[1345]: I0209 19:30:12.425939 1345 scope.go:117] "RemoveContainer" containerID="0a7460acac41deb266c390bd0014173d9e2e2145e89e8ae99e5f2f5d7bbf9c47" Feb 9 19:30:12.429110 env[1057]: time="2024-02-09T19:30:12.429058956Z" level=info msg="RemoveContainer for \"0a7460acac41deb266c390bd0014173d9e2e2145e89e8ae99e5f2f5d7bbf9c47\"" Feb 9 19:30:12.429758 env[1057]: time="2024-02-09T19:30:12.429662538Z" level=info msg="RemoveContainer for \"0a7460acac41deb266c390bd0014173d9e2e2145e89e8ae99e5f2f5d7bbf9c47\"" Feb 9 19:30:12.429953 env[1057]: time="2024-02-09T19:30:12.429902217Z" level=error msg="RemoveContainer for \"0a7460acac41deb266c390bd0014173d9e2e2145e89e8ae99e5f2f5d7bbf9c47\" failed" error="failed to set removing state for container 
\"0a7460acac41deb266c390bd0014173d9e2e2145e89e8ae99e5f2f5d7bbf9c47\": container is already in removing state" Feb 9 19:30:12.430500 kubelet[1345]: E0209 19:30:12.430265 1345 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"0a7460acac41deb266c390bd0014173d9e2e2145e89e8ae99e5f2f5d7bbf9c47\": container is already in removing state" containerID="0a7460acac41deb266c390bd0014173d9e2e2145e89e8ae99e5f2f5d7bbf9c47" Feb 9 19:30:12.430500 kubelet[1345]: E0209 19:30:12.430390 1345 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "0a7460acac41deb266c390bd0014173d9e2e2145e89e8ae99e5f2f5d7bbf9c47": container is already in removing state; Skipping pod "cilium-hsk4n_kube-system(8116088d-0ebe-44a0-bf70-25f2950dc162)" Feb 9 19:30:12.431548 kubelet[1345]: E0209 19:30:12.431485 1345 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-hsk4n_kube-system(8116088d-0ebe-44a0-bf70-25f2950dc162)\"" pod="kube-system/cilium-hsk4n" podUID="8116088d-0ebe-44a0-bf70-25f2950dc162" Feb 9 19:30:12.567828 env[1057]: time="2024-02-09T19:30:12.566894829Z" level=info msg="RemoveContainer for \"0a7460acac41deb266c390bd0014173d9e2e2145e89e8ae99e5f2f5d7bbf9c47\" returns successfully" Feb 9 19:30:12.841285 kubelet[1345]: E0209 19:30:12.841147 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:30:13.430834 env[1057]: time="2024-02-09T19:30:13.430758195Z" level=info msg="StopPodSandbox for \"5081fccd6c2aefd93b535817106353366239a0e9407d7196648258cd80c72dc1\"" Feb 9 19:30:13.431432 env[1057]: time="2024-02-09T19:30:13.431396923Z" level=info msg="Container to stop 
\"6062af9758856c8204edce9151897dc2d26124365c1d5edf3b6f9eb86608be00\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:30:13.435242 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5081fccd6c2aefd93b535817106353366239a0e9407d7196648258cd80c72dc1-shm.mount: Deactivated successfully. Feb 9 19:30:13.448343 systemd[1]: cri-containerd-5081fccd6c2aefd93b535817106353366239a0e9407d7196648258cd80c72dc1.scope: Deactivated successfully. Feb 9 19:30:13.502443 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5081fccd6c2aefd93b535817106353366239a0e9407d7196648258cd80c72dc1-rootfs.mount: Deactivated successfully. Feb 9 19:30:13.559361 kubelet[1345]: W0209 19:30:13.559255 1345 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8116088d_0ebe_44a0_bf70_25f2950dc162.slice/cri-containerd-0a7460acac41deb266c390bd0014173d9e2e2145e89e8ae99e5f2f5d7bbf9c47.scope WatchSource:0}: container "0a7460acac41deb266c390bd0014173d9e2e2145e89e8ae99e5f2f5d7bbf9c47" in namespace "k8s.io": not found Feb 9 19:30:13.842851 kubelet[1345]: E0209 19:30:13.841862 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:30:13.890827 env[1057]: time="2024-02-09T19:30:13.890707100Z" level=info msg="shim disconnected" id=5081fccd6c2aefd93b535817106353366239a0e9407d7196648258cd80c72dc1 Feb 9 19:30:13.891703 env[1057]: time="2024-02-09T19:30:13.891659256Z" level=warning msg="cleaning up after shim disconnected" id=5081fccd6c2aefd93b535817106353366239a0e9407d7196648258cd80c72dc1 namespace=k8s.io Feb 9 19:30:13.891934 env[1057]: time="2024-02-09T19:30:13.891896762Z" level=info msg="cleaning up dead shim" Feb 9 19:30:13.902941 kubelet[1345]: E0209 19:30:13.902883 1345 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: cni plugin not initialized" Feb 9 19:30:13.919384 env[1057]: time="2024-02-09T19:30:13.919318441Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:30:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3091 runtime=io.containerd.runc.v2\n" Feb 9 19:30:13.920172 env[1057]: time="2024-02-09T19:30:13.920116398Z" level=info msg="TearDown network for sandbox \"5081fccd6c2aefd93b535817106353366239a0e9407d7196648258cd80c72dc1\" successfully" Feb 9 19:30:13.920367 env[1057]: time="2024-02-09T19:30:13.920323937Z" level=info msg="StopPodSandbox for \"5081fccd6c2aefd93b535817106353366239a0e9407d7196648258cd80c72dc1\" returns successfully" Feb 9 19:30:13.985948 env[1057]: time="2024-02-09T19:30:13.985869455Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:30:14.005533 env[1057]: time="2024-02-09T19:30:14.005471924Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:30:14.010926 env[1057]: time="2024-02-09T19:30:14.010848190Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:30:14.012651 env[1057]: time="2024-02-09T19:30:14.012552118Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 9 19:30:14.018448 env[1057]: time="2024-02-09T19:30:14.018374492Z" level=info msg="CreateContainer within 
sandbox \"cc8bb84b553f70e09f30fcb6ddafe5a6c6acebd665bcc731b118c9a591613916\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 19:30:14.066301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2769653890.mount: Deactivated successfully. Feb 9 19:30:14.090478 env[1057]: time="2024-02-09T19:30:14.090312742Z" level=info msg="CreateContainer within sandbox \"cc8bb84b553f70e09f30fcb6ddafe5a6c6acebd665bcc731b118c9a591613916\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"61330314448623003dfb716e88cea811d5240cac0f583c061081690a8580e2ef\"" Feb 9 19:30:14.093534 kubelet[1345]: I0209 19:30:14.090894 1345 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8116088d-0ebe-44a0-bf70-25f2950dc162-cilium-cgroup\") pod \"8116088d-0ebe-44a0-bf70-25f2950dc162\" (UID: \"8116088d-0ebe-44a0-bf70-25f2950dc162\") " Feb 9 19:30:14.093534 kubelet[1345]: I0209 19:30:14.090986 1345 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8116088d-0ebe-44a0-bf70-25f2950dc162-clustermesh-secrets\") pod \"8116088d-0ebe-44a0-bf70-25f2950dc162\" (UID: \"8116088d-0ebe-44a0-bf70-25f2950dc162\") " Feb 9 19:30:14.093534 kubelet[1345]: I0209 19:30:14.091045 1345 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8116088d-0ebe-44a0-bf70-25f2950dc162-host-proc-sys-kernel\") pod \"8116088d-0ebe-44a0-bf70-25f2950dc162\" (UID: \"8116088d-0ebe-44a0-bf70-25f2950dc162\") " Feb 9 19:30:14.093534 kubelet[1345]: I0209 19:30:14.091110 1345 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8116088d-0ebe-44a0-bf70-25f2950dc162-cilium-config-path\") pod \"8116088d-0ebe-44a0-bf70-25f2950dc162\" (UID: 
\"8116088d-0ebe-44a0-bf70-25f2950dc162\") " Feb 9 19:30:14.093534 kubelet[1345]: I0209 19:30:14.091216 1345 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8116088d-0ebe-44a0-bf70-25f2950dc162-hostproc\") pod \"8116088d-0ebe-44a0-bf70-25f2950dc162\" (UID: \"8116088d-0ebe-44a0-bf70-25f2950dc162\") " Feb 9 19:30:14.093534 kubelet[1345]: I0209 19:30:14.091299 1345 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwvqw\" (UniqueName: \"kubernetes.io/projected/8116088d-0ebe-44a0-bf70-25f2950dc162-kube-api-access-fwvqw\") pod \"8116088d-0ebe-44a0-bf70-25f2950dc162\" (UID: \"8116088d-0ebe-44a0-bf70-25f2950dc162\") " Feb 9 19:30:14.093534 kubelet[1345]: I0209 19:30:14.091363 1345 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8116088d-0ebe-44a0-bf70-25f2950dc162-cni-path\") pod \"8116088d-0ebe-44a0-bf70-25f2950dc162\" (UID: \"8116088d-0ebe-44a0-bf70-25f2950dc162\") " Feb 9 19:30:14.093534 kubelet[1345]: I0209 19:30:14.091424 1345 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8116088d-0ebe-44a0-bf70-25f2950dc162-xtables-lock\") pod \"8116088d-0ebe-44a0-bf70-25f2950dc162\" (UID: \"8116088d-0ebe-44a0-bf70-25f2950dc162\") " Feb 9 19:30:14.093534 kubelet[1345]: I0209 19:30:14.091471 1345 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8116088d-0ebe-44a0-bf70-25f2950dc162-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8116088d-0ebe-44a0-bf70-25f2950dc162" (UID: "8116088d-0ebe-44a0-bf70-25f2950dc162"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:30:14.093534 kubelet[1345]: I0209 19:30:14.091508 1345 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8116088d-0ebe-44a0-bf70-25f2950dc162-cilium-run\") pod \"8116088d-0ebe-44a0-bf70-25f2950dc162\" (UID: \"8116088d-0ebe-44a0-bf70-25f2950dc162\") " Feb 9 19:30:14.093534 kubelet[1345]: I0209 19:30:14.091557 1345 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8116088d-0ebe-44a0-bf70-25f2950dc162-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8116088d-0ebe-44a0-bf70-25f2950dc162" (UID: "8116088d-0ebe-44a0-bf70-25f2950dc162"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:30:14.093534 kubelet[1345]: I0209 19:30:14.091571 1345 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8116088d-0ebe-44a0-bf70-25f2950dc162-bpf-maps\") pod \"8116088d-0ebe-44a0-bf70-25f2950dc162\" (UID: \"8116088d-0ebe-44a0-bf70-25f2950dc162\") " Feb 9 19:30:14.093534 kubelet[1345]: I0209 19:30:14.091610 1345 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8116088d-0ebe-44a0-bf70-25f2950dc162-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8116088d-0ebe-44a0-bf70-25f2950dc162" (UID: "8116088d-0ebe-44a0-bf70-25f2950dc162"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:30:14.093534 kubelet[1345]: I0209 19:30:14.091647 1345 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8116088d-0ebe-44a0-bf70-25f2950dc162-hubble-tls\") pod \"8116088d-0ebe-44a0-bf70-25f2950dc162\" (UID: \"8116088d-0ebe-44a0-bf70-25f2950dc162\") " Feb 9 19:30:14.093534 kubelet[1345]: I0209 19:30:14.091701 1345 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8116088d-0ebe-44a0-bf70-25f2950dc162-etc-cni-netd\") pod \"8116088d-0ebe-44a0-bf70-25f2950dc162\" (UID: \"8116088d-0ebe-44a0-bf70-25f2950dc162\") " Feb 9 19:30:14.093534 kubelet[1345]: I0209 19:30:14.091802 1345 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8116088d-0ebe-44a0-bf70-25f2950dc162-cilium-ipsec-secrets\") pod \"8116088d-0ebe-44a0-bf70-25f2950dc162\" (UID: \"8116088d-0ebe-44a0-bf70-25f2950dc162\") " Feb 9 19:30:14.095015 kubelet[1345]: I0209 19:30:14.091858 1345 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8116088d-0ebe-44a0-bf70-25f2950dc162-host-proc-sys-net\") pod \"8116088d-0ebe-44a0-bf70-25f2950dc162\" (UID: \"8116088d-0ebe-44a0-bf70-25f2950dc162\") " Feb 9 19:30:14.095015 kubelet[1345]: I0209 19:30:14.091906 1345 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8116088d-0ebe-44a0-bf70-25f2950dc162-lib-modules\") pod \"8116088d-0ebe-44a0-bf70-25f2950dc162\" (UID: \"8116088d-0ebe-44a0-bf70-25f2950dc162\") " Feb 9 19:30:14.095015 kubelet[1345]: I0209 19:30:14.091965 1345 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/8116088d-0ebe-44a0-bf70-25f2950dc162-bpf-maps\") on node \"172.24.4.205\" DevicePath \"\"" Feb 9 19:30:14.095015 kubelet[1345]: I0209 19:30:14.091995 1345 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8116088d-0ebe-44a0-bf70-25f2950dc162-cilium-cgroup\") on node \"172.24.4.205\" DevicePath \"\"" Feb 9 19:30:14.095015 kubelet[1345]: I0209 19:30:14.092024 1345 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8116088d-0ebe-44a0-bf70-25f2950dc162-host-proc-sys-kernel\") on node \"172.24.4.205\" DevicePath \"\"" Feb 9 19:30:14.095015 kubelet[1345]: I0209 19:30:14.092058 1345 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8116088d-0ebe-44a0-bf70-25f2950dc162-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8116088d-0ebe-44a0-bf70-25f2950dc162" (UID: "8116088d-0ebe-44a0-bf70-25f2950dc162"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:30:14.095815 kubelet[1345]: I0209 19:30:14.095616 1345 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8116088d-0ebe-44a0-bf70-25f2950dc162-hostproc" (OuterVolumeSpecName: "hostproc") pod "8116088d-0ebe-44a0-bf70-25f2950dc162" (UID: "8116088d-0ebe-44a0-bf70-25f2950dc162"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:30:14.097762 kubelet[1345]: I0209 19:30:14.096289 1345 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8116088d-0ebe-44a0-bf70-25f2950dc162-cni-path" (OuterVolumeSpecName: "cni-path") pod "8116088d-0ebe-44a0-bf70-25f2950dc162" (UID: "8116088d-0ebe-44a0-bf70-25f2950dc162"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:30:14.097762 kubelet[1345]: I0209 19:30:14.096368 1345 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8116088d-0ebe-44a0-bf70-25f2950dc162-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8116088d-0ebe-44a0-bf70-25f2950dc162" (UID: "8116088d-0ebe-44a0-bf70-25f2950dc162"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:30:14.097762 kubelet[1345]: I0209 19:30:14.096407 1345 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8116088d-0ebe-44a0-bf70-25f2950dc162-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8116088d-0ebe-44a0-bf70-25f2950dc162" (UID: "8116088d-0ebe-44a0-bf70-25f2950dc162"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:30:14.098493 kubelet[1345]: I0209 19:30:14.098446 1345 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8116088d-0ebe-44a0-bf70-25f2950dc162-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8116088d-0ebe-44a0-bf70-25f2950dc162" (UID: "8116088d-0ebe-44a0-bf70-25f2950dc162"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:30:14.098927 kubelet[1345]: I0209 19:30:14.098891 1345 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8116088d-0ebe-44a0-bf70-25f2950dc162-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8116088d-0ebe-44a0-bf70-25f2950dc162" (UID: "8116088d-0ebe-44a0-bf70-25f2950dc162"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:30:14.100829 kubelet[1345]: I0209 19:30:14.100653 1345 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8116088d-0ebe-44a0-bf70-25f2950dc162-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8116088d-0ebe-44a0-bf70-25f2950dc162" (UID: "8116088d-0ebe-44a0-bf70-25f2950dc162"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:30:14.101231 env[1057]: time="2024-02-09T19:30:14.101171475Z" level=info msg="StartContainer for \"61330314448623003dfb716e88cea811d5240cac0f583c061081690a8580e2ef\"" Feb 9 19:30:14.105848 kubelet[1345]: I0209 19:30:14.100921 1345 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8116088d-0ebe-44a0-bf70-25f2950dc162-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8116088d-0ebe-44a0-bf70-25f2950dc162" (UID: "8116088d-0ebe-44a0-bf70-25f2950dc162"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:30:14.106125 kubelet[1345]: I0209 19:30:14.106059 1345 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8116088d-0ebe-44a0-bf70-25f2950dc162-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8116088d-0ebe-44a0-bf70-25f2950dc162" (UID: "8116088d-0ebe-44a0-bf70-25f2950dc162"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:30:14.108058 kubelet[1345]: I0209 19:30:14.108014 1345 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8116088d-0ebe-44a0-bf70-25f2950dc162-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "8116088d-0ebe-44a0-bf70-25f2950dc162" (UID: "8116088d-0ebe-44a0-bf70-25f2950dc162"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:30:14.110770 kubelet[1345]: I0209 19:30:14.110670 1345 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8116088d-0ebe-44a0-bf70-25f2950dc162-kube-api-access-fwvqw" (OuterVolumeSpecName: "kube-api-access-fwvqw") pod "8116088d-0ebe-44a0-bf70-25f2950dc162" (UID: "8116088d-0ebe-44a0-bf70-25f2950dc162"). InnerVolumeSpecName "kube-api-access-fwvqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:30:14.132416 systemd[1]: Started cri-containerd-61330314448623003dfb716e88cea811d5240cac0f583c061081690a8580e2ef.scope. Feb 9 19:30:14.192635 kubelet[1345]: I0209 19:30:14.192480 1345 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8116088d-0ebe-44a0-bf70-25f2950dc162-hostproc\") on node \"172.24.4.205\" DevicePath \"\"" Feb 9 19:30:14.192635 kubelet[1345]: I0209 19:30:14.192624 1345 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-fwvqw\" (UniqueName: \"kubernetes.io/projected/8116088d-0ebe-44a0-bf70-25f2950dc162-kube-api-access-fwvqw\") on node \"172.24.4.205\" DevicePath \"\"" Feb 9 19:30:14.192635 kubelet[1345]: I0209 19:30:14.192661 1345 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8116088d-0ebe-44a0-bf70-25f2950dc162-cni-path\") on node \"172.24.4.205\" DevicePath \"\"" Feb 9 19:30:14.192635 kubelet[1345]: I0209 19:30:14.192692 1345 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8116088d-0ebe-44a0-bf70-25f2950dc162-cilium-config-path\") on node \"172.24.4.205\" DevicePath \"\"" Feb 9 19:30:14.193294 kubelet[1345]: I0209 19:30:14.192762 1345 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8116088d-0ebe-44a0-bf70-25f2950dc162-xtables-lock\") on node \"172.24.4.205\" DevicePath \"\"" 
Feb 9 19:30:14.193294 kubelet[1345]: I0209 19:30:14.192800 1345 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8116088d-0ebe-44a0-bf70-25f2950dc162-cilium-run\") on node \"172.24.4.205\" DevicePath \"\"" Feb 9 19:30:14.193294 kubelet[1345]: I0209 19:30:14.192830 1345 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8116088d-0ebe-44a0-bf70-25f2950dc162-hubble-tls\") on node \"172.24.4.205\" DevicePath \"\"" Feb 9 19:30:14.193294 kubelet[1345]: I0209 19:30:14.192859 1345 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8116088d-0ebe-44a0-bf70-25f2950dc162-etc-cni-netd\") on node \"172.24.4.205\" DevicePath \"\"" Feb 9 19:30:14.193294 kubelet[1345]: I0209 19:30:14.192888 1345 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8116088d-0ebe-44a0-bf70-25f2950dc162-host-proc-sys-net\") on node \"172.24.4.205\" DevicePath \"\"" Feb 9 19:30:14.193294 kubelet[1345]: I0209 19:30:14.192917 1345 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8116088d-0ebe-44a0-bf70-25f2950dc162-lib-modules\") on node \"172.24.4.205\" DevicePath \"\"" Feb 9 19:30:14.193294 kubelet[1345]: I0209 19:30:14.192946 1345 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8116088d-0ebe-44a0-bf70-25f2950dc162-cilium-ipsec-secrets\") on node \"172.24.4.205\" DevicePath \"\"" Feb 9 19:30:14.193294 kubelet[1345]: I0209 19:30:14.192983 1345 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8116088d-0ebe-44a0-bf70-25f2950dc162-clustermesh-secrets\") on node \"172.24.4.205\" DevicePath \"\"" Feb 9 19:30:14.197931 env[1057]: time="2024-02-09T19:30:14.197848751Z" level=info 
msg="StartContainer for \"61330314448623003dfb716e88cea811d5240cac0f583c061081690a8580e2ef\" returns successfully" Feb 9 19:30:14.442500 systemd[1]: var-lib-kubelet-pods-8116088d\x2d0ebe\x2d44a0\x2dbf70\x2d25f2950dc162-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfwvqw.mount: Deactivated successfully. Feb 9 19:30:14.442808 systemd[1]: var-lib-kubelet-pods-8116088d\x2d0ebe\x2d44a0\x2dbf70\x2d25f2950dc162-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 19:30:14.442974 systemd[1]: var-lib-kubelet-pods-8116088d\x2d0ebe\x2d44a0\x2dbf70\x2d25f2950dc162-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 19:30:14.443115 systemd[1]: var-lib-kubelet-pods-8116088d\x2d0ebe\x2d44a0\x2dbf70\x2d25f2950dc162-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 19:30:14.455443 kubelet[1345]: I0209 19:30:14.455392 1345 scope.go:117] "RemoveContainer" containerID="6062af9758856c8204edce9151897dc2d26124365c1d5edf3b6f9eb86608be00" Feb 9 19:30:14.464507 systemd[1]: Removed slice kubepods-burstable-pod8116088d_0ebe_44a0_bf70_25f2950dc162.slice. 
Feb 9 19:30:14.467921 env[1057]: time="2024-02-09T19:30:14.467434819Z" level=info msg="RemoveContainer for \"6062af9758856c8204edce9151897dc2d26124365c1d5edf3b6f9eb86608be00\"" Feb 9 19:30:14.473385 env[1057]: time="2024-02-09T19:30:14.473327454Z" level=info msg="RemoveContainer for \"6062af9758856c8204edce9151897dc2d26124365c1d5edf3b6f9eb86608be00\" returns successfully" Feb 9 19:30:14.572584 kubelet[1345]: I0209 19:30:14.572507 1345 topology_manager.go:215] "Topology Admit Handler" podUID="6a36fd17-f52f-46a5-987f-284b4b575a27" podNamespace="kube-system" podName="cilium-x8r2r" Feb 9 19:30:14.573252 kubelet[1345]: E0209 19:30:14.573223 1345 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8116088d-0ebe-44a0-bf70-25f2950dc162" containerName="mount-cgroup" Feb 9 19:30:14.573349 kubelet[1345]: I0209 19:30:14.573283 1345 memory_manager.go:346] "RemoveStaleState removing state" podUID="8116088d-0ebe-44a0-bf70-25f2950dc162" containerName="mount-cgroup" Feb 9 19:30:14.573349 kubelet[1345]: I0209 19:30:14.573302 1345 memory_manager.go:346] "RemoveStaleState removing state" podUID="8116088d-0ebe-44a0-bf70-25f2950dc162" containerName="mount-cgroup" Feb 9 19:30:14.573349 kubelet[1345]: E0209 19:30:14.573340 1345 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8116088d-0ebe-44a0-bf70-25f2950dc162" containerName="mount-cgroup" Feb 9 19:30:14.585073 systemd[1]: Created slice kubepods-burstable-pod6a36fd17_f52f_46a5_987f_284b4b575a27.slice. 
Feb 9 19:30:14.635381 kubelet[1345]: I0209 19:30:14.635330 1345 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-qghhw" podStartSLOduration=1.922193321 podCreationTimestamp="2024-02-09 19:30:09 +0000 UTC" firstStartedPulling="2024-02-09 19:30:10.300037112 +0000 UTC m=+82.343079980" lastFinishedPulling="2024-02-09 19:30:14.013095647 +0000 UTC m=+86.056138555" observedRunningTime="2024-02-09 19:30:14.592426639 +0000 UTC m=+86.635469547" watchObservedRunningTime="2024-02-09 19:30:14.635251896 +0000 UTC m=+86.678294804" Feb 9 19:30:14.697234 kubelet[1345]: I0209 19:30:14.697007 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6a36fd17-f52f-46a5-987f-284b4b575a27-bpf-maps\") pod \"cilium-x8r2r\" (UID: \"6a36fd17-f52f-46a5-987f-284b4b575a27\") " pod="kube-system/cilium-x8r2r" Feb 9 19:30:14.697234 kubelet[1345]: I0209 19:30:14.697175 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6a36fd17-f52f-46a5-987f-284b4b575a27-hostproc\") pod \"cilium-x8r2r\" (UID: \"6a36fd17-f52f-46a5-987f-284b4b575a27\") " pod="kube-system/cilium-x8r2r" Feb 9 19:30:14.697907 kubelet[1345]: I0209 19:30:14.697847 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6a36fd17-f52f-46a5-987f-284b4b575a27-cilium-cgroup\") pod \"cilium-x8r2r\" (UID: \"6a36fd17-f52f-46a5-987f-284b4b575a27\") " pod="kube-system/cilium-x8r2r" Feb 9 19:30:14.698458 kubelet[1345]: I0209 19:30:14.698322 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a36fd17-f52f-46a5-987f-284b4b575a27-xtables-lock\") pod \"cilium-x8r2r\" (UID: 
\"6a36fd17-f52f-46a5-987f-284b4b575a27\") " pod="kube-system/cilium-x8r2r" Feb 9 19:30:14.698889 kubelet[1345]: I0209 19:30:14.698794 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6a36fd17-f52f-46a5-987f-284b4b575a27-clustermesh-secrets\") pod \"cilium-x8r2r\" (UID: \"6a36fd17-f52f-46a5-987f-284b4b575a27\") " pod="kube-system/cilium-x8r2r" Feb 9 19:30:14.699354 kubelet[1345]: I0209 19:30:14.699303 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6a36fd17-f52f-46a5-987f-284b4b575a27-hubble-tls\") pod \"cilium-x8r2r\" (UID: \"6a36fd17-f52f-46a5-987f-284b4b575a27\") " pod="kube-system/cilium-x8r2r" Feb 9 19:30:14.699990 kubelet[1345]: I0209 19:30:14.699957 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfvjm\" (UniqueName: \"kubernetes.io/projected/6a36fd17-f52f-46a5-987f-284b4b575a27-kube-api-access-dfvjm\") pod \"cilium-x8r2r\" (UID: \"6a36fd17-f52f-46a5-987f-284b4b575a27\") " pod="kube-system/cilium-x8r2r" Feb 9 19:30:14.700364 kubelet[1345]: I0209 19:30:14.700313 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6a36fd17-f52f-46a5-987f-284b4b575a27-cni-path\") pod \"cilium-x8r2r\" (UID: \"6a36fd17-f52f-46a5-987f-284b4b575a27\") " pod="kube-system/cilium-x8r2r" Feb 9 19:30:14.700826 kubelet[1345]: I0209 19:30:14.700773 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a36fd17-f52f-46a5-987f-284b4b575a27-lib-modules\") pod \"cilium-x8r2r\" (UID: \"6a36fd17-f52f-46a5-987f-284b4b575a27\") " pod="kube-system/cilium-x8r2r" Feb 9 19:30:14.701143 kubelet[1345]: I0209 19:30:14.701116 
1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a36fd17-f52f-46a5-987f-284b4b575a27-cilium-config-path\") pod \"cilium-x8r2r\" (UID: \"6a36fd17-f52f-46a5-987f-284b4b575a27\") " pod="kube-system/cilium-x8r2r" Feb 9 19:30:14.701483 kubelet[1345]: I0209 19:30:14.701455 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6a36fd17-f52f-46a5-987f-284b4b575a27-cilium-ipsec-secrets\") pod \"cilium-x8r2r\" (UID: \"6a36fd17-f52f-46a5-987f-284b4b575a27\") " pod="kube-system/cilium-x8r2r" Feb 9 19:30:14.701938 kubelet[1345]: I0209 19:30:14.701908 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6a36fd17-f52f-46a5-987f-284b4b575a27-host-proc-sys-net\") pod \"cilium-x8r2r\" (UID: \"6a36fd17-f52f-46a5-987f-284b4b575a27\") " pod="kube-system/cilium-x8r2r" Feb 9 19:30:14.702220 kubelet[1345]: I0209 19:30:14.702196 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6a36fd17-f52f-46a5-987f-284b4b575a27-host-proc-sys-kernel\") pod \"cilium-x8r2r\" (UID: \"6a36fd17-f52f-46a5-987f-284b4b575a27\") " pod="kube-system/cilium-x8r2r" Feb 9 19:30:14.702589 kubelet[1345]: I0209 19:30:14.702541 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6a36fd17-f52f-46a5-987f-284b4b575a27-etc-cni-netd\") pod \"cilium-x8r2r\" (UID: \"6a36fd17-f52f-46a5-987f-284b4b575a27\") " pod="kube-system/cilium-x8r2r" Feb 9 19:30:14.702919 kubelet[1345]: I0209 19:30:14.702894 1345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6a36fd17-f52f-46a5-987f-284b4b575a27-cilium-run\") pod \"cilium-x8r2r\" (UID: \"6a36fd17-f52f-46a5-987f-284b4b575a27\") " pod="kube-system/cilium-x8r2r" Feb 9 19:30:14.842524 kubelet[1345]: E0209 19:30:14.842483 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:30:14.899779 env[1057]: time="2024-02-09T19:30:14.898649633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x8r2r,Uid:6a36fd17-f52f-46a5-987f-284b4b575a27,Namespace:kube-system,Attempt:0,}" Feb 9 19:30:14.904058 kubelet[1345]: I0209 19:30:14.904007 1345 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8116088d-0ebe-44a0-bf70-25f2950dc162" path="/var/lib/kubelet/pods/8116088d-0ebe-44a0-bf70-25f2950dc162/volumes" Feb 9 19:30:14.922733 env[1057]: time="2024-02-09T19:30:14.922549795Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:30:14.922733 env[1057]: time="2024-02-09T19:30:14.922588267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:30:14.922733 env[1057]: time="2024-02-09T19:30:14.922601302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:30:14.923643 env[1057]: time="2024-02-09T19:30:14.922839498Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c0094f733dc964d3792f4e4042a69399b1084485940e24e83dddedd640c6d1e9 pid=3156 runtime=io.containerd.runc.v2 Feb 9 19:30:14.948642 systemd[1]: Started cri-containerd-c0094f733dc964d3792f4e4042a69399b1084485940e24e83dddedd640c6d1e9.scope. 
Feb 9 19:30:14.985672 env[1057]: time="2024-02-09T19:30:14.985608516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x8r2r,Uid:6a36fd17-f52f-46a5-987f-284b4b575a27,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0094f733dc964d3792f4e4042a69399b1084485940e24e83dddedd640c6d1e9\"" Feb 9 19:30:14.988909 env[1057]: time="2024-02-09T19:30:14.988879082Z" level=info msg="CreateContainer within sandbox \"c0094f733dc964d3792f4e4042a69399b1084485940e24e83dddedd640c6d1e9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:30:15.013834 env[1057]: time="2024-02-09T19:30:15.013790661Z" level=info msg="CreateContainer within sandbox \"c0094f733dc964d3792f4e4042a69399b1084485940e24e83dddedd640c6d1e9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"361b99e1e83ab63cdcf1e10eff79ff16ab603a6edfd831162b353a9701605f16\"" Feb 9 19:30:15.014671 env[1057]: time="2024-02-09T19:30:15.014633662Z" level=info msg="StartContainer for \"361b99e1e83ab63cdcf1e10eff79ff16ab603a6edfd831162b353a9701605f16\"" Feb 9 19:30:15.038425 systemd[1]: Started cri-containerd-361b99e1e83ab63cdcf1e10eff79ff16ab603a6edfd831162b353a9701605f16.scope. Feb 9 19:30:15.100308 env[1057]: time="2024-02-09T19:30:15.100195973Z" level=info msg="StartContainer for \"361b99e1e83ab63cdcf1e10eff79ff16ab603a6edfd831162b353a9701605f16\" returns successfully" Feb 9 19:30:15.128172 systemd[1]: cri-containerd-361b99e1e83ab63cdcf1e10eff79ff16ab603a6edfd831162b353a9701605f16.scope: Deactivated successfully. 
Feb 9 19:30:15.185982 env[1057]: time="2024-02-09T19:30:15.185872958Z" level=info msg="shim disconnected" id=361b99e1e83ab63cdcf1e10eff79ff16ab603a6edfd831162b353a9701605f16 Feb 9 19:30:15.185982 env[1057]: time="2024-02-09T19:30:15.185987934Z" level=warning msg="cleaning up after shim disconnected" id=361b99e1e83ab63cdcf1e10eff79ff16ab603a6edfd831162b353a9701605f16 namespace=k8s.io Feb 9 19:30:15.186472 env[1057]: time="2024-02-09T19:30:15.186012099Z" level=info msg="cleaning up dead shim" Feb 9 19:30:15.202498 env[1057]: time="2024-02-09T19:30:15.202282667Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:30:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3240 runtime=io.containerd.runc.v2\n" Feb 9 19:30:15.493915 env[1057]: time="2024-02-09T19:30:15.493698825Z" level=info msg="CreateContainer within sandbox \"c0094f733dc964d3792f4e4042a69399b1084485940e24e83dddedd640c6d1e9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:30:15.526869 env[1057]: time="2024-02-09T19:30:15.526705272Z" level=info msg="CreateContainer within sandbox \"c0094f733dc964d3792f4e4042a69399b1084485940e24e83dddedd640c6d1e9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9ee41c5a71b6a5a763127ef8f2aa2943963f580f6e43a142161d9b35910aeafd\"" Feb 9 19:30:15.528624 env[1057]: time="2024-02-09T19:30:15.528574518Z" level=info msg="StartContainer for \"9ee41c5a71b6a5a763127ef8f2aa2943963f580f6e43a142161d9b35910aeafd\"" Feb 9 19:30:15.572797 systemd[1]: Started cri-containerd-9ee41c5a71b6a5a763127ef8f2aa2943963f580f6e43a142161d9b35910aeafd.scope. Feb 9 19:30:15.630459 env[1057]: time="2024-02-09T19:30:15.630419477Z" level=info msg="StartContainer for \"9ee41c5a71b6a5a763127ef8f2aa2943963f580f6e43a142161d9b35910aeafd\" returns successfully" Feb 9 19:30:15.647560 systemd[1]: cri-containerd-9ee41c5a71b6a5a763127ef8f2aa2943963f580f6e43a142161d9b35910aeafd.scope: Deactivated successfully. 
Feb 9 19:30:15.675357 env[1057]: time="2024-02-09T19:30:15.675299280Z" level=info msg="shim disconnected" id=9ee41c5a71b6a5a763127ef8f2aa2943963f580f6e43a142161d9b35910aeafd Feb 9 19:30:15.675357 env[1057]: time="2024-02-09T19:30:15.675350186Z" level=warning msg="cleaning up after shim disconnected" id=9ee41c5a71b6a5a763127ef8f2aa2943963f580f6e43a142161d9b35910aeafd namespace=k8s.io Feb 9 19:30:15.675357 env[1057]: time="2024-02-09T19:30:15.675361637Z" level=info msg="cleaning up dead shim" Feb 9 19:30:15.682999 env[1057]: time="2024-02-09T19:30:15.682954062Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:30:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3305 runtime=io.containerd.runc.v2\n" Feb 9 19:30:15.844100 kubelet[1345]: E0209 19:30:15.843868 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:30:16.436362 systemd[1]: run-containerd-runc-k8s.io-9ee41c5a71b6a5a763127ef8f2aa2943963f580f6e43a142161d9b35910aeafd-runc.eUkjiE.mount: Deactivated successfully. Feb 9 19:30:16.436617 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ee41c5a71b6a5a763127ef8f2aa2943963f580f6e43a142161d9b35910aeafd-rootfs.mount: Deactivated successfully. Feb 9 19:30:16.500146 env[1057]: time="2024-02-09T19:30:16.500021318Z" level=info msg="CreateContainer within sandbox \"c0094f733dc964d3792f4e4042a69399b1084485940e24e83dddedd640c6d1e9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 19:30:16.535984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3193412675.mount: Deactivated successfully. Feb 9 19:30:16.558532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2249745029.mount: Deactivated successfully. 
Feb 9 19:30:16.564438 env[1057]: time="2024-02-09T19:30:16.562016773Z" level=info msg="CreateContainer within sandbox \"c0094f733dc964d3792f4e4042a69399b1084485940e24e83dddedd640c6d1e9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"63a95aaab8aa2d96df91ce6b754535fe7ed623edb92fb244ec08888c7f4e970a\""
Feb 9 19:30:16.564438 env[1057]: time="2024-02-09T19:30:16.563891730Z" level=info msg="StartContainer for \"63a95aaab8aa2d96df91ce6b754535fe7ed623edb92fb244ec08888c7f4e970a\""
Feb 9 19:30:16.604192 systemd[1]: Started cri-containerd-63a95aaab8aa2d96df91ce6b754535fe7ed623edb92fb244ec08888c7f4e970a.scope.
Feb 9 19:30:16.648433 env[1057]: time="2024-02-09T19:30:16.648377269Z" level=info msg="StartContainer for \"63a95aaab8aa2d96df91ce6b754535fe7ed623edb92fb244ec08888c7f4e970a\" returns successfully"
Feb 9 19:30:16.664682 systemd[1]: cri-containerd-63a95aaab8aa2d96df91ce6b754535fe7ed623edb92fb244ec08888c7f4e970a.scope: Deactivated successfully.
Feb 9 19:30:16.668756 kubelet[1345]: W0209 19:30:16.668689 1345 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8116088d_0ebe_44a0_bf70_25f2950dc162.slice/cri-containerd-6062af9758856c8204edce9151897dc2d26124365c1d5edf3b6f9eb86608be00.scope WatchSource:0}: container "6062af9758856c8204edce9151897dc2d26124365c1d5edf3b6f9eb86608be00" in namespace "k8s.io": not found
Feb 9 19:30:16.703768 env[1057]: time="2024-02-09T19:30:16.702895674Z" level=info msg="shim disconnected" id=63a95aaab8aa2d96df91ce6b754535fe7ed623edb92fb244ec08888c7f4e970a
Feb 9 19:30:16.703958 env[1057]: time="2024-02-09T19:30:16.703803347Z" level=warning msg="cleaning up after shim disconnected" id=63a95aaab8aa2d96df91ce6b754535fe7ed623edb92fb244ec08888c7f4e970a namespace=k8s.io
Feb 9 19:30:16.703958 env[1057]: time="2024-02-09T19:30:16.703821351Z" level=info msg="cleaning up dead shim"
Feb 9 19:30:16.712209 env[1057]: time="2024-02-09T19:30:16.712155968Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:30:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3366 runtime=io.containerd.runc.v2\n"
Feb 9 19:30:16.845067 kubelet[1345]: E0209 19:30:16.844967 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:30:17.508042 env[1057]: time="2024-02-09T19:30:17.507972880Z" level=info msg="CreateContainer within sandbox \"c0094f733dc964d3792f4e4042a69399b1084485940e24e83dddedd640c6d1e9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 19:30:17.545322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1389980994.mount: Deactivated successfully.
Feb 9 19:30:17.559027 env[1057]: time="2024-02-09T19:30:17.558926537Z" level=info msg="CreateContainer within sandbox \"c0094f733dc964d3792f4e4042a69399b1084485940e24e83dddedd640c6d1e9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d8ca478a86d9533d8ed97b253e324d8ced87b6509ae87b81d95446c4d562b6bf\""
Feb 9 19:30:17.563809 env[1057]: time="2024-02-09T19:30:17.562896696Z" level=info msg="StartContainer for \"d8ca478a86d9533d8ed97b253e324d8ced87b6509ae87b81d95446c4d562b6bf\""
Feb 9 19:30:17.602515 systemd[1]: Started cri-containerd-d8ca478a86d9533d8ed97b253e324d8ced87b6509ae87b81d95446c4d562b6bf.scope.
Feb 9 19:30:17.634379 systemd[1]: cri-containerd-d8ca478a86d9533d8ed97b253e324d8ced87b6509ae87b81d95446c4d562b6bf.scope: Deactivated successfully.
Feb 9 19:30:17.636794 env[1057]: time="2024-02-09T19:30:17.636574335Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a36fd17_f52f_46a5_987f_284b4b575a27.slice/cri-containerd-d8ca478a86d9533d8ed97b253e324d8ced87b6509ae87b81d95446c4d562b6bf.scope/memory.events\": no such file or directory"
Feb 9 19:30:17.641932 env[1057]: time="2024-02-09T19:30:17.641834404Z" level=info msg="StartContainer for \"d8ca478a86d9533d8ed97b253e324d8ced87b6509ae87b81d95446c4d562b6bf\" returns successfully"
Feb 9 19:30:17.672471 env[1057]: time="2024-02-09T19:30:17.672416450Z" level=info msg="shim disconnected" id=d8ca478a86d9533d8ed97b253e324d8ced87b6509ae87b81d95446c4d562b6bf
Feb 9 19:30:17.672680 env[1057]: time="2024-02-09T19:30:17.672659726Z" level=warning msg="cleaning up after shim disconnected" id=d8ca478a86d9533d8ed97b253e324d8ced87b6509ae87b81d95446c4d562b6bf namespace=k8s.io
Feb 9 19:30:17.672782 env[1057]: time="2024-02-09T19:30:17.672765566Z" level=info msg="cleaning up dead shim"
Feb 9 19:30:17.681427 env[1057]: time="2024-02-09T19:30:17.681359328Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:30:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3424 runtime=io.containerd.runc.v2\n"
Feb 9 19:30:17.845874 kubelet[1345]: E0209 19:30:17.845430 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:30:18.437953 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8ca478a86d9533d8ed97b253e324d8ced87b6509ae87b81d95446c4d562b6bf-rootfs.mount: Deactivated successfully.
Feb 9 19:30:18.517689 env[1057]: time="2024-02-09T19:30:18.517587183Z" level=info msg="CreateContainer within sandbox \"c0094f733dc964d3792f4e4042a69399b1084485940e24e83dddedd640c6d1e9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 19:30:18.567830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount662184150.mount: Deactivated successfully.
Feb 9 19:30:18.584919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount26671874.mount: Deactivated successfully.
Feb 9 19:30:18.591361 env[1057]: time="2024-02-09T19:30:18.591268649Z" level=info msg="CreateContainer within sandbox \"c0094f733dc964d3792f4e4042a69399b1084485940e24e83dddedd640c6d1e9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2100e5da24d9be21de03a49ad4339210961caece40163ca73f198711bdf537db\""
Feb 9 19:30:18.592909 env[1057]: time="2024-02-09T19:30:18.592833945Z" level=info msg="StartContainer for \"2100e5da24d9be21de03a49ad4339210961caece40163ca73f198711bdf537db\""
Feb 9 19:30:18.625707 systemd[1]: Started cri-containerd-2100e5da24d9be21de03a49ad4339210961caece40163ca73f198711bdf537db.scope.
Feb 9 19:30:18.697265 env[1057]: time="2024-02-09T19:30:18.696923951Z" level=info msg="StartContainer for \"2100e5da24d9be21de03a49ad4339210961caece40163ca73f198711bdf537db\" returns successfully"
Feb 9 19:30:18.845689 kubelet[1345]: E0209 19:30:18.845579 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:30:19.713795 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 19:30:19.775776 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
Feb 9 19:30:19.783400 kubelet[1345]: W0209 19:30:19.783367 1345 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a36fd17_f52f_46a5_987f_284b4b575a27.slice/cri-containerd-361b99e1e83ab63cdcf1e10eff79ff16ab603a6edfd831162b353a9701605f16.scope WatchSource:0}: task 361b99e1e83ab63cdcf1e10eff79ff16ab603a6edfd831162b353a9701605f16 not found: not found
Feb 9 19:30:19.846616 kubelet[1345]: E0209 19:30:19.846542 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:30:20.847869 kubelet[1345]: E0209 19:30:20.847754 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:30:21.006006 systemd[1]: run-containerd-runc-k8s.io-2100e5da24d9be21de03a49ad4339210961caece40163ca73f198711bdf537db-runc.4lukhw.mount: Deactivated successfully.
Feb 9 19:30:21.848971 kubelet[1345]: E0209 19:30:21.848879 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:30:22.658009 systemd-networkd[973]: lxc_health: Link UP
Feb 9 19:30:22.664134 systemd-networkd[973]: lxc_health: Gained carrier
Feb 9 19:30:22.664787 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 19:30:22.849523 kubelet[1345]: E0209 19:30:22.849454 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:30:22.895776 kubelet[1345]: W0209 19:30:22.895605 1345 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a36fd17_f52f_46a5_987f_284b4b575a27.slice/cri-containerd-9ee41c5a71b6a5a763127ef8f2aa2943963f580f6e43a142161d9b35910aeafd.scope WatchSource:0}: task 9ee41c5a71b6a5a763127ef8f2aa2943963f580f6e43a142161d9b35910aeafd not found: not found
Feb 9 19:30:22.934792 kubelet[1345]: I0209 19:30:22.934680 1345 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-x8r2r" podStartSLOduration=8.934642356 podCreationTimestamp="2024-02-09 19:30:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:30:19.55384792 +0000 UTC m=+91.596890798" watchObservedRunningTime="2024-02-09 19:30:22.934642356 +0000 UTC m=+94.977685214"
Feb 9 19:30:23.404821 systemd[1]: run-containerd-runc-k8s.io-2100e5da24d9be21de03a49ad4339210961caece40163ca73f198711bdf537db-runc.1C72CT.mount: Deactivated successfully.
Feb 9 19:30:23.850451 kubelet[1345]: E0209 19:30:23.850338 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:30:23.896871 systemd-networkd[973]: lxc_health: Gained IPv6LL
Feb 9 19:30:24.851918 kubelet[1345]: E0209 19:30:24.851804 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:30:25.647008 systemd[1]: run-containerd-runc-k8s.io-2100e5da24d9be21de03a49ad4339210961caece40163ca73f198711bdf537db-runc.5Ui2dx.mount: Deactivated successfully.
Feb 9 19:30:25.853543 kubelet[1345]: E0209 19:30:25.853482 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:30:26.007117 kubelet[1345]: W0209 19:30:26.007025 1345 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a36fd17_f52f_46a5_987f_284b4b575a27.slice/cri-containerd-63a95aaab8aa2d96df91ce6b754535fe7ed623edb92fb244ec08888c7f4e970a.scope WatchSource:0}: task 63a95aaab8aa2d96df91ce6b754535fe7ed623edb92fb244ec08888c7f4e970a not found: not found
Feb 9 19:30:26.854151 kubelet[1345]: E0209 19:30:26.854115 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:30:27.845217 systemd[1]: run-containerd-runc-k8s.io-2100e5da24d9be21de03a49ad4339210961caece40163ca73f198711bdf537db-runc.Eg6x6u.mount: Deactivated successfully.
Feb 9 19:30:27.855493 kubelet[1345]: E0209 19:30:27.855378 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:30:28.747559 kubelet[1345]: E0209 19:30:28.747511 1345 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:30:28.856196 kubelet[1345]: E0209 19:30:28.856153 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:30:29.124351 kubelet[1345]: W0209 19:30:29.124235 1345 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a36fd17_f52f_46a5_987f_284b4b575a27.slice/cri-containerd-d8ca478a86d9533d8ed97b253e324d8ced87b6509ae87b81d95446c4d562b6bf.scope WatchSource:0}: task d8ca478a86d9533d8ed97b253e324d8ced87b6509ae87b81d95446c4d562b6bf not found: not found
Feb 9 19:30:29.857787 kubelet[1345]: E0209 19:30:29.857677 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:30:30.859043 kubelet[1345]: E0209 19:30:30.858911 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:30:31.859138 kubelet[1345]: E0209 19:30:31.859073 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:30:32.860401 kubelet[1345]: E0209 19:30:32.860323 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:30:33.861695 kubelet[1345]: E0209 19:30:33.861652 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:30:34.862920 kubelet[1345]: E0209 19:30:34.862856 1345 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"