Feb 12 20:30:24.048793 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024 Feb 12 20:30:24.048840 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 12 20:30:24.048869 kernel: BIOS-provided physical RAM map: Feb 12 20:30:24.048886 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Feb 12 20:30:24.048902 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Feb 12 20:30:24.048918 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Feb 12 20:30:24.048937 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Feb 12 20:30:24.048954 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Feb 12 20:30:24.048974 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Feb 12 20:30:24.048990 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Feb 12 20:30:24.049006 kernel: NX (Execute Disable) protection: active Feb 12 20:30:24.049022 kernel: SMBIOS 2.8 present. Feb 12 20:30:24.049038 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014 Feb 12 20:30:24.049055 kernel: Hypervisor detected: KVM Feb 12 20:30:24.049075 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 12 20:30:24.049096 kernel: kvm-clock: cpu 0, msr 61faa001, primary cpu clock Feb 12 20:30:24.049113 kernel: kvm-clock: using sched offset of 5924260906 cycles Feb 12 20:30:24.049133 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 12 20:30:24.049152 kernel: tsc: Detected 1996.249 MHz processor Feb 12 20:30:24.049170 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 12 20:30:24.049189 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 12 20:30:24.049207 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Feb 12 20:30:24.049225 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 12 20:30:24.049247 kernel: ACPI: Early table checksum verification disabled Feb 12 20:30:24.049265 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS ) Feb 12 20:30:24.049283 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 20:30:24.049302 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 20:30:24.049320 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 20:30:24.049338 kernel: ACPI: FACS 0x000000007FFE0000 000040 Feb 12 20:30:24.049356 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 20:30:24.049374 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 20:30:24.049392 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f] Feb 12 20:30:24.049413 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b] Feb 12 20:30:24.049464 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Feb 12 20:30:24.049482 kernel: ACPI: Reserving APIC table memory at [mem 
0x7ffe17a0-0x7ffe181f] Feb 12 20:30:24.049499 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847] Feb 12 20:30:24.049517 kernel: No NUMA configuration found Feb 12 20:30:24.049535 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff] Feb 12 20:30:24.049553 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff] Feb 12 20:30:24.049572 kernel: Zone ranges: Feb 12 20:30:24.049601 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 12 20:30:24.049620 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff] Feb 12 20:30:24.049639 kernel: Normal empty Feb 12 20:30:24.049658 kernel: Movable zone start for each node Feb 12 20:30:24.049676 kernel: Early memory node ranges Feb 12 20:30:24.049695 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Feb 12 20:30:24.049717 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Feb 12 20:30:24.049736 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff] Feb 12 20:30:24.049754 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 12 20:30:24.049773 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Feb 12 20:30:24.049791 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges Feb 12 20:30:24.049810 kernel: ACPI: PM-Timer IO Port: 0x608 Feb 12 20:30:24.049828 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 12 20:30:24.049847 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 12 20:30:24.049866 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 12 20:30:24.049888 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 12 20:30:24.049907 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 12 20:30:24.049926 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 12 20:30:24.049944 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 12 20:30:24.049963 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 12 20:30:24.049982 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 12 20:30:24.050001 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Feb 12 20:30:24.050019 kernel: Booting paravirtualized kernel on KVM Feb 12 20:30:24.050038 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 12 20:30:24.050057 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Feb 12 20:30:24.050080 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576 Feb 12 20:30:24.050098 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152 Feb 12 20:30:24.050117 kernel: pcpu-alloc: [0] 0 1 Feb 12 20:30:24.050135 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0 Feb 12 20:30:24.050153 kernel: kvm-guest: PV spinlocks disabled, no host support Feb 12 20:30:24.050172 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 515805 Feb 12 20:30:24.050191 kernel: Policy zone: DMA32 Feb 12 20:30:24.050212 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 12 20:30:24.050236 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 12 20:30:24.050256 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 12 20:30:24.050275 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 12 20:30:24.050294 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 12 20:30:24.050313 kernel: Memory: 1975340K/2096620K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 121020K reserved, 0K cma-reserved) Feb 12 20:30:24.050332 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 12 20:30:24.050351 kernel: ftrace: allocating 34475 entries in 135 pages Feb 12 20:30:24.050370 kernel: ftrace: allocated 135 pages with 4 groups Feb 12 20:30:24.050392 kernel: rcu: Hierarchical RCU implementation. Feb 12 20:30:24.050412 kernel: rcu: RCU event tracing is enabled. Feb 12 20:30:24.054511 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 12 20:30:24.054537 kernel: Rude variant of Tasks RCU enabled. Feb 12 20:30:24.054557 kernel: Tracing variant of Tasks RCU enabled. Feb 12 20:30:24.054577 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 12 20:30:24.054597 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 12 20:30:24.054616 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Feb 12 20:30:24.054635 kernel: Console: colour VGA+ 80x25 Feb 12 20:30:24.054654 kernel: printk: console [tty0] enabled Feb 12 20:30:24.054681 kernel: printk: console [ttyS0] enabled Feb 12 20:30:24.054700 kernel: ACPI: Core revision 20210730 Feb 12 20:30:24.054719 kernel: APIC: Switch to symmetric I/O mode setup Feb 12 20:30:24.054760 kernel: x2apic enabled Feb 12 20:30:24.054779 kernel: Switched APIC routing to physical x2apic. Feb 12 20:30:24.054798 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 12 20:30:24.054817 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Feb 12 20:30:24.054837 kernel: Calibrating delay loop (skipped) preset value.. 
3992.49 BogoMIPS (lpj=1996249) Feb 12 20:30:24.054856 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Feb 12 20:30:24.054879 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Feb 12 20:30:24.054899 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 12 20:30:24.054918 kernel: Spectre V2 : Mitigation: Retpolines Feb 12 20:30:24.054937 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 12 20:30:24.054956 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 12 20:30:24.054975 kernel: Speculative Store Bypass: Vulnerable Feb 12 20:30:24.054994 kernel: x86/fpu: x87 FPU will use FXSAVE Feb 12 20:30:24.055013 kernel: Freeing SMP alternatives memory: 32K Feb 12 20:30:24.055032 kernel: pid_max: default: 32768 minimum: 301 Feb 12 20:30:24.055054 kernel: LSM: Security Framework initializing Feb 12 20:30:24.055072 kernel: SELinux: Initializing. Feb 12 20:30:24.055091 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 12 20:30:24.055111 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 12 20:30:24.055130 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Feb 12 20:30:24.055149 kernel: Performance Events: AMD PMU driver. Feb 12 20:30:24.055168 kernel: ... version: 0 Feb 12 20:30:24.055187 kernel: ... bit width: 48 Feb 12 20:30:24.055206 kernel: ... generic registers: 4 Feb 12 20:30:24.055238 kernel: ... value mask: 0000ffffffffffff Feb 12 20:30:24.055258 kernel: ... max period: 00007fffffffffff Feb 12 20:30:24.055277 kernel: ... fixed-purpose events: 0 Feb 12 20:30:24.055300 kernel: ... event mask: 000000000000000f Feb 12 20:30:24.055320 kernel: signal: max sigframe size: 1440 Feb 12 20:30:24.055339 kernel: rcu: Hierarchical SRCU implementation. Feb 12 20:30:24.055359 kernel: smp: Bringing up secondary CPUs ... Feb 12 20:30:24.055379 kernel: x86: Booting SMP configuration: Feb 12 20:30:24.055402 kernel: .... 
node #0, CPUs: #1 Feb 12 20:30:24.055449 kernel: kvm-clock: cpu 1, msr 61faa041, secondary cpu clock Feb 12 20:30:24.055470 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0 Feb 12 20:30:24.055489 kernel: smp: Brought up 1 node, 2 CPUs Feb 12 20:30:24.055509 kernel: smpboot: Max logical packages: 2 Feb 12 20:30:24.055529 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Feb 12 20:30:24.055549 kernel: devtmpfs: initialized Feb 12 20:30:24.055568 kernel: x86/mm: Memory block size: 128MB Feb 12 20:30:24.055588 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 12 20:30:24.055613 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 12 20:30:24.055633 kernel: pinctrl core: initialized pinctrl subsystem Feb 12 20:30:24.055652 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 12 20:30:24.055672 kernel: audit: initializing netlink subsys (disabled) Feb 12 20:30:24.055692 kernel: audit: type=2000 audit(1707769823.449:1): state=initialized audit_enabled=0 res=1 Feb 12 20:30:24.055712 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 12 20:30:24.055732 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 12 20:30:24.055752 kernel: cpuidle: using governor menu Feb 12 20:30:24.055771 kernel: ACPI: bus type PCI registered Feb 12 20:30:24.055794 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 12 20:30:24.055814 kernel: dca service started, version 1.12.1 Feb 12 20:30:24.055834 kernel: PCI: Using configuration type 1 for base access Feb 12 20:30:24.055854 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Feb 12 20:30:24.055874 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 12 20:30:24.055894 kernel: ACPI: Added _OSI(Module Device) Feb 12 20:30:24.055913 kernel: ACPI: Added _OSI(Processor Device) Feb 12 20:30:24.055933 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 12 20:30:24.055953 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 12 20:30:24.055976 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 12 20:30:24.055995 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 12 20:30:24.056016 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 12 20:30:24.056036 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 12 20:30:24.056055 kernel: ACPI: Interpreter enabled Feb 12 20:30:24.056075 kernel: ACPI: PM: (supports S0 S3 S5) Feb 12 20:30:24.056094 kernel: ACPI: Using IOAPIC for interrupt routing Feb 12 20:30:24.056115 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 12 20:30:24.056134 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Feb 12 20:30:24.056157 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 12 20:30:24.058582 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Feb 12 20:30:24.058843 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Feb 12 20:30:24.058876 kernel: acpiphp: Slot [3] registered Feb 12 20:30:24.058897 kernel: acpiphp: Slot [4] registered Feb 12 20:30:24.058917 kernel: acpiphp: Slot [5] registered Feb 12 20:30:24.058937 kernel: acpiphp: Slot [6] registered Feb 12 20:30:24.058956 kernel: acpiphp: Slot [7] registered Feb 12 20:30:24.058985 kernel: acpiphp: Slot [8] registered Feb 12 20:30:24.059004 kernel: acpiphp: Slot [9] registered Feb 12 20:30:24.059024 kernel: acpiphp: Slot [10] registered Feb 12 20:30:24.059043 kernel: acpiphp: Slot [11] registered Feb 12 20:30:24.059063 kernel: acpiphp: Slot [12] registered Feb 12 20:30:24.059083 kernel: acpiphp: Slot [13] registered Feb 12 20:30:24.059102 kernel: acpiphp: Slot [14] registered Feb 12 20:30:24.059122 kernel: acpiphp: Slot [15] registered Feb 12 20:30:24.059142 kernel: acpiphp: Slot [16] registered Feb 12 20:30:24.059165 kernel: acpiphp: Slot [17] registered Feb 12 20:30:24.059184 kernel: acpiphp: Slot [18] registered Feb 12 20:30:24.059204 kernel: acpiphp: Slot [19] registered Feb 12 20:30:24.059223 kernel: acpiphp: Slot [20] registered Feb 12 20:30:24.059243 kernel: acpiphp: Slot [21] registered Feb 12 20:30:24.059262 kernel: acpiphp: Slot [22] registered Feb 12 20:30:24.059281 kernel: acpiphp: Slot [23] registered Feb 12 20:30:24.059301 kernel: acpiphp: Slot [24] registered Feb 12 20:30:24.059320 kernel: acpiphp: Slot [25] registered Feb 12 20:30:24.059344 kernel: acpiphp: Slot [26] registered Feb 12 20:30:24.059363 kernel: acpiphp: Slot [27] registered Feb 12 20:30:24.059383 kernel: acpiphp: Slot [28] registered Feb 12 20:30:24.059402 kernel: acpiphp: Slot [29] registered Feb 12 20:30:24.059457 kernel: acpiphp: Slot [30] registered Feb 12 20:30:24.059478 kernel: acpiphp: Slot [31] registered Feb 12 20:30:24.059498 kernel: PCI host bridge to bus 0000:00 Feb 12 20:30:24.059722 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 12 20:30:24.059909 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 12 20:30:24.060121 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 12 20:30:24.060283 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Feb 12 20:30:24.060417 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Feb 12 20:30:24.060584 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 12 20:30:24.060773 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Feb 12 20:30:24.060942 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Feb 12 20:30:24.061147 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Feb 12 20:30:24.061304 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Feb 12 20:30:24.061491 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Feb 12 20:30:24.061649 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Feb 12 20:30:24.061802 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Feb 12 20:30:24.061955 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Feb 12 20:30:24.062119 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Feb 12 20:30:24.062282 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Feb 12 20:30:24.068457 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Feb 12 20:30:24.068588 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Feb 12 20:30:24.068676 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Feb 12 
20:30:24.068762 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Feb 12 20:30:24.068845 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Feb 12 20:30:24.068932 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Feb 12 20:30:24.069014 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 12 20:30:24.069115 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Feb 12 20:30:24.069199 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Feb 12 20:30:24.069280 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Feb 12 20:30:24.069361 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Feb 12 20:30:24.069462 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Feb 12 20:30:24.069551 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Feb 12 20:30:24.069638 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Feb 12 20:30:24.069719 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Feb 12 20:30:24.069800 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Feb 12 20:30:24.069886 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Feb 12 20:30:24.069976 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Feb 12 20:30:24.070059 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Feb 12 20:30:24.070161 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Feb 12 20:30:24.070244 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Feb 12 20:30:24.070325 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Feb 12 20:30:24.070337 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 12 20:30:24.070345 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 12 20:30:24.070353 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 12 20:30:24.070362 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 12 20:30:24.070370 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Feb 12 20:30:24.070380 kernel: iommu: Default domain type: Translated Feb 12 20:30:24.070388 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 12 20:30:24.070499 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Feb 12 20:30:24.070585 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 12 20:30:24.070668 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Feb 12 20:30:24.070680 kernel: vgaarb: loaded Feb 12 20:30:24.070688 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 12 20:30:24.070696 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 12 20:30:24.070704 kernel: PTP clock support registered Feb 12 20:30:24.070715 kernel: PCI: Using ACPI for IRQ routing Feb 12 20:30:24.070735 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 12 20:30:24.070743 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Feb 12 20:30:24.070751 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Feb 12 20:30:24.070759 kernel: clocksource: Switched to clocksource kvm-clock Feb 12 20:30:24.070767 kernel: VFS: Disk quotas dquot_6.6.0 Feb 12 20:30:24.070774 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 12 20:30:24.070782 kernel: pnp: PnP ACPI init Feb 12 20:30:24.070868 kernel: pnp 00:03: [dma 2] Feb 12 20:30:24.070884 kernel: pnp: PnP ACPI: found 5 devices Feb 12 20:30:24.070892 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 12 20:30:24.070900 kernel: NET: Registered PF_INET protocol family Feb 12 20:30:24.070908 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 12 20:30:24.070916 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Feb 12 20:30:24.070924 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 12 20:30:24.070932 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 12 20:30:24.070940 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 12 20:30:24.070950 kernel: TCP: Hash tables configured (established 16384 bind 16384) Feb 12 20:30:24.070958 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 12 20:30:24.070966 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 12 20:30:24.070974 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 12 20:30:24.070982 kernel: NET: Registered PF_XDP protocol family Feb 12 20:30:24.071054 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 12 20:30:24.071129 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 12 20:30:24.071199 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 12 20:30:24.071270 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Feb 12 20:30:24.071355 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Feb 12 20:30:24.071487 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Feb 12 20:30:24.071573 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 12 20:30:24.071655 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Feb 12 20:30:24.071666 kernel: PCI: CLS 0 bytes, default 64 Feb 12 20:30:24.071675 kernel: Initialise system trusted keyrings Feb 12 20:30:24.071683 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Feb 12 20:30:24.071691 kernel: Key type asymmetric registered Feb 12 20:30:24.071702 kernel: Asymmetric key parser 'x509' registered Feb 12 20:30:24.071710 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 12 20:30:24.071718 kernel: io scheduler mq-deadline registered Feb 12 20:30:24.071726 kernel: io scheduler kyber registered Feb 12 20:30:24.071733 kernel: io scheduler bfq registered Feb 12 20:30:24.071741 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 12 20:30:24.071750 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Feb 12 20:30:24.071758 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Feb 12 20:30:24.071766 kernel: ACPI: \_SB_.LNKD: Enabled at 
IRQ 11 Feb 12 20:30:24.071775 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Feb 12 20:30:24.071783 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 12 20:30:24.071792 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 12 20:30:24.071799 kernel: random: crng init done Feb 12 20:30:24.071807 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 12 20:30:24.071815 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 12 20:30:24.071823 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 12 20:30:24.071929 kernel: rtc_cmos 00:04: RTC can wake from S4 Feb 12 20:30:24.071945 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 12 20:30:24.072020 kernel: rtc_cmos 00:04: registered as rtc0 Feb 12 20:30:24.072094 kernel: rtc_cmos 00:04: setting system clock to 2024-02-12T20:30:23 UTC (1707769823) Feb 12 20:30:24.072166 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Feb 12 20:30:24.072177 kernel: NET: Registered PF_INET6 protocol family Feb 12 20:30:24.072186 kernel: Segment Routing with IPv6 Feb 12 20:30:24.072194 kernel: In-situ OAM (IOAM) with IPv6 Feb 12 20:30:24.072202 kernel: NET: Registered PF_PACKET protocol family Feb 12 20:30:24.072210 kernel: Key type dns_resolver registered Feb 12 20:30:24.072220 kernel: IPI shorthand broadcast: enabled Feb 12 20:30:24.072228 kernel: sched_clock: Marking stable (690383420, 116912732)->(832179793, -24883641) Feb 12 20:30:24.072236 kernel: registered taskstats version 1 Feb 12 20:30:24.072244 kernel: Loading compiled-in X.509 certificates Feb 12 20:30:24.072253 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8' Feb 12 20:30:24.072261 kernel: Key type .fscrypt registered Feb 12 20:30:24.072269 kernel: Key type fscrypt-provisioning registered Feb 12 20:30:24.072277 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 12 20:30:24.072286 kernel: ima: Allocated hash algorithm: sha1 Feb 12 20:30:24.072294 kernel: ima: No architecture policies found Feb 12 20:30:24.072302 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 12 20:30:24.072310 kernel: Write protecting the kernel read-only data: 28672k Feb 12 20:30:24.072318 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 12 20:30:24.072326 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 12 20:30:24.072334 kernel: Run /init as init process Feb 12 20:30:24.072342 kernel: with arguments: Feb 12 20:30:24.072350 kernel: /init Feb 12 20:30:24.072357 kernel: with environment: Feb 12 20:30:24.072366 kernel: HOME=/ Feb 12 20:30:24.072374 kernel: TERM=linux Feb 12 20:30:24.072382 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 12 20:30:24.072392 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 20:30:24.072403 systemd[1]: Detected virtualization kvm. Feb 12 20:30:24.072412 systemd[1]: Detected architecture x86-64. Feb 12 20:30:24.073521 systemd[1]: Running in initrd. Feb 12 20:30:24.073548 systemd[1]: No hostname configured, using default hostname. Feb 12 20:30:24.073556 systemd[1]: Hostname set to . 
Feb 12 20:30:24.073566 systemd[1]: Initializing machine ID from VM UUID. Feb 12 20:30:24.073574 systemd[1]: Queued start job for default target initrd.target. Feb 12 20:30:24.073583 systemd[1]: Started systemd-ask-password-console.path. Feb 12 20:30:24.073591 systemd[1]: Reached target cryptsetup.target. Feb 12 20:30:24.073600 systemd[1]: Reached target paths.target. Feb 12 20:30:24.073608 systemd[1]: Reached target slices.target. Feb 12 20:30:24.073618 systemd[1]: Reached target swap.target. Feb 12 20:30:24.073626 systemd[1]: Reached target timers.target. Feb 12 20:30:24.073635 systemd[1]: Listening on iscsid.socket. Feb 12 20:30:24.073644 systemd[1]: Listening on iscsiuio.socket. Feb 12 20:30:24.073652 systemd[1]: Listening on systemd-journald-audit.socket. Feb 12 20:30:24.073661 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 12 20:30:24.073669 systemd[1]: Listening on systemd-journald.socket. Feb 12 20:30:24.073678 systemd[1]: Listening on systemd-networkd.socket. Feb 12 20:30:24.073688 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 20:30:24.073697 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 20:30:24.073705 systemd[1]: Reached target sockets.target. Feb 12 20:30:24.073714 systemd[1]: Starting kmod-static-nodes.service... Feb 12 20:30:24.073729 systemd[1]: Finished network-cleanup.service. Feb 12 20:30:24.073739 systemd[1]: Starting systemd-fsck-usr.service... Feb 12 20:30:24.073749 systemd[1]: Starting systemd-journald.service... Feb 12 20:30:24.073758 systemd[1]: Starting systemd-modules-load.service... Feb 12 20:30:24.073767 systemd[1]: Starting systemd-resolved.service... Feb 12 20:30:24.073776 systemd[1]: Starting systemd-vconsole-setup.service... Feb 12 20:30:24.073784 systemd[1]: Finished kmod-static-nodes.service. Feb 12 20:30:24.073793 systemd[1]: Finished systemd-fsck-usr.service. Feb 12 20:30:24.073805 systemd-journald[184]: Journal started Feb 12 20:30:24.073875 systemd-journald[184]: Runtime Journal (/run/log/journal/ea0097807cab411b9f17d6cf7311bd54) is 4.9M, max 39.5M, 34.5M free. Feb 12 20:30:24.032726 systemd-modules-load[185]: Inserted module 'overlay' Feb 12 20:30:24.093833 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 12 20:30:24.093854 kernel: Bridge firewalling registered Feb 12 20:30:24.083833 systemd-resolved[186]: Positive Trust Anchors: Feb 12 20:30:24.098968 systemd[1]: Started systemd-journald.service. Feb 12 20:30:24.098986 kernel: audit: type=1130 audit(1707769824.093:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:24.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:24.083843 systemd-resolved[186]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 20:30:24.104746 kernel: audit: type=1130 audit(1707769824.098:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:24.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:30:24.083878 systemd-resolved[186]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 20:30:24.110341 kernel: audit: type=1130 audit(1707769824.104:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:24.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:24.086470 systemd-resolved[186]: Defaulting to hostname 'linux'. Feb 12 20:30:24.114459 kernel: audit: type=1130 audit(1707769824.110:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:24.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:24.087570 systemd-modules-load[185]: Inserted module 'br_netfilter' Feb 12 20:30:24.099505 systemd[1]: Started systemd-resolved.service. Feb 12 20:30:24.105449 systemd[1]: Finished systemd-vconsole-setup.service. Feb 12 20:30:24.124920 kernel: SCSI subsystem initialized Feb 12 20:30:24.110909 systemd[1]: Reached target nss-lookup.target. Feb 12 20:30:24.115524 systemd[1]: Starting dracut-cmdline-ask.service... Feb 12 20:30:24.135454 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 12 20:30:24.135473 kernel: device-mapper: uevent: version 1.0.3 Feb 12 20:30:24.135484 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 12 20:30:24.116604 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 20:30:24.142799 kernel: audit: type=1130 audit(1707769824.136:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:24.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:24.127617 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 20:30:24.136935 systemd[1]: Finished dracut-cmdline-ask.service. Feb 12 20:30:24.138055 systemd[1]: Starting dracut-cmdline.service... Feb 12 20:30:24.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:30:24.148543 systemd-modules-load[185]: Inserted module 'dm_multipath' Feb 12 20:30:24.149970 kernel: audit: type=1130 audit(1707769824.136:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:24.149229 systemd[1]: Finished systemd-modules-load.service. Feb 12 20:30:24.152463 systemd[1]: Starting systemd-sysctl.service... Feb 12 20:30:24.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:24.159460 kernel: audit: type=1130 audit(1707769824.151:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:24.159928 dracut-cmdline[200]: dracut-dracut-053 Feb 12 20:30:24.161139 systemd[1]: Finished systemd-sysctl.service. Feb 12 20:30:24.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:24.165459 kernel: audit: type=1130 audit(1707769824.161:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:24.166022 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 12 20:30:24.222466 kernel: Loading iSCSI transport class v2.0-870. Feb 12 20:30:24.236456 kernel: iscsi: registered transport (tcp) Feb 12 20:30:24.260214 kernel: iscsi: registered transport (qla4xxx) Feb 12 20:30:24.260297 kernel: QLogic iSCSI HBA Driver Feb 12 20:30:24.315015 systemd[1]: Finished dracut-cmdline.service. Feb 12 20:30:24.326311 kernel: audit: type=1130 audit(1707769824.315:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:24.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:24.318300 systemd[1]: Starting dracut-pre-udev.service... Feb 12 20:30:24.420557 kernel: raid6: sse2x4 gen() 7574 MB/s Feb 12 20:30:24.437505 kernel: raid6: sse2x4 xor() 7083 MB/s Feb 12 20:30:24.454503 kernel: raid6: sse2x2 gen() 14677 MB/s Feb 12 20:30:24.471506 kernel: raid6: sse2x2 xor() 8742 MB/s Feb 12 20:30:24.488504 kernel: raid6: sse2x1 gen() 10741 MB/s Feb 12 20:30:24.506815 kernel: raid6: sse2x1 xor() 6604 MB/s Feb 12 20:30:24.506885 kernel: raid6: using algorithm sse2x2 gen() 14677 MB/s Feb 12 20:30:24.506913 kernel: raid6: .... 
xor() 8742 MB/s, rmw enabled Feb 12 20:30:24.507601 kernel: raid6: using ssse3x2 recovery algorithm Feb 12 20:30:24.523257 kernel: xor: measuring software checksum speed Feb 12 20:30:24.523316 kernel: prefetch64-sse : 18464 MB/sec Feb 12 20:30:24.525798 kernel: generic_sse : 16819 MB/sec Feb 12 20:30:24.525857 kernel: xor: using function: prefetch64-sse (18464 MB/sec) Feb 12 20:30:24.638944 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 12 20:30:24.652925 systemd[1]: Finished dracut-pre-udev.service. Feb 12 20:30:24.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:24.655000 audit: BPF prog-id=7 op=LOAD Feb 12 20:30:24.655000 audit: BPF prog-id=8 op=LOAD Feb 12 20:30:24.657277 systemd[1]: Starting systemd-udevd.service... Feb 12 20:30:24.693501 systemd-udevd[384]: Using default interface naming scheme 'v252'. Feb 12 20:30:24.699233 systemd[1]: Started systemd-udevd.service. Feb 12 20:30:24.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:24.700658 systemd[1]: Starting dracut-pre-trigger.service... Feb 12 20:30:24.724789 dracut-pre-trigger[385]: rd.md=0: removing MD RAID activation Feb 12 20:30:24.763658 systemd[1]: Finished dracut-pre-trigger.service. Feb 12 20:30:24.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:24.764953 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 20:30:24.821645 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 20:30:24.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:24.874457 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB) Feb 12 20:30:24.886435 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 12 20:30:24.886479 kernel: GPT:17805311 != 41943039 Feb 12 20:30:24.886492 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 12 20:30:24.886503 kernel: GPT:17805311 != 41943039 Feb 12 20:30:24.886519 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 12 20:30:24.886530 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 20:30:24.914459 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (441) Feb 12 20:30:24.922732 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 12 20:30:24.960019 kernel: libata version 3.00 loaded. Feb 12 20:30:24.960040 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 12 20:30:24.960184 kernel: scsi host0: ata_piix Feb 12 20:30:24.960299 kernel: scsi host1: ata_piix Feb 12 20:30:24.960408 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Feb 12 20:30:24.960435 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Feb 12 20:30:24.963319 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 12 20:30:24.963894 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. 
Feb 12 20:30:24.968564 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 12 20:30:24.972556 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 20:30:24.973772 systemd[1]: Starting disk-uuid.service... Feb 12 20:30:24.986311 disk-uuid[460]: Primary Header is updated. Feb 12 20:30:24.986311 disk-uuid[460]: Secondary Entries is updated. Feb 12 20:30:24.986311 disk-uuid[460]: Secondary Header is updated. Feb 12 20:30:24.991448 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 20:30:24.996436 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 20:30:26.009306 disk-uuid[461]: The operation has completed successfully. Feb 12 20:30:26.011020 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 20:30:26.071403 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 12 20:30:26.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:26.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:26.071648 systemd[1]: Finished disk-uuid.service. Feb 12 20:30:26.092137 systemd[1]: Starting verity-setup.service... Feb 12 20:30:26.109141 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Feb 12 20:30:26.199095 systemd[1]: Found device dev-mapper-usr.device. Feb 12 20:30:26.201842 systemd[1]: Mounting sysusr-usr.mount... Feb 12 20:30:26.203414 systemd[1]: Finished verity-setup.service. Feb 12 20:30:26.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:26.339188 systemd[1]: Mounted sysusr-usr.mount. Feb 12 20:30:26.340645 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 12 20:30:26.339757 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 12 20:30:26.340384 systemd[1]: Starting ignition-setup.service... Feb 12 20:30:26.343687 systemd[1]: Starting parse-ip-for-networkd.service... Feb 12 20:30:26.368271 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 20:30:26.368342 kernel: BTRFS info (device vda6): using free space tree Feb 12 20:30:26.368367 kernel: BTRFS info (device vda6): has skinny extents Feb 12 20:30:26.386245 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 12 20:30:26.400056 systemd[1]: Finished ignition-setup.service. Feb 12 20:30:26.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:26.401340 systemd[1]: Starting ignition-fetch-offline.service... Feb 12 20:30:26.428779 systemd[1]: Finished parse-ip-for-networkd.service. Feb 12 20:30:26.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:26.429000 audit: BPF prog-id=9 op=LOAD Feb 12 20:30:26.430843 systemd[1]: Starting systemd-networkd.service... 
Feb 12 20:30:26.463316 systemd-networkd[632]: lo: Link UP Feb 12 20:30:26.463333 systemd-networkd[632]: lo: Gained carrier Feb 12 20:30:26.463851 systemd-networkd[632]: Enumeration completed Feb 12 20:30:26.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:26.464087 systemd-networkd[632]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 20:30:26.465138 systemd[1]: Started systemd-networkd.service. Feb 12 20:30:26.466195 systemd-networkd[632]: eth0: Link UP Feb 12 20:30:26.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:26.466199 systemd-networkd[632]: eth0: Gained carrier Feb 12 20:30:26.467101 systemd[1]: Reached target network.target. Feb 12 20:30:26.471123 systemd[1]: Starting iscsiuio.service... Feb 12 20:30:26.477192 systemd[1]: Started iscsiuio.service. Feb 12 20:30:26.479796 systemd[1]: Starting iscsid.service... Feb 12 20:30:26.483799 iscsid[637]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 12 20:30:26.483799 iscsid[637]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 12 20:30:26.483799 iscsid[637]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 12 20:30:26.483799 iscsid[637]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 12 20:30:26.483799 iscsid[637]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 12 20:30:26.483799 iscsid[637]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 12 20:30:26.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:26.484409 systemd[1]: Started iscsid.service. Feb 12 20:30:26.485993 systemd[1]: Starting dracut-initqueue.service... Feb 12 20:30:26.487895 systemd-networkd[632]: eth0: DHCPv4 address 172.24.4.19/24, gateway 172.24.4.1 acquired from 172.24.4.1 Feb 12 20:30:26.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:26.499864 systemd[1]: Finished dracut-initqueue.service. Feb 12 20:30:26.500579 systemd[1]: Reached target remote-fs-pre.target. Feb 12 20:30:26.501005 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 20:30:26.501454 systemd[1]: Reached target remote-fs.target. Feb 12 20:30:26.502699 systemd[1]: Starting dracut-pre-mount.service... Feb 12 20:30:26.512157 systemd[1]: Finished dracut-pre-mount.service. Feb 12 20:30:26.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:30:26.751290 ignition[616]: Ignition 2.14.0 Feb 12 20:30:26.752572 ignition[616]: Stage: fetch-offline Feb 12 20:30:26.752734 ignition[616]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:30:26.752777 ignition[616]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 12 20:30:26.755714 ignition[616]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 12 20:30:26.756011 ignition[616]: parsed url from cmdline: "" Feb 12 20:30:26.756020 ignition[616]: no config URL provided Feb 12 20:30:26.756033 ignition[616]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 20:30:26.758585 systemd[1]: Finished ignition-fetch-offline.service. Feb 12 20:30:26.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:26.756052 ignition[616]: no config at "/usr/lib/ignition/user.ign" Feb 12 20:30:26.761774 systemd[1]: Starting ignition-fetch.service... Feb 12 20:30:26.756064 ignition[616]: failed to fetch config: resource requires networking Feb 12 20:30:26.756923 ignition[616]: Ignition finished successfully Feb 12 20:30:26.781917 ignition[655]: Ignition 2.14.0 Feb 12 20:30:26.783609 ignition[655]: Stage: fetch Feb 12 20:30:26.784978 ignition[655]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:30:26.786819 ignition[655]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 12 20:30:26.789302 ignition[655]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 12 20:30:26.791340 ignition[655]: parsed url from cmdline: "" Feb 12 20:30:26.791513 ignition[655]: no config URL provided Feb 12 20:30:26.792695 ignition[655]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 20:30:26.794337 ignition[655]: no config at "/usr/lib/ignition/user.ign" Feb 12 20:30:26.799858 ignition[655]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Feb 12 20:30:26.799934 ignition[655]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Feb 12 20:30:26.802031 ignition[655]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Feb 12 20:30:27.145730 ignition[655]: GET result: OK Feb 12 20:30:27.145900 ignition[655]: parsing config with SHA512: 842b5076b57039f4c201512b3f7ab12f9d5b06b8f3905698bac27332ef5948c65f198247316772aebd15992d4cc044661efd5b069de3ab0773aed9cd17c25dcc Feb 12 20:30:27.216378 unknown[655]: fetched base config from "system" Feb 12 20:30:27.216415 unknown[655]: fetched base config from "system" Feb 12 20:30:27.217634 ignition[655]: fetch: fetch complete Feb 12 20:30:27.216466 unknown[655]: fetched user config from "openstack" Feb 12 20:30:27.217647 ignition[655]: fetch: fetch passed Feb 12 20:30:27.221254 systemd[1]: Finished ignition-fetch.service. Feb 12 20:30:27.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:27.217741 ignition[655]: Ignition finished successfully Feb 12 20:30:27.224768 systemd[1]: Starting ignition-kargs.service... 
Feb 12 20:30:27.244832 ignition[661]: Ignition 2.14.0 Feb 12 20:30:27.244860 ignition[661]: Stage: kargs Feb 12 20:30:27.245087 ignition[661]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:30:27.245128 ignition[661]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 12 20:30:27.258527 systemd[1]: Finished ignition-kargs.service. Feb 12 20:30:27.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:27.247372 ignition[661]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 12 20:30:27.261389 systemd[1]: Starting ignition-disks.service... Feb 12 20:30:27.250101 ignition[661]: kargs: kargs passed Feb 12 20:30:27.250189 ignition[661]: Ignition finished successfully Feb 12 20:30:27.279229 ignition[667]: Ignition 2.14.0 Feb 12 20:30:27.279256 ignition[667]: Stage: disks Feb 12 20:30:27.279529 ignition[667]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:30:27.279571 ignition[667]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 12 20:30:27.281819 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 12 20:30:27.284509 ignition[667]: disks: disks passed Feb 12 20:30:27.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:27.286097 systemd[1]: Finished ignition-disks.service. Feb 12 20:30:27.284598 ignition[667]: Ignition finished successfully Feb 12 20:30:27.287509 systemd[1]: Reached target initrd-root-device.target. Feb 12 20:30:27.289108 systemd[1]: Reached target local-fs-pre.target. Feb 12 20:30:27.291041 systemd[1]: Reached target local-fs.target. Feb 12 20:30:27.292500 systemd[1]: Reached target sysinit.target. Feb 12 20:30:27.293941 systemd[1]: Reached target basic.target. Feb 12 20:30:27.297149 systemd[1]: Starting systemd-fsck-root.service... Feb 12 20:30:27.321788 systemd-fsck[675]: ROOT: clean, 602/1628000 files, 124050/1617920 blocks Feb 12 20:30:27.331692 systemd[1]: Finished systemd-fsck-root.service. Feb 12 20:30:27.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:27.334632 systemd[1]: Mounting sysroot.mount... Feb 12 20:30:27.351493 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 12 20:30:27.353053 systemd[1]: Mounted sysroot.mount. Feb 12 20:30:27.355390 systemd[1]: Reached target initrd-root-fs.target. Feb 12 20:30:27.359554 systemd[1]: Mounting sysroot-usr.mount... Feb 12 20:30:27.361009 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 12 20:30:27.362155 systemd[1]: Starting flatcar-openstack-hostname.service... Feb 12 20:30:27.367146 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 12 20:30:27.367194 systemd[1]: Reached target ignition-diskful.target. 
Feb 12 20:30:27.376789 systemd[1]: Mounted sysroot-usr.mount. Feb 12 20:30:27.386535 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 12 20:30:27.389800 systemd[1]: Starting initrd-setup-root.service... Feb 12 20:30:27.403973 initrd-setup-root[687]: cut: /sysroot/etc/passwd: No such file or directory Feb 12 20:30:27.413390 initrd-setup-root[695]: cut: /sysroot/etc/group: No such file or directory Feb 12 20:30:27.427469 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (682) Feb 12 20:30:27.431228 initrd-setup-root[703]: cut: /sysroot/etc/shadow: No such file or directory Feb 12 20:30:27.437302 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 20:30:27.437459 kernel: BTRFS info (device vda6): using free space tree Feb 12 20:30:27.437531 kernel: BTRFS info (device vda6): has skinny extents Feb 12 20:30:27.447781 initrd-setup-root[727]: cut: /sysroot/etc/gshadow: No such file or directory Feb 12 20:30:27.455466 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 12 20:30:27.547847 systemd[1]: Finished initrd-setup-root.service. Feb 12 20:30:27.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:27.550771 systemd[1]: Starting ignition-mount.service... Feb 12 20:30:27.553192 systemd[1]: Starting sysroot-boot.service... Feb 12 20:30:27.571343 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 12 20:30:27.571475 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 12 20:30:27.607747 ignition[749]: INFO : Ignition 2.14.0 Feb 12 20:30:27.608536 ignition[749]: INFO : Stage: mount Feb 12 20:30:27.609112 ignition[749]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:30:27.609843 ignition[749]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 12 20:30:27.611961 ignition[749]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 12 20:30:27.613619 coreos-metadata[681]: Feb 12 20:30:27.613 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 12 20:30:27.615141 ignition[749]: INFO : mount: mount passed Feb 12 20:30:27.615660 ignition[749]: INFO : Ignition finished successfully Feb 12 20:30:27.616906 systemd[1]: Finished ignition-mount.service. Feb 12 20:30:27.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:27.622307 systemd[1]: Finished sysroot-boot.service. Feb 12 20:30:27.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:27.633973 coreos-metadata[681]: Feb 12 20:30:27.633 INFO Fetch successful Feb 12 20:30:27.634644 coreos-metadata[681]: Feb 12 20:30:27.634 INFO wrote hostname ci-3510-3-2-7-712d402420.novalocal to /sysroot/etc/hostname Feb 12 20:30:27.638016 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Feb 12 20:30:27.638125 systemd[1]: Finished flatcar-openstack-hostname.service. 
Feb 12 20:30:27.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:27.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:27.640203 systemd[1]: Starting ignition-files.service... Feb 12 20:30:27.647193 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 12 20:30:27.656453 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (758) Feb 12 20:30:27.659829 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 20:30:27.659857 kernel: BTRFS info (device vda6): using free space tree Feb 12 20:30:27.659869 kernel: BTRFS info (device vda6): has skinny extents Feb 12 20:30:27.668032 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 12 20:30:27.678163 ignition[777]: INFO : Ignition 2.14.0 Feb 12 20:30:27.678163 ignition[777]: INFO : Stage: files Feb 12 20:30:27.679266 ignition[777]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:30:27.679266 ignition[777]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 12 20:30:27.679266 ignition[777]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 12 20:30:27.681774 ignition[777]: DEBUG : files: compiled without relabeling support, skipping Feb 12 20:30:27.682608 ignition[777]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 12 20:30:27.682608 ignition[777]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 12 20:30:27.685352 ignition[777]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 12 20:30:27.686313 ignition[777]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 12 20:30:27.687724 unknown[777]: wrote ssh authorized keys file for user: core Feb 12 20:30:27.689353 ignition[777]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 12 20:30:27.690568 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 12 20:30:27.690568 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1 Feb 12 20:30:28.148098 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 12 20:30:28.198496 systemd-networkd[632]: eth0: Gained IPv6LL Feb 12 20:30:28.994585 ignition[777]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540 Feb 12 20:30:28.998692 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 12 20:30:28.998692 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Feb 12 20:30:28.998692 ignition[777]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1 Feb 12 20:30:29.326353 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 12 20:30:29.787165 ignition[777]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a Feb 12 20:30:29.788703 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Feb 12 20:30:29.790264 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 12 20:30:29.791068 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubeadm: attempt #1 Feb 12 20:30:29.931867 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 12 20:30:30.851038 ignition[777]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: f40216b7d14046931c58072d10c7122934eac5a23c08821371f8b08ac1779443ad11d3458a4c5dcde7cf80fc600a9fefb14b1942aa46a52330248d497ca88836 Feb 12 20:30:30.851038 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 12 20:30:30.851038 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet" Feb 12 20:30:30.858782 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubelet: attempt #1 Feb 12 20:30:30.981673 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 12 20:30:33.067924 ignition[777]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: a283da2224d456958b2cb99b4f6faf4457c4ed89e9e95f37d970c637f6a7f64ff4dd4d2bfce538759b2d2090933bece599a285ef8fd132eb383fece9a3941560 Feb 12 20:30:33.069592 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 12 20:30:33.069592 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Feb 12 20:30:33.071204 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Feb 12 20:30:33.071204 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 12 20:30:33.071204 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 12 20:30:33.074946 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 20:30:33.075828 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 20:30:33.075828 ignition[777]: INFO : files: op(a): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 12 20:30:33.077695 ignition[777]: INFO : files: op(a): op(b): [started] writing systemd drop-in "20-clct-provider-override.conf" at 
"/sysroot/etc/systemd/system/coreos-metadata-sshkeys@.service.d/20-clct-provider-override.conf" Feb 12 20:30:33.077695 ignition[777]: INFO : files: op(a): op(b): [finished] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata-sshkeys@.service.d/20-clct-provider-override.conf" Feb 12 20:30:33.077695 ignition[777]: INFO : files: op(a): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 12 20:30:33.077695 ignition[777]: INFO : files: op(c): [started] processing unit "coreos-metadata.service" Feb 12 20:30:33.077695 ignition[777]: INFO : files: op(c): op(d): [started] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf" Feb 12 20:30:33.077695 ignition[777]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf" Feb 12 20:30:33.077695 ignition[777]: INFO : files: op(c): [finished] processing unit "coreos-metadata.service" Feb 12 20:30:33.077695 ignition[777]: INFO : files: op(e): [started] processing unit "prepare-cni-plugins.service" Feb 12 20:30:33.077695 ignition[777]: INFO : files: op(e): op(f): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 20:30:33.077695 ignition[777]: INFO : files: op(e): op(f): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 20:30:33.077695 ignition[777]: INFO : files: op(e): [finished] processing unit "prepare-cni-plugins.service" Feb 12 20:30:33.077695 ignition[777]: INFO : files: op(10): [started] processing unit "prepare-critools.service" Feb 12 20:30:33.077695 ignition[777]: INFO : files: op(10): op(11): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 20:30:33.077695 ignition[777]: INFO : files: op(10): op(11): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 20:30:33.077695 ignition[777]: INFO : files: op(10): [finished] processing unit "prepare-critools.service" Feb 12 20:30:33.077695 ignition[777]: INFO : files: op(12): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 20:30:33.091578 ignition[777]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 20:30:33.091578 ignition[777]: INFO : files: op(13): [started] setting preset to enabled for "prepare-critools.service" Feb 12 20:30:33.091578 ignition[777]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-critools.service" Feb 12 20:30:33.091578 ignition[777]: INFO : files: op(14): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 12 20:30:33.091578 ignition[777]: INFO : files: op(14): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 12 20:30:33.091578 ignition[777]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 12 20:30:33.091578 ignition[777]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 12 20:30:33.091578 ignition[777]: INFO : files: files passed Feb 12 20:30:33.091578 ignition[777]: INFO : Ignition finished successfully Feb 12 20:30:33.117772 
kernel: kauditd_printk_skb: 27 callbacks suppressed Feb 12 20:30:33.117798 kernel: audit: type=1130 audit(1707769833.096:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.117812 kernel: audit: type=1130 audit(1707769833.109:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.117829 kernel: audit: type=1131 audit(1707769833.109:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.092378 systemd[1]: Finished ignition-files.service. Feb 12 20:30:33.101841 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 12 20:30:33.103244 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 12 20:30:33.104231 systemd[1]: Starting ignition-quench.service... Feb 12 20:30:33.120806 initrd-setup-root-after-ignition[802]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 12 20:30:33.109165 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 12 20:30:33.109259 systemd[1]: Finished ignition-quench.service. Feb 12 20:30:33.127288 kernel: audit: type=1130 audit(1707769833.123:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.121780 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 12 20:30:33.123714 systemd[1]: Reached target ignition-complete.target. Feb 12 20:30:33.129970 systemd[1]: Starting initrd-parse-etc.service... Feb 12 20:30:33.149615 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 12 20:30:33.150616 systemd[1]: Finished initrd-parse-etc.service. Feb 12 20:30:33.168487 kernel: audit: type=1130 audit(1707769833.151:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.168543 kernel: audit: type=1131 audit(1707769833.151:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:30:33.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.151703 systemd[1]: Reached target initrd-fs.target. Feb 12 20:30:33.168792 systemd[1]: Reached target initrd.target. Feb 12 20:30:33.170362 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 12 20:30:33.171177 systemd[1]: Starting dracut-pre-pivot.service... Feb 12 20:30:33.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.189795 systemd[1]: Finished dracut-pre-pivot.service. Feb 12 20:30:33.194497 kernel: audit: type=1130 audit(1707769833.189:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.193590 systemd[1]: Starting initrd-cleanup.service... Feb 12 20:30:33.204111 systemd[1]: Stopped target nss-lookup.target. Feb 12 20:30:33.205161 systemd[1]: Stopped target remote-cryptsetup.target. Feb 12 20:30:33.206213 systemd[1]: Stopped target timers.target. Feb 12 20:30:33.207211 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 12 20:30:33.207882 systemd[1]: Stopped dracut-pre-pivot.service. Feb 12 20:30:33.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.209052 systemd[1]: Stopped target initrd.target. Feb 12 20:30:33.217469 kernel: audit: type=1131 audit(1707769833.208:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.217886 systemd[1]: Stopped target basic.target. Feb 12 20:30:33.218914 systemd[1]: Stopped target ignition-complete.target. Feb 12 20:30:33.219983 systemd[1]: Stopped target ignition-diskful.target. Feb 12 20:30:33.221056 systemd[1]: Stopped target initrd-root-device.target. Feb 12 20:30:33.222140 systemd[1]: Stopped target remote-fs.target. Feb 12 20:30:33.223186 systemd[1]: Stopped target remote-fs-pre.target. Feb 12 20:30:33.224235 systemd[1]: Stopped target sysinit.target. Feb 12 20:30:33.225259 systemd[1]: Stopped target local-fs.target. Feb 12 20:30:33.227252 systemd[1]: Stopped target local-fs-pre.target. Feb 12 20:30:33.229297 systemd[1]: Stopped target swap.target. Feb 12 20:30:33.234616 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 12 20:30:33.234989 systemd[1]: Stopped dracut-pre-mount.service. Feb 12 20:30:33.240086 kernel: audit: type=1131 audit(1707769833.236:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 12 20:30:33.236861 systemd[1]: Stopped target cryptsetup.target. Feb 12 20:30:33.241359 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 12 20:30:33.241717 systemd[1]: Stopped dracut-initqueue.service. Feb 12 20:30:33.247023 kernel: audit: type=1131 audit(1707769833.243:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.243843 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 12 20:30:33.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.244124 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 12 20:30:33.248577 systemd[1]: ignition-files.service: Deactivated successfully. Feb 12 20:30:33.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.248863 systemd[1]: Stopped ignition-files.service. Feb 12 20:30:33.252832 systemd[1]: Stopping ignition-mount.service... Feb 12 20:30:33.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.257002 systemd[1]: Stopping iscsiuio.service... Feb 12 20:30:33.258390 systemd[1]: Stopping sysroot-boot.service... Feb 12 20:30:33.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.260635 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 12 20:30:33.260811 systemd[1]: Stopped systemd-udev-trigger.service. Feb 12 20:30:33.263720 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 12 20:30:33.263892 systemd[1]: Stopped dracut-pre-trigger.service. Feb 12 20:30:33.266544 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 12 20:30:33.266655 systemd[1]: Stopped iscsiuio.service. Feb 12 20:30:33.269023 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 12 20:30:33.269106 systemd[1]: Finished initrd-cleanup.service. 
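Looking back at the files stage logged earlier, each artifact write pairs a "GET …: attempt #1" with a "file matches expected sum of: …" check. A minimal sketch of that download-then-verify pattern is below, using only Python's standard library; the URL and digest are copied verbatim from op(3), the function and output filename are illustrative, and this is not Ignition's actual (Go) implementation.

# Illustrative only: download a file and verify its SHA512 against the
# expected sum, as logged for ops (3)-(6) above. URL and EXPECTED_SHA512 are
# copied from op(3); the destination filename is arbitrary.
import hashlib
import urllib.request

URL = ("https://github.com/containernetworking/plugins/releases/download/"
       "v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz")
EXPECTED_SHA512 = ("5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af"
                   "754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540")

def fetch_and_verify(url: str, expected_sha512: str, dest: str) -> None:
    # Hash the stream while writing it out, then compare digests at the end.
    h = hashlib.sha512()
    with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
        for chunk in iter(lambda: resp.read(1 << 20), b""):
            h.update(chunk)
            out.write(chunk)
    if h.hexdigest() != expected_sha512:
        raise ValueError(f"checksum mismatch for {url}: got {h.hexdigest()}")

if __name__ == "__main__":
    fetch_and_verify(URL, EXPECTED_SHA512, "cni-plugins-linux-amd64-v1.3.0.tgz")
    print("download verified")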
Feb 12 20:30:33.278635 ignition[815]: INFO : Ignition 2.14.0 Feb 12 20:30:33.279317 ignition[815]: INFO : Stage: umount Feb 12 20:30:33.279982 ignition[815]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:30:33.280785 ignition[815]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 12 20:30:33.282823 ignition[815]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 12 20:30:33.284702 ignition[815]: INFO : umount: umount passed Feb 12 20:30:33.285227 ignition[815]: INFO : Ignition finished successfully Feb 12 20:30:33.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.287161 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 12 20:30:33.287263 systemd[1]: Stopped ignition-mount.service. Feb 12 20:30:33.287862 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 12 20:30:33.287905 systemd[1]: Stopped ignition-disks.service. Feb 12 20:30:33.288364 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 12 20:30:33.288400 systemd[1]: Stopped ignition-kargs.service. Feb 12 20:30:33.288847 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 12 20:30:33.288883 systemd[1]: Stopped ignition-fetch.service. Feb 12 20:30:33.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.289311 systemd[1]: Stopped target network.target. Feb 12 20:30:33.289713 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 12 20:30:33.289752 systemd[1]: Stopped ignition-fetch-offline.service. Feb 12 20:30:33.290278 systemd[1]: Stopped target paths.target. Feb 12 20:30:33.295024 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 12 20:30:33.296613 systemd[1]: Stopped systemd-ask-password-console.path. Feb 12 20:30:33.297337 systemd[1]: Stopped target slices.target. Feb 12 20:30:33.297760 systemd[1]: Stopped target sockets.target. Feb 12 20:30:33.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.298200 systemd[1]: iscsid.socket: Deactivated successfully. Feb 12 20:30:33.298231 systemd[1]: Closed iscsid.socket. 
Feb 12 20:30:33.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.298638 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 12 20:30:33.298666 systemd[1]: Closed iscsiuio.socket. Feb 12 20:30:33.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.299277 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 12 20:30:33.299317 systemd[1]: Stopped ignition-setup.service. Feb 12 20:30:33.314000 audit: BPF prog-id=6 op=UNLOAD Feb 12 20:30:33.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.300666 systemd[1]: Stopping systemd-networkd.service... Feb 12 20:30:33.301842 systemd[1]: Stopping systemd-resolved.service... Feb 12 20:30:33.302472 systemd-networkd[632]: eth0: DHCPv6 lease lost Feb 12 20:30:33.305643 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 12 20:30:33.316000 audit: BPF prog-id=9 op=UNLOAD Feb 12 20:30:33.306196 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 20:30:33.306307 systemd[1]: Stopped systemd-networkd.service. Feb 12 20:30:33.310004 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 12 20:30:33.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.310123 systemd[1]: Stopped systemd-resolved.service. Feb 12 20:30:33.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.312134 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 12 20:30:33.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.312245 systemd[1]: Stopped sysroot-boot.service. Feb 12 20:30:33.312985 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 12 20:30:33.313031 systemd[1]: Closed systemd-networkd.socket. Feb 12 20:30:33.313690 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 12 20:30:33.313741 systemd[1]: Stopped initrd-setup-root.service. Feb 12 20:30:33.315535 systemd[1]: Stopping network-cleanup.service... Feb 12 20:30:33.320286 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 12 20:30:33.320515 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 12 20:30:33.321404 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 20:30:33.321580 systemd[1]: Stopped systemd-sysctl.service. Feb 12 20:30:33.322712 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 12 20:30:33.322783 systemd[1]: Stopped systemd-modules-load.service. Feb 12 20:30:33.323852 systemd[1]: Stopping systemd-udevd.service... Feb 12 20:30:33.326367 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Feb 12 20:30:33.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.330829 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 12 20:30:33.331027 systemd[1]: Stopped systemd-udevd.service. Feb 12 20:30:33.333824 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 12 20:30:33.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.333991 systemd[1]: Stopped network-cleanup.service. Feb 12 20:30:33.335413 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 12 20:30:33.335501 systemd[1]: Closed systemd-udevd-control.socket. Feb 12 20:30:33.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.336271 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 12 20:30:33.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.336312 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 12 20:30:33.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.337370 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 12 20:30:33.337474 systemd[1]: Stopped dracut-pre-udev.service. Feb 12 20:30:33.338600 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 12 20:30:33.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.338659 systemd[1]: Stopped dracut-cmdline.service. Feb 12 20:30:33.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:33.339656 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 12 20:30:33.339709 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 12 20:30:33.341863 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 12 20:30:33.343153 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 12 20:30:33.343207 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. 
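The audit SERVICE_START/SERVICE_STOP records above all share one key=value layout. A small sketch of extracting the unit name and result from such a line follows; the sample string is copied from the log, while the regex and helper are assumptions about that layout rather than any audit-parsing API.

# Illustrative only: pull unit= and res= out of one audit record. SAMPLE is
# taken verbatim from the log above; the pattern is an assumption about the
# SERVICE_START/SERVICE_STOP message shape.
import re

SAMPLE = ("Feb 12 20:30:33.343000 audit[1]: SERVICE_STOP pid=1 uid=0 "
          "auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes "
          "comm=\"systemd\" exe=\"/usr/lib/systemd/systemd\" hostname=? addr=? "
          "terminal=? res=success'")

PATTERN = re.compile(
    r"audit\[\d+\]: (?P<event>SERVICE_START|SERVICE_STOP) .*?"
    r"msg='unit=(?P<unit>\S+) .*?res=(?P<result>\w+)'"
)

def parse_audit_line(line: str):
    m = PATTERN.search(line)
    return m.groupdict() if m else None

if __name__ == "__main__":
    print(parse_audit_line(SAMPLE))
    # -> {'event': 'SERVICE_STOP', 'unit': 'kmod-static-nodes', 'result': 'success'}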
Feb 12 20:30:33.343861 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 12 20:30:33.343902 systemd[1]: Stopped kmod-static-nodes.service. Feb 12 20:30:33.344369 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 12 20:30:33.344463 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 12 20:30:33.346131 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 12 20:30:33.348862 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 12 20:30:33.348941 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 12 20:30:33.349793 systemd[1]: Reached target initrd-switch-root.target. Feb 12 20:30:33.351208 systemd[1]: Starting initrd-switch-root.service... Feb 12 20:30:33.370265 systemd[1]: Switching root. Feb 12 20:30:33.389748 iscsid[637]: iscsid shutting down. Feb 12 20:30:33.390244 systemd-journald[184]: Journal stopped Feb 12 20:30:40.046867 systemd-journald[184]: Received SIGTERM from PID 1 (n/a). Feb 12 20:30:40.046955 kernel: SELinux: Class mctp_socket not defined in policy. Feb 12 20:30:40.046971 kernel: SELinux: Class anon_inode not defined in policy. Feb 12 20:30:40.046984 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 12 20:30:40.046999 kernel: SELinux: policy capability network_peer_controls=1 Feb 12 20:30:40.047013 kernel: SELinux: policy capability open_perms=1 Feb 12 20:30:40.047027 kernel: SELinux: policy capability extended_socket_class=1 Feb 12 20:30:40.047038 kernel: SELinux: policy capability always_check_network=0 Feb 12 20:30:40.047049 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 12 20:30:40.047059 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 12 20:30:40.047069 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 12 20:30:40.047080 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 12 20:30:40.047091 systemd[1]: Successfully loaded SELinux policy in 97.013ms. Feb 12 20:30:40.047107 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.698ms. Feb 12 20:30:40.047121 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 20:30:40.047135 systemd[1]: Detected virtualization kvm. Feb 12 20:30:40.047147 systemd[1]: Detected architecture x86-64. Feb 12 20:30:40.047159 systemd[1]: Detected first boot. Feb 12 20:30:40.047172 systemd[1]: Hostname set to . Feb 12 20:30:40.047184 systemd[1]: Initializing machine ID from VM UUID. Feb 12 20:30:40.047196 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 12 20:30:40.047207 systemd[1]: Populated /etc with preset unit settings. Feb 12 20:30:40.047221 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 20:30:40.047234 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 12 20:30:40.047247 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:30:40.047259 kernel: kauditd_printk_skb: 48 callbacks suppressed Feb 12 20:30:40.047270 kernel: audit: type=1334 audit(1707769839.706:89): prog-id=12 op=LOAD Feb 12 20:30:40.047281 kernel: audit: type=1334 audit(1707769839.706:90): prog-id=3 op=UNLOAD Feb 12 20:30:40.047292 kernel: audit: type=1334 audit(1707769839.707:91): prog-id=13 op=LOAD Feb 12 20:30:40.047304 kernel: audit: type=1334 audit(1707769839.712:92): prog-id=14 op=LOAD Feb 12 20:30:40.047319 kernel: audit: type=1334 audit(1707769839.712:93): prog-id=4 op=UNLOAD Feb 12 20:30:40.047329 kernel: audit: type=1334 audit(1707769839.712:94): prog-id=5 op=UNLOAD Feb 12 20:30:40.047340 kernel: audit: type=1334 audit(1707769839.715:95): prog-id=15 op=LOAD Feb 12 20:30:40.047351 kernel: audit: type=1334 audit(1707769839.715:96): prog-id=12 op=UNLOAD Feb 12 20:30:40.047363 kernel: audit: type=1334 audit(1707769839.721:97): prog-id=16 op=LOAD Feb 12 20:30:40.047373 kernel: audit: type=1334 audit(1707769839.724:98): prog-id=17 op=LOAD Feb 12 20:30:40.047385 systemd[1]: iscsid.service: Deactivated successfully. Feb 12 20:30:40.047396 systemd[1]: Stopped iscsid.service. Feb 12 20:30:40.047409 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 12 20:30:40.047443 systemd[1]: Stopped initrd-switch-root.service. Feb 12 20:30:40.047458 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 12 20:30:40.047470 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 12 20:30:40.047485 systemd[1]: Created slice system-addon\x2drun.slice. Feb 12 20:30:40.047497 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 12 20:30:40.047512 systemd[1]: Created slice system-getty.slice. Feb 12 20:30:40.047525 systemd[1]: Created slice system-modprobe.slice. Feb 12 20:30:40.047537 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 12 20:30:40.047549 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 12 20:30:40.047563 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 12 20:30:40.047576 systemd[1]: Created slice user.slice. Feb 12 20:30:40.047588 systemd[1]: Started systemd-ask-password-console.path. Feb 12 20:30:40.047600 systemd[1]: Started systemd-ask-password-wall.path. Feb 12 20:30:40.047612 systemd[1]: Set up automount boot.automount. Feb 12 20:30:40.047627 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 12 20:30:40.047639 systemd[1]: Stopped target initrd-switch-root.target. Feb 12 20:30:40.047652 systemd[1]: Stopped target initrd-fs.target. Feb 12 20:30:40.047664 systemd[1]: Stopped target initrd-root-fs.target. Feb 12 20:30:40.047681 systemd[1]: Reached target integritysetup.target. Feb 12 20:30:40.047693 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 20:30:40.047705 systemd[1]: Reached target remote-fs.target. Feb 12 20:30:40.047717 systemd[1]: Reached target slices.target. Feb 12 20:30:40.047729 systemd[1]: Reached target swap.target. Feb 12 20:30:40.047742 systemd[1]: Reached target torcx.target. Feb 12 20:30:40.047754 systemd[1]: Reached target veritysetup.target. Feb 12 20:30:40.047767 systemd[1]: Listening on systemd-coredump.socket. Feb 12 20:30:40.047778 systemd[1]: Listening on systemd-initctl.socket. 
Feb 12 20:30:40.047790 systemd[1]: Listening on systemd-networkd.socket. Feb 12 20:30:40.047802 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 20:30:40.047813 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 20:30:40.047825 systemd[1]: Listening on systemd-userdbd.socket. Feb 12 20:30:40.047837 systemd[1]: Mounting dev-hugepages.mount... Feb 12 20:30:40.047849 systemd[1]: Mounting dev-mqueue.mount... Feb 12 20:30:40.047862 systemd[1]: Mounting media.mount... Feb 12 20:30:40.047875 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 20:30:40.047886 systemd[1]: Mounting sys-kernel-debug.mount... Feb 12 20:30:40.047898 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 12 20:30:40.047909 systemd[1]: Mounting tmp.mount... Feb 12 20:30:40.047921 systemd[1]: Starting flatcar-tmpfiles.service... Feb 12 20:30:40.047933 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 12 20:30:40.047944 systemd[1]: Starting kmod-static-nodes.service... Feb 12 20:30:40.047956 systemd[1]: Starting modprobe@configfs.service... Feb 12 20:30:40.047969 systemd[1]: Starting modprobe@dm_mod.service... Feb 12 20:30:40.047981 systemd[1]: Starting modprobe@drm.service... Feb 12 20:30:40.047992 systemd[1]: Starting modprobe@efi_pstore.service... Feb 12 20:30:40.048004 systemd[1]: Starting modprobe@fuse.service... Feb 12 20:30:40.048018 systemd[1]: Starting modprobe@loop.service... Feb 12 20:30:40.048030 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 12 20:30:40.048042 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 12 20:30:40.048054 systemd[1]: Stopped systemd-fsck-root.service. Feb 12 20:30:40.048067 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 12 20:30:40.048079 systemd[1]: Stopped systemd-fsck-usr.service. Feb 12 20:30:40.048091 systemd[1]: Stopped systemd-journald.service. Feb 12 20:30:40.048102 systemd[1]: Starting systemd-journald.service... Feb 12 20:30:40.048114 systemd[1]: Starting systemd-modules-load.service... Feb 12 20:30:40.048126 systemd[1]: Starting systemd-network-generator.service... Feb 12 20:30:40.048138 systemd[1]: Starting systemd-remount-fs.service... Feb 12 20:30:40.048149 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 20:30:40.048161 systemd[1]: verity-setup.service: Deactivated successfully. Feb 12 20:30:40.048173 systemd[1]: Stopped verity-setup.service. Feb 12 20:30:40.048186 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 20:30:40.048198 systemd[1]: Mounted dev-hugepages.mount. Feb 12 20:30:40.048209 systemd[1]: Mounted dev-mqueue.mount. Feb 12 20:30:40.048221 systemd[1]: Mounted media.mount. Feb 12 20:30:40.048232 systemd[1]: Mounted sys-kernel-debug.mount. Feb 12 20:30:40.048244 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 12 20:30:40.048255 systemd[1]: Mounted tmp.mount. Feb 12 20:30:40.048268 systemd[1]: Finished kmod-static-nodes.service. Feb 12 20:30:40.048280 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 12 20:30:40.048294 systemd[1]: Finished modprobe@drm.service. Feb 12 20:30:40.048305 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 12 20:30:40.048317 systemd[1]: Finished modprobe@dm_mod.service. Feb 12 20:30:40.048328 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Feb 12 20:30:40.048340 systemd[1]: Finished modprobe@efi_pstore.service. Feb 12 20:30:40.048353 systemd[1]: Finished systemd-remount-fs.service. Feb 12 20:30:40.048365 systemd[1]: Finished systemd-network-generator.service. Feb 12 20:30:40.048377 systemd[1]: Reached target network-pre.target. Feb 12 20:30:40.048388 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 12 20:30:40.048400 systemd[1]: Starting systemd-hwdb-update.service... Feb 12 20:30:40.048414 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 12 20:30:40.053513 systemd[1]: Starting systemd-random-seed.service... Feb 12 20:30:40.053544 systemd[1]: Finished systemd-modules-load.service. Feb 12 20:30:40.053557 systemd[1]: Starting systemd-sysctl.service... Feb 12 20:30:40.053583 systemd-journald[909]: Journal started Feb 12 20:30:40.053649 systemd-journald[909]: Runtime Journal (/run/log/journal/ea0097807cab411b9f17d6cf7311bd54) is 4.9M, max 39.5M, 34.5M free. Feb 12 20:30:33.677000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 12 20:30:34.043000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 20:30:34.043000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 20:30:34.043000 audit: BPF prog-id=10 op=LOAD Feb 12 20:30:34.043000 audit: BPF prog-id=10 op=UNLOAD Feb 12 20:30:34.044000 audit: BPF prog-id=11 op=LOAD Feb 12 20:30:34.044000 audit: BPF prog-id=11 op=UNLOAD Feb 12 20:30:34.513000 audit[847]: AVC avc: denied { associate } for pid=847 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 12 20:30:34.513000 audit[847]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001178cc a1=c00002ae40 a2=c000029b00 a3=32 items=0 ppid=830 pid=847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:30:34.513000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 20:30:34.518000 audit[847]: AVC avc: denied { associate } for pid=847 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 12 20:30:34.518000 audit[847]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001179a5 a2=1ed a3=0 items=2 ppid=830 pid=847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:30:34.518000 audit: CWD cwd="/" Feb 12 20:30:34.518000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:34.518000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:34.518000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 20:30:39.706000 audit: BPF prog-id=12 op=LOAD Feb 12 20:30:39.706000 audit: BPF prog-id=3 op=UNLOAD Feb 12 20:30:39.707000 audit: BPF prog-id=13 op=LOAD Feb 12 20:30:39.712000 audit: BPF prog-id=14 op=LOAD Feb 12 20:30:39.712000 audit: BPF prog-id=4 op=UNLOAD Feb 12 20:30:39.712000 audit: BPF prog-id=5 op=UNLOAD Feb 12 20:30:39.715000 audit: BPF prog-id=15 op=LOAD Feb 12 20:30:39.715000 audit: BPF prog-id=12 op=UNLOAD Feb 12 20:30:39.721000 audit: BPF prog-id=16 op=LOAD Feb 12 20:30:39.724000 audit: BPF prog-id=17 op=LOAD Feb 12 20:30:39.724000 audit: BPF prog-id=13 op=UNLOAD Feb 12 20:30:39.724000 audit: BPF prog-id=14 op=UNLOAD Feb 12 20:30:39.727000 audit: BPF prog-id=18 op=LOAD Feb 12 20:30:39.727000 audit: BPF prog-id=15 op=UNLOAD Feb 12 20:30:39.730000 audit: BPF prog-id=19 op=LOAD Feb 12 20:30:39.733000 audit: BPF prog-id=20 op=LOAD Feb 12 20:30:39.733000 audit: BPF prog-id=16 op=UNLOAD Feb 12 20:30:39.733000 audit: BPF prog-id=17 op=UNLOAD Feb 12 20:30:39.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:39.741000 audit: BPF prog-id=18 op=UNLOAD Feb 12 20:30:39.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:39.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:39.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:39.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:39.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:39.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:39.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 12 20:30:39.860000 audit: BPF prog-id=21 op=LOAD Feb 12 20:30:39.860000 audit: BPF prog-id=22 op=LOAD Feb 12 20:30:39.861000 audit: BPF prog-id=23 op=LOAD Feb 12 20:30:39.861000 audit: BPF prog-id=19 op=UNLOAD Feb 12 20:30:39.861000 audit: BPF prog-id=20 op=UNLOAD Feb 12 20:30:39.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:39.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:40.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:40.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:40.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:40.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:40.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:40.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:40.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:40.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:40.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:40.036000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 20:30:40.036000 audit[909]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd5f6142b0 a2=4000 a3=7ffd5f61434c items=0 ppid=1 pid=909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:30:40.055439 systemd[1]: Started systemd-journald.service. 
Feb 12 20:30:40.036000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 12 20:30:34.505150 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-12T20:30:34Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 20:30:39.704564 systemd[1]: Queued start job for default target multi-user.target. Feb 12 20:30:34.506985 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-12T20:30:34Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 12 20:30:39.704579 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 12 20:30:34.507086 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-12T20:30:34Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 12 20:30:39.734853 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 12 20:30:40.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:40.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:40.075830 systemd-journald[909]: Time spent on flushing to /var/log/journal/ea0097807cab411b9f17d6cf7311bd54 is 46.009ms for 1091 entries. Feb 12 20:30:40.075830 systemd-journald[909]: System Journal (/var/log/journal/ea0097807cab411b9f17d6cf7311bd54) is 8.0M, max 584.8M, 576.8M free. Feb 12 20:30:40.160118 systemd-journald[909]: Received client request to flush runtime journal. Feb 12 20:30:40.160207 kernel: fuse: init (API version 7.34) Feb 12 20:30:40.160236 kernel: loop: module loaded Feb 12 20:30:40.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:40.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:40.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:40.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:40.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:40.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:30:40.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:34.507194 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-12T20:30:34Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 12 20:30:40.057722 systemd[1]: Starting systemd-journal-flush.service... Feb 12 20:30:34.507230 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-12T20:30:34Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 12 20:30:40.059534 systemd[1]: Finished systemd-random-seed.service. Feb 12 20:30:34.507341 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-12T20:30:34Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 12 20:30:40.060312 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 12 20:30:40.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:34.507386 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-12T20:30:34Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 12 20:30:40.066031 systemd[1]: Finished modprobe@configfs.service. Feb 12 20:30:34.508018 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-12T20:30:34Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 12 20:30:40.066624 systemd[1]: Reached target first-boot-complete.target. Feb 12 20:30:34.508139 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-12T20:30:34Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 12 20:30:40.068213 systemd[1]: Mounting sys-kernel-config.mount... Feb 12 20:30:34.508183 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-12T20:30:34Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 12 20:30:40.073123 systemd[1]: Mounted sys-kernel-config.mount. Feb 12 20:30:34.509924 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-12T20:30:34Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 12 20:30:40.078551 systemd[1]: Finished systemd-sysctl.service. Feb 12 20:30:34.510038 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-12T20:30:34Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 12 20:30:40.104379 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 12 20:30:34.510099 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-12T20:30:34Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 12 20:30:40.104544 systemd[1]: Finished modprobe@fuse.service. 
Feb 12 20:30:34.510151 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-12T20:30:34Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 12 20:30:40.105875 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 12 20:30:34.510208 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-12T20:30:34Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 12 20:30:40.105986 systemd[1]: Finished modprobe@loop.service. Feb 12 20:30:34.510256 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-12T20:30:34Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 12 20:30:40.110631 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 12 20:30:38.926854 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-12T20:30:38Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 20:30:40.111197 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 12 20:30:38.927190 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-12T20:30:38Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 20:30:40.114549 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 12 20:30:38.932107 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-12T20:30:38Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 20:30:40.161338 systemd[1]: Finished systemd-journal-flush.service. Feb 12 20:30:38.932333 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-12T20:30:38Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 20:30:38.932407 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-12T20:30:38Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 12 20:30:38.932503 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-12T20:30:38Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 12 20:30:40.174075 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 20:30:40.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:40.175796 systemd[1]: Starting systemd-udev-settle.service... 
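The torcx-generator entries interleaved above walk an ordered list of store directories (the store_paths value logged at startup), skip the ones that do not exist (the "store skipped ... no such file or directory" messages), and register any <name>:<reference>.torcx.tgz archives they find, such as docker:com.coreos.cl. The following is a minimal sketch of that search order, not torcx itself; the precedence rule in the comment (earlier stores win) is an assumption:

# Minimal sketch of the store search described by the torcx-generator lines above:
# walk the logged store_paths in order, skip missing directories, and collect
# name:reference pairs from any <name>:<reference>.torcx.tgz archives found.
from pathlib import Path

STORE_PATHS = [  # taken verbatim from the "common configuration parsed" entry
    "/usr/share/torcx/store",
    "/usr/share/oem/torcx/store/3510.3.2",
    "/usr/share/oem/torcx/store",
    "/var/lib/torcx/store/3510.3.2",
    "/var/lib/torcx/store",
]

def scan_stores(paths=STORE_PATHS):
    archives = {}
    for p in map(Path, paths):
        if not p.is_dir():
            print(f"store skipped: {p}")           # mirrors the level=info messages
            continue
        for tgz in sorted(p.glob("*.torcx.tgz")):
            name, _, ref = tgz.name[: -len(".torcx.tgz")].partition(":")
            # assumption: archives from earlier stores take precedence
            archives.setdefault((name, ref), tgz)
    return archives

if __name__ == "__main__":
    for (name, ref), path in scan_stores().items():
        print(f"{name}:{ref} -> {path}")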
Feb 12 20:30:40.176528 systemd[1]: Finished flatcar-tmpfiles.service. Feb 12 20:30:40.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:40.177989 systemd[1]: Starting systemd-sysusers.service... Feb 12 20:30:40.191643 udevadm[956]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 12 20:30:40.220267 systemd[1]: Finished systemd-sysusers.service. Feb 12 20:30:40.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:40.221814 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 20:30:40.264636 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 20:30:40.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:41.406617 systemd[1]: Finished systemd-hwdb-update.service. Feb 12 20:30:41.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:41.409000 audit: BPF prog-id=24 op=LOAD Feb 12 20:30:41.410000 audit: BPF prog-id=25 op=LOAD Feb 12 20:30:41.410000 audit: BPF prog-id=7 op=UNLOAD Feb 12 20:30:41.410000 audit: BPF prog-id=8 op=UNLOAD Feb 12 20:30:41.411854 systemd[1]: Starting systemd-udevd.service... Feb 12 20:30:41.455732 systemd-udevd[961]: Using default interface naming scheme 'v252'. Feb 12 20:30:41.527150 systemd[1]: Started systemd-udevd.service. Feb 12 20:30:41.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:41.529000 audit: BPF prog-id=26 op=LOAD Feb 12 20:30:41.535071 systemd[1]: Starting systemd-networkd.service... Feb 12 20:30:41.550000 audit: BPF prog-id=27 op=LOAD Feb 12 20:30:41.550000 audit: BPF prog-id=28 op=LOAD Feb 12 20:30:41.550000 audit: BPF prog-id=29 op=LOAD Feb 12 20:30:41.551662 systemd[1]: Starting systemd-userdbd.service... Feb 12 20:30:41.608684 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 12 20:30:41.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:41.617611 systemd[1]: Started systemd-userdbd.service. Feb 12 20:30:41.651776 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
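The audit: BPF prog-id=... op=LOAD / op=UNLOAD records above (and the similar ones that follow around systemd-udevd and systemd-resolved) appear to be emitted as systemd loads and replaces the BPF programs it attaches to units. They can be tallied straight from the journal text; a small, hypothetical scraping helper (the file name and pipeline in the comment are illustrative only):

# Hypothetical helper: scan journal text for "audit: BPF prog-id=N op=LOAD/UNLOAD"
# records like the ones above and report which program IDs remain loaded.
import re
import sys

BPF_RE = re.compile(r"audit: BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

def surviving_prog_ids(text: str) -> set[int]:
    loaded: set[int] = set()
    for prog_id, op in BPF_RE.findall(text):
        if op == "LOAD":
            loaded.add(int(prog_id))
        else:
            loaded.discard(int(prog_id))   # UNLOAD of an unseen id is ignored
    return loaded

if __name__ == "__main__":
    # e.g. journalctl -b -o short-precise | python3 bpf_tally.py
    print(sorted(surviving_prog_ids(sys.stdin.read())))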
Feb 12 20:30:41.678461 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 12 20:30:41.706493 kernel: ACPI: button: Power Button [PWRF] Feb 12 20:30:41.734262 systemd-networkd[971]: lo: Link UP Feb 12 20:30:41.735396 systemd-networkd[971]: lo: Gained carrier Feb 12 20:30:41.736701 systemd-networkd[971]: Enumeration completed Feb 12 20:30:41.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:41.737012 systemd[1]: Started systemd-networkd.service. Feb 12 20:30:41.738565 systemd-networkd[971]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 20:30:41.740941 systemd-networkd[971]: eth0: Link UP Feb 12 20:30:41.741008 systemd-networkd[971]: eth0: Gained carrier Feb 12 20:30:41.718000 audit[968]: AVC avc: denied { confidentiality } for pid=968 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 12 20:30:41.718000 audit[968]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=558a67e684b0 a1=32194 a2=7fddff3d5bc5 a3=5 items=108 ppid=961 pid=968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:30:41.718000 audit: CWD cwd="/" Feb 12 20:30:41.718000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=1 name=(null) inode=13872 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=2 name=(null) inode=13872 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=3 name=(null) inode=13873 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=4 name=(null) inode=13872 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=5 name=(null) inode=13874 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=6 name=(null) inode=13872 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=7 name=(null) inode=13875 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=8 name=(null) inode=13875 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=9 name=(null) inode=13876 
dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=10 name=(null) inode=13875 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=11 name=(null) inode=13877 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=12 name=(null) inode=13875 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=13 name=(null) inode=13878 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=14 name=(null) inode=13875 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=15 name=(null) inode=13879 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=16 name=(null) inode=13875 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=17 name=(null) inode=13880 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=18 name=(null) inode=13872 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=19 name=(null) inode=13881 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=20 name=(null) inode=13881 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=21 name=(null) inode=13882 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=22 name=(null) inode=13881 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=23 name=(null) inode=13883 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=24 name=(null) inode=13881 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=25 name=(null) inode=13884 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=26 name=(null) inode=13881 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=27 name=(null) inode=13885 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=28 name=(null) inode=13881 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=29 name=(null) inode=13886 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=30 name=(null) inode=13872 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=31 name=(null) inode=13887 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=32 name=(null) inode=13887 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=33 name=(null) inode=13888 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=34 name=(null) inode=13887 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=35 name=(null) inode=13889 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=36 name=(null) inode=13887 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=37 name=(null) inode=13890 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=38 name=(null) inode=13887 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=39 name=(null) inode=13891 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=40 name=(null) inode=13887 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=41 name=(null) inode=13892 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=42 name=(null) inode=13872 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=43 name=(null) inode=13893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=44 name=(null) inode=13893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=45 name=(null) inode=13894 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=46 name=(null) inode=13893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=47 name=(null) inode=13895 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=48 name=(null) inode=13893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=49 name=(null) inode=13896 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=50 name=(null) inode=13893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=51 name=(null) inode=13897 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=52 name=(null) inode=13893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=53 name=(null) inode=13898 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=55 name=(null) inode=13899 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=56 name=(null) inode=13899 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=57 name=(null) inode=13900 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=58 
name=(null) inode=13899 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=59 name=(null) inode=13901 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=60 name=(null) inode=13899 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=61 name=(null) inode=13902 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=62 name=(null) inode=13902 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=63 name=(null) inode=13903 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=64 name=(null) inode=13902 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=65 name=(null) inode=13904 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=66 name=(null) inode=13902 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=67 name=(null) inode=13905 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=68 name=(null) inode=13902 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=69 name=(null) inode=13906 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=70 name=(null) inode=13902 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=71 name=(null) inode=13907 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=72 name=(null) inode=13899 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=73 name=(null) inode=13908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=74 name=(null) inode=13908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=75 name=(null) inode=13909 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=76 name=(null) inode=13908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=77 name=(null) inode=13910 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=78 name=(null) inode=13908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=79 name=(null) inode=13911 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=80 name=(null) inode=13908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=81 name=(null) inode=13912 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=82 name=(null) inode=13908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=83 name=(null) inode=13913 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=84 name=(null) inode=13899 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=85 name=(null) inode=13914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=86 name=(null) inode=13914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=87 name=(null) inode=13915 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=88 name=(null) inode=13914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.753581 systemd-networkd[971]: eth0: DHCPv4 address 172.24.4.19/24, gateway 172.24.4.1 acquired from 172.24.4.1 Feb 12 20:30:41.718000 audit: PATH item=89 name=(null) inode=13916 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=90 name=(null) 
inode=13914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=91 name=(null) inode=13917 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=92 name=(null) inode=13914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=93 name=(null) inode=13918 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=94 name=(null) inode=13914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=95 name=(null) inode=13919 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=96 name=(null) inode=13899 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=97 name=(null) inode=13920 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=98 name=(null) inode=13920 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=99 name=(null) inode=13921 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=100 name=(null) inode=13920 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=101 name=(null) inode=13922 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=102 name=(null) inode=13920 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=103 name=(null) inode=13923 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=104 name=(null) inode=13920 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=105 name=(null) inode=13924 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=106 name=(null) inode=13920 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PATH item=107 name=(null) inode=13925 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:30:41.718000 audit: PROCTITLE proctitle="(udev-worker)" Feb 12 20:30:41.758500 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Feb 12 20:30:41.765534 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Feb 12 20:30:41.770452 kernel: mousedev: PS/2 mouse device common for all mice Feb 12 20:30:41.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:41.814096 systemd[1]: Finished systemd-udev-settle.service. Feb 12 20:30:41.815908 systemd[1]: Starting lvm2-activation-early.service... Feb 12 20:30:41.845611 lvm[990]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 20:30:41.879375 systemd[1]: Finished lvm2-activation-early.service. Feb 12 20:30:41.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:41.880874 systemd[1]: Reached target cryptsetup.target. Feb 12 20:30:41.884678 systemd[1]: Starting lvm2-activation.service... Feb 12 20:30:41.894185 lvm[991]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 20:30:41.934639 systemd[1]: Finished lvm2-activation.service. Feb 12 20:30:41.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:41.936069 systemd[1]: Reached target local-fs-pre.target. Feb 12 20:30:41.937234 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 12 20:30:41.937304 systemd[1]: Reached target local-fs.target. Feb 12 20:30:41.938394 systemd[1]: Reached target machines.target. Feb 12 20:30:41.942117 systemd[1]: Starting ldconfig.service... Feb 12 20:30:41.946296 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 12 20:30:41.946397 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:30:41.948863 systemd[1]: Starting systemd-boot-update.service... Feb 12 20:30:41.952219 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 12 20:30:41.956554 systemd[1]: Starting systemd-machine-id-commit.service... Feb 12 20:30:41.957985 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 12 20:30:41.958079 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 12 20:30:41.961780 systemd[1]: Starting systemd-tmpfiles-setup.service... 
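A few entries back, systemd-networkd logged the DHCPv4 lease for eth0: address 172.24.4.19/24, gateway 172.24.4.1, acquired from 172.24.4.1. Those values can be sanity-checked with Python's standard ipaddress module; a minimal sketch using only the figures from that entry:

# Sanity-check the DHCPv4 lease logged above for eth0:
# address 172.24.4.19/24, gateway 172.24.4.1 (handed out by 172.24.4.1).
import ipaddress

iface = ipaddress.ip_interface("172.24.4.19/24")
gateway = ipaddress.ip_address("172.24.4.1")

print(iface.network)                    # 172.24.4.0/24
print(iface.network.broadcast_address)  # 172.24.4.255
print(gateway in iface.network)         # True: the gateway is on-link
print(iface.network.num_addresses - 2)  # 254 usable host addresses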
Feb 12 20:30:41.991386 systemd[1]: boot.automount: Got automount request for /boot, triggered by 993 (bootctl) Feb 12 20:30:41.994155 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 12 20:30:42.038714 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 12 20:30:42.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:42.041087 systemd-tmpfiles[996]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 12 20:30:42.046761 systemd-tmpfiles[996]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 12 20:30:42.057344 systemd-tmpfiles[996]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 12 20:30:42.605460 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 12 20:30:42.607179 systemd[1]: Finished systemd-machine-id-commit.service. Feb 12 20:30:42.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:42.744226 systemd-fsck[1001]: fsck.fat 4.2 (2021-01-31) Feb 12 20:30:42.744226 systemd-fsck[1001]: /dev/vda1: 789 files, 115339/258078 clusters Feb 12 20:30:42.747853 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 12 20:30:42.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:42.754594 systemd[1]: Mounting boot.mount... Feb 12 20:30:42.782978 systemd[1]: Mounted boot.mount. Feb 12 20:30:42.819504 systemd[1]: Finished systemd-boot-update.service. Feb 12 20:30:42.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:42.992970 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 12 20:30:42.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:42.997107 systemd[1]: Starting audit-rules.service... Feb 12 20:30:43.001672 systemd[1]: Starting clean-ca-certificates.service... Feb 12 20:30:43.007236 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 12 20:30:43.019000 audit: BPF prog-id=30 op=LOAD Feb 12 20:30:43.022210 systemd[1]: Starting systemd-resolved.service... Feb 12 20:30:43.025000 audit: BPF prog-id=31 op=LOAD Feb 12 20:30:43.027669 systemd[1]: Starting systemd-timesyncd.service... Feb 12 20:30:43.030618 systemd[1]: Starting systemd-update-utmp.service... Feb 12 20:30:43.032291 systemd[1]: Finished clean-ca-certificates.service. Feb 12 20:30:43.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:30:43.033460 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 12 20:30:43.040000 audit[1010]: SYSTEM_BOOT pid=1010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 20:30:43.041969 systemd[1]: Finished systemd-update-utmp.service. Feb 12 20:30:43.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:43.076946 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 12 20:30:43.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:43.290182 systemd[1]: Started systemd-timesyncd.service. Feb 12 20:30:43.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:30:43.292579 systemd[1]: Reached target time-set.target. Feb 12 20:30:43.544924 systemd-resolved[1008]: Positive Trust Anchors: Feb 12 20:30:43.545759 systemd-resolved[1008]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 20:30:43.545848 systemd-resolved[1008]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 20:30:43.649000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 20:30:43.649000 audit[1025]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff596e8ad0 a2=420 a3=0 items=0 ppid=1004 pid=1025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:30:43.649000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 12 20:30:43.651054 augenrules[1025]: No rules Feb 12 20:30:43.652865 systemd[1]: Finished audit-rules.service. Feb 12 20:30:43.691392 systemd-resolved[1008]: Using system hostname 'ci-3510-3-2-7-712d402420.novalocal'. Feb 12 20:30:43.702905 systemd[1]: Started systemd-resolved.service. Feb 12 20:30:43.704415 systemd[1]: Reached target network.target. Feb 12 20:30:43.705826 systemd[1]: Reached target nss-lookup.target. Feb 12 20:30:43.749857 systemd-networkd[971]: eth0: Gained IPv6LL Feb 12 20:30:44.344450 systemd-timesyncd[1009]: Contacted time server 45.128.41.10:123 (0.flatcar.pool.ntp.org). Feb 12 20:30:44.344572 systemd-timesyncd[1009]: Initial clock synchronization to Mon 2024-02-12 20:30:44.344247 UTC. 
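In the audit records above, PROCTITLE carries the process's command line hex-encoded, since the argv elements are NUL-separated. Decoding the value logged just before augenrules reported "No rules" recovers the auditctl invocation that loaded the rules file:

# Decode the hex-encoded PROCTITLE from the audit record above.
proctitle_hex = (
    "2F7362696E2F617564697463746C002D52"
    "002F6574632F61756469742F61756469742E72756C6573"
)

argv = bytes.fromhex(proctitle_hex).split(b"\x00")
print([arg.decode() for arg in argv])
# ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']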
Feb 12 20:30:44.345352 systemd-resolved[1008]: Clock change detected. Flushing caches. Feb 12 20:30:44.871307 ldconfig[992]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 12 20:30:44.885587 systemd[1]: Finished ldconfig.service. Feb 12 20:30:44.889640 systemd[1]: Starting systemd-update-done.service... Feb 12 20:30:44.905570 systemd[1]: Finished systemd-update-done.service. Feb 12 20:30:44.906942 systemd[1]: Reached target sysinit.target. Feb 12 20:30:44.908238 systemd[1]: Started motdgen.path. Feb 12 20:30:44.909363 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 12 20:30:44.910959 systemd[1]: Started logrotate.timer. Feb 12 20:30:44.912429 systemd[1]: Started mdadm.timer. Feb 12 20:30:44.913438 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 12 20:30:44.914543 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 20:30:44.914622 systemd[1]: Reached target paths.target. Feb 12 20:30:44.915690 systemd[1]: Reached target timers.target. Feb 12 20:30:44.918344 systemd[1]: Listening on dbus.socket. Feb 12 20:30:44.921680 systemd[1]: Starting docker.socket... Feb 12 20:30:44.929849 systemd[1]: Listening on sshd.socket. Feb 12 20:30:44.931431 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:30:44.932802 systemd[1]: Listening on docker.socket. Feb 12 20:30:44.934273 systemd[1]: Reached target sockets.target. Feb 12 20:30:44.935543 systemd[1]: Reached target basic.target. Feb 12 20:30:44.936932 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 20:30:44.937178 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 20:30:44.939785 systemd[1]: Starting containerd.service... Feb 12 20:30:44.943802 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 12 20:30:44.947677 systemd[1]: Starting dbus.service... Feb 12 20:30:44.953167 systemd[1]: Starting enable-oem-cloudinit.service... Feb 12 20:30:44.958223 systemd[1]: Starting extend-filesystems.service... Feb 12 20:30:44.959481 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 12 20:30:44.962316 systemd[1]: Starting motdgen.service... Feb 12 20:30:44.969143 systemd[1]: Starting prepare-cni-plugins.service... Feb 12 20:30:44.992863 jq[1038]: false Feb 12 20:30:44.973006 systemd[1]: Starting prepare-critools.service... Feb 12 20:30:44.977800 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 12 20:30:44.981891 systemd[1]: Starting sshd-keygen.service... Feb 12 20:30:44.991449 systemd[1]: Starting systemd-logind.service... Feb 12 20:30:44.992766 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:30:44.992876 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 12 20:30:44.994423 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Feb 12 20:30:44.997854 systemd[1]: Starting update-engine.service... Feb 12 20:30:45.000748 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 12 20:30:45.005439 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 12 20:30:45.005640 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 12 20:30:45.015549 jq[1052]: true Feb 12 20:30:45.028815 tar[1055]: crictl Feb 12 20:30:45.033398 tar[1054]: ./ Feb 12 20:30:45.034341 tar[1054]: ./loopback Feb 12 20:30:45.048691 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 12 20:30:45.048899 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 12 20:30:45.055526 dbus-daemon[1035]: [system] SELinux support is enabled Feb 12 20:30:45.056328 systemd[1]: Started dbus.service. Feb 12 20:30:45.060636 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 12 20:30:45.060670 systemd[1]: Reached target system-config.target. Feb 12 20:30:45.061184 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 12 20:30:45.061209 systemd[1]: Reached target user-config.target. Feb 12 20:30:45.062653 jq[1062]: true Feb 12 20:30:45.074919 systemd[1]: motdgen.service: Deactivated successfully. Feb 12 20:30:45.075115 systemd[1]: Finished motdgen.service. Feb 12 20:30:45.078886 extend-filesystems[1039]: Found vda Feb 12 20:30:45.088556 extend-filesystems[1039]: Found vda1 Feb 12 20:30:45.089434 extend-filesystems[1039]: Found vda2 Feb 12 20:30:45.090370 extend-filesystems[1039]: Found vda3 Feb 12 20:30:45.091258 extend-filesystems[1039]: Found usr Feb 12 20:30:45.092027 extend-filesystems[1039]: Found vda4 Feb 12 20:30:45.093151 extend-filesystems[1039]: Found vda6 Feb 12 20:30:45.094357 extend-filesystems[1039]: Found vda7 Feb 12 20:30:45.095739 extend-filesystems[1039]: Found vda9 Feb 12 20:30:45.096574 extend-filesystems[1039]: Checking size of /dev/vda9 Feb 12 20:30:45.130378 extend-filesystems[1039]: Resized partition /dev/vda9 Feb 12 20:30:45.139189 extend-filesystems[1086]: resize2fs 1.46.5 (30-Dec-2021) Feb 12 20:30:45.170879 update_engine[1050]: I0212 20:30:45.167690 1050 main.cc:92] Flatcar Update Engine starting Feb 12 20:30:45.181314 systemd[1]: Started update-engine.service. Feb 12 20:30:45.184109 update_engine[1050]: I0212 20:30:45.183003 1050 update_check_scheduler.cc:74] Next update check in 5m56s Feb 12 20:30:45.184734 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks Feb 12 20:30:45.190064 systemd[1]: Started locksmithd.service. Feb 12 20:30:45.194768 systemd-logind[1048]: Watching system buttons on /dev/input/event1 (Power Button) Feb 12 20:30:45.194796 systemd-logind[1048]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 12 20:30:45.195049 systemd-logind[1048]: New seat seat0. Feb 12 20:30:45.199153 systemd[1]: Started systemd-logind.service. 
Feb 12 20:30:45.258859 coreos-metadata[1034]: Feb 12 20:30:45.258 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Feb 12 20:30:45.267961 env[1063]: time="2024-02-12T20:30:45.265359016Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 12 20:30:45.301963 kernel: EXT4-fs (vda9): resized filesystem to 4635643 Feb 12 20:30:45.373620 env[1063]: time="2024-02-12T20:30:45.306841055Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 12 20:30:45.375370 extend-filesystems[1086]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 12 20:30:45.375370 extend-filesystems[1086]: old_desc_blocks = 1, new_desc_blocks = 3 Feb 12 20:30:45.375370 extend-filesystems[1086]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long. Feb 12 20:30:45.375674 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 12 20:30:45.399190 env[1063]: time="2024-02-12T20:30:45.383663947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:30:45.399190 env[1063]: time="2024-02-12T20:30:45.386103622Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 12 20:30:45.399190 env[1063]: time="2024-02-12T20:30:45.386205954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:30:45.399190 env[1063]: time="2024-02-12T20:30:45.386893754Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 20:30:45.399190 env[1063]: time="2024-02-12T20:30:45.386987249Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 12 20:30:45.399190 env[1063]: time="2024-02-12T20:30:45.387097707Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 12 20:30:45.399190 env[1063]: time="2024-02-12T20:30:45.387170393Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 12 20:30:45.399190 env[1063]: time="2024-02-12T20:30:45.387501143Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:30:45.399190 env[1063]: time="2024-02-12T20:30:45.388508633Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:30:45.399190 env[1063]: time="2024-02-12T20:30:45.397454701Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 20:30:45.399920 extend-filesystems[1039]: Resized filesystem in /dev/vda9 Feb 12 20:30:45.408283 bash[1091]: Updated "/home/core/.ssh/authorized_keys" Feb 12 20:30:45.375904 systemd[1]: Finished extend-filesystems.service. 
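The extend-filesystems entries above grow /dev/vda9 online from 1617920 to 4635643 blocks, with resize2fs reporting 4 KiB blocks. The resulting sizes follow directly from those numbers; a quick Python computation using only the figures in the log:

# Sizes implied by the online resize of /dev/vda9 logged above:
# 1617920 -> 4635643 blocks, 4 KiB per block (per the resize2fs output).
BLOCK = 4096
old_blocks, new_blocks = 1_617_920, 4_635_643

def gib(blocks: int) -> float:
    return blocks * BLOCK / 2**30

print(f"before: {gib(old_blocks):.2f} GiB")             # ~6.17 GiB
print(f"after:  {gib(new_blocks):.2f} GiB")             # ~17.68 GiB
print(f"grown by {gib(new_blocks - old_blocks):.2f} GiB")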
Feb 12 20:30:45.408949 env[1063]: time="2024-02-12T20:30:45.397552014Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 12 20:30:45.408949 env[1063]: time="2024-02-12T20:30:45.397805028Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 12 20:30:45.408949 env[1063]: time="2024-02-12T20:30:45.397885880Z" level=info msg="metadata content store policy set" policy=shared Feb 12 20:30:45.378396 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 12 20:30:45.415681 env[1063]: time="2024-02-12T20:30:45.409674980Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 12 20:30:45.415681 env[1063]: time="2024-02-12T20:30:45.409808751Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 12 20:30:45.415681 env[1063]: time="2024-02-12T20:30:45.409848466Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 12 20:30:45.415681 env[1063]: time="2024-02-12T20:30:45.409934828Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 12 20:30:45.415681 env[1063]: time="2024-02-12T20:30:45.409976897Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 12 20:30:45.415681 env[1063]: time="2024-02-12T20:30:45.410097873Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 12 20:30:45.415681 env[1063]: time="2024-02-12T20:30:45.410171381Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 12 20:30:45.415681 env[1063]: time="2024-02-12T20:30:45.410221646Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 12 20:30:45.415681 env[1063]: time="2024-02-12T20:30:45.410258344Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 12 20:30:45.415681 env[1063]: time="2024-02-12T20:30:45.410293701Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 12 20:30:45.415681 env[1063]: time="2024-02-12T20:30:45.410326562Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 12 20:30:45.415681 env[1063]: time="2024-02-12T20:30:45.410360536Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 12 20:30:45.415681 env[1063]: time="2024-02-12T20:30:45.410623369Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 12 20:30:45.415681 env[1063]: time="2024-02-12T20:30:45.410880611Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 12 20:30:45.416501 env[1063]: time="2024-02-12T20:30:45.411582338Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 12 20:30:45.416501 env[1063]: time="2024-02-12T20:30:45.411648021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Feb 12 20:30:45.416501 env[1063]: time="2024-02-12T20:30:45.411685441Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 12 20:30:45.416501 env[1063]: time="2024-02-12T20:30:45.411849799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 12 20:30:45.416501 env[1063]: time="2024-02-12T20:30:45.411891367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 12 20:30:45.416501 env[1063]: time="2024-02-12T20:30:45.411925802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 12 20:30:45.416501 env[1063]: time="2024-02-12T20:30:45.411957161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 12 20:30:45.416501 env[1063]: time="2024-02-12T20:30:45.411995182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 12 20:30:45.416501 env[1063]: time="2024-02-12T20:30:45.412043813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 12 20:30:45.416501 env[1063]: time="2024-02-12T20:30:45.412075933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 12 20:30:45.416501 env[1063]: time="2024-02-12T20:30:45.412106380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 12 20:30:45.416501 env[1063]: time="2024-02-12T20:30:45.412142628Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 12 20:30:45.416501 env[1063]: time="2024-02-12T20:30:45.412450716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 12 20:30:45.416501 env[1063]: time="2024-02-12T20:30:45.412497644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 12 20:30:45.416501 env[1063]: time="2024-02-12T20:30:45.412529784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 12 20:30:45.417156 env[1063]: time="2024-02-12T20:30:45.412560192Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 12 20:30:45.417156 env[1063]: time="2024-02-12T20:30:45.412599656Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 12 20:30:45.417156 env[1063]: time="2024-02-12T20:30:45.412632267Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 12 20:30:45.417156 env[1063]: time="2024-02-12T20:30:45.412674165Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 12 20:30:45.421175 env[1063]: time="2024-02-12T20:30:45.419446647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 12 20:30:45.421234 env[1063]: time="2024-02-12T20:30:45.419854051Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 12 20:30:45.421234 env[1063]: time="2024-02-12T20:30:45.419980168Z" level=info msg="Connect containerd service" Feb 12 20:30:45.421234 env[1063]: time="2024-02-12T20:30:45.420030602Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 12 20:30:45.421234 env[1063]: time="2024-02-12T20:30:45.421145644Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 20:30:45.432885 env[1063]: time="2024-02-12T20:30:45.421284143Z" level=info msg="Start subscribing containerd event" Feb 12 20:30:45.432885 env[1063]: time="2024-02-12T20:30:45.421390914Z" level=info msg="Start recovering state" Feb 12 20:30:45.432885 env[1063]: time="2024-02-12T20:30:45.421477857Z" level=info msg="Start event monitor" Feb 12 20:30:45.432885 env[1063]: time="2024-02-12T20:30:45.421492023Z" level=info msg="Start snapshots syncer" Feb 12 20:30:45.432885 env[1063]: time="2024-02-12T20:30:45.421501912Z" level=info msg="Start cni network conf syncer for default" Feb 12 20:30:45.432885 env[1063]: time="2024-02-12T20:30:45.421510458Z" level=info msg="Start streaming server" Feb 12 20:30:45.432885 env[1063]: time="2024-02-12T20:30:45.421981531Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Feb 12 20:30:45.432885 env[1063]: time="2024-02-12T20:30:45.422026175Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 12 20:30:45.433108 tar[1054]: ./bandwidth Feb 12 20:30:45.422210 systemd[1]: Started containerd.service. Feb 12 20:30:45.435664 env[1063]: time="2024-02-12T20:30:45.433576407Z" level=info msg="containerd successfully booted in 0.247698s" Feb 12 20:30:45.469317 coreos-metadata[1034]: Feb 12 20:30:45.469 INFO Fetch successful Feb 12 20:30:45.469317 coreos-metadata[1034]: Feb 12 20:30:45.469 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 12 20:30:45.485830 coreos-metadata[1034]: Feb 12 20:30:45.483 INFO Fetch successful Feb 12 20:30:45.494540 unknown[1034]: wrote ssh authorized keys file for user: core Feb 12 20:30:45.502268 tar[1054]: ./ptp Feb 12 20:30:45.534975 update-ssh-keys[1101]: Updated "/home/core/.ssh/authorized_keys" Feb 12 20:30:45.535369 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 12 20:30:45.600202 tar[1054]: ./vlan Feb 12 20:30:45.720958 tar[1054]: ./host-device Feb 12 20:30:45.816792 tar[1054]: ./tuning Feb 12 20:30:45.889114 tar[1054]: ./vrf Feb 12 20:30:45.977579 locksmithd[1093]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 12 20:30:45.978361 tar[1054]: ./sbr Feb 12 20:30:46.070128 tar[1054]: ./tap Feb 12 20:30:46.150987 tar[1054]: ./dhcp Feb 12 20:30:46.295232 systemd[1]: Finished prepare-critools.service. Feb 12 20:30:46.305623 tar[1054]: ./static Feb 12 20:30:46.335053 tar[1054]: ./firewall Feb 12 20:30:46.377775 tar[1054]: ./macvlan Feb 12 20:30:46.417840 tar[1054]: ./dummy Feb 12 20:30:46.457105 tar[1054]: ./bridge Feb 12 20:30:46.500779 tar[1054]: ./ipvlan Feb 12 20:30:46.540845 tar[1054]: ./portmap Feb 12 20:30:46.579415 tar[1054]: ./host-local Feb 12 20:30:46.624201 systemd[1]: Finished prepare-cni-plugins.service. Feb 12 20:30:46.874729 sshd_keygen[1047]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 12 20:30:46.903236 systemd[1]: Finished sshd-keygen.service. Feb 12 20:30:46.908227 systemd[1]: Starting issuegen.service... Feb 12 20:30:46.915477 systemd[1]: issuegen.service: Deactivated successfully. Feb 12 20:30:46.915849 systemd[1]: Finished issuegen.service. Feb 12 20:30:46.927779 systemd[1]: Starting systemd-user-sessions.service... Feb 12 20:30:46.941237 systemd[1]: Finished systemd-user-sessions.service. Feb 12 20:30:46.943283 systemd[1]: Started getty@tty1.service. Feb 12 20:30:46.945364 systemd[1]: Started serial-getty@ttyS0.service. Feb 12 20:30:46.946016 systemd[1]: Reached target getty.target. Feb 12 20:30:46.946523 systemd[1]: Reached target multi-user.target. Feb 12 20:30:46.948330 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 12 20:30:46.962142 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 12 20:30:46.962659 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 12 20:30:46.968899 systemd[1]: Startup finished in 1.001s (kernel) + 9.727s (initrd) + 12.869s (userspace) = 23.598s. Feb 12 20:30:54.534658 systemd[1]: Created slice system-sshd.slice. Feb 12 20:30:54.537824 systemd[1]: Started sshd@0-172.24.4.19:22-172.24.4.1:41164.service. 
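The "Start cri plugin with config" line above dumps the effective CRI configuration containerd 1.6.16 booted with: overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup:true, the registry.k8s.io/pause:3.6 sandbox image, CNI binaries in /opt/cni/bin and CNI config in /etc/cni/net.d. As a rough illustration only, a /etc/containerd/config.toml producing roughly that runtime state could look like the sketch below; the file actually shipped on this host is not shown in the log, so every value here is an assumption mirrored back from the dump.

    # Illustrative sketch only -- mirrors the CRI config dump in the log above,
    # not the literal file present on this host.
    cat <<'EOF' >/etc/containerd/config.toml
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.6"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"
    EOF
    systemctl restart containerd   # only needed if the file is actually changed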
Feb 12 20:30:55.839821 sshd[1122]: Accepted publickey for core from 172.24.4.1 port 41164 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4 Feb 12 20:30:55.843641 sshd[1122]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:30:55.867142 systemd[1]: Created slice user-500.slice. Feb 12 20:30:55.869938 systemd[1]: Starting user-runtime-dir@500.service... Feb 12 20:30:55.873260 systemd-logind[1048]: New session 1 of user core. Feb 12 20:30:55.887167 systemd[1]: Finished user-runtime-dir@500.service. Feb 12 20:30:55.890320 systemd[1]: Starting user@500.service... Feb 12 20:30:55.898077 (systemd)[1125]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:30:56.032592 systemd[1125]: Queued start job for default target default.target. Feb 12 20:30:56.033242 systemd[1125]: Reached target paths.target. Feb 12 20:30:56.033268 systemd[1125]: Reached target sockets.target. Feb 12 20:30:56.033285 systemd[1125]: Reached target timers.target. Feb 12 20:30:56.033300 systemd[1125]: Reached target basic.target. Feb 12 20:30:56.033424 systemd[1]: Started user@500.service. Feb 12 20:30:56.034537 systemd[1]: Started session-1.scope. Feb 12 20:30:56.035050 systemd[1125]: Reached target default.target. Feb 12 20:30:56.035208 systemd[1125]: Startup finished in 123ms. Feb 12 20:30:56.490113 systemd[1]: Started sshd@1-172.24.4.19:22-172.24.4.1:41166.service. Feb 12 20:30:58.518501 sshd[1134]: Accepted publickey for core from 172.24.4.1 port 41166 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4 Feb 12 20:30:58.523419 sshd[1134]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:30:58.537393 systemd-logind[1048]: New session 2 of user core. Feb 12 20:30:58.539330 systemd[1]: Started session-2.scope. Feb 12 20:30:59.326032 sshd[1134]: pam_unix(sshd:session): session closed for user core Feb 12 20:30:59.334334 systemd[1]: Started sshd@2-172.24.4.19:22-172.24.4.1:41176.service. Feb 12 20:30:59.338533 systemd[1]: sshd@1-172.24.4.19:22-172.24.4.1:41166.service: Deactivated successfully. Feb 12 20:30:59.340308 systemd[1]: session-2.scope: Deactivated successfully. Feb 12 20:30:59.343903 systemd-logind[1048]: Session 2 logged out. Waiting for processes to exit. Feb 12 20:30:59.346197 systemd-logind[1048]: Removed session 2. Feb 12 20:31:00.608069 sshd[1139]: Accepted publickey for core from 172.24.4.1 port 41176 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4 Feb 12 20:31:00.610310 sshd[1139]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:31:00.623003 systemd-logind[1048]: New session 3 of user core. Feb 12 20:31:00.624379 systemd[1]: Started session-3.scope. Feb 12 20:31:01.379952 sshd[1139]: pam_unix(sshd:session): session closed for user core Feb 12 20:31:01.387400 systemd[1]: Started sshd@3-172.24.4.19:22-172.24.4.1:41192.service. Feb 12 20:31:01.388506 systemd[1]: sshd@2-172.24.4.19:22-172.24.4.1:41176.service: Deactivated successfully. Feb 12 20:31:01.390878 systemd[1]: session-3.scope: Deactivated successfully. Feb 12 20:31:01.393419 systemd-logind[1048]: Session 3 logged out. Waiting for processes to exit. Feb 12 20:31:01.395688 systemd-logind[1048]: Removed session 3. 
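Each "Accepted publickey" line carries the SHA256 fingerprint of the key that authenticated (ssFkN0BQ...). If that fingerprint ever needs to be matched against the authorized_keys file that coreos-metadata-sshkeys@core wrote earlier, the installed keys can be fingerprinted directly; this is a generic check, not a command recorded in this log.

    # List the SHA256 fingerprints of the keys installed for the core user
    # and compare them with the fingerprint printed by sshd above.
    ssh-keygen -lf /home/core/.ssh/authorized_keys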
Feb 12 20:31:02.508292 sshd[1145]: Accepted publickey for core from 172.24.4.1 port 41192 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4 Feb 12 20:31:02.511485 sshd[1145]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:31:02.521001 systemd-logind[1048]: New session 4 of user core. Feb 12 20:31:02.521859 systemd[1]: Started session-4.scope. Feb 12 20:31:03.329091 sshd[1145]: pam_unix(sshd:session): session closed for user core Feb 12 20:31:03.336849 systemd[1]: Started sshd@4-172.24.4.19:22-172.24.4.1:41194.service. Feb 12 20:31:03.338319 systemd[1]: sshd@3-172.24.4.19:22-172.24.4.1:41192.service: Deactivated successfully. Feb 12 20:31:03.340627 systemd[1]: session-4.scope: Deactivated successfully. Feb 12 20:31:03.342672 systemd-logind[1048]: Session 4 logged out. Waiting for processes to exit. Feb 12 20:31:03.346157 systemd-logind[1048]: Removed session 4. Feb 12 20:31:04.870240 sshd[1152]: Accepted publickey for core from 172.24.4.1 port 41194 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4 Feb 12 20:31:04.873103 sshd[1152]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:31:04.884614 systemd-logind[1048]: New session 5 of user core. Feb 12 20:31:04.885649 systemd[1]: Started session-5.scope. Feb 12 20:31:05.419261 sudo[1156]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 12 20:31:05.420498 sudo[1156]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 20:31:06.052624 systemd[1]: Reloading. Feb 12 20:31:06.180083 /usr/lib/systemd/system-generators/torcx-generator[1185]: time="2024-02-12T20:31:06Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 20:31:06.180117 /usr/lib/systemd/system-generators/torcx-generator[1185]: time="2024-02-12T20:31:06Z" level=info msg="torcx already run" Feb 12 20:31:06.268123 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 20:31:06.268145 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:31:06.293590 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:31:06.373399 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 20:31:06.387828 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 20:31:06.388351 systemd[1]: Reached target network-online.target. Feb 12 20:31:06.390537 systemd[1]: Started kubelet.service. Feb 12 20:31:06.406823 systemd[1]: Starting coreos-metadata.service... 
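The reload above prints two classes of warnings: locksmithd.service still uses the legacy cgroup-v1 directives CPUShares= and MemoryLimit=, and docker.socket references the legacy /var/run path. Because the shipped units live on the read-only /usr partition, the usual way to address the cgroup warnings is a local drop-in supplying the modern directives; the sketch below is hypothetical and the limit values are placeholders, not taken from the log.

    # Hypothetical drop-in; pick limits appropriate for the host.
    mkdir -p /etc/systemd/system/locksmithd.service.d
    cat <<'EOF' >/etc/systemd/system/locksmithd.service.d/10-cgroup-v2.conf
    [Service]
    CPUWeight=100
    MemoryMax=128M
    EOF
    systemctl daemon-reload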
Feb 12 20:31:06.468908 coreos-metadata[1240]: Feb 12 20:31:06.468 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 12 20:31:06.472995 kubelet[1232]: E0212 20:31:06.472894 1232 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 12 20:31:06.477483 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 20:31:06.477700 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 20:31:06.669931 coreos-metadata[1240]: Feb 12 20:31:06.668 INFO Fetch successful Feb 12 20:31:06.670348 coreos-metadata[1240]: Feb 12 20:31:06.670 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Feb 12 20:31:06.687453 coreos-metadata[1240]: Feb 12 20:31:06.687 INFO Fetch successful Feb 12 20:31:06.687789 coreos-metadata[1240]: Feb 12 20:31:06.687 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Feb 12 20:31:06.704066 coreos-metadata[1240]: Feb 12 20:31:06.703 INFO Fetch successful Feb 12 20:31:06.704341 coreos-metadata[1240]: Feb 12 20:31:06.704 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Feb 12 20:31:06.722349 coreos-metadata[1240]: Feb 12 20:31:06.722 INFO Fetch successful Feb 12 20:31:06.722654 coreos-metadata[1240]: Feb 12 20:31:06.722 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Feb 12 20:31:06.734921 coreos-metadata[1240]: Feb 12 20:31:06.734 INFO Fetch successful Feb 12 20:31:06.752300 systemd[1]: Finished coreos-metadata.service. Feb 12 20:31:07.437525 systemd[1]: Stopped kubelet.service. Feb 12 20:31:07.478681 systemd[1]: Reloading. Feb 12 20:31:07.605853 /usr/lib/systemd/system-generators/torcx-generator[1295]: time="2024-02-12T20:31:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 20:31:07.605887 /usr/lib/systemd/system-generators/torcx-generator[1295]: time="2024-02-12T20:31:07Z" level=info msg="torcx already run" Feb 12 20:31:07.690331 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 20:31:07.690357 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:31:07.716779 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:31:07.813250 systemd[1]: Started kubelet.service. Feb 12 20:31:07.894645 kubelet[1342]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 20:31:07.895118 kubelet[1342]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Feb 12 20:31:07.895201 kubelet[1342]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 20:31:07.895385 kubelet[1342]: I0212 20:31:07.895344 1342 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 20:31:08.657628 kubelet[1342]: I0212 20:31:08.657562 1342 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 12 20:31:08.658000 kubelet[1342]: I0212 20:31:08.657987 1342 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 20:31:08.658374 kubelet[1342]: I0212 20:31:08.658358 1342 server.go:837] "Client rotation is on, will bootstrap in background" Feb 12 20:31:08.661640 kubelet[1342]: I0212 20:31:08.661538 1342 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 20:31:08.669009 kubelet[1342]: I0212 20:31:08.668984 1342 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 12 20:31:08.669446 kubelet[1342]: I0212 20:31:08.669433 1342 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 20:31:08.669588 kubelet[1342]: I0212 20:31:08.669574 1342 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 20:31:08.669754 kubelet[1342]: I0212 20:31:08.669740 1342 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 20:31:08.669819 kubelet[1342]: I0212 20:31:08.669810 1342 container_manager_linux.go:302] "Creating device plugin manager" Feb 12 20:31:08.669987 kubelet[1342]: I0212 20:31:08.669975 1342 state_mem.go:36] "Initialized new in-memory state store" Feb 12 20:31:08.676430 kubelet[1342]: I0212 20:31:08.676405 1342 kubelet.go:405] "Attempting to sync node with API server" Feb 12 20:31:08.676606 kubelet[1342]: I0212 20:31:08.676596 1342 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 20:31:08.676732 kubelet[1342]: 
I0212 20:31:08.676704 1342 kubelet.go:309] "Adding apiserver pod source" Feb 12 20:31:08.676824 kubelet[1342]: I0212 20:31:08.676814 1342 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 20:31:08.677166 kubelet[1342]: E0212 20:31:08.677128 1342 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:08.677222 kubelet[1342]: E0212 20:31:08.677213 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:08.678233 kubelet[1342]: I0212 20:31:08.678210 1342 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 20:31:08.678595 kubelet[1342]: W0212 20:31:08.678583 1342 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 12 20:31:08.679248 kubelet[1342]: I0212 20:31:08.679231 1342 server.go:1168] "Started kubelet" Feb 12 20:31:08.679605 kubelet[1342]: I0212 20:31:08.679553 1342 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 20:31:08.683446 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 12 20:31:08.683638 kubelet[1342]: I0212 20:31:08.683594 1342 server.go:461] "Adding debug handlers to kubelet server" Feb 12 20:31:08.683760 kubelet[1342]: I0212 20:31:08.683743 1342 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 20:31:08.686109 kubelet[1342]: I0212 20:31:08.679577 1342 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 12 20:31:08.686923 kubelet[1342]: E0212 20:31:08.686885 1342 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 20:31:08.686984 kubelet[1342]: E0212 20:31:08.686943 1342 kubelet.go:1400] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 20:31:08.698268 kubelet[1342]: I0212 20:31:08.698246 1342 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 12 20:31:08.698501 kubelet[1342]: I0212 20:31:08.698433 1342 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 12 20:31:08.711467 kubelet[1342]: W0212 20:31:08.711412 1342 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 20:31:08.711727 kubelet[1342]: E0212 20:31:08.711701 1342 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 20:31:08.711869 kubelet[1342]: W0212 20:31:08.711856 1342 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.24.4.19" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 20:31:08.711940 kubelet[1342]: E0212 20:31:08.711931 1342 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.19" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 20:31:08.712118 kubelet[1342]: E0212 20:31:08.712031 1342 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.19.17b337a5bcbd5588", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.19", UID:"172.24.4.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.19"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 31, 8, 679206280, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 31, 8, 679206280, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
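The first kubelet start above exited because /var/lib/kubelet/config.yaml did not exist yet; by this second start, the "Creating Container Manager object based on Node Config" line shows the settings it is actually running with (systemd cgroup driver, the default hard-eviction thresholds, CPU and topology managers set to none). A KubeletConfiguration expressing those same settings would look roughly like the sketch below; it is reconstructed from that log line, not copied from the real file on this host.

    # Illustrative /var/lib/kubelet/config.yaml mirroring the nodeConfig dump above.
    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    cpuManagerPolicy: none
    topologyManagerPolicy: none
    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
    EOF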
Feb 12 20:31:08.712453 kubelet[1342]: W0212 20:31:08.712438 1342 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 20:31:08.712564 kubelet[1342]: E0212 20:31:08.712552 1342 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 20:31:08.712765 kubelet[1342]: E0212 20:31:08.712749 1342 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.24.4.19\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 12 20:31:08.720663 kubelet[1342]: E0212 20:31:08.720388 1342 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.19.17b337a5bd330ace", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.19", UID:"172.24.4.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.19"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 31, 8, 686920398, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 31, 8, 686920398, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
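Every forbidden/system:anonymous error in this stretch has the same cause: "Client rotation is on, will bootstrap in background" means the kubelet is still waiting for its client certificate, so its watches, event posts, and lease updates are unauthenticated and RBAC rejects them. They normally stop once the bootstrap CSR is signed (see the "Certificate rotation detected" line further down). If a node stayed stuck here, the pending CSR could be inspected and approved from a machine with cluster-admin credentials; the commands below are a generic troubleshooting step, not something recorded in this log.

    # Generic check from an admin workstation (not run on this host in the log).
    kubectl get csr
    kubectl certificate approve <csr-name>   # <csr-name> is a placeholder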
Feb 12 20:31:08.740482 kubelet[1342]: I0212 20:31:08.740006 1342 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 20:31:08.740935 kubelet[1342]: I0212 20:31:08.740470 1342 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 20:31:08.741153 kubelet[1342]: I0212 20:31:08.741118 1342 state_mem.go:36] "Initialized new in-memory state store" Feb 12 20:31:08.743126 kubelet[1342]: E0212 20:31:08.741521 1342 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.19.17b337a5c034bb23", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.19", UID:"172.24.4.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.19 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.19"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 31, 8, 737362723, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 31, 8, 737362723, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:31:08.744809 kubelet[1342]: E0212 20:31:08.744533 1342 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.19.17b337a5c034fad9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.19", UID:"172.24.4.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.19 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.19"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 31, 8, 737379033, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 31, 8, 737379033, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:31:08.747470 kubelet[1342]: E0212 20:31:08.747333 1342 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.19.17b337a5c03519a6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.19", UID:"172.24.4.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.19 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.19"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 31, 8, 737386918, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 31, 8, 737386918, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:31:08.752350 kubelet[1342]: I0212 20:31:08.752328 1342 policy_none.go:49] "None policy: Start" Feb 12 20:31:08.753632 kubelet[1342]: I0212 20:31:08.753601 1342 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 20:31:08.753746 kubelet[1342]: I0212 20:31:08.753636 1342 state_mem.go:35] "Initializing new in-memory state store" Feb 12 20:31:08.761240 systemd[1]: Created slice kubepods.slice. Feb 12 20:31:08.766125 systemd[1]: Created slice kubepods-burstable.slice. Feb 12 20:31:08.771090 systemd[1]: Created slice kubepods-besteffort.slice. 
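"Created slice kubepods.slice / kubepods-burstable.slice / kubepods-besteffort.slice" shows the kubelet building its pod cgroup hierarchy through systemd, matching the systemd cgroup driver and the three QoS classes. The resulting tree can be inspected with standard systemd tooling; these commands are generic and nothing in the log depends on them.

    # Inspect the pod cgroup hierarchy the kubelet just created.
    systemctl status kubepods.slice
    systemd-cgls --no-pager        # full control-group tree, including kubepods-*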
Feb 12 20:31:08.781042 kubelet[1342]: I0212 20:31:08.778566 1342 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 20:31:08.781042 kubelet[1342]: I0212 20:31:08.778950 1342 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 20:31:08.784450 kubelet[1342]: E0212 20:31:08.783613 1342 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.19.17b337a5c2cc30e8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.19", UID:"172.24.4.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.19"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 31, 8, 780843240, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 31, 8, 780843240, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:31:08.791054 kubelet[1342]: E0212 20:31:08.790979 1342 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.19\" not found" Feb 12 20:31:08.799627 kubelet[1342]: I0212 20:31:08.799567 1342 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.19" Feb 12 20:31:08.801733 kubelet[1342]: E0212 20:31:08.801631 1342 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.19.17b337a5c034bb23", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.19", UID:"172.24.4.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.19 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.19"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 31, 8, 737362723, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 31, 8, 799529627, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.19.17b337a5c034bb23" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in 
the namespace "default"' (will not retry!) Feb 12 20:31:08.801952 kubelet[1342]: E0212 20:31:08.801934 1342 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.19" Feb 12 20:31:08.802671 kubelet[1342]: E0212 20:31:08.802554 1342 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.19.17b337a5c034fad9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.19", UID:"172.24.4.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.19 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.19"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 31, 8, 737379033, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 31, 8, 799535187, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.19.17b337a5c034fad9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:31:08.804437 kubelet[1342]: E0212 20:31:08.804141 1342 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.19.17b337a5c03519a6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.19", UID:"172.24.4.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.19 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.19"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 31, 8, 737386918, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 31, 8, 799538584, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.19.17b337a5c03519a6" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:31:08.852534 kubelet[1342]: I0212 20:31:08.852455 1342 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 12 20:31:08.857361 kubelet[1342]: I0212 20:31:08.857321 1342 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 12 20:31:08.857479 kubelet[1342]: I0212 20:31:08.857420 1342 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 12 20:31:08.857519 kubelet[1342]: I0212 20:31:08.857497 1342 kubelet.go:2257] "Starting kubelet main sync loop" Feb 12 20:31:08.857648 kubelet[1342]: E0212 20:31:08.857614 1342 kubelet.go:2281] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 12 20:31:08.861750 kubelet[1342]: W0212 20:31:08.861696 1342 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 20:31:08.861931 kubelet[1342]: E0212 20:31:08.861916 1342 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 20:31:08.915820 kubelet[1342]: E0212 20:31:08.915546 1342 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.24.4.19\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Feb 12 20:31:09.004266 kubelet[1342]: I0212 20:31:09.004223 1342 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.19" Feb 12 20:31:09.007202 kubelet[1342]: E0212 20:31:09.007147 1342 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.19" Feb 12 20:31:09.007696 kubelet[1342]: E0212 20:31:09.007535 1342 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.19.17b337a5c034bb23", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.19", UID:"172.24.4.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.19 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.19"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 31, 8, 737362723, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 31, 9, 4122691, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.19.17b337a5c034bb23" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:31:09.010202 kubelet[1342]: E0212 20:31:09.010034 1342 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.19.17b337a5c034fad9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.19", UID:"172.24.4.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.19 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.19"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 31, 8, 737379033, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 31, 9, 4152868, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.19.17b337a5c034fad9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:31:09.011989 kubelet[1342]: E0212 20:31:09.011859 1342 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.19.17b337a5c03519a6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.19", UID:"172.24.4.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.19 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.19"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 31, 8, 737386918, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 31, 9, 4164349, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.19.17b337a5c03519a6" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:31:09.318573 kubelet[1342]: E0212 20:31:09.318516 1342 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.24.4.19\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Feb 12 20:31:09.410552 kubelet[1342]: I0212 20:31:09.410505 1342 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.19" Feb 12 20:31:09.412310 kubelet[1342]: E0212 20:31:09.412154 1342 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.19.17b337a5c034bb23", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.19", UID:"172.24.4.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.19 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.19"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 31, 8, 737362723, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 31, 9, 409775633, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.19.17b337a5c034bb23" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
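The "Failed to ensure lease exists, will retry" messages back off from 200ms to 400ms and now 800ms; the node lease being created lives in the kube-node-lease namespace and is named after the node. Once registration succeeds it can be viewed as below; again a generic command, not one taken from this log.

    # View the node's heartbeat lease after the node has registered.
    kubectl -n kube-node-lease get lease 172.24.4.19 -o yaml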
Feb 12 20:31:09.413228 kubelet[1342]: E0212 20:31:09.413153 1342 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.19" Feb 12 20:31:09.414900 kubelet[1342]: E0212 20:31:09.414688 1342 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.19.17b337a5c034fad9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.19", UID:"172.24.4.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.19 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.19"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 31, 8, 737379033, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 31, 9, 409816089, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.19.17b337a5c034fad9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:31:09.416653 kubelet[1342]: E0212 20:31:09.416522 1342 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.19.17b337a5c03519a6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.19", UID:"172.24.4.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.19 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.19"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 31, 8, 737386918, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 31, 9, 409827099, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.19.17b337a5c03519a6" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:31:09.559912 kubelet[1342]: W0212 20:31:09.559837 1342 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.24.4.19" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 20:31:09.560341 kubelet[1342]: E0212 20:31:09.560309 1342 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.19" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 20:31:09.660842 kubelet[1342]: I0212 20:31:09.660614 1342 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 12 20:31:09.677419 kubelet[1342]: E0212 20:31:09.677361 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:10.082885 kubelet[1342]: E0212 20:31:10.082832 1342 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.24.4.19" not found Feb 12 20:31:10.133591 kubelet[1342]: E0212 20:31:10.133525 1342 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.24.4.19\" not found" node="172.24.4.19" Feb 12 20:31:10.214697 kubelet[1342]: I0212 20:31:10.214649 1342 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.19" Feb 12 20:31:10.223851 kubelet[1342]: I0212 20:31:10.223794 1342 kubelet_node_status.go:73] "Successfully registered node" node="172.24.4.19" Feb 12 20:31:10.247904 kubelet[1342]: E0212 20:31:10.247814 1342 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.19\" not found" Feb 12 20:31:10.343324 sudo[1156]: pam_unix(sudo:session): session closed for user root Feb 12 20:31:10.348694 kubelet[1342]: E0212 20:31:10.348630 1342 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.19\" not found" Feb 12 20:31:10.449923 kubelet[1342]: E0212 20:31:10.449834 1342 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.19\" not found" Feb 12 20:31:10.527691 sshd[1152]: pam_unix(sshd:session): session closed for user core Feb 12 20:31:10.534197 systemd-logind[1048]: Session 5 logged out. Waiting for processes to exit. Feb 12 20:31:10.534573 systemd[1]: sshd@4-172.24.4.19:22-172.24.4.1:41194.service: Deactivated successfully. Feb 12 20:31:10.536332 systemd[1]: session-5.scope: Deactivated successfully. Feb 12 20:31:10.539977 systemd-logind[1048]: Removed session 5. 
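"Certificate rotation detected" followed by "Successfully registered node" marks the end of the anonymous-bootstrap phase: the kubelet now talks to the API server with its own credentials, and the remaining node "172.24.4.19" not found errors fade as the informer caches catch up. From an admin workstation the result could be confirmed with the usual commands (not shown in the log):

    kubectl get node 172.24.4.19 -o wide
    kubectl describe node 172.24.4.19   # shows the NodeHasSufficient* conditions and events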
Feb 12 20:31:10.550374 kubelet[1342]: E0212 20:31:10.550296 1342 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.19\" not found" Feb 12 20:31:10.651664 kubelet[1342]: E0212 20:31:10.651455 1342 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.19\" not found" Feb 12 20:31:10.677933 kubelet[1342]: E0212 20:31:10.677835 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:10.752306 kubelet[1342]: E0212 20:31:10.752244 1342 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.19\" not found" Feb 12 20:31:10.852694 kubelet[1342]: E0212 20:31:10.852496 1342 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.19\" not found" Feb 12 20:31:10.953680 kubelet[1342]: E0212 20:31:10.953464 1342 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.19\" not found" Feb 12 20:31:11.054596 kubelet[1342]: E0212 20:31:11.054524 1342 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.19\" not found" Feb 12 20:31:11.155477 kubelet[1342]: E0212 20:31:11.155407 1342 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.19\" not found" Feb 12 20:31:11.256276 kubelet[1342]: E0212 20:31:11.256142 1342 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.19\" not found" Feb 12 20:31:11.357164 kubelet[1342]: E0212 20:31:11.357014 1342 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.19\" not found" Feb 12 20:31:11.458180 kubelet[1342]: E0212 20:31:11.458018 1342 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.19\" not found" Feb 12 20:31:11.559491 kubelet[1342]: E0212 20:31:11.559261 1342 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.19\" not found" Feb 12 20:31:11.660472 kubelet[1342]: E0212 20:31:11.660281 1342 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.19\" not found" Feb 12 20:31:11.679128 kubelet[1342]: E0212 20:31:11.679037 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:11.761154 kubelet[1342]: E0212 20:31:11.760945 1342 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.19\" not found" Feb 12 20:31:11.862102 kubelet[1342]: E0212 20:31:11.861790 1342 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.19\" not found" Feb 12 20:31:11.962043 kubelet[1342]: E0212 20:31:11.961973 1342 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.19\" not found" Feb 12 20:31:12.063256 kubelet[1342]: E0212 20:31:12.063173 1342 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.19\" not found" Feb 12 20:31:12.165330 kubelet[1342]: I0212 20:31:12.165139 1342 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 12 20:31:12.167265 env[1063]: time="2024-02-12T20:31:12.166350398Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
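At this point the kubelet has received podCIDR 192.168.1.0/24 and pushed it to containerd, but /etc/cni/net.d is still empty ("wait for other system components to drop the config"); the cilium-n6cd2 pod admitted just below is the component expected to write it. Purely for illustration, a hand-written conflist using the bridge/host-local/portmap plugins unpacked into /opt/cni/bin earlier in the log would look like this; on this host Cilium installs its own configuration instead, so treat the file name and contents as assumptions.

    # Illustrative only -- Cilium manages the CNI config on this node.
    cat <<'EOF' >/etc/cni/net.d/10-bridge.conflist
    {
      "cniVersion": "0.3.1",
      "name": "podnet",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "192.168.1.0/24" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF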
Feb 12 20:31:12.168511 kubelet[1342]: I0212 20:31:12.167566 1342 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 12 20:31:12.680082 kubelet[1342]: I0212 20:31:12.679900 1342 apiserver.go:52] "Watching apiserver" Feb 12 20:31:12.680608 kubelet[1342]: E0212 20:31:12.680511 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:12.687443 kubelet[1342]: I0212 20:31:12.687346 1342 topology_manager.go:212] "Topology Admit Handler" Feb 12 20:31:12.687863 kubelet[1342]: I0212 20:31:12.687561 1342 topology_manager.go:212] "Topology Admit Handler" Feb 12 20:31:12.702259 kubelet[1342]: I0212 20:31:12.702193 1342 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 12 20:31:12.710805 systemd[1]: Created slice kubepods-besteffort-pode1955c7b_ce13_4c23_8147_f298654c1ed6.slice. Feb 12 20:31:12.719841 kubelet[1342]: I0212 20:31:12.719567 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2ab816fe-5e81-41b2-9526-0d1e9b627b08-hostproc\") pod \"cilium-n6cd2\" (UID: \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\") " pod="kube-system/cilium-n6cd2" Feb 12 20:31:12.720914 kubelet[1342]: I0212 20:31:12.720878 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2ab816fe-5e81-41b2-9526-0d1e9b627b08-xtables-lock\") pod \"cilium-n6cd2\" (UID: \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\") " pod="kube-system/cilium-n6cd2" Feb 12 20:31:12.721135 kubelet[1342]: I0212 20:31:12.721112 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2ab816fe-5e81-41b2-9526-0d1e9b627b08-bpf-maps\") pod \"cilium-n6cd2\" (UID: \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\") " pod="kube-system/cilium-n6cd2" Feb 12 20:31:12.721356 kubelet[1342]: I0212 20:31:12.721332 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2ab816fe-5e81-41b2-9526-0d1e9b627b08-host-proc-sys-kernel\") pod \"cilium-n6cd2\" (UID: \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\") " pod="kube-system/cilium-n6cd2" Feb 12 20:31:12.721579 kubelet[1342]: I0212 20:31:12.721556 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhm8m\" (UniqueName: \"kubernetes.io/projected/2ab816fe-5e81-41b2-9526-0d1e9b627b08-kube-api-access-lhm8m\") pod \"cilium-n6cd2\" (UID: \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\") " pod="kube-system/cilium-n6cd2" Feb 12 20:31:12.721850 kubelet[1342]: I0212 20:31:12.721823 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1955c7b-ce13-4c23-8147-f298654c1ed6-lib-modules\") pod \"kube-proxy-xvfnc\" (UID: \"e1955c7b-ce13-4c23-8147-f298654c1ed6\") " pod="kube-system/kube-proxy-xvfnc" Feb 12 20:31:12.722220 kubelet[1342]: I0212 20:31:12.722155 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2ab816fe-5e81-41b2-9526-0d1e9b627b08-cilium-config-path\") pod \"cilium-n6cd2\" (UID: \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\") " 
pod="kube-system/cilium-n6cd2" Feb 12 20:31:12.722317 kubelet[1342]: I0212 20:31:12.722273 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1955c7b-ce13-4c23-8147-f298654c1ed6-xtables-lock\") pod \"kube-proxy-xvfnc\" (UID: \"e1955c7b-ce13-4c23-8147-f298654c1ed6\") " pod="kube-system/kube-proxy-xvfnc" Feb 12 20:31:12.722405 kubelet[1342]: I0212 20:31:12.722336 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2ab816fe-5e81-41b2-9526-0d1e9b627b08-cilium-run\") pod \"cilium-n6cd2\" (UID: \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\") " pod="kube-system/cilium-n6cd2" Feb 12 20:31:12.722405 kubelet[1342]: I0212 20:31:12.722393 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2ab816fe-5e81-41b2-9526-0d1e9b627b08-etc-cni-netd\") pod \"cilium-n6cd2\" (UID: \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\") " pod="kube-system/cilium-n6cd2" Feb 12 20:31:12.722534 kubelet[1342]: I0212 20:31:12.722463 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ab816fe-5e81-41b2-9526-0d1e9b627b08-lib-modules\") pod \"cilium-n6cd2\" (UID: \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\") " pod="kube-system/cilium-n6cd2" Feb 12 20:31:12.722534 kubelet[1342]: I0212 20:31:12.722531 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2ab816fe-5e81-41b2-9526-0d1e9b627b08-clustermesh-secrets\") pod \"cilium-n6cd2\" (UID: \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\") " pod="kube-system/cilium-n6cd2" Feb 12 20:31:12.722650 kubelet[1342]: I0212 20:31:12.722587 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e1955c7b-ce13-4c23-8147-f298654c1ed6-kube-proxy\") pod \"kube-proxy-xvfnc\" (UID: \"e1955c7b-ce13-4c23-8147-f298654c1ed6\") " pod="kube-system/kube-proxy-xvfnc" Feb 12 20:31:12.722650 kubelet[1342]: I0212 20:31:12.722649 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdqpk\" (UniqueName: \"kubernetes.io/projected/e1955c7b-ce13-4c23-8147-f298654c1ed6-kube-api-access-cdqpk\") pod \"kube-proxy-xvfnc\" (UID: \"e1955c7b-ce13-4c23-8147-f298654c1ed6\") " pod="kube-system/kube-proxy-xvfnc" Feb 12 20:31:12.722836 kubelet[1342]: I0212 20:31:12.722705 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2ab816fe-5e81-41b2-9526-0d1e9b627b08-cilium-cgroup\") pod \"cilium-n6cd2\" (UID: \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\") " pod="kube-system/cilium-n6cd2" Feb 12 20:31:12.722902 kubelet[1342]: I0212 20:31:12.722829 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2ab816fe-5e81-41b2-9526-0d1e9b627b08-cni-path\") pod \"cilium-n6cd2\" (UID: \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\") " pod="kube-system/cilium-n6cd2" Feb 12 20:31:12.722965 kubelet[1342]: I0212 20:31:12.722915 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2ab816fe-5e81-41b2-9526-0d1e9b627b08-host-proc-sys-net\") pod \"cilium-n6cd2\" (UID: \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\") " pod="kube-system/cilium-n6cd2" Feb 12 20:31:12.723060 kubelet[1342]: I0212 20:31:12.722972 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2ab816fe-5e81-41b2-9526-0d1e9b627b08-hubble-tls\") pod \"cilium-n6cd2\" (UID: \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\") " pod="kube-system/cilium-n6cd2" Feb 12 20:31:12.723060 kubelet[1342]: I0212 20:31:12.723004 1342 reconciler.go:41] "Reconciler: start to sync state" Feb 12 20:31:12.730911 systemd[1]: Created slice kubepods-burstable-pod2ab816fe_5e81_41b2_9526_0d1e9b627b08.slice. Feb 12 20:31:13.028852 env[1063]: time="2024-02-12T20:31:13.028752613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xvfnc,Uid:e1955c7b-ce13-4c23-8147-f298654c1ed6,Namespace:kube-system,Attempt:0,}" Feb 12 20:31:13.042915 env[1063]: time="2024-02-12T20:31:13.042819336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n6cd2,Uid:2ab816fe-5e81-41b2-9526-0d1e9b627b08,Namespace:kube-system,Attempt:0,}" Feb 12 20:31:13.681750 kubelet[1342]: E0212 20:31:13.681585 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:13.888539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount755839587.mount: Deactivated successfully. Feb 12 20:31:13.916497 env[1063]: time="2024-02-12T20:31:13.916339252Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:31:13.920652 env[1063]: time="2024-02-12T20:31:13.920588111Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:31:13.928867 env[1063]: time="2024-02-12T20:31:13.928810722Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:31:13.931461 env[1063]: time="2024-02-12T20:31:13.931411520Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:31:13.933559 env[1063]: time="2024-02-12T20:31:13.933435897Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:31:13.939702 env[1063]: time="2024-02-12T20:31:13.939617951Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:31:13.945314 env[1063]: time="2024-02-12T20:31:13.945245145Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:31:13.950895 env[1063]: time="2024-02-12T20:31:13.950835660Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:31:13.998320 env[1063]: time="2024-02-12T20:31:13.998177550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:31:13.998547 env[1063]: time="2024-02-12T20:31:13.998281314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:31:13.998547 env[1063]: time="2024-02-12T20:31:13.998315438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:31:14.002878 env[1063]: time="2024-02-12T20:31:14.001933584Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ea1f3691017791d5aa7bcbcdeb78154a7a4c34fa27040a3c91236cf48296170b pid=1395 runtime=io.containerd.runc.v2 Feb 12 20:31:14.012561 env[1063]: time="2024-02-12T20:31:14.011914002Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:31:14.012561 env[1063]: time="2024-02-12T20:31:14.011960660Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:31:14.012561 env[1063]: time="2024-02-12T20:31:14.011973744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:31:14.012561 env[1063]: time="2024-02-12T20:31:14.012091585Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/01c826ad04468c156348c5308d529dd123a4f27585d1310acd2555f1b282e979 pid=1412 runtime=io.containerd.runc.v2 Feb 12 20:31:14.032696 systemd[1]: Started cri-containerd-ea1f3691017791d5aa7bcbcdeb78154a7a4c34fa27040a3c91236cf48296170b.scope. Feb 12 20:31:14.047105 systemd[1]: Started cri-containerd-01c826ad04468c156348c5308d529dd123a4f27585d1310acd2555f1b282e979.scope. 
Feb 12 20:31:14.088137 env[1063]: time="2024-02-12T20:31:14.088054574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n6cd2,Uid:2ab816fe-5e81-41b2-9526-0d1e9b627b08,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea1f3691017791d5aa7bcbcdeb78154a7a4c34fa27040a3c91236cf48296170b\"" Feb 12 20:31:14.090812 env[1063]: time="2024-02-12T20:31:14.090777180Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 12 20:31:14.100084 env[1063]: time="2024-02-12T20:31:14.099991452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xvfnc,Uid:e1955c7b-ce13-4c23-8147-f298654c1ed6,Namespace:kube-system,Attempt:0,} returns sandbox id \"01c826ad04468c156348c5308d529dd123a4f27585d1310acd2555f1b282e979\"" Feb 12 20:31:14.682105 kubelet[1342]: E0212 20:31:14.682031 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:15.682884 kubelet[1342]: E0212 20:31:15.682812 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:16.683313 kubelet[1342]: E0212 20:31:16.683252 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:17.684408 kubelet[1342]: E0212 20:31:17.684332 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:18.685018 kubelet[1342]: E0212 20:31:18.684950 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:19.685691 kubelet[1342]: E0212 20:31:19.685598 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:20.685899 kubelet[1342]: E0212 20:31:20.685836 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:21.603269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2901608800.mount: Deactivated successfully. 
Feb 12 20:31:21.686803 kubelet[1342]: E0212 20:31:21.686727 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:22.687485 kubelet[1342]: E0212 20:31:22.686847 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:23.688257 kubelet[1342]: E0212 20:31:23.688037 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:24.688666 kubelet[1342]: E0212 20:31:24.688621 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:25.689135 kubelet[1342]: E0212 20:31:25.689069 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:26.325589 env[1063]: time="2024-02-12T20:31:26.325415811Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:31:26.334788 env[1063]: time="2024-02-12T20:31:26.334653111Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:31:26.344404 env[1063]: time="2024-02-12T20:31:26.344318308Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:31:26.345115 env[1063]: time="2024-02-12T20:31:26.345024514Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 12 20:31:26.347848 env[1063]: time="2024-02-12T20:31:26.347689519Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\"" Feb 12 20:31:26.351262 env[1063]: time="2024-02-12T20:31:26.351231151Z" level=info msg="CreateContainer within sandbox \"ea1f3691017791d5aa7bcbcdeb78154a7a4c34fa27040a3c91236cf48296170b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 20:31:26.371079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1732250217.mount: Deactivated successfully. Feb 12 20:31:26.387882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3744827401.mount: Deactivated successfully. Feb 12 20:31:26.407195 env[1063]: time="2024-02-12T20:31:26.407092704Z" level=info msg="CreateContainer within sandbox \"ea1f3691017791d5aa7bcbcdeb78154a7a4c34fa27040a3c91236cf48296170b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0bd77ede30174d8e6c2d3f3659cbe9b13ccfd19b4763a3e6c96d468815d12a4e\"" Feb 12 20:31:26.409450 env[1063]: time="2024-02-12T20:31:26.409388204Z" level=info msg="StartContainer for \"0bd77ede30174d8e6c2d3f3659cbe9b13ccfd19b4763a3e6c96d468815d12a4e\"" Feb 12 20:31:26.452196 systemd[1]: Started cri-containerd-0bd77ede30174d8e6c2d3f3659cbe9b13ccfd19b4763a3e6c96d468815d12a4e.scope. 
Feb 12 20:31:26.507493 env[1063]: time="2024-02-12T20:31:26.507403222Z" level=info msg="StartContainer for \"0bd77ede30174d8e6c2d3f3659cbe9b13ccfd19b4763a3e6c96d468815d12a4e\" returns successfully" Feb 12 20:31:26.513589 systemd[1]: cri-containerd-0bd77ede30174d8e6c2d3f3659cbe9b13ccfd19b4763a3e6c96d468815d12a4e.scope: Deactivated successfully. Feb 12 20:31:26.855029 kubelet[1342]: E0212 20:31:26.690323 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:26.955101 env[1063]: time="2024-02-12T20:31:26.954987475Z" level=info msg="shim disconnected" id=0bd77ede30174d8e6c2d3f3659cbe9b13ccfd19b4763a3e6c96d468815d12a4e Feb 12 20:31:26.955634 env[1063]: time="2024-02-12T20:31:26.955555434Z" level=warning msg="cleaning up after shim disconnected" id=0bd77ede30174d8e6c2d3f3659cbe9b13ccfd19b4763a3e6c96d468815d12a4e namespace=k8s.io Feb 12 20:31:26.955937 env[1063]: time="2024-02-12T20:31:26.955882847Z" level=info msg="cleaning up dead shim" Feb 12 20:31:26.974215 env[1063]: time="2024-02-12T20:31:26.974129585Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:31:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1522 runtime=io.containerd.runc.v2\n" Feb 12 20:31:27.372143 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0bd77ede30174d8e6c2d3f3659cbe9b13ccfd19b4763a3e6c96d468815d12a4e-rootfs.mount: Deactivated successfully. Feb 12 20:31:27.691607 kubelet[1342]: E0212 20:31:27.691419 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:27.929340 env[1063]: time="2024-02-12T20:31:27.929288979Z" level=info msg="CreateContainer within sandbox \"ea1f3691017791d5aa7bcbcdeb78154a7a4c34fa27040a3c91236cf48296170b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 20:31:28.124864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount240081876.mount: Deactivated successfully. Feb 12 20:31:28.158565 env[1063]: time="2024-02-12T20:31:28.158398121Z" level=info msg="CreateContainer within sandbox \"ea1f3691017791d5aa7bcbcdeb78154a7a4c34fa27040a3c91236cf48296170b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fe78ca724131e498c6940fd9d147d9da9eb385f3e38bb9957212ec2245df8866\"" Feb 12 20:31:28.159691 env[1063]: time="2024-02-12T20:31:28.159636198Z" level=info msg="StartContainer for \"fe78ca724131e498c6940fd9d147d9da9eb385f3e38bb9957212ec2245df8866\"" Feb 12 20:31:28.200915 systemd[1]: Started cri-containerd-fe78ca724131e498c6940fd9d147d9da9eb385f3e38bb9957212ec2245df8866.scope. Feb 12 20:31:28.254327 env[1063]: time="2024-02-12T20:31:28.254245942Z" level=info msg="StartContainer for \"fe78ca724131e498c6940fd9d147d9da9eb385f3e38bb9957212ec2245df8866\" returns successfully" Feb 12 20:31:28.270497 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 20:31:28.270906 systemd[1]: Stopped systemd-sysctl.service. Feb 12 20:31:28.271196 systemd[1]: Stopping systemd-sysctl.service... Feb 12 20:31:28.274243 systemd[1]: Starting systemd-sysctl.service... Feb 12 20:31:28.277682 systemd[1]: cri-containerd-fe78ca724131e498c6940fd9d147d9da9eb385f3e38bb9957212ec2245df8866.scope: Deactivated successfully. Feb 12 20:31:28.288940 systemd[1]: Finished systemd-sysctl.service. Feb 12 20:31:28.370272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount469565795.mount: Deactivated successfully. 
Feb 12 20:31:28.404042 env[1063]: time="2024-02-12T20:31:28.403856263Z" level=info msg="shim disconnected" id=fe78ca724131e498c6940fd9d147d9da9eb385f3e38bb9957212ec2245df8866 Feb 12 20:31:28.404042 env[1063]: time="2024-02-12T20:31:28.403987475Z" level=warning msg="cleaning up after shim disconnected" id=fe78ca724131e498c6940fd9d147d9da9eb385f3e38bb9957212ec2245df8866 namespace=k8s.io Feb 12 20:31:28.404042 env[1063]: time="2024-02-12T20:31:28.404012975Z" level=info msg="cleaning up dead shim" Feb 12 20:31:28.431821 env[1063]: time="2024-02-12T20:31:28.431691871Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:31:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1588 runtime=io.containerd.runc.v2\n" Feb 12 20:31:28.677695 kubelet[1342]: E0212 20:31:28.677552 1342 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:28.692184 kubelet[1342]: E0212 20:31:28.692085 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:28.759043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3521433266.mount: Deactivated successfully. Feb 12 20:31:28.933531 env[1063]: time="2024-02-12T20:31:28.933413207Z" level=info msg="CreateContainer within sandbox \"ea1f3691017791d5aa7bcbcdeb78154a7a4c34fa27040a3c91236cf48296170b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 20:31:28.968989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount914563806.mount: Deactivated successfully. Feb 12 20:31:28.986065 env[1063]: time="2024-02-12T20:31:28.986008556Z" level=info msg="CreateContainer within sandbox \"ea1f3691017791d5aa7bcbcdeb78154a7a4c34fa27040a3c91236cf48296170b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e985bdfd9c4def77807227b2e149e64de8ab44e21c322717ee25e792db6d614d\"" Feb 12 20:31:28.986838 env[1063]: time="2024-02-12T20:31:28.986691573Z" level=info msg="StartContainer for \"e985bdfd9c4def77807227b2e149e64de8ab44e21c322717ee25e792db6d614d\"" Feb 12 20:31:29.024575 systemd[1]: Started cri-containerd-e985bdfd9c4def77807227b2e149e64de8ab44e21c322717ee25e792db6d614d.scope. Feb 12 20:31:29.070502 systemd[1]: cri-containerd-e985bdfd9c4def77807227b2e149e64de8ab44e21c322717ee25e792db6d614d.scope: Deactivated successfully. Feb 12 20:31:29.087111 env[1063]: time="2024-02-12T20:31:29.086995583Z" level=info msg="StartContainer for \"e985bdfd9c4def77807227b2e149e64de8ab44e21c322717ee25e792db6d614d\" returns successfully" Feb 12 20:31:29.371191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3431855724.mount: Deactivated successfully. 
Feb 12 20:31:29.380958 env[1063]: time="2024-02-12T20:31:29.380862161Z" level=info msg="shim disconnected" id=e985bdfd9c4def77807227b2e149e64de8ab44e21c322717ee25e792db6d614d Feb 12 20:31:29.381404 env[1063]: time="2024-02-12T20:31:29.381328358Z" level=warning msg="cleaning up after shim disconnected" id=e985bdfd9c4def77807227b2e149e64de8ab44e21c322717ee25e792db6d614d namespace=k8s.io Feb 12 20:31:29.381629 env[1063]: time="2024-02-12T20:31:29.381587917Z" level=info msg="cleaning up dead shim" Feb 12 20:31:29.407782 env[1063]: time="2024-02-12T20:31:29.407627640Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:31:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1651 runtime=io.containerd.runc.v2\n" Feb 12 20:31:29.693349 kubelet[1342]: E0212 20:31:29.693159 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:29.922494 env[1063]: time="2024-02-12T20:31:29.922411525Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:31:29.925400 env[1063]: time="2024-02-12T20:31:29.925333908Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:31:29.928588 env[1063]: time="2024-02-12T20:31:29.928532563Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:31:29.931193 env[1063]: time="2024-02-12T20:31:29.931144368Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:31:29.932531 env[1063]: time="2024-02-12T20:31:29.932506570Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\" returns image reference \"sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae\"" Feb 12 20:31:29.945037 env[1063]: time="2024-02-12T20:31:29.944907641Z" level=info msg="CreateContainer within sandbox \"01c826ad04468c156348c5308d529dd123a4f27585d1310acd2555f1b282e979\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 20:31:29.963636 env[1063]: time="2024-02-12T20:31:29.963587682Z" level=info msg="CreateContainer within sandbox \"ea1f3691017791d5aa7bcbcdeb78154a7a4c34fa27040a3c91236cf48296170b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 20:31:29.976261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3146413874.mount: Deactivated successfully. Feb 12 20:31:29.984079 env[1063]: time="2024-02-12T20:31:29.984015207Z" level=info msg="CreateContainer within sandbox \"01c826ad04468c156348c5308d529dd123a4f27585d1310acd2555f1b282e979\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0f772fc2457cadb8b6fc0b785ce69eec01b4f2996b6de3d1d6d319770c5ac30b\"" Feb 12 20:31:29.985389 env[1063]: time="2024-02-12T20:31:29.985358562Z" level=info msg="StartContainer for \"0f772fc2457cadb8b6fc0b785ce69eec01b4f2996b6de3d1d6d319770c5ac30b\"" Feb 12 20:31:30.008167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3644503733.mount: Deactivated successfully. 
Feb 12 20:31:30.025337 systemd[1]: Started cri-containerd-0f772fc2457cadb8b6fc0b785ce69eec01b4f2996b6de3d1d6d319770c5ac30b.scope. Feb 12 20:31:30.047336 env[1063]: time="2024-02-12T20:31:30.047286015Z" level=info msg="CreateContainer within sandbox \"ea1f3691017791d5aa7bcbcdeb78154a7a4c34fa27040a3c91236cf48296170b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2ee649bd4fa4b203d6a4a6614ddb3b887d2b7919cec4f1a9b9f7b28372cbabe2\"" Feb 12 20:31:30.048498 env[1063]: time="2024-02-12T20:31:30.048472905Z" level=info msg="StartContainer for \"2ee649bd4fa4b203d6a4a6614ddb3b887d2b7919cec4f1a9b9f7b28372cbabe2\"" Feb 12 20:31:30.085621 systemd[1]: Started cri-containerd-2ee649bd4fa4b203d6a4a6614ddb3b887d2b7919cec4f1a9b9f7b28372cbabe2.scope. Feb 12 20:31:30.111576 env[1063]: time="2024-02-12T20:31:30.111518380Z" level=info msg="StartContainer for \"0f772fc2457cadb8b6fc0b785ce69eec01b4f2996b6de3d1d6d319770c5ac30b\" returns successfully" Feb 12 20:31:30.150666 systemd[1]: cri-containerd-2ee649bd4fa4b203d6a4a6614ddb3b887d2b7919cec4f1a9b9f7b28372cbabe2.scope: Deactivated successfully. Feb 12 20:31:30.157646 env[1063]: time="2024-02-12T20:31:30.157541983Z" level=info msg="StartContainer for \"2ee649bd4fa4b203d6a4a6614ddb3b887d2b7919cec4f1a9b9f7b28372cbabe2\" returns successfully" Feb 12 20:31:30.355216 env[1063]: time="2024-02-12T20:31:30.355111757Z" level=info msg="shim disconnected" id=2ee649bd4fa4b203d6a4a6614ddb3b887d2b7919cec4f1a9b9f7b28372cbabe2 Feb 12 20:31:30.355675 env[1063]: time="2024-02-12T20:31:30.355629823Z" level=warning msg="cleaning up after shim disconnected" id=2ee649bd4fa4b203d6a4a6614ddb3b887d2b7919cec4f1a9b9f7b28372cbabe2 namespace=k8s.io Feb 12 20:31:30.355901 env[1063]: time="2024-02-12T20:31:30.355862389Z" level=info msg="cleaning up dead shim" Feb 12 20:31:30.383941 update_engine[1050]: I0212 20:31:30.381070 1050 update_attempter.cc:509] Updating boot flags... 
Feb 12 20:31:30.395040 env[1063]: time="2024-02-12T20:31:30.394957363Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:31:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1766 runtime=io.containerd.runc.v2\n" Feb 12 20:31:30.694887 kubelet[1342]: E0212 20:31:30.694464 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:30.980616 env[1063]: time="2024-02-12T20:31:30.980520365Z" level=info msg="CreateContainer within sandbox \"ea1f3691017791d5aa7bcbcdeb78154a7a4c34fa27040a3c91236cf48296170b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 20:31:31.015971 env[1063]: time="2024-02-12T20:31:31.015856966Z" level=info msg="CreateContainer within sandbox \"ea1f3691017791d5aa7bcbcdeb78154a7a4c34fa27040a3c91236cf48296170b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7c1e8ae4fee8c97d1cd156376afe51c9852a647fb599431fc0bbaece5017eab3\"" Feb 12 20:31:31.019391 kubelet[1342]: I0212 20:31:31.019324 1342 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-xvfnc" podStartSLOduration=5.186102731 podCreationTimestamp="2024-02-12 20:31:10 +0000 UTC" firstStartedPulling="2024-02-12 20:31:14.102085739 +0000 UTC m=+6.284311515" lastFinishedPulling="2024-02-12 20:31:29.934848616 +0000 UTC m=+22.117074442" observedRunningTime="2024-02-12 20:31:31.01631757 +0000 UTC m=+23.198543486" watchObservedRunningTime="2024-02-12 20:31:31.018865658 +0000 UTC m=+23.201091484" Feb 12 20:31:31.019817 env[1063]: time="2024-02-12T20:31:31.019757598Z" level=info msg="StartContainer for \"7c1e8ae4fee8c97d1cd156376afe51c9852a647fb599431fc0bbaece5017eab3\"" Feb 12 20:31:31.084141 systemd[1]: Started cri-containerd-7c1e8ae4fee8c97d1cd156376afe51c9852a647fb599431fc0bbaece5017eab3.scope. Feb 12 20:31:31.133662 env[1063]: time="2024-02-12T20:31:31.133580876Z" level=info msg="StartContainer for \"7c1e8ae4fee8c97d1cd156376afe51c9852a647fb599431fc0bbaece5017eab3\" returns successfully" Feb 12 20:31:31.251579 kubelet[1342]: I0212 20:31:31.251138 1342 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 12 20:31:31.371398 systemd[1]: run-containerd-runc-k8s.io-7c1e8ae4fee8c97d1cd156376afe51c9852a647fb599431fc0bbaece5017eab3-runc.3wihVb.mount: Deactivated successfully. 
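A small cross-check sketch (illustrative, not part of the log), under the assumption that the podStartSLOduration reported by pod_startup_latency_tracker is pod-start time with image-pull time excluded: recomputing the kube-proxy-xvfnc figure from the timestamps logged alongside it gives roughly the same value.

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	// Timestamps as logged for pod kube-system/kube-proxy-xvfnc above.
	created := parse("2024-02-12 20:31:10 +0000 UTC")
	firstPull := parse("2024-02-12 20:31:14.102085739 +0000 UTC")
	lastPull := parse("2024-02-12 20:31:29.934848616 +0000 UTC")
	running := parse("2024-02-12 20:31:31.01631757 +0000 UTC")

	pullTime := lastPull.Sub(firstPull)
	slo := running.Sub(created) - pullTime
	// Prints an image-pull time of ~15.83s and an approximate SLO duration of ~5.18s,
	// close to the logged podStartSLOduration=5.186102731; the few-millisecond gap
	// presumably comes from the tracker using its own internal status timestamp
	// rather than observedRunningTime.
	fmt.Printf("image pull: %v, approx podStartSLO: %v\n", pullTime, slo)
}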
Feb 12 20:31:31.695630 kubelet[1342]: E0212 20:31:31.695241 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:31.721843 kernel: Initializing XFRM netlink socket Feb 12 20:31:32.697973 kubelet[1342]: E0212 20:31:32.697855 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:33.066995 systemd-networkd[971]: cilium_host: Link UP Feb 12 20:31:33.071444 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 12 20:31:33.067327 systemd-networkd[971]: cilium_net: Link UP Feb 12 20:31:33.067337 systemd-networkd[971]: cilium_net: Gained carrier Feb 12 20:31:33.067684 systemd-networkd[971]: cilium_host: Gained carrier Feb 12 20:31:33.081025 systemd-networkd[971]: cilium_host: Gained IPv6LL Feb 12 20:31:33.165893 systemd-networkd[971]: cilium_net: Gained IPv6LL Feb 12 20:31:33.391986 systemd-networkd[971]: cilium_vxlan: Link UP Feb 12 20:31:33.392005 systemd-networkd[971]: cilium_vxlan: Gained carrier Feb 12 20:31:33.698748 kubelet[1342]: E0212 20:31:33.698575 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:33.825914 kernel: NET: Registered PF_ALG protocol family Feb 12 20:31:34.670007 systemd-networkd[971]: cilium_vxlan: Gained IPv6LL Feb 12 20:31:34.699608 kubelet[1342]: E0212 20:31:34.699565 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:34.744896 systemd-networkd[971]: lxc_health: Link UP Feb 12 20:31:34.762498 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 20:31:34.761914 systemd-networkd[971]: lxc_health: Gained carrier Feb 12 20:31:35.066680 kubelet[1342]: I0212 20:31:35.066635 1342 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-n6cd2" podStartSLOduration=12.810717895 podCreationTimestamp="2024-02-12 20:31:10 +0000 UTC" firstStartedPulling="2024-02-12 20:31:14.090193295 +0000 UTC m=+6.272419081" lastFinishedPulling="2024-02-12 20:31:26.346039327 +0000 UTC m=+18.528265153" observedRunningTime="2024-02-12 20:31:32.019187839 +0000 UTC m=+24.201413666" watchObservedRunningTime="2024-02-12 20:31:35.066563967 +0000 UTC m=+27.248789753" Feb 12 20:31:35.700766 kubelet[1342]: E0212 20:31:35.700704 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:36.701269 kubelet[1342]: E0212 20:31:36.700965 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:36.782327 systemd-networkd[971]: lxc_health: Gained IPv6LL Feb 12 20:31:37.701694 kubelet[1342]: E0212 20:31:37.701583 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:37.961470 kubelet[1342]: I0212 20:31:37.961286 1342 topology_manager.go:212] "Topology Admit Handler" Feb 12 20:31:37.975817 systemd[1]: Created slice kubepods-besteffort-pod3d77cdc6_436c_4cf3_a7f0_444ccf090195.slice. 
Feb 12 20:31:38.025701 kubelet[1342]: I0212 20:31:38.025593 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62tcd\" (UniqueName: \"kubernetes.io/projected/3d77cdc6-436c-4cf3-a7f0-444ccf090195-kube-api-access-62tcd\") pod \"nginx-deployment-845c78c8b9-b9wkw\" (UID: \"3d77cdc6-436c-4cf3-a7f0-444ccf090195\") " pod="default/nginx-deployment-845c78c8b9-b9wkw" Feb 12 20:31:38.283695 env[1063]: time="2024-02-12T20:31:38.282666779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-845c78c8b9-b9wkw,Uid:3d77cdc6-436c-4cf3-a7f0-444ccf090195,Namespace:default,Attempt:0,}" Feb 12 20:31:38.395863 systemd-networkd[971]: lxc418254bca662: Link UP Feb 12 20:31:38.412848 kernel: eth0: renamed from tmp7c8a1 Feb 12 20:31:38.421174 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 20:31:38.421307 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc418254bca662: link becomes ready Feb 12 20:31:38.421482 systemd-networkd[971]: lxc418254bca662: Gained carrier Feb 12 20:31:38.703472 kubelet[1342]: E0212 20:31:38.703243 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:39.704271 kubelet[1342]: E0212 20:31:39.704225 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:39.751818 env[1063]: time="2024-02-12T20:31:39.751638688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:31:39.752298 env[1063]: time="2024-02-12T20:31:39.751763726Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:31:39.752298 env[1063]: time="2024-02-12T20:31:39.751808581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:31:39.752298 env[1063]: time="2024-02-12T20:31:39.752014863Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7c8a1526477429edb343c3335b26aefd961a10d5b932f879984be9b84fdc8f63 pid=2400 runtime=io.containerd.runc.v2 Feb 12 20:31:39.774616 systemd[1]: Started cri-containerd-7c8a1526477429edb343c3335b26aefd961a10d5b932f879984be9b84fdc8f63.scope. Feb 12 20:31:39.776017 systemd[1]: run-containerd-runc-k8s.io-7c8a1526477429edb343c3335b26aefd961a10d5b932f879984be9b84fdc8f63-runc.S3iE3l.mount: Deactivated successfully. 
Feb 12 20:31:39.826846 env[1063]: time="2024-02-12T20:31:39.826782105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-845c78c8b9-b9wkw,Uid:3d77cdc6-436c-4cf3-a7f0-444ccf090195,Namespace:default,Attempt:0,} returns sandbox id \"7c8a1526477429edb343c3335b26aefd961a10d5b932f879984be9b84fdc8f63\"" Feb 12 20:31:39.829700 env[1063]: time="2024-02-12T20:31:39.829650979Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 12 20:31:39.918137 systemd-networkd[971]: lxc418254bca662: Gained IPv6LL Feb 12 20:31:40.706055 kubelet[1342]: E0212 20:31:40.705935 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:41.706508 kubelet[1342]: E0212 20:31:41.706425 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:42.706771 kubelet[1342]: E0212 20:31:42.706627 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:43.707627 kubelet[1342]: E0212 20:31:43.707528 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:44.580891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount355351365.mount: Deactivated successfully. Feb 12 20:31:44.708604 kubelet[1342]: E0212 20:31:44.708532 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:45.709366 kubelet[1342]: E0212 20:31:45.709287 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:46.550546 env[1063]: time="2024-02-12T20:31:46.550463758Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:31:46.557427 env[1063]: time="2024-02-12T20:31:46.557368400Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:31:46.563633 env[1063]: time="2024-02-12T20:31:46.563573268Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:31:46.570297 env[1063]: time="2024-02-12T20:31:46.570225211Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:31:46.574976 env[1063]: time="2024-02-12T20:31:46.573208996Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 12 20:31:46.580937 env[1063]: time="2024-02-12T20:31:46.580851183Z" level=info msg="CreateContainer within sandbox \"7c8a1526477429edb343c3335b26aefd961a10d5b932f879984be9b84fdc8f63\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 12 20:31:46.609879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2144717293.mount: Deactivated successfully. 
Feb 12 20:31:46.621549 env[1063]: time="2024-02-12T20:31:46.621448051Z" level=info msg="CreateContainer within sandbox \"7c8a1526477429edb343c3335b26aefd961a10d5b932f879984be9b84fdc8f63\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"26a26b567ceb2d8c12d2d8fa7f9dbaa4d39ccdb564fdfd6a47eef9006c9d686f\"" Feb 12 20:31:46.623886 env[1063]: time="2024-02-12T20:31:46.623782978Z" level=info msg="StartContainer for \"26a26b567ceb2d8c12d2d8fa7f9dbaa4d39ccdb564fdfd6a47eef9006c9d686f\"" Feb 12 20:31:46.682808 systemd[1]: Started cri-containerd-26a26b567ceb2d8c12d2d8fa7f9dbaa4d39ccdb564fdfd6a47eef9006c9d686f.scope. Feb 12 20:31:46.710916 kubelet[1342]: E0212 20:31:46.710851 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:46.729516 env[1063]: time="2024-02-12T20:31:46.729459399Z" level=info msg="StartContainer for \"26a26b567ceb2d8c12d2d8fa7f9dbaa4d39ccdb564fdfd6a47eef9006c9d686f\" returns successfully" Feb 12 20:31:47.214614 kubelet[1342]: I0212 20:31:47.214502 1342 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-845c78c8b9-b9wkw" podStartSLOduration=3.467659493 podCreationTimestamp="2024-02-12 20:31:37 +0000 UTC" firstStartedPulling="2024-02-12 20:31:39.828876316 +0000 UTC m=+32.011102142" lastFinishedPulling="2024-02-12 20:31:46.575490262 +0000 UTC m=+38.757716088" observedRunningTime="2024-02-12 20:31:47.212623029 +0000 UTC m=+39.394848865" watchObservedRunningTime="2024-02-12 20:31:47.214273439 +0000 UTC m=+39.396499305" Feb 12 20:31:47.601811 systemd[1]: run-containerd-runc-k8s.io-26a26b567ceb2d8c12d2d8fa7f9dbaa4d39ccdb564fdfd6a47eef9006c9d686f-runc.eEfAcw.mount: Deactivated successfully. Feb 12 20:31:47.711495 kubelet[1342]: E0212 20:31:47.711385 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:48.677230 kubelet[1342]: E0212 20:31:48.677178 1342 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:48.711966 kubelet[1342]: E0212 20:31:48.711889 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:49.713482 kubelet[1342]: E0212 20:31:49.713413 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:50.715291 kubelet[1342]: E0212 20:31:50.715194 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:51.716241 kubelet[1342]: E0212 20:31:51.716172 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:52.717401 kubelet[1342]: E0212 20:31:52.717343 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:53.718959 kubelet[1342]: E0212 20:31:53.718891 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:54.420044 kubelet[1342]: I0212 20:31:54.419962 1342 topology_manager.go:212] "Topology Admit Handler" Feb 12 20:31:54.434314 systemd[1]: Created slice kubepods-besteffort-pod73f36227_754c_4b87_98ef_f98e26001ed6.slice. 
Feb 12 20:31:54.555373 kubelet[1342]: I0212 20:31:54.555290 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/73f36227-754c-4b87-98ef-f98e26001ed6-data\") pod \"nfs-server-provisioner-0\" (UID: \"73f36227-754c-4b87-98ef-f98e26001ed6\") " pod="default/nfs-server-provisioner-0" Feb 12 20:31:54.555978 kubelet[1342]: I0212 20:31:54.555928 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nbvv\" (UniqueName: \"kubernetes.io/projected/73f36227-754c-4b87-98ef-f98e26001ed6-kube-api-access-8nbvv\") pod \"nfs-server-provisioner-0\" (UID: \"73f36227-754c-4b87-98ef-f98e26001ed6\") " pod="default/nfs-server-provisioner-0" Feb 12 20:31:54.721368 kubelet[1342]: E0212 20:31:54.720407 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:54.743129 env[1063]: time="2024-02-12T20:31:54.742972276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:73f36227-754c-4b87-98ef-f98e26001ed6,Namespace:default,Attempt:0,}" Feb 12 20:31:54.848236 systemd-networkd[971]: lxc58bf3eda0761: Link UP Feb 12 20:31:54.868812 kernel: eth0: renamed from tmpf35d0 Feb 12 20:31:54.874373 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 20:31:54.874555 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc58bf3eda0761: link becomes ready Feb 12 20:31:54.874533 systemd-networkd[971]: lxc58bf3eda0761: Gained carrier Feb 12 20:31:55.205115 env[1063]: time="2024-02-12T20:31:55.204860246Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:31:55.205115 env[1063]: time="2024-02-12T20:31:55.204920158Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:31:55.205115 env[1063]: time="2024-02-12T20:31:55.204934715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:31:55.205996 env[1063]: time="2024-02-12T20:31:55.205816397Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f35d010ed7fbc605312354d5fdf65ce3f0fcb7bd4f1061147e3cbf613c4ef9bc pid=2522 runtime=io.containerd.runc.v2 Feb 12 20:31:55.234366 systemd[1]: Started cri-containerd-f35d010ed7fbc605312354d5fdf65ce3f0fcb7bd4f1061147e3cbf613c4ef9bc.scope. 
Feb 12 20:31:55.284798 env[1063]: time="2024-02-12T20:31:55.284738412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:73f36227-754c-4b87-98ef-f98e26001ed6,Namespace:default,Attempt:0,} returns sandbox id \"f35d010ed7fbc605312354d5fdf65ce3f0fcb7bd4f1061147e3cbf613c4ef9bc\"" Feb 12 20:31:55.287293 env[1063]: time="2024-02-12T20:31:55.287229006Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 12 20:31:55.720982 kubelet[1342]: E0212 20:31:55.720867 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:56.494627 systemd-networkd[971]: lxc58bf3eda0761: Gained IPv6LL Feb 12 20:31:56.721893 kubelet[1342]: E0212 20:31:56.721827 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:57.722615 kubelet[1342]: E0212 20:31:57.722548 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:58.722924 kubelet[1342]: E0212 20:31:58.722848 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:31:59.119138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount224414362.mount: Deactivated successfully. Feb 12 20:31:59.724172 kubelet[1342]: E0212 20:31:59.724102 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:00.724412 kubelet[1342]: E0212 20:32:00.724346 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:01.725334 kubelet[1342]: E0212 20:32:01.725225 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:02.167071 env[1063]: time="2024-02-12T20:32:02.166883485Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:32:02.171895 env[1063]: time="2024-02-12T20:32:02.171823419Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:32:02.177411 env[1063]: time="2024-02-12T20:32:02.177357810Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:32:02.180786 env[1063]: time="2024-02-12T20:32:02.180761334Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:32:02.182533 env[1063]: time="2024-02-12T20:32:02.182474537Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 12 20:32:02.185963 env[1063]: time="2024-02-12T20:32:02.185894730Z" level=info msg="CreateContainer within sandbox \"f35d010ed7fbc605312354d5fdf65ce3f0fcb7bd4f1061147e3cbf613c4ef9bc\" for container 
&ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 12 20:32:02.201751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2057423355.mount: Deactivated successfully. Feb 12 20:32:02.207976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2924336641.mount: Deactivated successfully. Feb 12 20:32:02.217616 env[1063]: time="2024-02-12T20:32:02.217541077Z" level=info msg="CreateContainer within sandbox \"f35d010ed7fbc605312354d5fdf65ce3f0fcb7bd4f1061147e3cbf613c4ef9bc\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"59621f01c8c84e45824c6648eabe0c763f968b0cd99187c569e14dd8b95daa93\"" Feb 12 20:32:02.218866 env[1063]: time="2024-02-12T20:32:02.218819451Z" level=info msg="StartContainer for \"59621f01c8c84e45824c6648eabe0c763f968b0cd99187c569e14dd8b95daa93\"" Feb 12 20:32:02.255315 systemd[1]: Started cri-containerd-59621f01c8c84e45824c6648eabe0c763f968b0cd99187c569e14dd8b95daa93.scope. Feb 12 20:32:02.314112 env[1063]: time="2024-02-12T20:32:02.314019649Z" level=info msg="StartContainer for \"59621f01c8c84e45824c6648eabe0c763f968b0cd99187c569e14dd8b95daa93\" returns successfully" Feb 12 20:32:02.726202 kubelet[1342]: E0212 20:32:02.726109 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:03.323402 kubelet[1342]: I0212 20:32:03.323334 1342 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.42638513 podCreationTimestamp="2024-02-12 20:31:54 +0000 UTC" firstStartedPulling="2024-02-12 20:31:55.286476978 +0000 UTC m=+47.468702755" lastFinishedPulling="2024-02-12 20:32:02.183313805 +0000 UTC m=+54.365539581" observedRunningTime="2024-02-12 20:32:03.321458599 +0000 UTC m=+55.503684445" watchObservedRunningTime="2024-02-12 20:32:03.323221956 +0000 UTC m=+55.505447782" Feb 12 20:32:03.726797 kubelet[1342]: E0212 20:32:03.726749 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:04.727922 kubelet[1342]: E0212 20:32:04.727861 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:05.730070 kubelet[1342]: E0212 20:32:05.729949 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:06.730843 kubelet[1342]: E0212 20:32:06.730781 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:07.732592 kubelet[1342]: E0212 20:32:07.732482 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:08.677187 kubelet[1342]: E0212 20:32:08.677134 1342 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:08.733401 kubelet[1342]: E0212 20:32:08.733360 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:09.735351 kubelet[1342]: E0212 20:32:09.735275 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:10.737806 kubelet[1342]: E0212 20:32:10.737257 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 
20:32:11.737618 kubelet[1342]: E0212 20:32:11.737535 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:12.023682 kubelet[1342]: I0212 20:32:12.023368 1342 topology_manager.go:212] "Topology Admit Handler" Feb 12 20:32:12.047084 systemd[1]: Created slice kubepods-besteffort-pod2613e59d_bd03_4d20_adc6_eab5b144bdad.slice. Feb 12 20:32:12.088877 kubelet[1342]: I0212 20:32:12.088806 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-13684d80-cf2a-41cc-808c-ab5e96821f30\" (UniqueName: \"kubernetes.io/nfs/2613e59d-bd03-4d20-adc6-eab5b144bdad-pvc-13684d80-cf2a-41cc-808c-ab5e96821f30\") pod \"test-pod-1\" (UID: \"2613e59d-bd03-4d20-adc6-eab5b144bdad\") " pod="default/test-pod-1" Feb 12 20:32:12.089488 kubelet[1342]: I0212 20:32:12.089437 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qphl7\" (UniqueName: \"kubernetes.io/projected/2613e59d-bd03-4d20-adc6-eab5b144bdad-kube-api-access-qphl7\") pod \"test-pod-1\" (UID: \"2613e59d-bd03-4d20-adc6-eab5b144bdad\") " pod="default/test-pod-1" Feb 12 20:32:12.287870 kernel: FS-Cache: Loaded Feb 12 20:32:12.369897 kernel: RPC: Registered named UNIX socket transport module. Feb 12 20:32:12.370189 kernel: RPC: Registered udp transport module. Feb 12 20:32:12.370256 kernel: RPC: Registered tcp transport module. Feb 12 20:32:12.370537 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 12 20:32:12.431796 kernel: FS-Cache: Netfs 'nfs' registered for caching Feb 12 20:32:12.660483 kernel: NFS: Registering the id_resolver key type Feb 12 20:32:12.660837 kernel: Key type id_resolver registered Feb 12 20:32:12.660906 kernel: Key type id_legacy registered Feb 12 20:32:12.724560 nfsidmap[2675]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' Feb 12 20:32:12.732533 nfsidmap[2676]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' Feb 12 20:32:12.737946 kubelet[1342]: E0212 20:32:12.737897 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:12.954991 env[1063]: time="2024-02-12T20:32:12.954870704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:2613e59d-bd03-4d20-adc6-eab5b144bdad,Namespace:default,Attempt:0,}" Feb 12 20:32:13.048776 systemd-networkd[971]: lxc0125c1c0f259: Link UP Feb 12 20:32:13.051916 kernel: eth0: renamed from tmpe5d8f Feb 12 20:32:13.059394 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 20:32:13.059495 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc0125c1c0f259: link becomes ready Feb 12 20:32:13.059508 systemd-networkd[971]: lxc0125c1c0f259: Gained carrier Feb 12 20:32:13.454151 env[1063]: time="2024-02-12T20:32:13.453371026Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:32:13.454333 env[1063]: time="2024-02-12T20:32:13.453555833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:32:13.454333 env[1063]: time="2024-02-12T20:32:13.453642616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:32:13.454570 env[1063]: time="2024-02-12T20:32:13.454529132Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e5d8f3e9756b4d9c25f3de1322aed45c6edd462d64ca98e624dd01e183266c02 pid=2703 runtime=io.containerd.runc.v2 Feb 12 20:32:13.473513 systemd[1]: Started cri-containerd-e5d8f3e9756b4d9c25f3de1322aed45c6edd462d64ca98e624dd01e183266c02.scope. Feb 12 20:32:13.479140 systemd[1]: run-containerd-runc-k8s.io-e5d8f3e9756b4d9c25f3de1322aed45c6edd462d64ca98e624dd01e183266c02-runc.tddBr3.mount: Deactivated successfully. Feb 12 20:32:13.550632 env[1063]: time="2024-02-12T20:32:13.550549881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:2613e59d-bd03-4d20-adc6-eab5b144bdad,Namespace:default,Attempt:0,} returns sandbox id \"e5d8f3e9756b4d9c25f3de1322aed45c6edd462d64ca98e624dd01e183266c02\"" Feb 12 20:32:13.553958 env[1063]: time="2024-02-12T20:32:13.553756083Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 12 20:32:13.739043 kubelet[1342]: E0212 20:32:13.738970 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:14.082166 env[1063]: time="2024-02-12T20:32:14.081923297Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:32:14.085348 env[1063]: time="2024-02-12T20:32:14.085282847Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:32:14.089703 env[1063]: time="2024-02-12T20:32:14.089602680Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:32:14.094383 env[1063]: time="2024-02-12T20:32:14.094324198Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:32:14.096322 env[1063]: time="2024-02-12T20:32:14.096212264Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 12 20:32:14.101092 env[1063]: time="2024-02-12T20:32:14.100996109Z" level=info msg="CreateContainer within sandbox \"e5d8f3e9756b4d9c25f3de1322aed45c6edd462d64ca98e624dd01e183266c02\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 12 20:32:14.141539 env[1063]: time="2024-02-12T20:32:14.141438607Z" level=info msg="CreateContainer within sandbox \"e5d8f3e9756b4d9c25f3de1322aed45c6edd462d64ca98e624dd01e183266c02\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"4ebb393299b772fca02f1e8f6e92e8fd6b756bed69809f09865c0c98f3c2fd44\"" Feb 12 20:32:14.144042 env[1063]: time="2024-02-12T20:32:14.143936838Z" level=info msg="StartContainer for \"4ebb393299b772fca02f1e8f6e92e8fd6b756bed69809f09865c0c98f3c2fd44\"" Feb 12 20:32:14.180900 systemd[1]: Started cri-containerd-4ebb393299b772fca02f1e8f6e92e8fd6b756bed69809f09865c0c98f3c2fd44.scope. 
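The nfsidmap messages a little further up record the NFSv4 id-mapping check failing: the server sends owners as user@domain, and the client only resolves them when the domain part matches its own NFSv4 domain, which on this OpenStack node is 'novalocal'. A small Go sketch of that comparison (an illustration of the check only, not libnfsidmap's code; the fallback behaviour noted in the comment is the usual default and is assumed here):

```go
package main

import (
	"fmt"
	"strings"
)

// Sketch of the check behind the nfsidmap messages above: NFSv4 owners arrive
// as user@domain, and the client's id mapper only resolves them when the
// domain part matches its local NFSv4 domain (here 'novalocal', the OpenStack
// instance's DNS domain). Illustration of the comparison, not libnfsidmap.
func main() {
	const localDomain = "novalocal" // domain reported in the log
	name := "root@nfs-server-provisioner.default.svc.cluster.local"

	user, domain, ok := strings.Cut(name, "@")
	if !ok || !strings.EqualFold(domain, localDomain) {
		fmt.Printf("name '%s' does not map into domain '%s'\n", name, localDomain)
		// The mount still works; ownership typically falls back to nobody/nogroup.
		return
	}
	fmt.Println("would resolve local user:", user)
}
```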
Feb 12 20:32:14.226887 env[1063]: time="2024-02-12T20:32:14.226827361Z" level=info msg="StartContainer for \"4ebb393299b772fca02f1e8f6e92e8fd6b756bed69809f09865c0c98f3c2fd44\" returns successfully" Feb 12 20:32:14.461947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4280043981.mount: Deactivated successfully. Feb 12 20:32:14.739567 kubelet[1342]: E0212 20:32:14.739493 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:14.990645 systemd-networkd[971]: lxc0125c1c0f259: Gained IPv6LL Feb 12 20:32:15.740246 kubelet[1342]: E0212 20:32:15.740156 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:16.740744 kubelet[1342]: E0212 20:32:16.740625 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:17.741243 kubelet[1342]: E0212 20:32:17.741105 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:18.742342 kubelet[1342]: E0212 20:32:18.742275 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:19.743365 kubelet[1342]: E0212 20:32:19.743307 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:20.743988 kubelet[1342]: E0212 20:32:20.743835 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:21.744941 kubelet[1342]: E0212 20:32:21.744846 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:22.745972 kubelet[1342]: E0212 20:32:22.745843 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:23.747048 kubelet[1342]: E0212 20:32:23.746973 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:24.571614 kubelet[1342]: I0212 20:32:24.571504 1342 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=27.027413981 podCreationTimestamp="2024-02-12 20:31:57 +0000 UTC" firstStartedPulling="2024-02-12 20:32:13.552947274 +0000 UTC m=+65.735173050" lastFinishedPulling="2024-02-12 20:32:14.096882473 +0000 UTC m=+66.279108299" observedRunningTime="2024-02-12 20:32:14.359434599 +0000 UTC m=+66.541660425" watchObservedRunningTime="2024-02-12 20:32:24.57134923 +0000 UTC m=+76.753575056" Feb 12 20:32:24.621616 systemd[1]: run-containerd-runc-k8s.io-7c1e8ae4fee8c97d1cd156376afe51c9852a647fb599431fc0bbaece5017eab3-runc.OKbhjJ.mount: Deactivated successfully. 
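The podStartSLOduration values logged for nfs-server-provisioner-0 and test-pod-1 can be reconstructed from the timestamps carried in the same entries: time from pod creation to the tracker's observation of the running state, minus the time spent pulling images. The formula below is inferred from the logged numbers rather than taken from kubelet source; with the nfs-server-provisioner-0 timestamps (reformatted to RFC 3339) it reproduces the logged value to within rounding:

```go
package main

import (
	"fmt"
	"time"
)

// Reconstructing podStartSLOduration from the values in the log entries above:
// duration from pod creation until the running state was observed, minus the
// image-pull window. Formula inferred from the logged numbers.
func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse(time.RFC3339Nano, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	// Timestamps for default/nfs-server-provisioner-0, copied from the log.
	created := parse("2024-02-12T20:31:54Z")             // podCreationTimestamp
	firstPull := parse("2024-02-12T20:31:55.286476978Z") // firstStartedPulling
	lastPull := parse("2024-02-12T20:32:02.183313805Z")  // lastFinishedPulling
	observed := parse("2024-02-12T20:32:03.323221956Z")  // watchObservedRunningTime

	slo := observed.Sub(created) - lastPull.Sub(firstPull)
	fmt.Println(slo.Seconds()) // ≈ 2.426385, matching podStartSLOduration=2.42638513
}
```

The test-pod-1 entry works out the same way: roughly 27.571 s from creation to observation minus 0.544 s of image pulling ≈ the logged 27.027 s.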
Feb 12 20:32:24.748890 kubelet[1342]: E0212 20:32:24.748766 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:24.750677 env[1063]: time="2024-02-12T20:32:24.750506678Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 20:32:24.798813 env[1063]: time="2024-02-12T20:32:24.798665056Z" level=info msg="StopContainer for \"7c1e8ae4fee8c97d1cd156376afe51c9852a647fb599431fc0bbaece5017eab3\" with timeout 1 (s)" Feb 12 20:32:24.799590 env[1063]: time="2024-02-12T20:32:24.799514333Z" level=info msg="Stop container \"7c1e8ae4fee8c97d1cd156376afe51c9852a647fb599431fc0bbaece5017eab3\" with signal terminated" Feb 12 20:32:24.815454 systemd-networkd[971]: lxc_health: Link DOWN Feb 12 20:32:24.815474 systemd-networkd[971]: lxc_health: Lost carrier Feb 12 20:32:24.872996 systemd[1]: cri-containerd-7c1e8ae4fee8c97d1cd156376afe51c9852a647fb599431fc0bbaece5017eab3.scope: Deactivated successfully. Feb 12 20:32:24.873671 systemd[1]: cri-containerd-7c1e8ae4fee8c97d1cd156376afe51c9852a647fb599431fc0bbaece5017eab3.scope: Consumed 9.450s CPU time. Feb 12 20:32:24.919045 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c1e8ae4fee8c97d1cd156376afe51c9852a647fb599431fc0bbaece5017eab3-rootfs.mount: Deactivated successfully. Feb 12 20:32:24.948414 env[1063]: time="2024-02-12T20:32:24.948347720Z" level=info msg="shim disconnected" id=7c1e8ae4fee8c97d1cd156376afe51c9852a647fb599431fc0bbaece5017eab3 Feb 12 20:32:24.948678 env[1063]: time="2024-02-12T20:32:24.948655729Z" level=warning msg="cleaning up after shim disconnected" id=7c1e8ae4fee8c97d1cd156376afe51c9852a647fb599431fc0bbaece5017eab3 namespace=k8s.io Feb 12 20:32:24.948795 env[1063]: time="2024-02-12T20:32:24.948778149Z" level=info msg="cleaning up dead shim" Feb 12 20:32:24.958099 env[1063]: time="2024-02-12T20:32:24.958047747Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:32:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2831 runtime=io.containerd.runc.v2\n" Feb 12 20:32:24.962486 env[1063]: time="2024-02-12T20:32:24.962451386Z" level=info msg="StopContainer for \"7c1e8ae4fee8c97d1cd156376afe51c9852a647fb599431fc0bbaece5017eab3\" returns successfully" Feb 12 20:32:24.963938 env[1063]: time="2024-02-12T20:32:24.963885100Z" level=info msg="StopPodSandbox for \"ea1f3691017791d5aa7bcbcdeb78154a7a4c34fa27040a3c91236cf48296170b\"" Feb 12 20:32:24.964017 env[1063]: time="2024-02-12T20:32:24.963989287Z" level=info msg="Container to stop \"fe78ca724131e498c6940fd9d147d9da9eb385f3e38bb9957212ec2245df8866\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:32:24.964059 env[1063]: time="2024-02-12T20:32:24.964018061Z" level=info msg="Container to stop \"e985bdfd9c4def77807227b2e149e64de8ab44e21c322717ee25e792db6d614d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:32:24.964059 env[1063]: time="2024-02-12T20:32:24.964036225Z" level=info msg="Container to stop \"2ee649bd4fa4b203d6a4a6614ddb3b887d2b7919cec4f1a9b9f7b28372cbabe2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:32:24.964059 env[1063]: time="2024-02-12T20:32:24.964052385Z" level=info msg="Container to stop 
\"0bd77ede30174d8e6c2d3f3659cbe9b13ccfd19b4763a3e6c96d468815d12a4e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:32:24.964151 env[1063]: time="2024-02-12T20:32:24.964068736Z" level=info msg="Container to stop \"7c1e8ae4fee8c97d1cd156376afe51c9852a647fb599431fc0bbaece5017eab3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:32:24.966060 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ea1f3691017791d5aa7bcbcdeb78154a7a4c34fa27040a3c91236cf48296170b-shm.mount: Deactivated successfully. Feb 12 20:32:24.973089 systemd[1]: cri-containerd-ea1f3691017791d5aa7bcbcdeb78154a7a4c34fa27040a3c91236cf48296170b.scope: Deactivated successfully. Feb 12 20:32:24.999922 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea1f3691017791d5aa7bcbcdeb78154a7a4c34fa27040a3c91236cf48296170b-rootfs.mount: Deactivated successfully. Feb 12 20:32:25.010645 env[1063]: time="2024-02-12T20:32:25.010575458Z" level=info msg="shim disconnected" id=ea1f3691017791d5aa7bcbcdeb78154a7a4c34fa27040a3c91236cf48296170b Feb 12 20:32:25.010898 env[1063]: time="2024-02-12T20:32:25.010877405Z" level=warning msg="cleaning up after shim disconnected" id=ea1f3691017791d5aa7bcbcdeb78154a7a4c34fa27040a3c91236cf48296170b namespace=k8s.io Feb 12 20:32:25.010971 env[1063]: time="2024-02-12T20:32:25.010956234Z" level=info msg="cleaning up dead shim" Feb 12 20:32:25.019174 env[1063]: time="2024-02-12T20:32:25.019121634Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:32:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2863 runtime=io.containerd.runc.v2\n" Feb 12 20:32:25.019795 env[1063]: time="2024-02-12T20:32:25.019764562Z" level=info msg="TearDown network for sandbox \"ea1f3691017791d5aa7bcbcdeb78154a7a4c34fa27040a3c91236cf48296170b\" successfully" Feb 12 20:32:25.019889 env[1063]: time="2024-02-12T20:32:25.019870151Z" level=info msg="StopPodSandbox for \"ea1f3691017791d5aa7bcbcdeb78154a7a4c34fa27040a3c91236cf48296170b\" returns successfully" Feb 12 20:32:25.181815 kubelet[1342]: I0212 20:32:25.178738 1342 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2ab816fe-5e81-41b2-9526-0d1e9b627b08-hostproc\") pod \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\" (UID: \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\") " Feb 12 20:32:25.181815 kubelet[1342]: I0212 20:32:25.178873 1342 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2ab816fe-5e81-41b2-9526-0d1e9b627b08-bpf-maps\") pod \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\" (UID: \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\") " Feb 12 20:32:25.181815 kubelet[1342]: I0212 20:32:25.178968 1342 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ab816fe-5e81-41b2-9526-0d1e9b627b08-lib-modules\") pod \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\" (UID: \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\") " Feb 12 20:32:25.181815 kubelet[1342]: I0212 20:32:25.179016 1342 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ab816fe-5e81-41b2-9526-0d1e9b627b08-hostproc" (OuterVolumeSpecName: "hostproc") pod "2ab816fe-5e81-41b2-9526-0d1e9b627b08" (UID: "2ab816fe-5e81-41b2-9526-0d1e9b627b08"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:32:25.181815 kubelet[1342]: I0212 20:32:25.179076 1342 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2ab816fe-5e81-41b2-9526-0d1e9b627b08-clustermesh-secrets\") pod \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\" (UID: \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\") " Feb 12 20:32:25.181815 kubelet[1342]: I0212 20:32:25.179142 1342 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ab816fe-5e81-41b2-9526-0d1e9b627b08-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2ab816fe-5e81-41b2-9526-0d1e9b627b08" (UID: "2ab816fe-5e81-41b2-9526-0d1e9b627b08"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:32:25.182593 kubelet[1342]: I0212 20:32:25.179174 1342 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2ab816fe-5e81-41b2-9526-0d1e9b627b08-xtables-lock\") pod \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\" (UID: \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\") " Feb 12 20:32:25.182593 kubelet[1342]: I0212 20:32:25.179274 1342 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2ab816fe-5e81-41b2-9526-0d1e9b627b08-host-proc-sys-kernel\") pod \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\" (UID: \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\") " Feb 12 20:32:25.182593 kubelet[1342]: I0212 20:32:25.179411 1342 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhm8m\" (UniqueName: \"kubernetes.io/projected/2ab816fe-5e81-41b2-9526-0d1e9b627b08-kube-api-access-lhm8m\") pod \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\" (UID: \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\") " Feb 12 20:32:25.182593 kubelet[1342]: I0212 20:32:25.179470 1342 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2ab816fe-5e81-41b2-9526-0d1e9b627b08-cni-path\") pod \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\" (UID: \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\") " Feb 12 20:32:25.182593 kubelet[1342]: I0212 20:32:25.179567 1342 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2ab816fe-5e81-41b2-9526-0d1e9b627b08-host-proc-sys-net\") pod \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\" (UID: \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\") " Feb 12 20:32:25.182593 kubelet[1342]: I0212 20:32:25.179670 1342 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2ab816fe-5e81-41b2-9526-0d1e9b627b08-hubble-tls\") pod \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\" (UID: \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\") " Feb 12 20:32:25.183074 kubelet[1342]: I0212 20:32:25.179794 1342 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2ab816fe-5e81-41b2-9526-0d1e9b627b08-cilium-config-path\") pod \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\" (UID: \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\") " Feb 12 20:32:25.183074 kubelet[1342]: I0212 20:32:25.179849 1342 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/2ab816fe-5e81-41b2-9526-0d1e9b627b08-cilium-run\") pod \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\" (UID: \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\") " Feb 12 20:32:25.183074 kubelet[1342]: I0212 20:32:25.179942 1342 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2ab816fe-5e81-41b2-9526-0d1e9b627b08-etc-cni-netd\") pod \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\" (UID: \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\") " Feb 12 20:32:25.183074 kubelet[1342]: I0212 20:32:25.179998 1342 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2ab816fe-5e81-41b2-9526-0d1e9b627b08-cilium-cgroup\") pod \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\" (UID: \"2ab816fe-5e81-41b2-9526-0d1e9b627b08\") " Feb 12 20:32:25.183074 kubelet[1342]: I0212 20:32:25.180067 1342 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2ab816fe-5e81-41b2-9526-0d1e9b627b08-hostproc\") on node \"172.24.4.19\" DevicePath \"\"" Feb 12 20:32:25.183074 kubelet[1342]: I0212 20:32:25.180096 1342 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2ab816fe-5e81-41b2-9526-0d1e9b627b08-bpf-maps\") on node \"172.24.4.19\" DevicePath \"\"" Feb 12 20:32:25.183474 kubelet[1342]: I0212 20:32:25.180227 1342 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ab816fe-5e81-41b2-9526-0d1e9b627b08-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2ab816fe-5e81-41b2-9526-0d1e9b627b08" (UID: "2ab816fe-5e81-41b2-9526-0d1e9b627b08"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:32:25.183474 kubelet[1342]: I0212 20:32:25.180288 1342 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ab816fe-5e81-41b2-9526-0d1e9b627b08-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2ab816fe-5e81-41b2-9526-0d1e9b627b08" (UID: "2ab816fe-5e81-41b2-9526-0d1e9b627b08"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:32:25.183474 kubelet[1342]: I0212 20:32:25.179193 1342 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ab816fe-5e81-41b2-9526-0d1e9b627b08-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2ab816fe-5e81-41b2-9526-0d1e9b627b08" (UID: "2ab816fe-5e81-41b2-9526-0d1e9b627b08"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:32:25.183474 kubelet[1342]: I0212 20:32:25.180336 1342 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ab816fe-5e81-41b2-9526-0d1e9b627b08-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2ab816fe-5e81-41b2-9526-0d1e9b627b08" (UID: "2ab816fe-5e81-41b2-9526-0d1e9b627b08"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:32:25.185531 kubelet[1342]: I0212 20:32:25.185449 1342 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ab816fe-5e81-41b2-9526-0d1e9b627b08-cni-path" (OuterVolumeSpecName: "cni-path") pod "2ab816fe-5e81-41b2-9526-0d1e9b627b08" (UID: "2ab816fe-5e81-41b2-9526-0d1e9b627b08"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:32:25.185690 kubelet[1342]: I0212 20:32:25.185554 1342 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ab816fe-5e81-41b2-9526-0d1e9b627b08-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2ab816fe-5e81-41b2-9526-0d1e9b627b08" (UID: "2ab816fe-5e81-41b2-9526-0d1e9b627b08"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:32:25.186297 kubelet[1342]: I0212 20:32:25.186248 1342 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ab816fe-5e81-41b2-9526-0d1e9b627b08-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2ab816fe-5e81-41b2-9526-0d1e9b627b08" (UID: "2ab816fe-5e81-41b2-9526-0d1e9b627b08"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:32:25.186848 kubelet[1342]: W0212 20:32:25.186749 1342 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/2ab816fe-5e81-41b2-9526-0d1e9b627b08/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 20:32:25.190489 kubelet[1342]: I0212 20:32:25.190428 1342 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ab816fe-5e81-41b2-9526-0d1e9b627b08-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2ab816fe-5e81-41b2-9526-0d1e9b627b08" (UID: "2ab816fe-5e81-41b2-9526-0d1e9b627b08"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 20:32:25.192293 kubelet[1342]: I0212 20:32:25.192223 1342 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ab816fe-5e81-41b2-9526-0d1e9b627b08-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2ab816fe-5e81-41b2-9526-0d1e9b627b08" (UID: "2ab816fe-5e81-41b2-9526-0d1e9b627b08"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:32:25.193291 kubelet[1342]: I0212 20:32:25.193235 1342 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ab816fe-5e81-41b2-9526-0d1e9b627b08-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2ab816fe-5e81-41b2-9526-0d1e9b627b08" (UID: "2ab816fe-5e81-41b2-9526-0d1e9b627b08"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 20:32:25.194395 kubelet[1342]: I0212 20:32:25.194343 1342 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ab816fe-5e81-41b2-9526-0d1e9b627b08-kube-api-access-lhm8m" (OuterVolumeSpecName: "kube-api-access-lhm8m") pod "2ab816fe-5e81-41b2-9526-0d1e9b627b08" (UID: "2ab816fe-5e81-41b2-9526-0d1e9b627b08"). InnerVolumeSpecName "kube-api-access-lhm8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:32:25.197899 kubelet[1342]: I0212 20:32:25.197820 1342 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ab816fe-5e81-41b2-9526-0d1e9b627b08-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2ab816fe-5e81-41b2-9526-0d1e9b627b08" (UID: "2ab816fe-5e81-41b2-9526-0d1e9b627b08"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:32:25.281330 kubelet[1342]: I0212 20:32:25.281201 1342 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2ab816fe-5e81-41b2-9526-0d1e9b627b08-cni-path\") on node \"172.24.4.19\" DevicePath \"\"" Feb 12 20:32:25.281330 kubelet[1342]: I0212 20:32:25.281284 1342 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2ab816fe-5e81-41b2-9526-0d1e9b627b08-xtables-lock\") on node \"172.24.4.19\" DevicePath \"\"" Feb 12 20:32:25.281330 kubelet[1342]: I0212 20:32:25.281323 1342 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2ab816fe-5e81-41b2-9526-0d1e9b627b08-host-proc-sys-kernel\") on node \"172.24.4.19\" DevicePath \"\"" Feb 12 20:32:25.281330 kubelet[1342]: I0212 20:32:25.281355 1342 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lhm8m\" (UniqueName: \"kubernetes.io/projected/2ab816fe-5e81-41b2-9526-0d1e9b627b08-kube-api-access-lhm8m\") on node \"172.24.4.19\" DevicePath \"\"" Feb 12 20:32:25.281882 kubelet[1342]: I0212 20:32:25.281385 1342 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2ab816fe-5e81-41b2-9526-0d1e9b627b08-cilium-cgroup\") on node \"172.24.4.19\" DevicePath \"\"" Feb 12 20:32:25.281882 kubelet[1342]: I0212 20:32:25.281418 1342 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2ab816fe-5e81-41b2-9526-0d1e9b627b08-host-proc-sys-net\") on node \"172.24.4.19\" DevicePath \"\"" Feb 12 20:32:25.281882 kubelet[1342]: I0212 20:32:25.281445 1342 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2ab816fe-5e81-41b2-9526-0d1e9b627b08-hubble-tls\") on node \"172.24.4.19\" DevicePath \"\"" Feb 12 20:32:25.281882 kubelet[1342]: I0212 20:32:25.281473 1342 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2ab816fe-5e81-41b2-9526-0d1e9b627b08-cilium-config-path\") on node \"172.24.4.19\" DevicePath \"\"" Feb 12 20:32:25.281882 kubelet[1342]: I0212 20:32:25.281502 1342 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2ab816fe-5e81-41b2-9526-0d1e9b627b08-cilium-run\") on node \"172.24.4.19\" DevicePath \"\"" Feb 12 20:32:25.281882 kubelet[1342]: I0212 20:32:25.281529 1342 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2ab816fe-5e81-41b2-9526-0d1e9b627b08-etc-cni-netd\") on node \"172.24.4.19\" DevicePath \"\"" Feb 12 20:32:25.281882 kubelet[1342]: I0212 20:32:25.281559 1342 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2ab816fe-5e81-41b2-9526-0d1e9b627b08-clustermesh-secrets\") on node \"172.24.4.19\" DevicePath \"\"" Feb 12 20:32:25.281882 kubelet[1342]: I0212 20:32:25.281587 1342 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ab816fe-5e81-41b2-9526-0d1e9b627b08-lib-modules\") on node \"172.24.4.19\" DevicePath \"\"" Feb 12 20:32:25.380945 kubelet[1342]: I0212 20:32:25.380896 1342 scope.go:115] "RemoveContainer" containerID="7c1e8ae4fee8c97d1cd156376afe51c9852a647fb599431fc0bbaece5017eab3" Feb 12 20:32:25.385080 env[1063]: 
time="2024-02-12T20:32:25.384978291Z" level=info msg="RemoveContainer for \"7c1e8ae4fee8c97d1cd156376afe51c9852a647fb599431fc0bbaece5017eab3\"" Feb 12 20:32:25.391574 env[1063]: time="2024-02-12T20:32:25.391486486Z" level=info msg="RemoveContainer for \"7c1e8ae4fee8c97d1cd156376afe51c9852a647fb599431fc0bbaece5017eab3\" returns successfully" Feb 12 20:32:25.392500 kubelet[1342]: I0212 20:32:25.392451 1342 scope.go:115] "RemoveContainer" containerID="2ee649bd4fa4b203d6a4a6614ddb3b887d2b7919cec4f1a9b9f7b28372cbabe2" Feb 12 20:32:25.395776 env[1063]: time="2024-02-12T20:32:25.395598366Z" level=info msg="RemoveContainer for \"2ee649bd4fa4b203d6a4a6614ddb3b887d2b7919cec4f1a9b9f7b28372cbabe2\"" Feb 12 20:32:25.400157 systemd[1]: Removed slice kubepods-burstable-pod2ab816fe_5e81_41b2_9526_0d1e9b627b08.slice. Feb 12 20:32:25.400413 systemd[1]: kubepods-burstable-pod2ab816fe_5e81_41b2_9526_0d1e9b627b08.slice: Consumed 9.597s CPU time. Feb 12 20:32:25.406849 env[1063]: time="2024-02-12T20:32:25.406654109Z" level=info msg="RemoveContainer for \"2ee649bd4fa4b203d6a4a6614ddb3b887d2b7919cec4f1a9b9f7b28372cbabe2\" returns successfully" Feb 12 20:32:25.407531 kubelet[1342]: I0212 20:32:25.407481 1342 scope.go:115] "RemoveContainer" containerID="e985bdfd9c4def77807227b2e149e64de8ab44e21c322717ee25e792db6d614d" Feb 12 20:32:25.412451 env[1063]: time="2024-02-12T20:32:25.411246803Z" level=info msg="RemoveContainer for \"e985bdfd9c4def77807227b2e149e64de8ab44e21c322717ee25e792db6d614d\"" Feb 12 20:32:25.418924 env[1063]: time="2024-02-12T20:32:25.418838104Z" level=info msg="RemoveContainer for \"e985bdfd9c4def77807227b2e149e64de8ab44e21c322717ee25e792db6d614d\" returns successfully" Feb 12 20:32:25.419775 kubelet[1342]: I0212 20:32:25.419678 1342 scope.go:115] "RemoveContainer" containerID="fe78ca724131e498c6940fd9d147d9da9eb385f3e38bb9957212ec2245df8866" Feb 12 20:32:25.425957 env[1063]: time="2024-02-12T20:32:25.425864723Z" level=info msg="RemoveContainer for \"fe78ca724131e498c6940fd9d147d9da9eb385f3e38bb9957212ec2245df8866\"" Feb 12 20:32:25.432943 env[1063]: time="2024-02-12T20:32:25.432078765Z" level=info msg="RemoveContainer for \"fe78ca724131e498c6940fd9d147d9da9eb385f3e38bb9957212ec2245df8866\" returns successfully" Feb 12 20:32:25.435326 kubelet[1342]: I0212 20:32:25.434477 1342 scope.go:115] "RemoveContainer" containerID="0bd77ede30174d8e6c2d3f3659cbe9b13ccfd19b4763a3e6c96d468815d12a4e" Feb 12 20:32:25.440530 env[1063]: time="2024-02-12T20:32:25.440467845Z" level=info msg="RemoveContainer for \"0bd77ede30174d8e6c2d3f3659cbe9b13ccfd19b4763a3e6c96d468815d12a4e\"" Feb 12 20:32:25.445578 env[1063]: time="2024-02-12T20:32:25.445519061Z" level=info msg="RemoveContainer for \"0bd77ede30174d8e6c2d3f3659cbe9b13ccfd19b4763a3e6c96d468815d12a4e\" returns successfully" Feb 12 20:32:25.446341 kubelet[1342]: I0212 20:32:25.446277 1342 scope.go:115] "RemoveContainer" containerID="7c1e8ae4fee8c97d1cd156376afe51c9852a647fb599431fc0bbaece5017eab3" Feb 12 20:32:25.447167 env[1063]: time="2024-02-12T20:32:25.446935133Z" level=error msg="ContainerStatus for \"7c1e8ae4fee8c97d1cd156376afe51c9852a647fb599431fc0bbaece5017eab3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7c1e8ae4fee8c97d1cd156376afe51c9852a647fb599431fc0bbaece5017eab3\": not found" Feb 12 20:32:25.447646 kubelet[1342]: E0212 20:32:25.447585 1342 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"7c1e8ae4fee8c97d1cd156376afe51c9852a647fb599431fc0bbaece5017eab3\": not found" containerID="7c1e8ae4fee8c97d1cd156376afe51c9852a647fb599431fc0bbaece5017eab3" Feb 12 20:32:25.447837 kubelet[1342]: I0212 20:32:25.447752 1342 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:7c1e8ae4fee8c97d1cd156376afe51c9852a647fb599431fc0bbaece5017eab3} err="failed to get container status \"7c1e8ae4fee8c97d1cd156376afe51c9852a647fb599431fc0bbaece5017eab3\": rpc error: code = NotFound desc = an error occurred when try to find container \"7c1e8ae4fee8c97d1cd156376afe51c9852a647fb599431fc0bbaece5017eab3\": not found" Feb 12 20:32:25.447837 kubelet[1342]: I0212 20:32:25.447783 1342 scope.go:115] "RemoveContainer" containerID="2ee649bd4fa4b203d6a4a6614ddb3b887d2b7919cec4f1a9b9f7b28372cbabe2" Feb 12 20:32:25.448574 env[1063]: time="2024-02-12T20:32:25.448452385Z" level=error msg="ContainerStatus for \"2ee649bd4fa4b203d6a4a6614ddb3b887d2b7919cec4f1a9b9f7b28372cbabe2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2ee649bd4fa4b203d6a4a6614ddb3b887d2b7919cec4f1a9b9f7b28372cbabe2\": not found" Feb 12 20:32:25.449155 kubelet[1342]: E0212 20:32:25.449082 1342 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2ee649bd4fa4b203d6a4a6614ddb3b887d2b7919cec4f1a9b9f7b28372cbabe2\": not found" containerID="2ee649bd4fa4b203d6a4a6614ddb3b887d2b7919cec4f1a9b9f7b28372cbabe2" Feb 12 20:32:25.449292 kubelet[1342]: I0212 20:32:25.449205 1342 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:2ee649bd4fa4b203d6a4a6614ddb3b887d2b7919cec4f1a9b9f7b28372cbabe2} err="failed to get container status \"2ee649bd4fa4b203d6a4a6614ddb3b887d2b7919cec4f1a9b9f7b28372cbabe2\": rpc error: code = NotFound desc = an error occurred when try to find container \"2ee649bd4fa4b203d6a4a6614ddb3b887d2b7919cec4f1a9b9f7b28372cbabe2\": not found" Feb 12 20:32:25.449292 kubelet[1342]: I0212 20:32:25.449270 1342 scope.go:115] "RemoveContainer" containerID="e985bdfd9c4def77807227b2e149e64de8ab44e21c322717ee25e792db6d614d" Feb 12 20:32:25.449857 env[1063]: time="2024-02-12T20:32:25.449695482Z" level=error msg="ContainerStatus for \"e985bdfd9c4def77807227b2e149e64de8ab44e21c322717ee25e792db6d614d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e985bdfd9c4def77807227b2e149e64de8ab44e21c322717ee25e792db6d614d\": not found" Feb 12 20:32:25.450465 kubelet[1342]: E0212 20:32:25.450185 1342 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e985bdfd9c4def77807227b2e149e64de8ab44e21c322717ee25e792db6d614d\": not found" containerID="e985bdfd9c4def77807227b2e149e64de8ab44e21c322717ee25e792db6d614d" Feb 12 20:32:25.450465 kubelet[1342]: I0212 20:32:25.450272 1342 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:e985bdfd9c4def77807227b2e149e64de8ab44e21c322717ee25e792db6d614d} err="failed to get container status \"e985bdfd9c4def77807227b2e149e64de8ab44e21c322717ee25e792db6d614d\": rpc error: code = NotFound desc = an error occurred when try to find container \"e985bdfd9c4def77807227b2e149e64de8ab44e21c322717ee25e792db6d614d\": not found" Feb 12 20:32:25.450465 kubelet[1342]: I0212 20:32:25.450305 1342 scope.go:115] "RemoveContainer" 
containerID="fe78ca724131e498c6940fd9d147d9da9eb385f3e38bb9957212ec2245df8866" Feb 12 20:32:25.451100 env[1063]: time="2024-02-12T20:32:25.450991378Z" level=error msg="ContainerStatus for \"fe78ca724131e498c6940fd9d147d9da9eb385f3e38bb9957212ec2245df8866\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fe78ca724131e498c6940fd9d147d9da9eb385f3e38bb9957212ec2245df8866\": not found" Feb 12 20:32:25.451662 kubelet[1342]: E0212 20:32:25.451601 1342 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fe78ca724131e498c6940fd9d147d9da9eb385f3e38bb9957212ec2245df8866\": not found" containerID="fe78ca724131e498c6940fd9d147d9da9eb385f3e38bb9957212ec2245df8866" Feb 12 20:32:25.451842 kubelet[1342]: I0212 20:32:25.451755 1342 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:fe78ca724131e498c6940fd9d147d9da9eb385f3e38bb9957212ec2245df8866} err="failed to get container status \"fe78ca724131e498c6940fd9d147d9da9eb385f3e38bb9957212ec2245df8866\": rpc error: code = NotFound desc = an error occurred when try to find container \"fe78ca724131e498c6940fd9d147d9da9eb385f3e38bb9957212ec2245df8866\": not found" Feb 12 20:32:25.451842 kubelet[1342]: I0212 20:32:25.451786 1342 scope.go:115] "RemoveContainer" containerID="0bd77ede30174d8e6c2d3f3659cbe9b13ccfd19b4763a3e6c96d468815d12a4e" Feb 12 20:32:25.452483 env[1063]: time="2024-02-12T20:32:25.452324084Z" level=error msg="ContainerStatus for \"0bd77ede30174d8e6c2d3f3659cbe9b13ccfd19b4763a3e6c96d468815d12a4e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0bd77ede30174d8e6c2d3f3659cbe9b13ccfd19b4763a3e6c96d468815d12a4e\": not found" Feb 12 20:32:25.452847 kubelet[1342]: E0212 20:32:25.452778 1342 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0bd77ede30174d8e6c2d3f3659cbe9b13ccfd19b4763a3e6c96d468815d12a4e\": not found" containerID="0bd77ede30174d8e6c2d3f3659cbe9b13ccfd19b4763a3e6c96d468815d12a4e" Feb 12 20:32:25.452994 kubelet[1342]: I0212 20:32:25.452947 1342 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:0bd77ede30174d8e6c2d3f3659cbe9b13ccfd19b4763a3e6c96d468815d12a4e} err="failed to get container status \"0bd77ede30174d8e6c2d3f3659cbe9b13ccfd19b4763a3e6c96d468815d12a4e\": rpc error: code = NotFound desc = an error occurred when try to find container \"0bd77ede30174d8e6c2d3f3659cbe9b13ccfd19b4763a3e6c96d468815d12a4e\": not found" Feb 12 20:32:25.609511 systemd[1]: var-lib-kubelet-pods-2ab816fe\x2d5e81\x2d41b2\x2d9526\x2d0d1e9b627b08-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlhm8m.mount: Deactivated successfully. Feb 12 20:32:25.609816 systemd[1]: var-lib-kubelet-pods-2ab816fe\x2d5e81\x2d41b2\x2d9526\x2d0d1e9b627b08-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 20:32:25.609986 systemd[1]: var-lib-kubelet-pods-2ab816fe\x2d5e81\x2d41b2\x2d9526\x2d0d1e9b627b08-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 12 20:32:25.749245 kubelet[1342]: E0212 20:32:25.749192 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:26.750604 kubelet[1342]: E0212 20:32:26.750495 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:26.863383 kubelet[1342]: I0212 20:32:26.863337 1342 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=2ab816fe-5e81-41b2-9526-0d1e9b627b08 path="/var/lib/kubelet/pods/2ab816fe-5e81-41b2-9526-0d1e9b627b08/volumes" Feb 12 20:32:27.751058 kubelet[1342]: E0212 20:32:27.750984 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:28.677360 kubelet[1342]: E0212 20:32:28.677286 1342 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:28.752965 kubelet[1342]: E0212 20:32:28.752891 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:28.804467 kubelet[1342]: E0212 20:32:28.804386 1342 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 20:32:29.754490 kubelet[1342]: E0212 20:32:29.754375 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:30.083338 kubelet[1342]: I0212 20:32:30.075463 1342 topology_manager.go:212] "Topology Admit Handler" Feb 12 20:32:30.083338 kubelet[1342]: E0212 20:32:30.076199 1342 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2ab816fe-5e81-41b2-9526-0d1e9b627b08" containerName="cilium-agent" Feb 12 20:32:30.083338 kubelet[1342]: E0212 20:32:30.076313 1342 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2ab816fe-5e81-41b2-9526-0d1e9b627b08" containerName="mount-bpf-fs" Feb 12 20:32:30.083338 kubelet[1342]: E0212 20:32:30.076347 1342 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2ab816fe-5e81-41b2-9526-0d1e9b627b08" containerName="apply-sysctl-overwrites" Feb 12 20:32:30.083338 kubelet[1342]: E0212 20:32:30.076404 1342 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2ab816fe-5e81-41b2-9526-0d1e9b627b08" containerName="clean-cilium-state" Feb 12 20:32:30.083338 kubelet[1342]: E0212 20:32:30.076429 1342 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2ab816fe-5e81-41b2-9526-0d1e9b627b08" containerName="mount-cgroup" Feb 12 20:32:30.083338 kubelet[1342]: I0212 20:32:30.076516 1342 memory_manager.go:346] "RemoveStaleState removing state" podUID="2ab816fe-5e81-41b2-9526-0d1e9b627b08" containerName="cilium-agent" Feb 12 20:32:30.090656 systemd[1]: Created slice kubepods-besteffort-podcc2321ad_6eda_4308_bfc7_3e8cb90e07c5.slice. 
Feb 12 20:32:30.116304 kubelet[1342]: I0212 20:32:30.116225 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cc2321ad-6eda-4308-bfc7-3e8cb90e07c5-cilium-config-path\") pod \"cilium-operator-574c4bb98d-bnhxh\" (UID: \"cc2321ad-6eda-4308-bfc7-3e8cb90e07c5\") " pod="kube-system/cilium-operator-574c4bb98d-bnhxh" Feb 12 20:32:30.116601 kubelet[1342]: I0212 20:32:30.116332 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfnrm\" (UniqueName: \"kubernetes.io/projected/cc2321ad-6eda-4308-bfc7-3e8cb90e07c5-kube-api-access-cfnrm\") pod \"cilium-operator-574c4bb98d-bnhxh\" (UID: \"cc2321ad-6eda-4308-bfc7-3e8cb90e07c5\") " pod="kube-system/cilium-operator-574c4bb98d-bnhxh" Feb 12 20:32:30.135442 kubelet[1342]: I0212 20:32:30.135380 1342 topology_manager.go:212] "Topology Admit Handler" Feb 12 20:32:30.147278 systemd[1]: Created slice kubepods-burstable-pod69dfec2d_1350_4d4e_a6e2_ab33bb3d2c73.slice. Feb 12 20:32:30.217278 kubelet[1342]: I0212 20:32:30.217229 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-hostproc\") pod \"cilium-dqk4n\" (UID: \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\") " pod="kube-system/cilium-dqk4n" Feb 12 20:32:30.217668 kubelet[1342]: I0212 20:32:30.217640 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-cilium-ipsec-secrets\") pod \"cilium-dqk4n\" (UID: \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\") " pod="kube-system/cilium-dqk4n" Feb 12 20:32:30.217933 kubelet[1342]: I0212 20:32:30.217907 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-host-proc-sys-net\") pod \"cilium-dqk4n\" (UID: \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\") " pod="kube-system/cilium-dqk4n" Feb 12 20:32:30.218192 kubelet[1342]: I0212 20:32:30.218165 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-host-proc-sys-kernel\") pod \"cilium-dqk4n\" (UID: \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\") " pod="kube-system/cilium-dqk4n" Feb 12 20:32:30.218439 kubelet[1342]: I0212 20:32:30.218409 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8jfq\" (UniqueName: \"kubernetes.io/projected/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-kube-api-access-j8jfq\") pod \"cilium-dqk4n\" (UID: \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\") " pod="kube-system/cilium-dqk4n" Feb 12 20:32:30.218706 kubelet[1342]: I0212 20:32:30.218681 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-bpf-maps\") pod \"cilium-dqk4n\" (UID: \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\") " pod="kube-system/cilium-dqk4n" Feb 12 20:32:30.218973 kubelet[1342]: I0212 20:32:30.218948 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-etc-cni-netd\") pod \"cilium-dqk4n\" (UID: \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\") " pod="kube-system/cilium-dqk4n" Feb 12 20:32:30.219171 kubelet[1342]: I0212 20:32:30.219148 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-xtables-lock\") pod \"cilium-dqk4n\" (UID: \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\") " pod="kube-system/cilium-dqk4n" Feb 12 20:32:30.219396 kubelet[1342]: I0212 20:32:30.219371 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-cilium-config-path\") pod \"cilium-dqk4n\" (UID: \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\") " pod="kube-system/cilium-dqk4n" Feb 12 20:32:30.219600 kubelet[1342]: I0212 20:32:30.219577 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-hubble-tls\") pod \"cilium-dqk4n\" (UID: \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\") " pod="kube-system/cilium-dqk4n" Feb 12 20:32:30.219858 kubelet[1342]: I0212 20:32:30.219831 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-cilium-run\") pod \"cilium-dqk4n\" (UID: \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\") " pod="kube-system/cilium-dqk4n" Feb 12 20:32:30.220058 kubelet[1342]: I0212 20:32:30.220035 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-cni-path\") pod \"cilium-dqk4n\" (UID: \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\") " pod="kube-system/cilium-dqk4n" Feb 12 20:32:30.220277 kubelet[1342]: I0212 20:32:30.220252 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-clustermesh-secrets\") pod \"cilium-dqk4n\" (UID: \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\") " pod="kube-system/cilium-dqk4n" Feb 12 20:32:30.220487 kubelet[1342]: I0212 20:32:30.220463 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-cilium-cgroup\") pod \"cilium-dqk4n\" (UID: \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\") " pod="kube-system/cilium-dqk4n" Feb 12 20:32:30.220684 kubelet[1342]: I0212 20:32:30.220661 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-lib-modules\") pod \"cilium-dqk4n\" (UID: \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\") " pod="kube-system/cilium-dqk4n" Feb 12 20:32:30.400977 env[1063]: time="2024-02-12T20:32:30.397431214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-bnhxh,Uid:cc2321ad-6eda-4308-bfc7-3e8cb90e07c5,Namespace:kube-system,Attempt:0,}" Feb 12 20:32:30.433416 env[1063]: time="2024-02-12T20:32:30.433234970Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:32:30.433416 env[1063]: time="2024-02-12T20:32:30.433341670Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:32:30.433993 env[1063]: time="2024-02-12T20:32:30.433374702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:32:30.434186 env[1063]: time="2024-02-12T20:32:30.434055602Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/922e2f620a1f457d950cc3929b961c6e46360629cb6f90f6e5b59926f85dbb60 pid=2892 runtime=io.containerd.runc.v2 Feb 12 20:32:30.456652 env[1063]: time="2024-02-12T20:32:30.456602144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dqk4n,Uid:69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73,Namespace:kube-system,Attempt:0,}" Feb 12 20:32:30.461288 systemd[1]: Started cri-containerd-922e2f620a1f457d950cc3929b961c6e46360629cb6f90f6e5b59926f85dbb60.scope. Feb 12 20:32:30.492444 env[1063]: time="2024-02-12T20:32:30.491886094Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:32:30.492444 env[1063]: time="2024-02-12T20:32:30.491960925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:32:30.492444 env[1063]: time="2024-02-12T20:32:30.491975914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:32:30.492444 env[1063]: time="2024-02-12T20:32:30.492142616Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/45f993a4b027fb0b13d52af20a543d158a92423cc6b9d640cd4d05a91254bfeb pid=2928 runtime=io.containerd.runc.v2 Feb 12 20:32:30.518103 systemd[1]: Started cri-containerd-45f993a4b027fb0b13d52af20a543d158a92423cc6b9d640cd4d05a91254bfeb.scope. 
Feb 12 20:32:30.537045 env[1063]: time="2024-02-12T20:32:30.536892326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-bnhxh,Uid:cc2321ad-6eda-4308-bfc7-3e8cb90e07c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"922e2f620a1f457d950cc3929b961c6e46360629cb6f90f6e5b59926f85dbb60\"" Feb 12 20:32:30.540673 env[1063]: time="2024-02-12T20:32:30.540634619Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 12 20:32:30.557191 env[1063]: time="2024-02-12T20:32:30.557124819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dqk4n,Uid:69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73,Namespace:kube-system,Attempt:0,} returns sandbox id \"45f993a4b027fb0b13d52af20a543d158a92423cc6b9d640cd4d05a91254bfeb\"" Feb 12 20:32:30.560845 env[1063]: time="2024-02-12T20:32:30.560800416Z" level=info msg="CreateContainer within sandbox \"45f993a4b027fb0b13d52af20a543d158a92423cc6b9d640cd4d05a91254bfeb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 20:32:30.579678 env[1063]: time="2024-02-12T20:32:30.579613222Z" level=info msg="CreateContainer within sandbox \"45f993a4b027fb0b13d52af20a543d158a92423cc6b9d640cd4d05a91254bfeb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fe060341588107881b9ae46731b20cb72ee09db6db5572661e3cc18e14d65d14\"" Feb 12 20:32:30.580906 env[1063]: time="2024-02-12T20:32:30.580878860Z" level=info msg="StartContainer for \"fe060341588107881b9ae46731b20cb72ee09db6db5572661e3cc18e14d65d14\"" Feb 12 20:32:30.598653 systemd[1]: Started cri-containerd-fe060341588107881b9ae46731b20cb72ee09db6db5572661e3cc18e14d65d14.scope. Feb 12 20:32:30.617160 systemd[1]: cri-containerd-fe060341588107881b9ae46731b20cb72ee09db6db5572661e3cc18e14d65d14.scope: Deactivated successfully. 
Feb 12 20:32:30.639339 env[1063]: time="2024-02-12T20:32:30.639275646Z" level=info msg="shim disconnected" id=fe060341588107881b9ae46731b20cb72ee09db6db5572661e3cc18e14d65d14 Feb 12 20:32:30.639609 env[1063]: time="2024-02-12T20:32:30.639590569Z" level=warning msg="cleaning up after shim disconnected" id=fe060341588107881b9ae46731b20cb72ee09db6db5572661e3cc18e14d65d14 namespace=k8s.io Feb 12 20:32:30.639699 env[1063]: time="2024-02-12T20:32:30.639683643Z" level=info msg="cleaning up dead shim" Feb 12 20:32:30.647683 env[1063]: time="2024-02-12T20:32:30.647627642Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:32:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2996 runtime=io.containerd.runc.v2\ntime=\"2024-02-12T20:32:30Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/fe060341588107881b9ae46731b20cb72ee09db6db5572661e3cc18e14d65d14/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 12 20:32:30.648100 env[1063]: time="2024-02-12T20:32:30.647973192Z" level=error msg="copy shim log" error="read /proc/self/fd/71: file already closed" Feb 12 20:32:30.648840 env[1063]: time="2024-02-12T20:32:30.648794506Z" level=error msg="Failed to pipe stdout of container \"fe060341588107881b9ae46731b20cb72ee09db6db5572661e3cc18e14d65d14\"" error="reading from a closed fifo" Feb 12 20:32:30.649201 env[1063]: time="2024-02-12T20:32:30.648938636Z" level=error msg="Failed to pipe stderr of container \"fe060341588107881b9ae46731b20cb72ee09db6db5572661e3cc18e14d65d14\"" error="reading from a closed fifo" Feb 12 20:32:30.652528 env[1063]: time="2024-02-12T20:32:30.652439526Z" level=error msg="StartContainer for \"fe060341588107881b9ae46731b20cb72ee09db6db5572661e3cc18e14d65d14\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 12 20:32:30.654105 kubelet[1342]: E0212 20:32:30.653947 1342 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="fe060341588107881b9ae46731b20cb72ee09db6db5572661e3cc18e14d65d14" Feb 12 20:32:30.654301 kubelet[1342]: E0212 20:32:30.654132 1342 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 12 20:32:30.654301 kubelet[1342]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 12 20:32:30.654301 kubelet[1342]: rm /hostbin/cilium-mount Feb 12 20:32:30.654396 kubelet[1342]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-j8jfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-dqk4n_kube-system(69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 12 20:32:30.654396 kubelet[1342]: E0212 20:32:30.654189 1342 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-dqk4n" podUID=69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73 Feb 12 20:32:30.754951 kubelet[1342]: E0212 20:32:30.754885 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:31.414390 env[1063]: time="2024-02-12T20:32:31.414304983Z" level=info msg="CreateContainer within sandbox \"45f993a4b027fb0b13d52af20a543d158a92423cc6b9d640cd4d05a91254bfeb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Feb 12 20:32:31.446053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2276422333.mount: Deactivated successfully. Feb 12 20:32:31.462308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1383427449.mount: Deactivated successfully. Feb 12 20:32:31.473316 env[1063]: time="2024-02-12T20:32:31.473180112Z" level=info msg="CreateContainer within sandbox \"45f993a4b027fb0b13d52af20a543d158a92423cc6b9d640cd4d05a91254bfeb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"23db15dda8d46aa66afd1d394be94c670895e2e7c147f7a7393001041e38dfca\"" Feb 12 20:32:31.475270 env[1063]: time="2024-02-12T20:32:31.475147890Z" level=info msg="StartContainer for \"23db15dda8d46aa66afd1d394be94c670895e2e7c147f7a7393001041e38dfca\"" Feb 12 20:32:31.518429 systemd[1]: Started cri-containerd-23db15dda8d46aa66afd1d394be94c670895e2e7c147f7a7393001041e38dfca.scope. 
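Both attempts at the cilium-dqk4n mount-cgroup init container (attempt 0 above, attempt 1 below) fail the same way: runc dies during container init on "write /proc/self/attr/keycreate: invalid argument". That proc file sets the SELinux label applied to kernel keyrings created by the calling thread; the runtime writes the container's context there before starting the process, and the kernel returns EINVAL when its SELinux state cannot accept the requested label. A minimal sketch of the failing write, assuming that is the operation behind the error and reusing the spc_t context from the container spec above:

```go
package main

import (
	"fmt"
	"os"
)

// Sketch of the write the RunContainerError above points at: setting the
// SELinux key-creation label via /proc/self/attr/keycreate. On a host whose
// SELinux state rejects the requested context, the write fails with EINVAL
// ("invalid argument"), which the runtime surfaces as the container-init error.
// Assumption: the label below mirrors SELinuxOptions{Type:spc_t,Level:s0}
// from the logged spec.
func main() {
	f, err := os.OpenFile("/proc/self/attr/keycreate", os.O_WRONLY, 0)
	if err != nil {
		fmt.Println("open:", err) // e.g. SELinux not available on this kernel
		return
	}
	defer f.Close()

	if _, err := f.Write([]byte("system_u:system_r:spc_t:s0")); err != nil {
		fmt.Println("write:", err) // "invalid argument" when the label is rejected
	}
}
```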
Feb 12 20:32:31.540535 systemd[1]: cri-containerd-23db15dda8d46aa66afd1d394be94c670895e2e7c147f7a7393001041e38dfca.scope: Deactivated successfully. Feb 12 20:32:31.550959 env[1063]: time="2024-02-12T20:32:31.550887752Z" level=info msg="shim disconnected" id=23db15dda8d46aa66afd1d394be94c670895e2e7c147f7a7393001041e38dfca Feb 12 20:32:31.550959 env[1063]: time="2024-02-12T20:32:31.550956030Z" level=warning msg="cleaning up after shim disconnected" id=23db15dda8d46aa66afd1d394be94c670895e2e7c147f7a7393001041e38dfca namespace=k8s.io Feb 12 20:32:31.551193 env[1063]: time="2024-02-12T20:32:31.550968113Z" level=info msg="cleaning up dead shim" Feb 12 20:32:31.559667 env[1063]: time="2024-02-12T20:32:31.559607628Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:32:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3033 runtime=io.containerd.runc.v2\ntime=\"2024-02-12T20:32:31Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/23db15dda8d46aa66afd1d394be94c670895e2e7c147f7a7393001041e38dfca/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 12 20:32:31.560171 env[1063]: time="2024-02-12T20:32:31.560110673Z" level=error msg="copy shim log" error="read /proc/self/fd/77: file already closed" Feb 12 20:32:31.560853 env[1063]: time="2024-02-12T20:32:31.560791864Z" level=error msg="Failed to pipe stderr of container \"23db15dda8d46aa66afd1d394be94c670895e2e7c147f7a7393001041e38dfca\"" error="reading from a closed fifo" Feb 12 20:32:31.561009 env[1063]: time="2024-02-12T20:32:31.560965890Z" level=error msg="Failed to pipe stdout of container \"23db15dda8d46aa66afd1d394be94c670895e2e7c147f7a7393001041e38dfca\"" error="reading from a closed fifo" Feb 12 20:32:31.564786 env[1063]: time="2024-02-12T20:32:31.564700819Z" level=error msg="StartContainer for \"23db15dda8d46aa66afd1d394be94c670895e2e7c147f7a7393001041e38dfca\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 12 20:32:31.565775 kubelet[1342]: E0212 20:32:31.565136 1342 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="23db15dda8d46aa66afd1d394be94c670895e2e7c147f7a7393001041e38dfca" Feb 12 20:32:31.565775 kubelet[1342]: E0212 20:32:31.565273 1342 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 12 20:32:31.565775 kubelet[1342]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 12 20:32:31.565775 kubelet[1342]: rm /hostbin/cilium-mount Feb 12 20:32:31.565775 kubelet[1342]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-j8jfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-dqk4n_kube-system(69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 12 20:32:31.565775 kubelet[1342]: E0212 20:32:31.565323 1342 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-dqk4n" podUID=69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73 Feb 12 20:32:31.755112 kubelet[1342]: E0212 20:32:31.755046 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:32.267046 kubelet[1342]: I0212 20:32:32.266988 1342 setters.go:548] "Node became not ready" node="172.24.4.19" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-12 20:32:32.266926337 +0000 UTC m=+84.449152113 LastTransitionTime:2024-02-12 20:32:32.266926337 +0000 UTC m=+84.449152113 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 12 20:32:32.418894 kubelet[1342]: I0212 20:32:32.418843 1342 scope.go:115] "RemoveContainer" containerID="fe060341588107881b9ae46731b20cb72ee09db6db5572661e3cc18e14d65d14" Feb 12 20:32:32.420031 env[1063]: time="2024-02-12T20:32:32.419964935Z" level=info msg="StopPodSandbox for \"45f993a4b027fb0b13d52af20a543d158a92423cc6b9d640cd4d05a91254bfeb\"" Feb 12 20:32:32.422269 env[1063]: time="2024-02-12T20:32:32.420085090Z" level=info msg="Container to stop \"fe060341588107881b9ae46731b20cb72ee09db6db5572661e3cc18e14d65d14\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:32:32.422269 env[1063]: time="2024-02-12T20:32:32.420123372Z" level=info 
msg="Container to stop \"23db15dda8d46aa66afd1d394be94c670895e2e7c147f7a7393001041e38dfca\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:32:32.423884 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-45f993a4b027fb0b13d52af20a543d158a92423cc6b9d640cd4d05a91254bfeb-shm.mount: Deactivated successfully. Feb 12 20:32:32.430375 env[1063]: time="2024-02-12T20:32:32.430269688Z" level=info msg="RemoveContainer for \"fe060341588107881b9ae46731b20cb72ee09db6db5572661e3cc18e14d65d14\"" Feb 12 20:32:32.436548 env[1063]: time="2024-02-12T20:32:32.436476332Z" level=info msg="RemoveContainer for \"fe060341588107881b9ae46731b20cb72ee09db6db5572661e3cc18e14d65d14\" returns successfully" Feb 12 20:32:32.449986 systemd[1]: cri-containerd-45f993a4b027fb0b13d52af20a543d158a92423cc6b9d640cd4d05a91254bfeb.scope: Deactivated successfully. Feb 12 20:32:32.503330 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45f993a4b027fb0b13d52af20a543d158a92423cc6b9d640cd4d05a91254bfeb-rootfs.mount: Deactivated successfully. Feb 12 20:32:32.545591 env[1063]: time="2024-02-12T20:32:32.545444817Z" level=info msg="shim disconnected" id=45f993a4b027fb0b13d52af20a543d158a92423cc6b9d640cd4d05a91254bfeb Feb 12 20:32:32.545591 env[1063]: time="2024-02-12T20:32:32.545536769Z" level=warning msg="cleaning up after shim disconnected" id=45f993a4b027fb0b13d52af20a543d158a92423cc6b9d640cd4d05a91254bfeb namespace=k8s.io Feb 12 20:32:32.545591 env[1063]: time="2024-02-12T20:32:32.545555845Z" level=info msg="cleaning up dead shim" Feb 12 20:32:32.576105 env[1063]: time="2024-02-12T20:32:32.576021684Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:32:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3066 runtime=io.containerd.runc.v2\n" Feb 12 20:32:32.576680 env[1063]: time="2024-02-12T20:32:32.576628344Z" level=info msg="TearDown network for sandbox \"45f993a4b027fb0b13d52af20a543d158a92423cc6b9d640cd4d05a91254bfeb\" successfully" Feb 12 20:32:32.576769 env[1063]: time="2024-02-12T20:32:32.576679410Z" level=info msg="StopPodSandbox for \"45f993a4b027fb0b13d52af20a543d158a92423cc6b9d640cd4d05a91254bfeb\" returns successfully" Feb 12 20:32:32.646431 kubelet[1342]: I0212 20:32:32.645854 1342 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-xtables-lock\") pod \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\" (UID: \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\") " Feb 12 20:32:32.646431 kubelet[1342]: I0212 20:32:32.645918 1342 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-cilium-config-path\") pod \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\" (UID: \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\") " Feb 12 20:32:32.646431 kubelet[1342]: I0212 20:32:32.645943 1342 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-host-proc-sys-net\") pod \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\" (UID: \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\") " Feb 12 20:32:32.646431 kubelet[1342]: I0212 20:32:32.645967 1342 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-hostproc\") pod 
\"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\" (UID: \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\") " Feb 12 20:32:32.646431 kubelet[1342]: I0212 20:32:32.645993 1342 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-bpf-maps\") pod \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\" (UID: \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\") " Feb 12 20:32:32.646431 kubelet[1342]: I0212 20:32:32.646016 1342 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-etc-cni-netd\") pod \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\" (UID: \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\") " Feb 12 20:32:32.646431 kubelet[1342]: I0212 20:32:32.646020 1342 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73" (UID: "69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:32:32.646431 kubelet[1342]: I0212 20:32:32.646043 1342 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-hubble-tls\") pod \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\" (UID: \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\") " Feb 12 20:32:32.646431 kubelet[1342]: I0212 20:32:32.646236 1342 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-cilium-run\") pod \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\" (UID: \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\") " Feb 12 20:32:32.646431 kubelet[1342]: I0212 20:32:32.646408 1342 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-cilium-ipsec-secrets\") pod \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\" (UID: \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\") " Feb 12 20:32:32.646895 kubelet[1342]: I0212 20:32:32.646517 1342 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-host-proc-sys-kernel\") pod \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\" (UID: \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\") " Feb 12 20:32:32.646895 kubelet[1342]: I0212 20:32:32.646622 1342 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-clustermesh-secrets\") pod \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\" (UID: \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\") " Feb 12 20:32:32.646895 kubelet[1342]: I0212 20:32:32.646789 1342 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8jfq\" (UniqueName: \"kubernetes.io/projected/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-kube-api-access-j8jfq\") pod \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\" (UID: \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\") " Feb 12 20:32:32.646980 kubelet[1342]: I0212 20:32:32.646929 1342 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-lib-modules\") pod \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\" (UID: \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\") " Feb 12 20:32:32.647026 kubelet[1342]: I0212 20:32:32.646987 1342 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-cni-path\") pod \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\" (UID: \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\") " Feb 12 20:32:32.647112 kubelet[1342]: I0212 20:32:32.647081 1342 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-cilium-cgroup\") pod \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\" (UID: \"69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73\") " Feb 12 20:32:32.647188 kubelet[1342]: I0212 20:32:32.647165 1342 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-xtables-lock\") on node \"172.24.4.19\" DevicePath \"\"" Feb 12 20:32:32.647267 kubelet[1342]: I0212 20:32:32.647232 1342 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73" (UID: "69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:32:32.647372 kubelet[1342]: I0212 20:32:32.647335 1342 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73" (UID: "69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:32:32.647944 kubelet[1342]: I0212 20:32:32.647892 1342 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73" (UID: "69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:32:32.648705 kubelet[1342]: I0212 20:32:32.648658 1342 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73" (UID: "69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:32:32.650726 kubelet[1342]: W0212 20:32:32.649106 1342 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 20:32:32.650726 kubelet[1342]: I0212 20:32:32.649874 1342 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-cni-path" (OuterVolumeSpecName: "cni-path") pod "69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73" (UID: "69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:32:32.651415 kubelet[1342]: I0212 20:32:32.651390 1342 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73" (UID: "69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 20:32:32.651520 kubelet[1342]: I0212 20:32:32.651502 1342 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73" (UID: "69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:32:32.651602 kubelet[1342]: I0212 20:32:32.651588 1342 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-hostproc" (OuterVolumeSpecName: "hostproc") pod "69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73" (UID: "69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:32:32.651686 kubelet[1342]: I0212 20:32:32.651672 1342 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73" (UID: "69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:32:32.651792 kubelet[1342]: I0212 20:32:32.651778 1342 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73" (UID: "69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:32:32.657133 systemd[1]: var-lib-kubelet-pods-69dfec2d\x2d1350\x2d4d4e\x2da6e2\x2dab33bb3d2c73-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 20:32:32.660035 kubelet[1342]: I0212 20:32:32.659978 1342 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73" (UID: "69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:32:32.670537 systemd[1]: var-lib-kubelet-pods-69dfec2d\x2d1350\x2d4d4e\x2da6e2\x2dab33bb3d2c73-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj8jfq.mount: Deactivated successfully. Feb 12 20:32:32.672167 kubelet[1342]: I0212 20:32:32.672111 1342 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-kube-api-access-j8jfq" (OuterVolumeSpecName: "kube-api-access-j8jfq") pod "69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73" (UID: "69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73"). InnerVolumeSpecName "kube-api-access-j8jfq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:32:32.676890 kubelet[1342]: I0212 20:32:32.676850 1342 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73" (UID: "69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 20:32:32.677643 kubelet[1342]: I0212 20:32:32.677585 1342 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73" (UID: "69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 20:32:32.748084 kubelet[1342]: I0212 20:32:32.748025 1342 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-host-proc-sys-net\") on node \"172.24.4.19\" DevicePath \"\"" Feb 12 20:32:32.748084 kubelet[1342]: I0212 20:32:32.748090 1342 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-hostproc\") on node \"172.24.4.19\" DevicePath \"\"" Feb 12 20:32:32.748301 kubelet[1342]: I0212 20:32:32.748120 1342 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-bpf-maps\") on node \"172.24.4.19\" DevicePath \"\"" Feb 12 20:32:32.748301 kubelet[1342]: I0212 20:32:32.748149 1342 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-etc-cni-netd\") on node \"172.24.4.19\" DevicePath \"\"" Feb 12 20:32:32.748301 kubelet[1342]: I0212 20:32:32.748178 1342 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-hubble-tls\") on node \"172.24.4.19\" DevicePath \"\"" Feb 12 20:32:32.748301 kubelet[1342]: I0212 20:32:32.748205 1342 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-cilium-run\") on node \"172.24.4.19\" DevicePath \"\"" Feb 12 20:32:32.748301 kubelet[1342]: I0212 20:32:32.748233 1342 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-cilium-ipsec-secrets\") on node \"172.24.4.19\" DevicePath \"\"" Feb 12 20:32:32.748301 kubelet[1342]: I0212 20:32:32.748263 1342 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-host-proc-sys-kernel\") on node \"172.24.4.19\" DevicePath \"\"" Feb 12 20:32:32.748301 kubelet[1342]: I0212 20:32:32.748290 1342 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-clustermesh-secrets\") on node \"172.24.4.19\" DevicePath \"\"" Feb 12 20:32:32.748494 kubelet[1342]: I0212 20:32:32.748318 1342 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-j8jfq\" 
(UniqueName: \"kubernetes.io/projected/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-kube-api-access-j8jfq\") on node \"172.24.4.19\" DevicePath \"\"" Feb 12 20:32:32.748494 kubelet[1342]: I0212 20:32:32.748345 1342 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-lib-modules\") on node \"172.24.4.19\" DevicePath \"\"" Feb 12 20:32:32.748494 kubelet[1342]: I0212 20:32:32.748371 1342 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-cni-path\") on node \"172.24.4.19\" DevicePath \"\"" Feb 12 20:32:32.748494 kubelet[1342]: I0212 20:32:32.748400 1342 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-cilium-cgroup\") on node \"172.24.4.19\" DevicePath \"\"" Feb 12 20:32:32.748494 kubelet[1342]: I0212 20:32:32.748428 1342 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73-cilium-config-path\") on node \"172.24.4.19\" DevicePath \"\"" Feb 12 20:32:32.755838 kubelet[1342]: E0212 20:32:32.755798 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:32.870445 systemd[1]: Removed slice kubepods-burstable-pod69dfec2d_1350_4d4e_a6e2_ab33bb3d2c73.slice. Feb 12 20:32:33.252409 systemd[1]: var-lib-kubelet-pods-69dfec2d\x2d1350\x2d4d4e\x2da6e2\x2dab33bb3d2c73-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 20:32:33.252966 systemd[1]: var-lib-kubelet-pods-69dfec2d\x2d1350\x2d4d4e\x2da6e2\x2dab33bb3d2c73-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 12 20:32:33.319582 env[1063]: time="2024-02-12T20:32:33.319477494Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:32:33.321463 env[1063]: time="2024-02-12T20:32:33.321402031Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:32:33.324070 env[1063]: time="2024-02-12T20:32:33.324033335Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:32:33.325565 env[1063]: time="2024-02-12T20:32:33.325488629Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 12 20:32:33.331145 env[1063]: time="2024-02-12T20:32:33.331118969Z" level=info msg="CreateContainer within sandbox \"922e2f620a1f457d950cc3929b961c6e46360629cb6f90f6e5b59926f85dbb60\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 12 20:32:33.351695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4219748392.mount: Deactivated successfully. 
Feb 12 20:32:33.354681 env[1063]: time="2024-02-12T20:32:33.354643913Z" level=info msg="CreateContainer within sandbox \"922e2f620a1f457d950cc3929b961c6e46360629cb6f90f6e5b59926f85dbb60\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c19f5d8a66fcab5554222f2a19ede166b77ccbb1915a25c70e38a36b76399779\"" Feb 12 20:32:33.356911 env[1063]: time="2024-02-12T20:32:33.356828999Z" level=info msg="StartContainer for \"c19f5d8a66fcab5554222f2a19ede166b77ccbb1915a25c70e38a36b76399779\"" Feb 12 20:32:33.390109 systemd[1]: Started cri-containerd-c19f5d8a66fcab5554222f2a19ede166b77ccbb1915a25c70e38a36b76399779.scope. Feb 12 20:32:33.423247 kubelet[1342]: I0212 20:32:33.422766 1342 scope.go:115] "RemoveContainer" containerID="23db15dda8d46aa66afd1d394be94c670895e2e7c147f7a7393001041e38dfca" Feb 12 20:32:33.427079 env[1063]: time="2024-02-12T20:32:33.426977569Z" level=info msg="RemoveContainer for \"23db15dda8d46aa66afd1d394be94c670895e2e7c147f7a7393001041e38dfca\"" Feb 12 20:32:33.431168 env[1063]: time="2024-02-12T20:32:33.431122007Z" level=info msg="RemoveContainer for \"23db15dda8d46aa66afd1d394be94c670895e2e7c147f7a7393001041e38dfca\" returns successfully" Feb 12 20:32:33.444980 env[1063]: time="2024-02-12T20:32:33.444902620Z" level=info msg="StartContainer for \"c19f5d8a66fcab5554222f2a19ede166b77ccbb1915a25c70e38a36b76399779\" returns successfully" Feb 12 20:32:33.495558 kubelet[1342]: I0212 20:32:33.495527 1342 topology_manager.go:212] "Topology Admit Handler" Feb 12 20:32:33.495843 kubelet[1342]: E0212 20:32:33.495830 1342 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73" containerName="mount-cgroup" Feb 12 20:32:33.495949 kubelet[1342]: E0212 20:32:33.495937 1342 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73" containerName="mount-cgroup" Feb 12 20:32:33.496079 kubelet[1342]: I0212 20:32:33.496067 1342 memory_manager.go:346] "RemoveStaleState removing state" podUID="69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73" containerName="mount-cgroup" Feb 12 20:32:33.496186 kubelet[1342]: I0212 20:32:33.496175 1342 memory_manager.go:346] "RemoveStaleState removing state" podUID="69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73" containerName="mount-cgroup" Feb 12 20:32:33.502548 systemd[1]: Created slice kubepods-burstable-podf628995a_5d9a_41a4_ad38_d37126232890.slice. 
Feb 12 20:32:33.553540 kubelet[1342]: I0212 20:32:33.553501 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f628995a-5d9a-41a4-ad38-d37126232890-clustermesh-secrets\") pod \"cilium-bvcrk\" (UID: \"f628995a-5d9a-41a4-ad38-d37126232890\") " pod="kube-system/cilium-bvcrk" Feb 12 20:32:33.554238 kubelet[1342]: I0212 20:32:33.554187 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f628995a-5d9a-41a4-ad38-d37126232890-hubble-tls\") pod \"cilium-bvcrk\" (UID: \"f628995a-5d9a-41a4-ad38-d37126232890\") " pod="kube-system/cilium-bvcrk" Feb 12 20:32:33.554320 kubelet[1342]: I0212 20:32:33.554289 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdsw9\" (UniqueName: \"kubernetes.io/projected/f628995a-5d9a-41a4-ad38-d37126232890-kube-api-access-cdsw9\") pod \"cilium-bvcrk\" (UID: \"f628995a-5d9a-41a4-ad38-d37126232890\") " pod="kube-system/cilium-bvcrk" Feb 12 20:32:33.554362 kubelet[1342]: I0212 20:32:33.554330 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f628995a-5d9a-41a4-ad38-d37126232890-bpf-maps\") pod \"cilium-bvcrk\" (UID: \"f628995a-5d9a-41a4-ad38-d37126232890\") " pod="kube-system/cilium-bvcrk" Feb 12 20:32:33.554406 kubelet[1342]: I0212 20:32:33.554365 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f628995a-5d9a-41a4-ad38-d37126232890-xtables-lock\") pod \"cilium-bvcrk\" (UID: \"f628995a-5d9a-41a4-ad38-d37126232890\") " pod="kube-system/cilium-bvcrk" Feb 12 20:32:33.554406 kubelet[1342]: I0212 20:32:33.554404 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f628995a-5d9a-41a4-ad38-d37126232890-cilium-config-path\") pod \"cilium-bvcrk\" (UID: \"f628995a-5d9a-41a4-ad38-d37126232890\") " pod="kube-system/cilium-bvcrk" Feb 12 20:32:33.554472 kubelet[1342]: I0212 20:32:33.554438 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f628995a-5d9a-41a4-ad38-d37126232890-host-proc-sys-net\") pod \"cilium-bvcrk\" (UID: \"f628995a-5d9a-41a4-ad38-d37126232890\") " pod="kube-system/cilium-bvcrk" Feb 12 20:32:33.554472 kubelet[1342]: I0212 20:32:33.554471 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f628995a-5d9a-41a4-ad38-d37126232890-cilium-run\") pod \"cilium-bvcrk\" (UID: \"f628995a-5d9a-41a4-ad38-d37126232890\") " pod="kube-system/cilium-bvcrk" Feb 12 20:32:33.554529 kubelet[1342]: I0212 20:32:33.554504 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f628995a-5d9a-41a4-ad38-d37126232890-cni-path\") pod \"cilium-bvcrk\" (UID: \"f628995a-5d9a-41a4-ad38-d37126232890\") " pod="kube-system/cilium-bvcrk" Feb 12 20:32:33.554566 kubelet[1342]: I0212 20:32:33.554537 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" 
(UniqueName: \"kubernetes.io/host-path/f628995a-5d9a-41a4-ad38-d37126232890-host-proc-sys-kernel\") pod \"cilium-bvcrk\" (UID: \"f628995a-5d9a-41a4-ad38-d37126232890\") " pod="kube-system/cilium-bvcrk" Feb 12 20:32:33.554598 kubelet[1342]: I0212 20:32:33.554576 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f628995a-5d9a-41a4-ad38-d37126232890-lib-modules\") pod \"cilium-bvcrk\" (UID: \"f628995a-5d9a-41a4-ad38-d37126232890\") " pod="kube-system/cilium-bvcrk" Feb 12 20:32:33.554630 kubelet[1342]: I0212 20:32:33.554609 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f628995a-5d9a-41a4-ad38-d37126232890-cilium-ipsec-secrets\") pod \"cilium-bvcrk\" (UID: \"f628995a-5d9a-41a4-ad38-d37126232890\") " pod="kube-system/cilium-bvcrk" Feb 12 20:32:33.554664 kubelet[1342]: I0212 20:32:33.554642 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f628995a-5d9a-41a4-ad38-d37126232890-cilium-cgroup\") pod \"cilium-bvcrk\" (UID: \"f628995a-5d9a-41a4-ad38-d37126232890\") " pod="kube-system/cilium-bvcrk" Feb 12 20:32:33.554697 kubelet[1342]: I0212 20:32:33.554673 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f628995a-5d9a-41a4-ad38-d37126232890-etc-cni-netd\") pod \"cilium-bvcrk\" (UID: \"f628995a-5d9a-41a4-ad38-d37126232890\") " pod="kube-system/cilium-bvcrk" Feb 12 20:32:33.554757 kubelet[1342]: I0212 20:32:33.554728 1342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f628995a-5d9a-41a4-ad38-d37126232890-hostproc\") pod \"cilium-bvcrk\" (UID: \"f628995a-5d9a-41a4-ad38-d37126232890\") " pod="kube-system/cilium-bvcrk" Feb 12 20:32:33.756644 kubelet[1342]: E0212 20:32:33.756450 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:33.776001 kubelet[1342]: W0212 20:32:33.775894 1342 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69dfec2d_1350_4d4e_a6e2_ab33bb3d2c73.slice/cri-containerd-fe060341588107881b9ae46731b20cb72ee09db6db5572661e3cc18e14d65d14.scope WatchSource:0}: container "fe060341588107881b9ae46731b20cb72ee09db6db5572661e3cc18e14d65d14" in namespace "k8s.io": not found Feb 12 20:32:33.806123 kubelet[1342]: E0212 20:32:33.806011 1342 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 20:32:33.811353 env[1063]: time="2024-02-12T20:32:33.811252560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bvcrk,Uid:f628995a-5d9a-41a4-ad38-d37126232890,Namespace:kube-system,Attempt:0,}" Feb 12 20:32:33.883459 env[1063]: time="2024-02-12T20:32:33.883033297Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:32:33.883459 env[1063]: time="2024-02-12T20:32:33.883124229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:32:33.883459 env[1063]: time="2024-02-12T20:32:33.883157882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:32:33.884141 env[1063]: time="2024-02-12T20:32:33.884063253Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/97013cb94a61c38cb54727ebcd305eee0eeae42f397106220a41f6146bacb725 pid=3132 runtime=io.containerd.runc.v2 Feb 12 20:32:33.916968 systemd[1]: Started cri-containerd-97013cb94a61c38cb54727ebcd305eee0eeae42f397106220a41f6146bacb725.scope. Feb 12 20:32:34.076701 env[1063]: time="2024-02-12T20:32:34.074844740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bvcrk,Uid:f628995a-5d9a-41a4-ad38-d37126232890,Namespace:kube-system,Attempt:0,} returns sandbox id \"97013cb94a61c38cb54727ebcd305eee0eeae42f397106220a41f6146bacb725\"" Feb 12 20:32:34.080558 env[1063]: time="2024-02-12T20:32:34.080462885Z" level=info msg="CreateContainer within sandbox \"97013cb94a61c38cb54727ebcd305eee0eeae42f397106220a41f6146bacb725\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 20:32:34.106132 env[1063]: time="2024-02-12T20:32:34.106016689Z" level=info msg="CreateContainer within sandbox \"97013cb94a61c38cb54727ebcd305eee0eeae42f397106220a41f6146bacb725\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2f5b9b448c734639414c383c5abbf537d91bf7983284e74391019a52c6664ce0\"" Feb 12 20:32:34.108360 env[1063]: time="2024-02-12T20:32:34.108240116Z" level=info msg="StartContainer for \"2f5b9b448c734639414c383c5abbf537d91bf7983284e74391019a52c6664ce0\"" Feb 12 20:32:34.143694 systemd[1]: Started cri-containerd-2f5b9b448c734639414c383c5abbf537d91bf7983284e74391019a52c6664ce0.scope. Feb 12 20:32:34.192228 env[1063]: time="2024-02-12T20:32:34.192160407Z" level=info msg="StartContainer for \"2f5b9b448c734639414c383c5abbf537d91bf7983284e74391019a52c6664ce0\" returns successfully" Feb 12 20:32:34.243223 systemd[1]: cri-containerd-2f5b9b448c734639414c383c5abbf537d91bf7983284e74391019a52c6664ce0.scope: Deactivated successfully. Feb 12 20:32:34.280514 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f5b9b448c734639414c383c5abbf537d91bf7983284e74391019a52c6664ce0-rootfs.mount: Deactivated successfully. Feb 12 20:32:34.289783 env[1063]: time="2024-02-12T20:32:34.289689467Z" level=info msg="shim disconnected" id=2f5b9b448c734639414c383c5abbf537d91bf7983284e74391019a52c6664ce0 Feb 12 20:32:34.290124 env[1063]: time="2024-02-12T20:32:34.290089749Z" level=warning msg="cleaning up after shim disconnected" id=2f5b9b448c734639414c383c5abbf537d91bf7983284e74391019a52c6664ce0 namespace=k8s.io Feb 12 20:32:34.290219 env[1063]: time="2024-02-12T20:32:34.290202521Z" level=info msg="cleaning up dead shim" Feb 12 20:32:34.300002 env[1063]: time="2024-02-12T20:32:34.299943003Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:32:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3217 runtime=io.containerd.runc.v2\n" Feb 12 20:32:34.448228 env[1063]: time="2024-02-12T20:32:34.446310728Z" level=info msg="CreateContainer within sandbox \"97013cb94a61c38cb54727ebcd305eee0eeae42f397106220a41f6146bacb725\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 20:32:34.487203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3692666864.mount: Deactivated successfully. 
Feb 12 20:32:34.512260 env[1063]: time="2024-02-12T20:32:34.512109699Z" level=info msg="CreateContainer within sandbox \"97013cb94a61c38cb54727ebcd305eee0eeae42f397106220a41f6146bacb725\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"aaadaa5a6e83c294cfd9065c26549d53a3d22334cfcc96ffa9a26778bb7a2585\"" Feb 12 20:32:34.514063 env[1063]: time="2024-02-12T20:32:34.513946841Z" level=info msg="StartContainer for \"aaadaa5a6e83c294cfd9065c26549d53a3d22334cfcc96ffa9a26778bb7a2585\"" Feb 12 20:32:34.537543 systemd[1]: Started cri-containerd-aaadaa5a6e83c294cfd9065c26549d53a3d22334cfcc96ffa9a26778bb7a2585.scope. Feb 12 20:32:34.552054 kubelet[1342]: I0212 20:32:34.551991 1342 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-bnhxh" podStartSLOduration=2.765722353 podCreationTimestamp="2024-02-12 20:32:29 +0000 UTC" firstStartedPulling="2024-02-12 20:32:30.539966853 +0000 UTC m=+82.722192639" lastFinishedPulling="2024-02-12 20:32:33.326187263 +0000 UTC m=+85.508413089" observedRunningTime="2024-02-12 20:32:34.476532442 +0000 UTC m=+86.658758268" watchObservedRunningTime="2024-02-12 20:32:34.551942803 +0000 UTC m=+86.734168589" Feb 12 20:32:34.594185 env[1063]: time="2024-02-12T20:32:34.594137434Z" level=info msg="StartContainer for \"aaadaa5a6e83c294cfd9065c26549d53a3d22334cfcc96ffa9a26778bb7a2585\" returns successfully" Feb 12 20:32:34.607476 systemd[1]: cri-containerd-aaadaa5a6e83c294cfd9065c26549d53a3d22334cfcc96ffa9a26778bb7a2585.scope: Deactivated successfully. Feb 12 20:32:34.635552 env[1063]: time="2024-02-12T20:32:34.635495613Z" level=info msg="shim disconnected" id=aaadaa5a6e83c294cfd9065c26549d53a3d22334cfcc96ffa9a26778bb7a2585 Feb 12 20:32:34.635882 env[1063]: time="2024-02-12T20:32:34.635858825Z" level=warning msg="cleaning up after shim disconnected" id=aaadaa5a6e83c294cfd9065c26549d53a3d22334cfcc96ffa9a26778bb7a2585 namespace=k8s.io Feb 12 20:32:34.635984 env[1063]: time="2024-02-12T20:32:34.635966887Z" level=info msg="cleaning up dead shim" Feb 12 20:32:34.645751 env[1063]: time="2024-02-12T20:32:34.645671282Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:32:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3281 runtime=io.containerd.runc.v2\n" Feb 12 20:32:34.758100 kubelet[1342]: E0212 20:32:34.758020 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:34.864155 kubelet[1342]: I0212 20:32:34.863886 1342 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73 path="/var/lib/kubelet/pods/69dfec2d-1350-4d4e-a6e2-ab33bb3d2c73/volumes" Feb 12 20:32:35.252298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2530344000.mount: Deactivated successfully. Feb 12 20:32:35.453178 env[1063]: time="2024-02-12T20:32:35.453087782Z" level=info msg="CreateContainer within sandbox \"97013cb94a61c38cb54727ebcd305eee0eeae42f397106220a41f6146bacb725\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 20:32:35.494544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount47609478.mount: Deactivated successfully. 
Feb 12 20:32:35.510859 env[1063]: time="2024-02-12T20:32:35.510583066Z" level=info msg="CreateContainer within sandbox \"97013cb94a61c38cb54727ebcd305eee0eeae42f397106220a41f6146bacb725\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"64ad1adec683c9a92fbd4d9c2e468aaa773729bccbaec590cb27a318b15eba74\"" Feb 12 20:32:35.513272 env[1063]: time="2024-02-12T20:32:35.513195273Z" level=info msg="StartContainer for \"64ad1adec683c9a92fbd4d9c2e468aaa773729bccbaec590cb27a318b15eba74\"" Feb 12 20:32:35.554190 systemd[1]: Started cri-containerd-64ad1adec683c9a92fbd4d9c2e468aaa773729bccbaec590cb27a318b15eba74.scope. Feb 12 20:32:35.606924 env[1063]: time="2024-02-12T20:32:35.606878268Z" level=info msg="StartContainer for \"64ad1adec683c9a92fbd4d9c2e468aaa773729bccbaec590cb27a318b15eba74\" returns successfully" Feb 12 20:32:35.619034 systemd[1]: cri-containerd-64ad1adec683c9a92fbd4d9c2e468aaa773729bccbaec590cb27a318b15eba74.scope: Deactivated successfully. Feb 12 20:32:35.660489 env[1063]: time="2024-02-12T20:32:35.660373477Z" level=info msg="shim disconnected" id=64ad1adec683c9a92fbd4d9c2e468aaa773729bccbaec590cb27a318b15eba74 Feb 12 20:32:35.660489 env[1063]: time="2024-02-12T20:32:35.660441785Z" level=warning msg="cleaning up after shim disconnected" id=64ad1adec683c9a92fbd4d9c2e468aaa773729bccbaec590cb27a318b15eba74 namespace=k8s.io Feb 12 20:32:35.660489 env[1063]: time="2024-02-12T20:32:35.660453517Z" level=info msg="cleaning up dead shim" Feb 12 20:32:35.669142 env[1063]: time="2024-02-12T20:32:35.669075777Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:32:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3342 runtime=io.containerd.runc.v2\n" Feb 12 20:32:35.758575 kubelet[1342]: E0212 20:32:35.758486 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:36.462866 env[1063]: time="2024-02-12T20:32:36.462657289Z" level=info msg="CreateContainer within sandbox \"97013cb94a61c38cb54727ebcd305eee0eeae42f397106220a41f6146bacb725\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 20:32:36.495484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount402649246.mount: Deactivated successfully. Feb 12 20:32:36.506610 env[1063]: time="2024-02-12T20:32:36.506496251Z" level=info msg="CreateContainer within sandbox \"97013cb94a61c38cb54727ebcd305eee0eeae42f397106220a41f6146bacb725\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d35f98ab6c4b778b7f9cd6a87eaa293a659a09015fc517c3302570692b9fca5c\"" Feb 12 20:32:36.508181 env[1063]: time="2024-02-12T20:32:36.508112458Z" level=info msg="StartContainer for \"d35f98ab6c4b778b7f9cd6a87eaa293a659a09015fc517c3302570692b9fca5c\"" Feb 12 20:32:36.552204 systemd[1]: Started cri-containerd-d35f98ab6c4b778b7f9cd6a87eaa293a659a09015fc517c3302570692b9fca5c.scope. Feb 12 20:32:36.605672 systemd[1]: cri-containerd-d35f98ab6c4b778b7f9cd6a87eaa293a659a09015fc517c3302570692b9fca5c.scope: Deactivated successfully. 
Feb 12 20:32:36.608349 env[1063]: time="2024-02-12T20:32:36.608300586Z" level=info msg="StartContainer for \"d35f98ab6c4b778b7f9cd6a87eaa293a659a09015fc517c3302570692b9fca5c\" returns successfully" Feb 12 20:32:36.642819 env[1063]: time="2024-02-12T20:32:36.642696989Z" level=info msg="shim disconnected" id=d35f98ab6c4b778b7f9cd6a87eaa293a659a09015fc517c3302570692b9fca5c Feb 12 20:32:36.643136 env[1063]: time="2024-02-12T20:32:36.643115795Z" level=warning msg="cleaning up after shim disconnected" id=d35f98ab6c4b778b7f9cd6a87eaa293a659a09015fc517c3302570692b9fca5c namespace=k8s.io Feb 12 20:32:36.643207 env[1063]: time="2024-02-12T20:32:36.643192901Z" level=info msg="cleaning up dead shim" Feb 12 20:32:36.650882 env[1063]: time="2024-02-12T20:32:36.650854966Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:32:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3398 runtime=io.containerd.runc.v2\n" Feb 12 20:32:36.759330 kubelet[1342]: E0212 20:32:36.759214 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:37.253401 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d35f98ab6c4b778b7f9cd6a87eaa293a659a09015fc517c3302570692b9fca5c-rootfs.mount: Deactivated successfully. Feb 12 20:32:37.472105 env[1063]: time="2024-02-12T20:32:37.472028226Z" level=info msg="CreateContainer within sandbox \"97013cb94a61c38cb54727ebcd305eee0eeae42f397106220a41f6146bacb725\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 20:32:37.508361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3414747107.mount: Deactivated successfully. Feb 12 20:32:37.528523 env[1063]: time="2024-02-12T20:32:37.528151052Z" level=info msg="CreateContainer within sandbox \"97013cb94a61c38cb54727ebcd305eee0eeae42f397106220a41f6146bacb725\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d0e0f8ce50bb45553bdd4c38cb2e94109d0a62322a4b952dbaf4ab228d48a1d9\"" Feb 12 20:32:37.531782 env[1063]: time="2024-02-12T20:32:37.530499865Z" level=info msg="StartContainer for \"d0e0f8ce50bb45553bdd4c38cb2e94109d0a62322a4b952dbaf4ab228d48a1d9\"" Feb 12 20:32:37.566225 systemd[1]: Started cri-containerd-d0e0f8ce50bb45553bdd4c38cb2e94109d0a62322a4b952dbaf4ab228d48a1d9.scope. 
Feb 12 20:32:37.610985 env[1063]: time="2024-02-12T20:32:37.610932561Z" level=info msg="StartContainer for \"d0e0f8ce50bb45553bdd4c38cb2e94109d0a62322a4b952dbaf4ab228d48a1d9\" returns successfully" Feb 12 20:32:37.761018 kubelet[1342]: E0212 20:32:37.760550 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:38.466754 kernel: cryptd: max_cpu_qlen set to 1000 Feb 12 20:32:38.491363 kubelet[1342]: I0212 20:32:38.491270 1342 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-bvcrk" podStartSLOduration=5.491190592 podCreationTimestamp="2024-02-12 20:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:32:38.490279981 +0000 UTC m=+90.672505777" watchObservedRunningTime="2024-02-12 20:32:38.491190592 +0000 UTC m=+90.673416368" Feb 12 20:32:38.542824 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic)))) Feb 12 20:32:38.761861 kubelet[1342]: E0212 20:32:38.761651 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:38.878472 systemd[1]: run-containerd-runc-k8s.io-d0e0f8ce50bb45553bdd4c38cb2e94109d0a62322a4b952dbaf4ab228d48a1d9-runc.XayyQN.mount: Deactivated successfully. Feb 12 20:32:39.762168 kubelet[1342]: E0212 20:32:39.762099 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:40.763852 kubelet[1342]: E0212 20:32:40.763757 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:41.199207 systemd[1]: run-containerd-runc-k8s.io-d0e0f8ce50bb45553bdd4c38cb2e94109d0a62322a4b952dbaf4ab228d48a1d9-runc.pY3Pyf.mount: Deactivated successfully. Feb 12 20:32:41.765591 kubelet[1342]: E0212 20:32:41.765526 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:41.811195 systemd-networkd[971]: lxc_health: Link UP Feb 12 20:32:41.820510 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 20:32:41.820788 systemd-networkd[971]: lxc_health: Gained carrier Feb 12 20:32:42.766489 kubelet[1342]: E0212 20:32:42.766373 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:43.429657 systemd[1]: run-containerd-runc-k8s.io-d0e0f8ce50bb45553bdd4c38cb2e94109d0a62322a4b952dbaf4ab228d48a1d9-runc.vqbTki.mount: Deactivated successfully. Feb 12 20:32:43.767216 kubelet[1342]: E0212 20:32:43.767147 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:43.853992 systemd-networkd[971]: lxc_health: Gained IPv6LL Feb 12 20:32:44.768439 kubelet[1342]: E0212 20:32:44.768378 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:45.684335 systemd[1]: run-containerd-runc-k8s.io-d0e0f8ce50bb45553bdd4c38cb2e94109d0a62322a4b952dbaf4ab228d48a1d9-runc.jSk4gx.mount: Deactivated successfully. 
Feb 12 20:32:45.768875 kubelet[1342]: E0212 20:32:45.768836 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:46.769319 kubelet[1342]: E0212 20:32:46.769253 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:47.770135 kubelet[1342]: E0212 20:32:47.770071 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:47.908312 systemd[1]: run-containerd-runc-k8s.io-d0e0f8ce50bb45553bdd4c38cb2e94109d0a62322a4b952dbaf4ab228d48a1d9-runc.FNezWR.mount: Deactivated successfully. Feb 12 20:32:48.677268 kubelet[1342]: E0212 20:32:48.677182 1342 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:48.771117 kubelet[1342]: E0212 20:32:48.771034 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:49.771889 kubelet[1342]: E0212 20:32:49.771829 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:50.773656 kubelet[1342]: E0212 20:32:50.773469 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:51.773997 kubelet[1342]: E0212 20:32:51.773937 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:52.774883 kubelet[1342]: E0212 20:32:52.774828 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:53.775812 kubelet[1342]: E0212 20:32:53.775707 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:32:54.777607 kubelet[1342]: E0212 20:32:54.777552 1342 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"