Jul 2 08:46:10.047327 kernel: Linux version 5.15.161-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Jul 1 23:45:21 -00 2024
Jul 2 08:46:10.047348 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82
Jul 2 08:46:10.047362 kernel: BIOS-provided physical RAM map:
Jul 2 08:46:10.047369 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 2 08:46:10.047376 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 2 08:46:10.047384 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 2 08:46:10.047392 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Jul 2 08:46:10.047400 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Jul 2 08:46:10.047409 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 2 08:46:10.047416 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 2 08:46:10.047423 kernel: NX (Execute Disable) protection: active
Jul 2 08:46:10.047430 kernel: SMBIOS 2.8 present.
Jul 2 08:46:10.047437 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jul 2 08:46:10.047444 kernel: Hypervisor detected: KVM
Jul 2 08:46:10.047452 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 2 08:46:10.047462 kernel: kvm-clock: cpu 0, msr 7c192001, primary cpu clock
Jul 2 08:46:10.047470 kernel: kvm-clock: using sched offset of 5551387173 cycles
Jul 2 08:46:10.047478 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 2 08:46:10.047486 kernel: tsc: Detected 1996.249 MHz processor
Jul 2 08:46:10.047494 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 2 08:46:10.047503 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 2 08:46:10.047510 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Jul 2 08:46:10.047518 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 2 08:46:10.047528 kernel: ACPI: Early table checksum verification disabled
Jul 2 08:46:10.047536 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS )
Jul 2 08:46:10.047543 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:46:10.047551 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:46:10.047559 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:46:10.047566 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jul 2 08:46:10.047574 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:46:10.047582 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:46:10.047590 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f]
Jul 2 08:46:10.047599 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b]
Jul 2 08:46:10.047607 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jul 2 08:46:10.047614 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f]
Jul 2 08:46:10.047622 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847]
Jul 2 08:46:10.047629 kernel: No NUMA configuration found
Jul 2 08:46:10.047637 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff]
Jul 2 08:46:10.047645 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff]
Jul 2 08:46:10.047653 kernel: Zone ranges:
Jul 2 08:46:10.047668 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 2 08:46:10.047676 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff]
Jul 2 08:46:10.047684 kernel: Normal empty
Jul 2 08:46:10.047692 kernel: Movable zone start for each node
Jul 2 08:46:10.047700 kernel: Early memory node ranges
Jul 2 08:46:10.047708 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 2 08:46:10.047717 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Jul 2 08:46:10.047736 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff]
Jul 2 08:46:10.047745 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 2 08:46:10.047753 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 2 08:46:10.047761 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges
Jul 2 08:46:10.047769 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 2 08:46:10.047777 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 2 08:46:10.047785 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 2 08:46:10.047793 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 2 08:46:10.047802 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 2 08:46:10.047811 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 2 08:46:10.047819 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 2 08:46:10.047827 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 2 08:46:10.047835 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 2 08:46:10.047843 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jul 2 08:46:10.047851 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jul 2 08:46:10.047859 kernel: Booting paravirtualized kernel on KVM
Jul 2 08:46:10.049892 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 2 08:46:10.049904 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Jul 2 08:46:10.049916 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Jul 2 08:46:10.049924 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Jul 2 08:46:10.049934 kernel: pcpu-alloc: [0] 0 1
Jul 2 08:46:10.049942 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
Jul 2 08:46:10.049950 kernel: kvm-guest: PV spinlocks disabled, no host support
Jul 2 08:46:10.049958 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805
Jul 2 08:46:10.049966 kernel: Policy zone: DMA32
Jul 2 08:46:10.049976 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82
Jul 2 08:46:10.049986 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 08:46:10.049994 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 08:46:10.050003 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 2 08:46:10.050011 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 08:46:10.050019 kernel: Memory: 1973284K/2096620K available (12294K kernel code, 2276K rwdata, 13712K rodata, 47444K init, 4144K bss, 123076K reserved, 0K cma-reserved)
Jul 2 08:46:10.050028 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 2 08:46:10.050036 kernel: ftrace: allocating 34514 entries in 135 pages
Jul 2 08:46:10.050044 kernel: ftrace: allocated 135 pages with 4 groups
Jul 2 08:46:10.050056 kernel: rcu: Hierarchical RCU implementation.
Jul 2 08:46:10.050064 kernel: rcu: RCU event tracing is enabled.
Jul 2 08:46:10.050073 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 2 08:46:10.050082 kernel: Rude variant of Tasks RCU enabled.
Jul 2 08:46:10.050090 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 08:46:10.050098 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 08:46:10.050106 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 2 08:46:10.050114 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 2 08:46:10.050122 kernel: Console: colour VGA+ 80x25
Jul 2 08:46:10.050132 kernel: printk: console [tty0] enabled
Jul 2 08:46:10.050140 kernel: printk: console [ttyS0] enabled
Jul 2 08:46:10.050148 kernel: ACPI: Core revision 20210730
Jul 2 08:46:10.050156 kernel: APIC: Switch to symmetric I/O mode setup
Jul 2 08:46:10.050164 kernel: x2apic enabled
Jul 2 08:46:10.050172 kernel: Switched APIC routing to physical x2apic.
Jul 2 08:46:10.050181 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 2 08:46:10.050189 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 2 08:46:10.050197 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Jul 2 08:46:10.050205 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jul 2 08:46:10.050215 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jul 2 08:46:10.050223 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 2 08:46:10.050232 kernel: Spectre V2 : Mitigation: Retpolines
Jul 2 08:46:10.050240 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jul 2 08:46:10.050248 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jul 2 08:46:10.050256 kernel: Speculative Store Bypass: Vulnerable
Jul 2 08:46:10.050264 kernel: x86/fpu: x87 FPU will use FXSAVE
Jul 2 08:46:10.050272 kernel: Freeing SMP alternatives memory: 32K
Jul 2 08:46:10.050280 kernel: pid_max: default: 32768 minimum: 301
Jul 2 08:46:10.050289 kernel: LSM: Security Framework initializing
Jul 2 08:46:10.050297 kernel: SELinux: Initializing.
Jul 2 08:46:10.050306 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 2 08:46:10.050314 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 2 08:46:10.050322 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Jul 2 08:46:10.050330 kernel: Performance Events: AMD PMU driver.
Jul 2 08:46:10.050338 kernel: ... version: 0
Jul 2 08:46:10.050346 kernel: ... bit width: 48
Jul 2 08:46:10.050354 kernel: ... generic registers: 4
Jul 2 08:46:10.050371 kernel: ... value mask: 0000ffffffffffff
Jul 2 08:46:10.050379 kernel: ... max period: 00007fffffffffff
Jul 2 08:46:10.050389 kernel: ... fixed-purpose events: 0
Jul 2 08:46:10.050398 kernel: ... event mask: 000000000000000f
Jul 2 08:46:10.050406 kernel: signal: max sigframe size: 1440
Jul 2 08:46:10.050414 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 08:46:10.050423 kernel: smp: Bringing up secondary CPUs ...
Jul 2 08:46:10.050431 kernel: x86: Booting SMP configuration:
Jul 2 08:46:10.050442 kernel: .... node #0, CPUs: #1
Jul 2 08:46:10.050451 kernel: kvm-clock: cpu 1, msr 7c192041, secondary cpu clock
Jul 2 08:46:10.050459 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
Jul 2 08:46:10.050468 kernel: smp: Brought up 1 node, 2 CPUs
Jul 2 08:46:10.050477 kernel: smpboot: Max logical packages: 2
Jul 2 08:46:10.050486 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Jul 2 08:46:10.050495 kernel: devtmpfs: initialized
Jul 2 08:46:10.050503 kernel: x86/mm: Memory block size: 128MB
Jul 2 08:46:10.050511 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 08:46:10.050522 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 2 08:46:10.050530 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 08:46:10.050539 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 08:46:10.050547 kernel: audit: initializing netlink subsys (disabled)
Jul 2 08:46:10.050556 kernel: audit: type=2000 audit(1719909969.759:1): state=initialized audit_enabled=0 res=1
Jul 2 08:46:10.050564 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 08:46:10.050573 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 2 08:46:10.050581 kernel: cpuidle: using governor menu
Jul 2 08:46:10.050589 kernel: ACPI: bus type PCI registered
Jul 2 08:46:10.050599 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 08:46:10.050608 kernel: dca service started, version 1.12.1
Jul 2 08:46:10.050616 kernel: PCI: Using configuration type 1 for base access
Jul 2 08:46:10.050625 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 2 08:46:10.050633 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 08:46:10.050642 kernel: ACPI: Added _OSI(Module Device)
Jul 2 08:46:10.050650 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 08:46:10.050659 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 08:46:10.050667 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 08:46:10.050677 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 2 08:46:10.050686 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 2 08:46:10.050694 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 2 08:46:10.050702 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 08:46:10.050711 kernel: ACPI: Interpreter enabled
Jul 2 08:46:10.050719 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 2 08:46:10.050728 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 2 08:46:10.050736 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 2 08:46:10.050745 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 2 08:46:10.050755 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 2 08:46:10.050909 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 08:46:10.051002 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Jul 2 08:46:10.051016 kernel: acpiphp: Slot [3] registered
Jul 2 08:46:10.051024 kernel: acpiphp: Slot [4] registered
Jul 2 08:46:10.051033 kernel: acpiphp: Slot [5] registered
Jul 2 08:46:10.051041 kernel: acpiphp: Slot [6] registered
Jul 2 08:46:10.051053 kernel: acpiphp: Slot [7] registered
Jul 2 08:46:10.051062 kernel: acpiphp: Slot [8] registered
Jul 2 08:46:10.051070 kernel: acpiphp: Slot [9] registered
Jul 2 08:46:10.051079 kernel: acpiphp: Slot [10] registered
Jul 2 08:46:10.051087 kernel: acpiphp: Slot [11] registered
Jul 2 08:46:10.051095 kernel: acpiphp: Slot [12] registered
Jul 2 08:46:10.051104 kernel: acpiphp: Slot [13] registered
Jul 2 08:46:10.051112 kernel: acpiphp: Slot [14] registered
Jul 2 08:46:10.051120 kernel: acpiphp: Slot [15] registered
Jul 2 08:46:10.051128 kernel: acpiphp: Slot [16] registered
Jul 2 08:46:10.051139 kernel: acpiphp: Slot [17] registered
Jul 2 08:46:10.051147 kernel: acpiphp: Slot [18] registered
Jul 2 08:46:10.051155 kernel: acpiphp: Slot [19] registered
Jul 2 08:46:10.051163 kernel: acpiphp: Slot [20] registered
Jul 2 08:46:10.051172 kernel: acpiphp: Slot [21] registered
Jul 2 08:46:10.051180 kernel: acpiphp: Slot [22] registered
Jul 2 08:46:10.051188 kernel: acpiphp: Slot [23] registered
Jul 2 08:46:10.051197 kernel: acpiphp: Slot [24] registered
Jul 2 08:46:10.051205 kernel: acpiphp: Slot [25] registered
Jul 2 08:46:10.051216 kernel: acpiphp: Slot [26] registered
Jul 2 08:46:10.051225 kernel: acpiphp: Slot [27] registered
Jul 2 08:46:10.051233 kernel: acpiphp: Slot [28] registered
Jul 2 08:46:10.051241 kernel: acpiphp: Slot [29] registered
Jul 2 08:46:10.051249 kernel: acpiphp: Slot [30] registered
Jul 2 08:46:10.051257 kernel: acpiphp: Slot [31] registered
Jul 2 08:46:10.051264 kernel: PCI host bridge to bus 0000:00
Jul 2 08:46:10.051366 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 2 08:46:10.051442 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 2 08:46:10.051518 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 2 08:46:10.051589 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jul 2 08:46:10.051659 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jul 2 08:46:10.051740 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 2 08:46:10.051837 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jul 2 08:46:10.056075 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jul 2 08:46:10.056197 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jul 2 08:46:10.056289 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Jul 2 08:46:10.056379 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jul 2 08:46:10.056466 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jul 2 08:46:10.056562 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jul 2 08:46:10.056651 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jul 2 08:46:10.056750 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jul 2 08:46:10.056844 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jul 2 08:46:10.056962 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jul 2 08:46:10.057062 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jul 2 08:46:10.057150 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jul 2 08:46:10.057240 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jul 2 08:46:10.057326 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Jul 2 08:46:10.057413 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Jul 2 08:46:10.057495 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 2 08:46:10.057591 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jul 2 08:46:10.057675 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Jul 2 08:46:10.057757 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Jul 2 08:46:10.057837 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jul 2 08:46:10.057951 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Jul 2 08:46:10.058047 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jul 2 08:46:10.058131 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jul 2 08:46:10.058212 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Jul 2 08:46:10.058292 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jul 2 08:46:10.058382 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Jul 2 08:46:10.058465 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Jul 2 08:46:10.058545 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jul 2 08:46:10.058638 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Jul 2 08:46:10.058721 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Jul 2 08:46:10.058801 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jul 2 08:46:10.058813 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 2 08:46:10.058821 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 2 08:46:10.058830 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 2 08:46:10.058838 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 2 08:46:10.058846 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 2 08:46:10.058858 kernel: iommu: Default domain type: Translated
Jul 2 08:46:10.058927 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 2 08:46:10.059013 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jul 2 08:46:10.059094 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 2 08:46:10.059173 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jul 2 08:46:10.059185 kernel: vgaarb: loaded
Jul 2 08:46:10.059193 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 2 08:46:10.059201 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 2 08:46:10.059209 kernel: PTP clock support registered
Jul 2 08:46:10.059220 kernel: PCI: Using ACPI for IRQ routing
Jul 2 08:46:10.059228 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 2 08:46:10.059236 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 2 08:46:10.059244 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Jul 2 08:46:10.059252 kernel: clocksource: Switched to clocksource kvm-clock
Jul 2 08:46:10.059260 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 08:46:10.059268 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 08:46:10.059276 kernel: pnp: PnP ACPI init
Jul 2 08:46:10.059358 kernel: pnp 00:03: [dma 2]
Jul 2 08:46:10.059375 kernel: pnp: PnP ACPI: found 5 devices
Jul 2 08:46:10.059383 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 2 08:46:10.059391 kernel: NET: Registered PF_INET protocol family
Jul 2 08:46:10.059399 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 08:46:10.059408 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jul 2 08:46:10.059416 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 08:46:10.059424 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 2 08:46:10.059432 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Jul 2 08:46:10.059444 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jul 2 08:46:10.059452 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 2 08:46:10.059460 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 2 08:46:10.059468 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 08:46:10.059476 kernel: NET: Registered PF_XDP protocol family
Jul 2 08:46:10.059547 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 2 08:46:10.059621 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 2 08:46:10.059690 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 2 08:46:10.059774 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jul 2 08:46:10.059851 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jul 2 08:46:10.061000 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jul 2 08:46:10.061092 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 2 08:46:10.061178 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Jul 2 08:46:10.061191 kernel: PCI: CLS 0 bytes, default 64
Jul 2 08:46:10.061201 kernel: Initialise system trusted keyrings
Jul 2 08:46:10.061210 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jul 2 08:46:10.061222 kernel: Key type asymmetric registered
Jul 2 08:46:10.061231 kernel: Asymmetric key parser 'x509' registered
Jul 2 08:46:10.061240 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 2 08:46:10.061249 kernel: io scheduler mq-deadline registered
Jul 2 08:46:10.061259 kernel: io scheduler kyber registered
Jul 2 08:46:10.061267 kernel: io scheduler bfq registered
Jul 2 08:46:10.061275 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 2 08:46:10.061283 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jul 2 08:46:10.061291 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jul 2 08:46:10.061300 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jul 2 08:46:10.061309 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jul 2 08:46:10.061318 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 08:46:10.061326 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 2 08:46:10.061333 kernel: random: crng init done
Jul 2 08:46:10.061341 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 2 08:46:10.061349 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 2 08:46:10.061357 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 2 08:46:10.061451 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 2 08:46:10.061468 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 2 08:46:10.061540 kernel: rtc_cmos 00:04: registered as rtc0
Jul 2 08:46:10.061611 kernel: rtc_cmos 00:04: setting system clock to 2024-07-02T08:46:09 UTC (1719909969)
Jul 2 08:46:10.061682 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jul 2 08:46:10.061694 kernel: NET: Registered PF_INET6 protocol family
Jul 2 08:46:10.061702 kernel: Segment Routing with IPv6
Jul 2 08:46:10.061710 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 08:46:10.061718 kernel: NET: Registered PF_PACKET protocol family
Jul 2 08:46:10.061726 kernel: Key type dns_resolver registered
Jul 2 08:46:10.061744 kernel: IPI shorthand broadcast: enabled
Jul 2 08:46:10.061752 kernel: sched_clock: Marking stable (666424721, 120860486)->(823921795, -36636588)
Jul 2 08:46:10.061760 kernel: registered taskstats version 1
Jul 2 08:46:10.061768 kernel: Loading compiled-in X.509 certificates
Jul 2 08:46:10.061776 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.161-flatcar: a1ce693884775675566f1ed116e36d15950b9a42'
Jul 2 08:46:10.061784 kernel: Key type .fscrypt registered
Jul 2 08:46:10.061792 kernel: Key type fscrypt-provisioning registered
Jul 2 08:46:10.061800 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 08:46:10.061810 kernel: ima: Allocated hash algorithm: sha1
Jul 2 08:46:10.061818 kernel: ima: No architecture policies found
Jul 2 08:46:10.061826 kernel: clk: Disabling unused clocks
Jul 2 08:46:10.061834 kernel: Freeing unused kernel image (initmem) memory: 47444K
Jul 2 08:46:10.061842 kernel: Write protecting the kernel read-only data: 28672k
Jul 2 08:46:10.061850 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Jul 2 08:46:10.061858 kernel: Freeing unused kernel image (rodata/data gap) memory: 624K
Jul 2 08:46:10.062920 kernel: Run /init as init process
Jul 2 08:46:10.062930 kernel: with arguments:
Jul 2 08:46:10.062942 kernel: /init
Jul 2 08:46:10.062950 kernel: with environment:
Jul 2 08:46:10.062958 kernel: HOME=/
Jul 2 08:46:10.062967 kernel: TERM=linux
Jul 2 08:46:10.062975 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 08:46:10.062987 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 2 08:46:10.062999 systemd[1]: Detected virtualization kvm.
Jul 2 08:46:10.063009 systemd[1]: Detected architecture x86-64.
Jul 2 08:46:10.063022 systemd[1]: Running in initrd.
Jul 2 08:46:10.063031 systemd[1]: No hostname configured, using default hostname.
Jul 2 08:46:10.063040 systemd[1]: Hostname set to .
Jul 2 08:46:10.063050 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 08:46:10.063060 systemd[1]: Queued start job for default target initrd.target.
Jul 2 08:46:10.063069 systemd[1]: Started systemd-ask-password-console.path.
Jul 2 08:46:10.063078 systemd[1]: Reached target cryptsetup.target.
Jul 2 08:46:10.063087 systemd[1]: Reached target paths.target.
Jul 2 08:46:10.063098 systemd[1]: Reached target slices.target.
Jul 2 08:46:10.063108 systemd[1]: Reached target swap.target.
Jul 2 08:46:10.063117 systemd[1]: Reached target timers.target.
Jul 2 08:46:10.063127 systemd[1]: Listening on iscsid.socket.
Jul 2 08:46:10.063136 systemd[1]: Listening on iscsiuio.socket.
Jul 2 08:46:10.063145 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 2 08:46:10.063154 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 2 08:46:10.063164 systemd[1]: Listening on systemd-journald.socket.
Jul 2 08:46:10.063175 systemd[1]: Listening on systemd-networkd.socket.
Jul 2 08:46:10.063184 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 2 08:46:10.063193 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 2 08:46:10.063202 systemd[1]: Reached target sockets.target.
Jul 2 08:46:10.063223 systemd[1]: Starting kmod-static-nodes.service...
Jul 2 08:46:10.063235 systemd[1]: Finished network-cleanup.service.
Jul 2 08:46:10.063246 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 08:46:10.063256 systemd[1]: Starting systemd-journald.service...
Jul 2 08:46:10.063265 systemd[1]: Starting systemd-modules-load.service...
Jul 2 08:46:10.063275 systemd[1]: Starting systemd-resolved.service...
Jul 2 08:46:10.063284 systemd[1]: Starting systemd-vconsole-setup.service...
Jul 2 08:46:10.063294 systemd[1]: Finished kmod-static-nodes.service.
Jul 2 08:46:10.063303 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 08:46:10.063317 systemd-journald[185]: Journal started
Jul 2 08:46:10.063369 systemd-journald[185]: Runtime Journal (/run/log/journal/a7fe33ca53fd445991cc01a4333ba8fe) is 4.9M, max 39.5M, 34.5M free.
Jul 2 08:46:10.028890 systemd-modules-load[186]: Inserted module 'overlay'
Jul 2 08:46:10.073436 systemd-resolved[187]: Positive Trust Anchors:
Jul 2 08:46:10.090447 systemd[1]: Started systemd-journald.service.
Jul 2 08:46:10.090469 kernel: audit: type=1130 audit(1719909970.082:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:10.090483 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 08:46:10.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:10.073447 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 08:46:10.095367 kernel: audit: type=1130 audit(1719909970.090:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:10.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:10.073485 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 2 08:46:10.102297 kernel: Bridge firewalling registered
Jul 2 08:46:10.102328 kernel: audit: type=1130 audit(1719909970.094:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:10.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:10.076163 systemd-resolved[187]: Defaulting to hostname 'linux'.
Jul 2 08:46:10.106708 kernel: audit: type=1130 audit(1719909970.101:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:10.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:10.091053 systemd[1]: Started systemd-resolved.service.
Jul 2 08:46:10.096044 systemd[1]: Finished systemd-vconsole-setup.service.
Jul 2 08:46:10.097387 systemd-modules-load[186]: Inserted module 'br_netfilter'
Jul 2 08:46:10.102933 systemd[1]: Reached target nss-lookup.target.
Jul 2 08:46:10.107952 systemd[1]: Starting dracut-cmdline-ask.service...
Jul 2 08:46:10.109543 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 2 08:46:10.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:10.117850 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 2 08:46:10.123352 kernel: audit: type=1130 audit(1719909970.117:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:10.137450 systemd[1]: Finished dracut-cmdline-ask.service.
Jul 2 08:46:10.138416 kernel: SCSI subsystem initialized Jul 2 08:46:10.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:10.138718 systemd[1]: Starting dracut-cmdline.service... Jul 2 08:46:10.142972 kernel: audit: type=1130 audit(1719909970.137:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:10.148334 dracut-cmdline[201]: dracut-dracut-053 Jul 2 08:46:10.149997 dracut-cmdline[201]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 08:46:10.160733 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 2 08:46:10.160784 kernel: device-mapper: uevent: version 1.0.3 Jul 2 08:46:10.162884 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 2 08:46:10.166096 systemd-modules-load[186]: Inserted module 'dm_multipath' Jul 2 08:46:10.167073 systemd[1]: Finished systemd-modules-load.service. Jul 2 08:46:10.172403 kernel: audit: type=1130 audit(1719909970.167:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:46:10.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:10.171859 systemd[1]: Starting systemd-sysctl.service... Jul 2 08:46:10.179075 systemd[1]: Finished systemd-sysctl.service. Jul 2 08:46:10.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:10.183906 kernel: audit: type=1130 audit(1719909970.179:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:10.215919 kernel: Loading iSCSI transport class v2.0-870. Jul 2 08:46:10.236913 kernel: iscsi: registered transport (tcp) Jul 2 08:46:10.264962 kernel: iscsi: registered transport (qla4xxx) Jul 2 08:46:10.264998 kernel: QLogic iSCSI HBA Driver Jul 2 08:46:10.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:10.304259 systemd[1]: Finished dracut-cmdline.service. Jul 2 08:46:10.309211 kernel: audit: type=1130 audit(1719909970.303:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:10.305757 systemd[1]: Starting dracut-pre-udev.service... 
Jul 2 08:46:10.361952 kernel: raid6: sse2x4 gen() 12152 MB/s Jul 2 08:46:10.378920 kernel: raid6: sse2x4 xor() 4770 MB/s Jul 2 08:46:10.395918 kernel: raid6: sse2x2 gen() 13457 MB/s Jul 2 08:46:10.412963 kernel: raid6: sse2x2 xor() 8411 MB/s Jul 2 08:46:10.429958 kernel: raid6: sse2x1 gen() 10565 MB/s Jul 2 08:46:10.447682 kernel: raid6: sse2x1 xor() 6670 MB/s Jul 2 08:46:10.447776 kernel: raid6: using algorithm sse2x2 gen() 13457 MB/s Jul 2 08:46:10.447806 kernel: raid6: .... xor() 8411 MB/s, rmw enabled Jul 2 08:46:10.448536 kernel: raid6: using ssse3x2 recovery algorithm Jul 2 08:46:10.463936 kernel: xor: measuring software checksum speed Jul 2 08:46:10.466771 kernel: prefetch64-sse : 17259 MB/sec Jul 2 08:46:10.466817 kernel: generic_sse : 15633 MB/sec Jul 2 08:46:10.466843 kernel: xor: using function: prefetch64-sse (17259 MB/sec) Jul 2 08:46:10.587925 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 2 08:46:10.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:10.605188 systemd[1]: Finished dracut-pre-udev.service. Jul 2 08:46:10.606000 audit: BPF prog-id=7 op=LOAD Jul 2 08:46:10.606000 audit: BPF prog-id=8 op=LOAD Jul 2 08:46:10.609023 systemd[1]: Starting systemd-udevd.service... Jul 2 08:46:10.632803 systemd-udevd[384]: Using default interface naming scheme 'v252'. Jul 2 08:46:10.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:10.637835 systemd[1]: Started systemd-udevd.service. Jul 2 08:46:10.644747 systemd[1]: Starting dracut-pre-trigger.service... Jul 2 08:46:10.663917 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation Jul 2 08:46:10.718723 systemd[1]: Finished dracut-pre-trigger.service. 
Jul 2 08:46:10.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:10.721707 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 08:46:10.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:10.791648 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 08:46:10.881889 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB) Jul 2 08:46:10.898892 kernel: libata version 3.00 loaded. Jul 2 08:46:10.901908 kernel: ata_piix 0000:00:01.1: version 2.13 Jul 2 08:46:10.902886 kernel: scsi host0: ata_piix Jul 2 08:46:10.903902 kernel: scsi host1: ata_piix Jul 2 08:46:10.904054 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Jul 2 08:46:10.904068 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Jul 2 08:46:10.951718 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 08:46:10.951815 kernel: GPT:17805311 != 41943039 Jul 2 08:46:10.951836 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 08:46:10.951855 kernel: GPT:17805311 != 41943039 Jul 2 08:46:10.952415 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 08:46:10.953541 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 08:46:11.346990 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (445) Jul 2 08:46:11.370596 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 08:46:11.489276 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 2 08:46:11.501614 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. 
Jul 2 08:46:11.509529 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 2 08:46:11.510919 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 2 08:46:11.516115 systemd[1]: Starting disk-uuid.service... Jul 2 08:46:11.534020 disk-uuid[461]: Primary Header is updated. Jul 2 08:46:11.534020 disk-uuid[461]: Secondary Entries is updated. Jul 2 08:46:11.534020 disk-uuid[461]: Secondary Header is updated. Jul 2 08:46:11.545961 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 08:46:11.550932 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 08:46:12.566920 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 08:46:12.567401 disk-uuid[462]: The operation has completed successfully. Jul 2 08:46:12.622294 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 08:46:12.623239 systemd[1]: Finished disk-uuid.service. Jul 2 08:46:12.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:12.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:12.644281 systemd[1]: Starting verity-setup.service... Jul 2 08:46:12.676920 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Jul 2 08:46:12.772137 systemd[1]: Found device dev-mapper-usr.device. Jul 2 08:46:12.774828 systemd[1]: Mounting sysusr-usr.mount... Jul 2 08:46:12.777678 systemd[1]: Finished verity-setup.service. Jul 2 08:46:12.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:12.914957 kernel: EXT4-fs (dm-0): mounted filesystem without journal. 
Opts: norecovery. Quota mode: none. Jul 2 08:46:12.915938 systemd[1]: Mounted sysusr-usr.mount. Jul 2 08:46:12.916586 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 2 08:46:12.917402 systemd[1]: Starting ignition-setup.service... Jul 2 08:46:12.922693 systemd[1]: Starting parse-ip-for-networkd.service... Jul 2 08:46:12.949753 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 08:46:12.949804 kernel: BTRFS info (device vda6): using free space tree Jul 2 08:46:12.949816 kernel: BTRFS info (device vda6): has skinny extents Jul 2 08:46:12.971334 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 2 08:46:12.988808 systemd[1]: Finished ignition-setup.service. Jul 2 08:46:12.990280 systemd[1]: Starting ignition-fetch-offline.service... Jul 2 08:46:12.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:13.062090 systemd[1]: Finished parse-ip-for-networkd.service. Jul 2 08:46:13.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:13.062000 audit: BPF prog-id=9 op=LOAD Jul 2 08:46:13.065485 systemd[1]: Starting systemd-networkd.service... Jul 2 08:46:13.092624 systemd-networkd[632]: lo: Link UP Jul 2 08:46:13.093392 systemd-networkd[632]: lo: Gained carrier Jul 2 08:46:13.094448 systemd-networkd[632]: Enumeration completed Jul 2 08:46:13.095322 systemd-networkd[632]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 08:46:13.095542 systemd[1]: Started systemd-networkd.service. 
Jul 2 08:46:13.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:13.097642 systemd-networkd[632]: eth0: Link UP Jul 2 08:46:13.097646 systemd-networkd[632]: eth0: Gained carrier Jul 2 08:46:13.099397 systemd[1]: Reached target network.target. Jul 2 08:46:13.101975 systemd[1]: Starting iscsiuio.service... Jul 2 08:46:13.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:13.108070 systemd[1]: Started iscsiuio.service. Jul 2 08:46:13.109407 systemd[1]: Starting iscsid.service... Jul 2 08:46:13.112358 iscsid[642]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 2 08:46:13.112358 iscsid[642]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jul 2 08:46:13.112358 iscsid[642]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 2 08:46:13.112358 iscsid[642]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 2 08:46:13.112358 iscsid[642]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 2 08:46:13.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:46:13.120768 iscsid[642]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 2 08:46:13.113804 systemd[1]: Started iscsid.service. Jul 2 08:46:13.113991 systemd-networkd[632]: eth0: DHCPv4 address 172.24.4.86/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jul 2 08:46:13.114963 systemd[1]: Starting dracut-initqueue.service... Jul 2 08:46:13.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:13.135148 systemd[1]: Finished dracut-initqueue.service. Jul 2 08:46:13.135676 systemd[1]: Reached target remote-fs-pre.target. Jul 2 08:46:13.136125 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 08:46:13.136727 systemd[1]: Reached target remote-fs.target. Jul 2 08:46:13.139672 systemd[1]: Starting dracut-pre-mount.service... Jul 2 08:46:13.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:13.156114 systemd[1]: Finished dracut-pre-mount.service. 
Jul 2 08:46:13.289346 ignition[564]: Ignition 2.14.0 Jul 2 08:46:13.289392 ignition[564]: Stage: fetch-offline Jul 2 08:46:13.289505 ignition[564]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 08:46:13.289550 ignition[564]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 2 08:46:13.293571 ignition[564]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 08:46:13.293808 ignition[564]: parsed url from cmdline: "" Jul 2 08:46:13.293817 ignition[564]: no config URL provided Jul 2 08:46:13.293831 ignition[564]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 08:46:13.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:13.297326 systemd[1]: Finished ignition-fetch-offline.service. Jul 2 08:46:13.293851 ignition[564]: no config at "/usr/lib/ignition/user.ign" Jul 2 08:46:13.301217 systemd[1]: Starting ignition-fetch.service... 
Jul 2 08:46:13.293900 ignition[564]: failed to fetch config: resource requires networking Jul 2 08:46:13.294970 ignition[564]: Ignition finished successfully Jul 2 08:46:13.320388 ignition[656]: Ignition 2.14.0 Jul 2 08:46:13.320415 ignition[656]: Stage: fetch Jul 2 08:46:13.320648 ignition[656]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 08:46:13.320690 ignition[656]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 2 08:46:13.323461 ignition[656]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 08:46:13.323686 ignition[656]: parsed url from cmdline: "" Jul 2 08:46:13.323696 ignition[656]: no config URL provided Jul 2 08:46:13.323710 ignition[656]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 08:46:13.323762 ignition[656]: no config at "/usr/lib/ignition/user.ign" Jul 2 08:46:13.326821 ignition[656]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jul 2 08:46:13.327189 ignition[656]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jul 2 08:46:13.327260 ignition[656]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jul 2 08:46:13.530355 ignition[656]: GET result: OK Jul 2 08:46:13.530504 ignition[656]: parsing config with SHA512: ce99d6f05d61919624a08b4f6cfac223e5328ee0cd276b73297b2294127bccb9d03dbb409025dfa796dc4d2502971087a0e7d762f6afb7e4bc0529dece971c58 Jul 2 08:46:13.549648 unknown[656]: fetched base config from "system" Jul 2 08:46:13.549678 unknown[656]: fetched base config from "system" Jul 2 08:46:13.550859 ignition[656]: fetch: fetch complete Jul 2 08:46:13.549693 unknown[656]: fetched user config from "openstack" Jul 2 08:46:13.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:46:13.550906 ignition[656]: fetch: fetch passed Jul 2 08:46:13.554154 systemd[1]: Finished ignition-fetch.service. Jul 2 08:46:13.550987 ignition[656]: Ignition finished successfully Jul 2 08:46:13.557500 systemd[1]: Starting ignition-kargs.service... Jul 2 08:46:13.578036 ignition[662]: Ignition 2.14.0 Jul 2 08:46:13.578063 ignition[662]: Stage: kargs Jul 2 08:46:13.578302 ignition[662]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 08:46:13.578346 ignition[662]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 2 08:46:13.580619 ignition[662]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 08:46:13.583609 ignition[662]: kargs: kargs passed Jul 2 08:46:13.592928 systemd[1]: Finished ignition-kargs.service. Jul 2 08:46:13.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:13.583748 ignition[662]: Ignition finished successfully Jul 2 08:46:13.596282 systemd[1]: Starting ignition-disks.service... 
Jul 2 08:46:13.616159 ignition[668]: Ignition 2.14.0 Jul 2 08:46:13.616186 ignition[668]: Stage: disks Jul 2 08:46:13.616426 ignition[668]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 08:46:13.616468 ignition[668]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 2 08:46:13.618727 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 08:46:13.622684 ignition[668]: disks: disks passed Jul 2 08:46:13.622801 ignition[668]: Ignition finished successfully Jul 2 08:46:13.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:13.624641 systemd[1]: Finished ignition-disks.service. Jul 2 08:46:13.626034 systemd[1]: Reached target initrd-root-device.target. Jul 2 08:46:13.628128 systemd[1]: Reached target local-fs-pre.target. Jul 2 08:46:13.630308 systemd[1]: Reached target local-fs.target. Jul 2 08:46:13.632520 systemd[1]: Reached target sysinit.target. Jul 2 08:46:13.634683 systemd[1]: Reached target basic.target. Jul 2 08:46:13.638675 systemd[1]: Starting systemd-fsck-root.service... Jul 2 08:46:13.670104 systemd-fsck[675]: ROOT: clean, 614/1628000 files, 124057/1617920 blocks Jul 2 08:46:13.683033 systemd[1]: Finished systemd-fsck-root.service. Jul 2 08:46:13.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:13.686006 systemd[1]: Mounting sysroot.mount... Jul 2 08:46:13.705929 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 2 08:46:13.707447 systemd[1]: Mounted sysroot.mount. 
Jul 2 08:46:13.709841 systemd[1]: Reached target initrd-root-fs.target. Jul 2 08:46:13.714639 systemd[1]: Mounting sysroot-usr.mount... Jul 2 08:46:13.716674 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 2 08:46:13.718232 systemd[1]: Starting flatcar-openstack-hostname.service... Jul 2 08:46:13.723605 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 08:46:13.723670 systemd[1]: Reached target ignition-diskful.target. Jul 2 08:46:13.732003 systemd[1]: Mounted sysroot-usr.mount. Jul 2 08:46:13.736682 systemd[1]: Starting initrd-setup-root.service... Jul 2 08:46:13.748859 initrd-setup-root[686]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 08:46:13.798345 initrd-setup-root[694]: cut: /sysroot/etc/group: No such file or directory Jul 2 08:46:13.815251 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 2 08:46:13.818516 initrd-setup-root[702]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 08:46:13.862348 initrd-setup-root[711]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 08:46:14.063944 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (705) Jul 2 08:46:14.104102 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 08:46:14.104221 kernel: BTRFS info (device vda6): using free space tree Jul 2 08:46:14.104250 kernel: BTRFS info (device vda6): has skinny extents Jul 2 08:46:14.357301 systemd-networkd[632]: eth0: Gained IPv6LL Jul 2 08:46:14.413307 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 2 08:46:14.739108 systemd[1]: Finished initrd-setup-root.service. 
Jul 2 08:46:14.757421 kernel: kauditd_printk_skb: 22 callbacks suppressed Jul 2 08:46:14.757498 kernel: audit: type=1130 audit(1719909974.739:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:14.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:14.743159 systemd[1]: Starting ignition-mount.service... Jul 2 08:46:14.760607 systemd[1]: Starting sysroot-boot.service... Jul 2 08:46:14.771023 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Jul 2 08:46:14.771268 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Jul 2 08:46:14.814417 ignition[749]: INFO : Ignition 2.14.0 Jul 2 08:46:14.814417 ignition[749]: INFO : Stage: mount Jul 2 08:46:14.815589 ignition[749]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 08:46:14.815589 ignition[749]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 2 08:46:14.815589 ignition[749]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 08:46:14.819820 ignition[749]: INFO : mount: mount passed Jul 2 08:46:14.820355 ignition[749]: INFO : Ignition finished successfully Jul 2 08:46:14.821818 systemd[1]: Finished ignition-mount.service. Jul 2 08:46:14.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:46:14.827893 kernel: audit: type=1130 audit(1719909974.821:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:14.832844 systemd[1]: Finished sysroot-boot.service. Jul 2 08:46:14.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:14.839938 kernel: audit: type=1130 audit(1719909974.833:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:14.852932 coreos-metadata[681]: Jul 02 08:46:14.852 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jul 2 08:46:14.870603 coreos-metadata[681]: Jul 02 08:46:14.870 INFO Fetch successful Jul 2 08:46:14.870603 coreos-metadata[681]: Jul 02 08:46:14.870 INFO wrote hostname ci-3510-3-5-a-cacadfe6a6.novalocal to /sysroot/etc/hostname Jul 2 08:46:14.875216 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jul 2 08:46:14.875390 systemd[1]: Finished flatcar-openstack-hostname.service. Jul 2 08:46:14.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:14.878962 systemd[1]: Starting ignition-files.service... Jul 2 08:46:14.891630 kernel: audit: type=1130 audit(1719909974.876:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:46:14.891670 kernel: audit: type=1131 audit(1719909974.876:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:14.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:14.898672 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 2 08:46:14.912997 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (759) Jul 2 08:46:14.918461 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 08:46:14.918561 kernel: BTRFS info (device vda6): using free space tree Jul 2 08:46:14.918587 kernel: BTRFS info (device vda6): has skinny extents Jul 2 08:46:14.934919 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 2 08:46:14.957909 ignition[778]: INFO : Ignition 2.14.0 Jul 2 08:46:14.957909 ignition[778]: INFO : Stage: files Jul 2 08:46:14.960559 ignition[778]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 08:46:14.960559 ignition[778]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 2 08:46:14.960559 ignition[778]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 08:46:14.967601 ignition[778]: DEBUG : files: compiled without relabeling support, skipping Jul 2 08:46:14.967601 ignition[778]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 08:46:14.967601 ignition[778]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 08:46:14.973551 ignition[778]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 
08:46:14.973551 ignition[778]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 08:46:14.980638 ignition[778]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 08:46:14.980638 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 08:46:14.980638 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 2 08:46:14.975347 unknown[778]: wrote ssh authorized keys file for user: core
Jul 2 08:46:15.075207 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 2 08:46:15.438026 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 08:46:15.439633 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 08:46:15.440579 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 2 08:46:15.849271 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 2 08:46:16.333035 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 08:46:16.333035 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 08:46:16.337191 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 08:46:16.337191 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 08:46:16.337191 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 08:46:16.337191 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 08:46:16.337191 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 08:46:16.337191 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 08:46:16.337191 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 08:46:16.337191 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 08:46:16.337191 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 08:46:16.337191 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul 2 08:46:16.337191 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul 2 08:46:16.337191 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul 2 08:46:16.337191 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jul 2 08:46:16.812383 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 2 08:46:18.584175 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul 2 08:46:18.584175 ignition[778]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service"
Jul 2 08:46:18.584175 ignition[778]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service"
Jul 2 08:46:18.584175 ignition[778]: INFO : files: op(d): [started] processing unit "prepare-helm.service"
Jul 2 08:46:18.593128 ignition[778]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 08:46:18.593128 ignition[778]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 08:46:18.593128 ignition[778]: INFO : files: op(d): [finished] processing unit "prepare-helm.service"
Jul 2 08:46:18.593128 ignition[778]: INFO : files: op(f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Jul 2 08:46:18.593128 ignition[778]: INFO : files: op(f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Jul 2 08:46:18.593128 ignition[778]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 08:46:18.593128 ignition[778]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 08:46:18.593128 ignition[778]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 08:46:18.593128 ignition[778]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 08:46:18.593128 ignition[778]: INFO : files: files passed
Jul 2 08:46:18.593128 ignition[778]: INFO : Ignition finished successfully
Jul 2 08:46:18.618387 kernel: audit: type=1130 audit(1719909978.596:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.594652 systemd[1]: Finished ignition-files.service.
Jul 2 08:46:18.597886 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Jul 2 08:46:18.609296 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Jul 2 08:46:18.611233 systemd[1]: Starting ignition-quench.service...
Jul 2 08:46:18.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.623810 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 08:46:18.633237 kernel: audit: type=1130 audit(1719909978.623:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.633286 kernel: audit: type=1131 audit(1719909978.627:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.633328 initrd-setup-root-after-ignition[804]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 08:46:18.623938 systemd[1]: Finished ignition-quench.service.
Jul 2 08:46:18.645545 kernel: audit: type=1130 audit(1719909978.634:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.633700 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Jul 2 08:46:18.635957 systemd[1]: Reached target ignition-complete.target.
Jul 2 08:46:18.646728 systemd[1]: Starting initrd-parse-etc.service...
Jul 2 08:46:18.671415 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 08:46:18.672173 systemd[1]: Finished initrd-parse-etc.service.
Jul 2 08:46:18.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.673245 systemd[1]: Reached target initrd-fs.target.
Jul 2 08:46:18.677096 systemd[1]: Reached target initrd.target.
Jul 2 08:46:18.679037 kernel: audit: type=1130 audit(1719909978.672:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.678707 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Jul 2 08:46:18.680644 systemd[1]: Starting dracut-pre-pivot.service...
Jul 2 08:46:18.698481 systemd[1]: Finished dracut-pre-pivot.service.
Jul 2 08:46:18.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.701355 systemd[1]: Starting initrd-cleanup.service...
Jul 2 08:46:18.719429 systemd[1]: Stopped target nss-lookup.target.
Jul 2 08:46:18.722047 systemd[1]: Stopped target remote-cryptsetup.target.
Jul 2 08:46:18.724733 systemd[1]: Stopped target timers.target.
Jul 2 08:46:18.727239 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 08:46:18.729008 systemd[1]: Stopped dracut-pre-pivot.service.
Jul 2 08:46:18.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.731199 systemd[1]: Stopped target initrd.target.
Jul 2 08:46:18.733241 systemd[1]: Stopped target basic.target.
Jul 2 08:46:18.734586 systemd[1]: Stopped target ignition-complete.target.
Jul 2 08:46:18.735560 systemd[1]: Stopped target ignition-diskful.target.
Jul 2 08:46:18.736825 systemd[1]: Stopped target initrd-root-device.target.
Jul 2 08:46:18.738231 systemd[1]: Stopped target remote-fs.target.
Jul 2 08:46:18.739463 systemd[1]: Stopped target remote-fs-pre.target.
Jul 2 08:46:18.740729 systemd[1]: Stopped target sysinit.target.
Jul 2 08:46:18.742126 systemd[1]: Stopped target local-fs.target.
Jul 2 08:46:18.743283 systemd[1]: Stopped target local-fs-pre.target.
Jul 2 08:46:18.744521 systemd[1]: Stopped target swap.target.
Jul 2 08:46:18.745588 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 08:46:18.745906 systemd[1]: Stopped dracut-pre-mount.service.
Jul 2 08:46:18.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.747617 systemd[1]: Stopped target cryptsetup.target.
Jul 2 08:46:18.748636 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 08:46:18.748936 systemd[1]: Stopped dracut-initqueue.service.
Jul 2 08:46:18.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.750589 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 08:46:18.750903 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Jul 2 08:46:18.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.752587 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 08:46:18.752844 systemd[1]: Stopped ignition-files.service.
Jul 2 08:46:18.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.761950 iscsid[642]: iscsid shutting down.
Jul 2 08:46:18.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.755774 systemd[1]: Stopping ignition-mount.service...
Jul 2 08:46:18.766112 ignition[817]: INFO : Ignition 2.14.0
Jul 2 08:46:18.766112 ignition[817]: INFO : Stage: umount
Jul 2 08:46:18.766112 ignition[817]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 08:46:18.766112 ignition[817]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Jul 2 08:46:18.766112 ignition[817]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 2 08:46:18.760676 systemd[1]: Stopping iscsid.service...
Jul 2 08:46:18.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.776313 ignition[817]: INFO : umount: umount passed
Jul 2 08:46:18.776313 ignition[817]: INFO : Ignition finished successfully
Jul 2 08:46:18.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.761453 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 2 08:46:18.761734 systemd[1]: Stopped kmod-static-nodes.service.
Jul 2 08:46:18.766589 systemd[1]: Stopping sysroot-boot.service...
Jul 2 08:46:18.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.773960 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 08:46:18.774329 systemd[1]: Stopped systemd-udev-trigger.service.
Jul 2 08:46:18.775462 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 08:46:18.775743 systemd[1]: Stopped dracut-pre-trigger.service.
Jul 2 08:46:18.781229 systemd[1]: iscsid.service: Deactivated successfully.
Jul 2 08:46:18.781403 systemd[1]: Stopped iscsid.service.
Jul 2 08:46:18.792983 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 2 08:46:18.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.794141 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 08:46:18.794290 systemd[1]: Stopped ignition-mount.service.
Jul 2 08:46:18.796795 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 08:46:18.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.796907 systemd[1]: Finished initrd-cleanup.service.
Jul 2 08:46:18.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.799847 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 2 08:46:18.799940 systemd[1]: Stopped sysroot-boot.service.
Jul 2 08:46:18.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.801426 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 2 08:46:18.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.801466 systemd[1]: Stopped ignition-disks.service.
Jul 2 08:46:18.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.802082 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 2 08:46:18.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.802119 systemd[1]: Stopped ignition-kargs.service.
Jul 2 08:46:18.802938 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 2 08:46:18.802974 systemd[1]: Stopped ignition-fetch.service.
Jul 2 08:46:18.803815 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 2 08:46:18.803857 systemd[1]: Stopped ignition-fetch-offline.service.
Jul 2 08:46:18.804700 systemd[1]: Stopped target paths.target.
Jul 2 08:46:18.805575 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 2 08:46:18.810897 systemd[1]: Stopped systemd-ask-password-console.path.
Jul 2 08:46:18.811379 systemd[1]: Stopped target slices.target.
Jul 2 08:46:18.812316 systemd[1]: Stopped target sockets.target.
Jul 2 08:46:18.813228 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 2 08:46:18.813259 systemd[1]: Closed iscsid.socket.
Jul 2 08:46:18.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.814064 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 2 08:46:18.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.814099 systemd[1]: Stopped ignition-setup.service.
Jul 2 08:46:18.814947 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 2 08:46:18.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.814983 systemd[1]: Stopped initrd-setup-root.service.
Jul 2 08:46:18.815793 systemd[1]: Stopping iscsiuio.service...
Jul 2 08:46:18.819455 systemd[1]: iscsiuio.service: Deactivated successfully.
Jul 2 08:46:18.819554 systemd[1]: Stopped iscsiuio.service.
Jul 2 08:46:18.820260 systemd[1]: Stopped target network.target.
Jul 2 08:46:18.821113 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 2 08:46:18.821143 systemd[1]: Closed iscsiuio.socket.
Jul 2 08:46:18.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.822037 systemd[1]: Stopping systemd-networkd.service...
Jul 2 08:46:18.823134 systemd[1]: Stopping systemd-resolved.service...
Jul 2 08:46:18.824916 systemd-networkd[632]: eth0: DHCPv6 lease lost
Jul 2 08:46:18.825767 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 08:46:18.825856 systemd[1]: Stopped systemd-networkd.service.
Jul 2 08:46:18.827885 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 2 08:46:18.831000 audit: BPF prog-id=9 op=UNLOAD
Jul 2 08:46:18.827923 systemd[1]: Closed systemd-networkd.socket.
Jul 2 08:46:18.829464 systemd[1]: Stopping network-cleanup.service...
Jul 2 08:46:18.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.834036 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 2 08:46:18.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.834118 systemd[1]: Stopped parse-ip-for-networkd.service.
Jul 2 08:46:18.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.834676 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 08:46:18.834721 systemd[1]: Stopped systemd-sysctl.service.
Jul 2 08:46:18.835998 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 2 08:46:18.836040 systemd[1]: Stopped systemd-modules-load.service.
Jul 2 08:46:18.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.836793 systemd[1]: Stopping systemd-udevd.service...
Jul 2 08:46:18.838816 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 2 08:46:18.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.839371 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 2 08:46:18.839464 systemd[1]: Stopped systemd-resolved.service.
Jul 2 08:46:18.843000 audit: BPF prog-id=6 op=UNLOAD
Jul 2 08:46:18.841963 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 2 08:46:18.842087 systemd[1]: Stopped systemd-udevd.service.
Jul 2 08:46:18.844088 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 2 08:46:18.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.844120 systemd[1]: Closed systemd-udevd-control.socket.
Jul 2 08:46:18.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.844741 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 2 08:46:18.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.844781 systemd[1]: Closed systemd-udevd-kernel.socket.
Jul 2 08:46:18.847375 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 2 08:46:18.847430 systemd[1]: Stopped dracut-pre-udev.service.
Jul 2 08:46:18.848323 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 2 08:46:18.848364 systemd[1]: Stopped dracut-cmdline.service.
Jul 2 08:46:18.849371 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 08:46:18.849411 systemd[1]: Stopped dracut-cmdline-ask.service.
Jul 2 08:46:18.850968 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Jul 2 08:46:18.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.857124 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 08:46:18.857171 systemd[1]: Stopped systemd-vconsole-setup.service.
Jul 2 08:46:18.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.857982 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 2 08:46:18.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:18.858078 systemd[1]: Stopped network-cleanup.service.
Jul 2 08:46:18.859358 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 2 08:46:18.859434 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Jul 2 08:46:18.860400 systemd[1]: Reached target initrd-switch-root.target.
Jul 2 08:46:18.861953 systemd[1]: Starting initrd-switch-root.service...
Jul 2 08:46:18.881351 systemd[1]: Switching root.
Jul 2 08:46:18.902633 systemd-journald[185]: Journal stopped
Jul 2 08:46:24.065765 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
Jul 2 08:46:24.065955 kernel: SELinux: Class mctp_socket not defined in policy.
Jul 2 08:46:24.065982 kernel: SELinux: Class anon_inode not defined in policy.
Jul 2 08:46:24.066001 kernel: SELinux: the above unknown classes and permissions will be allowed
Jul 2 08:46:24.066016 kernel: SELinux: policy capability network_peer_controls=1
Jul 2 08:46:24.066028 kernel: SELinux: policy capability open_perms=1
Jul 2 08:46:24.066060 kernel: SELinux: policy capability extended_socket_class=1
Jul 2 08:46:24.066073 kernel: SELinux: policy capability always_check_network=0
Jul 2 08:46:24.066085 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 2 08:46:24.066096 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 2 08:46:24.066108 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 2 08:46:24.066125 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 2 08:46:24.066138 systemd[1]: Successfully loaded SELinux policy in 92.242ms.
Jul 2 08:46:24.066162 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.519ms.
Jul 2 08:46:24.066190 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 2 08:46:24.066205 systemd[1]: Detected virtualization kvm.
Jul 2 08:46:24.066218 systemd[1]: Detected architecture x86-64.
Jul 2 08:46:24.066230 systemd[1]: Detected first boot.
Jul 2 08:46:24.066650 systemd[1]: Hostname set to .
Jul 2 08:46:24.066666 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 08:46:24.066682 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Jul 2 08:46:24.066696 systemd[1]: Populated /etc with preset unit settings.
Jul 2 08:46:24.066709 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 2 08:46:24.066723 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 2 08:46:24.066738 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 08:46:24.066750 kernel: kauditd_printk_skb: 53 callbacks suppressed
Jul 2 08:46:24.066764 kernel: audit: type=1334 audit(1719909983.809:89): prog-id=12 op=LOAD
Jul 2 08:46:24.066776 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 2 08:46:24.066788 kernel: audit: type=1334 audit(1719909983.809:90): prog-id=3 op=UNLOAD
Jul 2 08:46:24.066801 kernel: audit: type=1334 audit(1719909983.810:91): prog-id=13 op=LOAD
Jul 2 08:46:24.066812 kernel: audit: type=1334 audit(1719909983.811:92): prog-id=14 op=LOAD
Jul 2 08:46:24.066824 kernel: audit: type=1334 audit(1719909983.811:93): prog-id=4 op=UNLOAD
Jul 2 08:46:24.066835 kernel: audit: type=1334 audit(1719909983.811:94): prog-id=5 op=UNLOAD
Jul 2 08:46:24.066847 kernel: audit: type=1131 audit(1719909983.813:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:24.066859 systemd[1]: Stopped initrd-switch-root.service.
Jul 2 08:46:24.066910 kernel: audit: type=1334 audit(1719909983.841:96): prog-id=12 op=UNLOAD
Jul 2 08:46:24.066924 kernel: audit: type=1130 audit(1719909983.843:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:24.066937 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 2 08:46:24.066951 kernel: audit: type=1131 audit(1719909983.843:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:24.066963 systemd[1]: Created slice system-addon\x2dconfig.slice.
Jul 2 08:46:24.066977 systemd[1]: Created slice system-addon\x2drun.slice.
Jul 2 08:46:24.066992 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Jul 2 08:46:24.067006 systemd[1]: Created slice system-getty.slice.
Jul 2 08:46:24.067018 systemd[1]: Created slice system-modprobe.slice.
Jul 2 08:46:24.067031 systemd[1]: Created slice system-serial\x2dgetty.slice.
Jul 2 08:46:24.067044 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Jul 2 08:46:24.067057 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Jul 2 08:46:24.067070 systemd[1]: Created slice user.slice.
Jul 2 08:46:24.067083 systemd[1]: Started systemd-ask-password-console.path.
Jul 2 08:46:24.067098 systemd[1]: Started systemd-ask-password-wall.path.
Jul 2 08:46:24.067110 systemd[1]: Set up automount boot.automount.
Jul 2 08:46:24.067123 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Jul 2 08:46:24.067136 systemd[1]: Stopped target initrd-switch-root.target.
Jul 2 08:46:24.067149 systemd[1]: Stopped target initrd-fs.target.
Jul 2 08:46:24.067161 systemd[1]: Stopped target initrd-root-fs.target.
Jul 2 08:46:24.067174 systemd[1]: Reached target integritysetup.target.
Jul 2 08:46:24.067186 systemd[1]: Reached target remote-cryptsetup.target.
Jul 2 08:46:24.067199 systemd[1]: Reached target remote-fs.target.
Jul 2 08:46:24.067213 systemd[1]: Reached target slices.target.
Jul 2 08:46:24.067226 systemd[1]: Reached target swap.target.
Jul 2 08:46:24.067238 systemd[1]: Reached target torcx.target.
Jul 2 08:46:24.067250 systemd[1]: Reached target veritysetup.target.
Jul 2 08:46:24.067263 systemd[1]: Listening on systemd-coredump.socket.
Jul 2 08:46:24.067276 systemd[1]: Listening on systemd-initctl.socket.
Jul 2 08:46:24.067289 systemd[1]: Listening on systemd-networkd.socket.
Jul 2 08:46:24.067301 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 2 08:46:24.067313 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 2 08:46:24.067326 systemd[1]: Listening on systemd-userdbd.socket.
Jul 2 08:46:24.067340 systemd[1]: Mounting dev-hugepages.mount...
Jul 2 08:46:24.067352 systemd[1]: Mounting dev-mqueue.mount...
Jul 2 08:46:24.067366 systemd[1]: Mounting media.mount...
Jul 2 08:46:24.067378 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 08:46:24.067391 systemd[1]: Mounting sys-kernel-debug.mount...
Jul 2 08:46:24.067404 systemd[1]: Mounting sys-kernel-tracing.mount...
Jul 2 08:46:24.067454 systemd[1]: Mounting tmp.mount...
Jul 2 08:46:24.067467 systemd[1]: Starting flatcar-tmpfiles.service...
Jul 2 08:46:24.067479 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 2 08:46:24.067495 systemd[1]: Starting kmod-static-nodes.service...
Jul 2 08:46:24.067508 systemd[1]: Starting modprobe@configfs.service...
Jul 2 08:46:24.067520 systemd[1]: Starting modprobe@dm_mod.service...
Jul 2 08:46:24.067533 systemd[1]: Starting modprobe@drm.service...
Jul 2 08:46:24.067546 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 2 08:46:24.067559 systemd[1]: Starting modprobe@fuse.service...
Jul 2 08:46:24.067571 systemd[1]: Starting modprobe@loop.service...
Jul 2 08:46:24.067584 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 2 08:46:24.067597 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 2 08:46:24.067612 systemd[1]: Stopped systemd-fsck-root.service.
Jul 2 08:46:24.067625 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 2 08:46:24.067637 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 2 08:46:24.067650 systemd[1]: Stopped systemd-journald.service.
Jul 2 08:46:24.067662 kernel: fuse: init (API version 7.34)
Jul 2 08:46:24.067674 systemd[1]: Starting systemd-journald.service...
Jul 2 08:46:24.067686 systemd[1]: Starting systemd-modules-load.service...
Jul 2 08:46:24.067709 systemd[1]: Starting systemd-network-generator.service...
Jul 2 08:46:24.067723 systemd[1]: Starting systemd-remount-fs.service...
Jul 2 08:46:24.067764 systemd[1]: Starting systemd-udev-trigger.service...
Jul 2 08:46:24.067778 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 2 08:46:24.067791 systemd[1]: Stopped verity-setup.service.
Jul 2 08:46:24.067804 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 08:46:24.067816 systemd[1]: Mounted dev-hugepages.mount.
Jul 2 08:46:24.067829 systemd[1]: Mounted dev-mqueue.mount.
Jul 2 08:46:24.067841 systemd[1]: Mounted media.mount.
Jul 2 08:46:24.067854 systemd[1]: Mounted sys-kernel-debug.mount.
Jul 2 08:46:24.067882 systemd[1]: Mounted sys-kernel-tracing.mount.
Jul 2 08:46:24.067899 systemd[1]: Mounted tmp.mount.
Jul 2 08:46:24.067912 systemd[1]: Finished kmod-static-nodes.service.
Jul 2 08:46:24.067925 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 2 08:46:24.067936 kernel: loop: module loaded
Jul 2 08:46:24.067948 systemd[1]: Finished modprobe@configfs.service.
Jul 2 08:46:24.067961 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 08:46:24.067974 systemd[1]: Finished modprobe@dm_mod.service.
Jul 2 08:46:24.067986 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 08:46:24.068004 systemd-journald[929]: Journal started Jul 2 08:46:24.068070 systemd-journald[929]: Runtime Journal (/run/log/journal/a7fe33ca53fd445991cc01a4333ba8fe) is 4.9M, max 39.5M, 34.5M free. Jul 2 08:46:19.199000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 08:46:19.334000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 08:46:19.334000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 08:46:19.334000 audit: BPF prog-id=10 op=LOAD Jul 2 08:46:19.334000 audit: BPF prog-id=10 op=UNLOAD Jul 2 08:46:19.334000 audit: BPF prog-id=11 op=LOAD Jul 2 08:46:19.334000 audit: BPF prog-id=11 op=UNLOAD Jul 2 08:46:19.516000 audit[849]: AVC avc: denied { associate } for pid=849 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 2 08:46:19.516000 audit[849]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001178d2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=832 pid=849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 08:46:19.516000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 08:46:19.518000 audit[849]: AVC avc: denied { associate } for pid=849 
comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 2 08:46:19.518000 audit[849]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001179a9 a2=1ed a3=0 items=2 ppid=832 pid=849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 08:46:19.518000 audit: CWD cwd="/" Jul 2 08:46:19.518000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:19.518000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:19.518000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 08:46:23.809000 audit: BPF prog-id=12 op=LOAD Jul 2 08:46:23.809000 audit: BPF prog-id=3 op=UNLOAD Jul 2 08:46:23.810000 audit: BPF prog-id=13 op=LOAD Jul 2 08:46:23.811000 audit: BPF prog-id=14 op=LOAD Jul 2 08:46:23.811000 audit: BPF prog-id=4 op=UNLOAD Jul 2 08:46:23.811000 audit: BPF prog-id=5 op=UNLOAD Jul 2 08:46:23.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:24.071305 systemd[1]: Finished modprobe@drm.service. 
Jul 2 08:46:23.841000 audit: BPF prog-id=12 op=UNLOAD Jul 2 08:46:23.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:23.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:23.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:24.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:24.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:24.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:24.006000 audit: BPF prog-id=15 op=LOAD Jul 2 08:46:24.007000 audit: BPF prog-id=16 op=LOAD Jul 2 08:46:24.007000 audit: BPF prog-id=17 op=LOAD Jul 2 08:46:24.007000 audit: BPF prog-id=13 op=UNLOAD Jul 2 08:46:24.009000 audit: BPF prog-id=14 op=UNLOAD Jul 2 08:46:24.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:46:24.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:24.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:24.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:24.063000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 08:46:24.063000 audit[929]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffc224ebe40 a2=4000 a3=7ffc224ebedc items=0 ppid=1 pid=929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 08:46:24.063000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 2 08:46:24.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:24.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:46:19.511931 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-07-02T08:46:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 08:46:23.808453 systemd[1]: Queued start job for default target multi-user.target. Jul 2 08:46:19.512895 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-07-02T08:46:19Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 08:46:23.808465 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 2 08:46:19.512917 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-07-02T08:46:19Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 08:46:23.813671 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 2 08:46:24.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:24.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:46:19.512955 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-07-02T08:46:19Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 2 08:46:19.512967 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-07-02T08:46:19Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 2 08:46:19.512999 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-07-02T08:46:19Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 2 08:46:19.513013 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-07-02T08:46:19Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 2 08:46:19.513230 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-07-02T08:46:19Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 2 08:46:19.513270 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-07-02T08:46:19Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 08:46:19.513284 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-07-02T08:46:19Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 08:46:19.515806 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-07-02T08:46:19Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 2 08:46:19.515848 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-07-02T08:46:19Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 2 08:46:19.515888 /usr/lib/systemd/system-generators/torcx-generator[849]: 
time="2024-07-02T08:46:19Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.5: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.5 Jul 2 08:46:19.515908 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-07-02T08:46:19Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 2 08:46:19.515929 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-07-02T08:46:19Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.5: no such file or directory" path=/var/lib/torcx/store/3510.3.5 Jul 2 08:46:19.515963 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-07-02T08:46:19Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 2 08:46:24.080067 systemd[1]: Started systemd-journald.service. Jul 2 08:46:24.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:24.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:24.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:24.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:46:24.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:24.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:24.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:24.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:24.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:23.390033 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-07-02T08:46:23Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 08:46:24.074724 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 08:46:23.390945 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-07-02T08:46:23Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 08:46:24.075012 systemd[1]: Finished modprobe@efi_pstore.service. 
Jul 2 08:46:23.391781 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-07-02T08:46:23Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 08:46:24.075715 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 08:46:23.392675 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-07-02T08:46:23Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 08:46:24.075928 systemd[1]: Finished modprobe@fuse.service. Jul 2 08:46:23.392828 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-07-02T08:46:23Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 2 08:46:24.076612 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:46:23.393062 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-07-02T08:46:23Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 2 08:46:24.076805 systemd[1]: Finished modprobe@loop.service. Jul 2 08:46:24.077638 systemd[1]: Finished systemd-modules-load.service. Jul 2 08:46:24.078438 systemd[1]: Finished systemd-network-generator.service. Jul 2 08:46:24.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:46:24.081274 systemd[1]: Finished systemd-remount-fs.service. Jul 2 08:46:24.082339 systemd[1]: Reached target network-pre.target. Jul 2 08:46:24.084790 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 2 08:46:24.086577 systemd[1]: Mounting sys-kernel-config.mount... Jul 2 08:46:24.087032 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 08:46:24.092163 systemd[1]: Starting systemd-hwdb-update.service... Jul 2 08:46:24.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:24.093977 systemd[1]: Starting systemd-journal-flush.service... Jul 2 08:46:24.094504 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 08:46:24.095497 systemd[1]: Starting systemd-random-seed.service... Jul 2 08:46:24.096032 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 08:46:24.097047 systemd[1]: Starting systemd-sysctl.service... Jul 2 08:46:24.099240 systemd[1]: Finished flatcar-tmpfiles.service. Jul 2 08:46:24.101161 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 2 08:46:24.101673 systemd[1]: Mounted sys-kernel-config.mount. Jul 2 08:46:24.103811 systemd[1]: Starting systemd-sysusers.service... Jul 2 08:46:24.116974 systemd-journald[929]: Time spent on flushing to /var/log/journal/a7fe33ca53fd445991cc01a4333ba8fe is 29.336ms for 1104 entries. Jul 2 08:46:24.116974 systemd-journald[929]: System Journal (/var/log/journal/a7fe33ca53fd445991cc01a4333ba8fe) is 8.0M, max 584.8M, 576.8M free. Jul 2 08:46:24.205098 systemd-journald[929]: Received client request to flush runtime journal. 
Jul 2 08:46:24.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:24.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:24.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:24.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:24.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:24.137798 systemd[1]: Finished systemd-random-seed.service. Jul 2 08:46:24.207583 udevadm[960]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 2 08:46:24.138497 systemd[1]: Reached target first-boot-complete.target. Jul 2 08:46:24.142733 systemd[1]: Finished systemd-sysctl.service. Jul 2 08:46:24.155520 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 08:46:24.157345 systemd[1]: Starting systemd-udev-settle.service... Jul 2 08:46:24.164298 systemd[1]: Finished systemd-sysusers.service. Jul 2 08:46:24.205987 systemd[1]: Finished systemd-journal-flush.service. Jul 2 08:46:25.081319 systemd[1]: Finished systemd-hwdb-update.service. 
Jul 2 08:46:25.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:25.083000 audit: BPF prog-id=18 op=LOAD Jul 2 08:46:25.083000 audit: BPF prog-id=19 op=LOAD Jul 2 08:46:25.083000 audit: BPF prog-id=7 op=UNLOAD Jul 2 08:46:25.083000 audit: BPF prog-id=8 op=UNLOAD Jul 2 08:46:25.087181 systemd[1]: Starting systemd-udevd.service... Jul 2 08:46:25.129681 systemd-udevd[963]: Using default interface naming scheme 'v252'. Jul 2 08:46:25.196735 systemd[1]: Started systemd-udevd.service. Jul 2 08:46:25.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:25.201000 audit: BPF prog-id=20 op=LOAD Jul 2 08:46:25.204301 systemd[1]: Starting systemd-networkd.service... Jul 2 08:46:25.223000 audit: BPF prog-id=21 op=LOAD Jul 2 08:46:25.223000 audit: BPF prog-id=22 op=LOAD Jul 2 08:46:25.223000 audit: BPF prog-id=23 op=LOAD Jul 2 08:46:25.226452 systemd[1]: Starting systemd-userdbd.service... Jul 2 08:46:25.270190 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Jul 2 08:46:25.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:25.306005 systemd[1]: Started systemd-userdbd.service. 
Jul 2 08:46:25.365172 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 2 08:46:25.386905 kernel: ACPI: button: Power Button [PWRF] Jul 2 08:46:25.408000 audit[974]: AVC avc: denied { confidentiality } for pid=974 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 2 08:46:25.427573 systemd-networkd[973]: lo: Link UP Jul 2 08:46:25.427584 systemd-networkd[973]: lo: Gained carrier Jul 2 08:46:25.428651 systemd-networkd[973]: Enumeration completed Jul 2 08:46:25.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:25.428774 systemd[1]: Started systemd-networkd.service. Jul 2 08:46:25.428787 systemd-networkd[973]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 2 08:46:25.408000 audit[974]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55c10aa544c0 a1=3207c a2=7fb38ade1bc5 a3=5 items=108 ppid=963 pid=974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 08:46:25.431341 systemd-networkd[973]: eth0: Link UP Jul 2 08:46:25.431347 systemd-networkd[973]: eth0: Gained carrier Jul 2 08:46:25.408000 audit: CWD cwd="/" Jul 2 08:46:25.408000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=1 name=(null) inode=14030 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=2 name=(null) inode=14030 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=3 name=(null) inode=14031 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=4 name=(null) inode=14030 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.438091 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jul 2 08:46:25.408000 audit: PATH item=5 name=(null) inode=14032 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=6 name=(null) inode=14030 dev=00:0b mode=040750 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=7 name=(null) inode=14033 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=8 name=(null) inode=14033 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=9 name=(null) inode=14034 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=10 name=(null) inode=14033 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=11 name=(null) inode=14035 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=12 name=(null) inode=14033 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=13 name=(null) inode=14036 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=14 name=(null) inode=14033 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=15 name=(null) inode=14037 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=16 name=(null) inode=14033 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=17 name=(null) inode=14038 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=18 name=(null) inode=14030 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=19 name=(null) inode=14039 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=20 name=(null) inode=14039 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=21 name=(null) inode=14040 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=22 name=(null) inode=14039 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=23 name=(null) inode=14041 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=24 name=(null) inode=14039 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=25 name=(null) inode=14042 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=26 name=(null) inode=14039 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=27 name=(null) inode=14043 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=28 name=(null) inode=14039 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=29 name=(null) inode=14044 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=30 name=(null) inode=14030 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=31 name=(null) inode=14045 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=32 name=(null) inode=14045 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=33 name=(null) inode=14046 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 
audit: PATH item=34 name=(null) inode=14045 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=35 name=(null) inode=14047 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=36 name=(null) inode=14045 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=37 name=(null) inode=14048 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=38 name=(null) inode=14045 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=39 name=(null) inode=14049 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=40 name=(null) inode=14045 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=41 name=(null) inode=14050 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=42 name=(null) inode=14030 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=43 name=(null) inode=14051 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=44 name=(null) inode=14051 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=45 name=(null) inode=14052 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=46 name=(null) inode=14051 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=47 name=(null) inode=14053 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=48 name=(null) inode=14051 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=49 name=(null) inode=14054 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=50 name=(null) inode=14051 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=51 name=(null) inode=14055 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=52 name=(null) inode=14051 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=53 name=(null) inode=14056 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=55 name=(null) inode=14057 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=56 name=(null) inode=14057 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=57 name=(null) inode=14058 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=58 name=(null) inode=14057 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=59 name=(null) inode=14059 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=60 name=(null) inode=14057 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=61 name=(null) inode=14060 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=62 name=(null) inode=14060 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=63 name=(null) inode=14061 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=64 name=(null) inode=14060 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=65 name=(null) inode=14062 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=66 name=(null) inode=14060 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=67 name=(null) inode=14063 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=68 name=(null) inode=14060 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=69 name=(null) inode=14064 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=70 name=(null) inode=14060 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Jul 2 08:46:25.408000 audit: PATH item=71 name=(null) inode=14065 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=72 name=(null) inode=14057 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=73 name=(null) inode=14066 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=74 name=(null) inode=14066 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=75 name=(null) inode=14067 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=76 name=(null) inode=14066 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=77 name=(null) inode=14068 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=78 name=(null) inode=14066 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=79 name=(null) inode=14069 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=80 
name=(null) inode=14066 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=81 name=(null) inode=14070 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=82 name=(null) inode=14066 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=83 name=(null) inode=14071 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=84 name=(null) inode=14057 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=85 name=(null) inode=14072 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=86 name=(null) inode=14072 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=87 name=(null) inode=14073 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=88 name=(null) inode=14072 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=89 name=(null) inode=14074 dev=00:0b mode=0100440 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=90 name=(null) inode=14072 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=91 name=(null) inode=14075 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=92 name=(null) inode=14072 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=93 name=(null) inode=14076 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=94 name=(null) inode=14072 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=95 name=(null) inode=14077 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=96 name=(null) inode=14057 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=97 name=(null) inode=14078 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=98 name=(null) inode=14078 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=99 name=(null) inode=14079 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=100 name=(null) inode=14078 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=101 name=(null) inode=14080 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=102 name=(null) inode=14078 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=103 name=(null) inode=14081 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=104 name=(null) inode=14078 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=105 name=(null) inode=14088 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=106 name=(null) inode=14078 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PATH item=107 name=(null) inode=14089 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:46:25.408000 audit: PROCTITLE proctitle="(udev-worker)" Jul 2 08:46:25.439618 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 08:46:25.444092 systemd-networkd[973]: eth0: DHCPv4 address 172.24.4.86/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jul 2 08:46:25.456907 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jul 2 08:46:25.461884 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 08:46:25.506356 systemd[1]: Finished systemd-udev-settle.service. Jul 2 08:46:25.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:25.508299 systemd[1]: Starting lvm2-activation-early.service... Jul 2 08:46:25.540952 lvm[992]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 08:46:25.577339 systemd[1]: Finished lvm2-activation-early.service. Jul 2 08:46:25.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:25.578546 systemd[1]: Reached target cryptsetup.target. Jul 2 08:46:25.581534 systemd[1]: Starting lvm2-activation.service... Jul 2 08:46:25.588913 lvm[993]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 08:46:25.623517 systemd[1]: Finished lvm2-activation.service. Jul 2 08:46:25.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:25.624782 systemd[1]: Reached target local-fs-pre.target. 
Jul 2 08:46:25.625831 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 08:46:25.625911 systemd[1]: Reached target local-fs.target. Jul 2 08:46:25.626852 systemd[1]: Reached target machines.target. Jul 2 08:46:25.630179 systemd[1]: Starting ldconfig.service... Jul 2 08:46:25.632582 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 08:46:25.632674 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 08:46:25.634739 systemd[1]: Starting systemd-boot-update.service... Jul 2 08:46:25.638542 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 2 08:46:25.641570 systemd[1]: Starting systemd-machine-id-commit.service... Jul 2 08:46:25.645593 systemd[1]: Starting systemd-sysext.service... Jul 2 08:46:25.659713 systemd[1]: boot.automount: Got automount request for /boot, triggered by 995 (bootctl) Jul 2 08:46:25.661766 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 2 08:46:25.683156 systemd[1]: Unmounting usr-share-oem.mount... Jul 2 08:46:25.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:25.688965 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 2 08:46:25.780115 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 2 08:46:25.780522 systemd[1]: Unmounted usr-share-oem.mount. Jul 2 08:46:25.835949 kernel: loop0: detected capacity change from 0 to 210664 Jul 2 08:46:26.494273 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Jul 2 08:46:26.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:26.495517 systemd[1]: Finished systemd-machine-id-commit.service. Jul 2 08:46:26.533130 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 08:46:26.567927 kernel: loop1: detected capacity change from 0 to 210664 Jul 2 08:46:26.602008 systemd-fsck[1005]: fsck.fat 4.2 (2021-01-31) Jul 2 08:46:26.602008 systemd-fsck[1005]: /dev/vda1: 789 files, 119238/258078 clusters Jul 2 08:46:26.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:26.605489 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 2 08:46:26.609745 systemd[1]: Mounting boot.mount... Jul 2 08:46:26.625714 (sd-sysext)[1008]: Using extensions 'kubernetes'. Jul 2 08:46:26.629432 (sd-sysext)[1008]: Merged extensions into '/usr'. Jul 2 08:46:26.658639 systemd[1]: Mounted boot.mount. Jul 2 08:46:26.671747 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:46:26.675247 systemd[1]: Mounting usr-share-oem.mount... Jul 2 08:46:26.676018 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 08:46:26.677421 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 08:46:26.678933 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 08:46:26.681170 systemd[1]: Starting modprobe@loop.service... Jul 2 08:46:26.681702 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Jul 2 08:46:26.681831 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 08:46:26.682020 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:46:26.688103 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 08:46:26.688274 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 08:46:26.689329 systemd[1]: Finished systemd-boot-update.service. Jul 2 08:46:26.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:26.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:26.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:26.690586 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 08:46:26.690701 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 08:46:26.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:26.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:46:26.691489 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 08:46:26.694677 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:46:26.694795 systemd[1]: Finished modprobe@loop.service. Jul 2 08:46:26.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:26.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:26.695576 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 08:46:26.696193 systemd[1]: Mounted usr-share-oem.mount. Jul 2 08:46:26.697686 systemd[1]: Finished systemd-sysext.service. Jul 2 08:46:26.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:26.699258 systemd[1]: Starting ensure-sysext.service... Jul 2 08:46:26.700816 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 2 08:46:26.709934 systemd[1]: Reloading. Jul 2 08:46:26.739033 systemd-tmpfiles[1016]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 2 08:46:26.752651 systemd-tmpfiles[1016]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 08:46:26.768009 systemd-tmpfiles[1016]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Jul 2 08:46:26.790104 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-07-02T08:46:26Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 08:46:26.790134 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-07-02T08:46:26Z" level=info msg="torcx already run" Jul 2 08:46:26.902257 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 08:46:26.902284 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 08:46:26.929441 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 2 08:46:27.000000 audit: BPF prog-id=24 op=LOAD Jul 2 08:46:27.000000 audit: BPF prog-id=15 op=UNLOAD Jul 2 08:46:27.000000 audit: BPF prog-id=25 op=LOAD Jul 2 08:46:27.000000 audit: BPF prog-id=26 op=LOAD Jul 2 08:46:27.000000 audit: BPF prog-id=16 op=UNLOAD Jul 2 08:46:27.000000 audit: BPF prog-id=17 op=UNLOAD Jul 2 08:46:27.002000 audit: BPF prog-id=27 op=LOAD Jul 2 08:46:27.002000 audit: BPF prog-id=21 op=UNLOAD Jul 2 08:46:27.002000 audit: BPF prog-id=28 op=LOAD Jul 2 08:46:27.003000 audit: BPF prog-id=29 op=LOAD Jul 2 08:46:27.003000 audit: BPF prog-id=22 op=UNLOAD Jul 2 08:46:27.003000 audit: BPF prog-id=23 op=UNLOAD Jul 2 08:46:27.005000 audit: BPF prog-id=30 op=LOAD Jul 2 08:46:27.005000 audit: BPF prog-id=20 op=UNLOAD Jul 2 08:46:27.007000 audit: BPF prog-id=31 op=LOAD Jul 2 08:46:27.007000 audit: BPF prog-id=32 op=LOAD Jul 2 08:46:27.007000 audit: BPF prog-id=18 op=UNLOAD Jul 2 08:46:27.007000 audit: BPF prog-id=19 op=UNLOAD Jul 2 08:46:27.017842 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 2 08:46:27.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:27.019996 systemd[1]: Starting audit-rules.service... Jul 2 08:46:27.021670 systemd[1]: Starting clean-ca-certificates.service... Jul 2 08:46:27.023771 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 2 08:46:27.027000 audit: BPF prog-id=33 op=LOAD Jul 2 08:46:27.029731 systemd[1]: Starting systemd-resolved.service... Jul 2 08:46:27.030000 audit: BPF prog-id=34 op=LOAD Jul 2 08:46:27.032893 systemd[1]: Starting systemd-timesyncd.service... Jul 2 08:46:27.036461 systemd[1]: Starting systemd-update-utmp.service... Jul 2 08:46:27.048599 systemd[1]: Finished clean-ca-certificates.service. 
Jul 2 08:46:27.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:27.049673 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:46:27.050000 audit[1088]: SYSTEM_BOOT pid=1088 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 2 08:46:27.049945 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 08:46:27.051279 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 08:46:27.053983 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 08:46:27.056769 systemd[1]: Starting modprobe@loop.service... Jul 2 08:46:27.058416 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 08:46:27.058577 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 08:46:27.058737 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 08:46:27.058831 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:46:27.066078 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:46:27.066710 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 08:46:27.066978 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Jul 2 08:46:27.067162 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 08:46:27.067360 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 08:46:27.067528 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:46:27.070701 systemd[1]: Finished systemd-update-utmp.service. Jul 2 08:46:27.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:27.071790 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 08:46:27.071965 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 08:46:27.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:27.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:27.072975 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 08:46:27.073099 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 08:46:27.073170 ldconfig[994]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 08:46:27.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 2 08:46:27.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:27.079842 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:46:27.080137 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 08:46:27.082282 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 08:46:27.084531 systemd[1]: Starting modprobe@drm.service... Jul 2 08:46:27.088393 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 08:46:27.089161 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 08:46:27.089326 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 08:46:27.090744 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 2 08:46:27.091488 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 08:46:27.091629 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:46:27.094674 systemd[1]: Finished ldconfig.service. Jul 2 08:46:27.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:46:27.095712 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:46:27.095883 systemd[1]: Finished modprobe@loop.service. 
Jul 2 08:46:27.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:27.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:27.096813 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 08:46:27.096963 systemd[1]: Finished modprobe@drm.service.
Jul 2 08:46:27.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:27.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:27.098462 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 08:46:27.098600 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 2 08:46:27.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:27.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:27.101755 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 08:46:27.103254 systemd[1]: Finished ensure-sysext.service.
Jul 2 08:46:27.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:27.108720 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 08:46:27.108883 systemd[1]: Finished modprobe@dm_mod.service.
Jul 2 08:46:27.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:27.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:27.109518 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 2 08:46:27.130727 systemd[1]: Finished systemd-journal-catalog-update.service.
Jul 2 08:46:27.134337 systemd[1]: Starting systemd-update-done.service...
Jul 2 08:46:27.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:46:27.138000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Jul 2 08:46:27.138000 audit[1110]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffc637d7f0 a2=420 a3=0 items=0 ppid=1083 pid=1110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 08:46:27.138000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jul 2 08:46:27.140099 augenrules[1110]: No rules
Jul 2 08:46:27.140579 systemd[1]: Finished audit-rules.service.
Jul 2 08:46:27.145791 systemd[1]: Finished systemd-update-done.service.
Jul 2 08:46:27.164446 systemd[1]: Started systemd-timesyncd.service.
Jul 2 08:46:27.165056 systemd[1]: Reached target time-set.target.
Jul 2 08:46:28.040838 systemd-timesyncd[1087]: Contacted time server 178.32.222.29:123 (0.flatcar.pool.ntp.org).
Jul 2 08:46:28.041258 systemd-timesyncd[1087]: Initial clock synchronization to Tue 2024-07-02 08:46:28.040707 UTC.
Jul 2 08:46:28.042695 systemd-resolved[1086]: Positive Trust Anchors:
Jul 2 08:46:28.042711 systemd-resolved[1086]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 08:46:28.042749 systemd-resolved[1086]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 2 08:46:28.049809 systemd-resolved[1086]: Using system hostname 'ci-3510-3-5-a-cacadfe6a6.novalocal'.
Jul 2 08:46:28.051200 systemd[1]: Started systemd-resolved.service.
Jul 2 08:46:28.051833 systemd[1]: Reached target network.target.
Jul 2 08:46:28.052266 systemd[1]: Reached target nss-lookup.target.
Jul 2 08:46:28.052737 systemd[1]: Reached target sysinit.target.
Jul 2 08:46:28.053259 systemd[1]: Started motdgen.path.
Jul 2 08:46:28.053726 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Jul 2 08:46:28.054382 systemd[1]: Started logrotate.timer.
Jul 2 08:46:28.054978 systemd[1]: Started mdadm.timer.
Jul 2 08:46:28.055741 systemd[1]: Started systemd-tmpfiles-clean.timer.
Jul 2 08:46:28.056236 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 2 08:46:28.056275 systemd[1]: Reached target paths.target.
Jul 2 08:46:28.056755 systemd[1]: Reached target timers.target.
Jul 2 08:46:28.057639 systemd[1]: Listening on dbus.socket.
Jul 2 08:46:28.059405 systemd[1]: Starting docker.socket...
Jul 2 08:46:28.063288 systemd[1]: Listening on sshd.socket.
Jul 2 08:46:28.063902 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 08:46:28.064352 systemd[1]: Listening on docker.socket.
Jul 2 08:46:28.064877 systemd[1]: Reached target sockets.target.
Jul 2 08:46:28.065319 systemd[1]: Reached target basic.target.
Jul 2 08:46:28.065809 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Jul 2 08:46:28.065842 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Jul 2 08:46:28.066980 systemd[1]: Starting containerd.service...
Jul 2 08:46:28.069418 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Jul 2 08:46:28.071032 systemd[1]: Starting dbus.service...
Jul 2 08:46:28.073787 systemd[1]: Starting enable-oem-cloudinit.service...
Jul 2 08:46:28.079371 systemd[1]: Starting extend-filesystems.service...
Jul 2 08:46:28.080651 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Jul 2 08:46:28.084193 systemd[1]: Starting motdgen.service...
Jul 2 08:46:28.091775 systemd[1]: Starting prepare-helm.service...
Jul 2 08:46:28.095176 jq[1123]: false
Jul 2 08:46:28.097883 systemd[1]: Starting ssh-key-proc-cmdline.service...
Jul 2 08:46:28.102899 systemd[1]: Starting sshd-keygen.service...
Jul 2 08:46:28.106412 systemd[1]: Starting systemd-logind.service...
Jul 2 08:46:28.106890 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 08:46:28.106965 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 2 08:46:28.107366 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 2 08:46:28.108894 systemd[1]: Starting update-engine.service...
Jul 2 08:46:28.115317 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Jul 2 08:46:28.118477 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 2 08:46:28.118797 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Jul 2 08:46:28.139654 jq[1137]: true
Jul 2 08:46:28.143683 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 2 08:46:28.143892 systemd[1]: Finished ssh-key-proc-cmdline.service.
Jul 2 08:46:28.153811 tar[1139]: linux-amd64/helm
Jul 2 08:46:28.163201 dbus-daemon[1121]: [system] SELinux support is enabled
Jul 2 08:46:28.163456 systemd[1]: Started dbus.service.
Jul 2 08:46:28.166209 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 2 08:46:28.166245 systemd[1]: Reached target system-config.target.
Jul 2 08:46:28.166867 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 2 08:46:28.166894 systemd[1]: Reached target user-config.target.
Jul 2 08:46:28.172998 jq[1143]: true
Jul 2 08:46:28.176515 systemd[1]: motdgen.service: Deactivated successfully.
Jul 2 08:46:28.176744 systemd[1]: Finished motdgen.service.
Jul 2 08:46:28.206854 systemd-networkd[973]: eth0: Gained IPv6LL
Jul 2 08:46:28.209643 systemd[1]: Finished systemd-networkd-wait-online.service.
Jul 2 08:46:28.210534 systemd[1]: Reached target network-online.target.
Jul 2 08:46:28.213077 extend-filesystems[1125]: Found loop1
Jul 2 08:46:28.214404 extend-filesystems[1125]: Found vda
Jul 2 08:46:28.214404 extend-filesystems[1125]: Found vda1
Jul 2 08:46:28.213842 systemd[1]: Starting kubelet.service...
Jul 2 08:46:28.215928 extend-filesystems[1125]: Found vda2
Jul 2 08:46:28.215928 extend-filesystems[1125]: Found vda3
Jul 2 08:46:28.215928 extend-filesystems[1125]: Found usr
Jul 2 08:46:28.215928 extend-filesystems[1125]: Found vda4
Jul 2 08:46:28.215928 extend-filesystems[1125]: Found vda6
Jul 2 08:46:28.215928 extend-filesystems[1125]: Found vda7
Jul 2 08:46:28.215928 extend-filesystems[1125]: Found vda9
Jul 2 08:46:28.215928 extend-filesystems[1125]: Checking size of /dev/vda9
Jul 2 08:46:28.258089 extend-filesystems[1125]: Resized partition /dev/vda9
Jul 2 08:46:28.271784 extend-filesystems[1176]: resize2fs 1.46.5 (30-Dec-2021)
Jul 2 08:46:28.295155 env[1140]: time="2024-07-02T08:46:28.295066204Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Jul 2 08:46:28.330329 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks
Jul 2 08:46:28.349457 systemd-logind[1134]: Watching system buttons on /dev/input/event1 (Power Button)
Jul 2 08:46:28.349497 systemd-logind[1134]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 2 08:46:28.355634 bash[1172]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 08:46:28.356561 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Jul 2 08:46:28.358794 systemd-logind[1134]: New seat seat0.
Jul 2 08:46:28.364869 update_engine[1135]: I0702 08:46:28.362905 1135 main.cc:92] Flatcar Update Engine starting
Jul 2 08:46:28.365364 systemd[1]: Started systemd-logind.service.
Jul 2 08:46:28.371230 systemd[1]: Started update-engine.service.
Jul 2 08:46:28.371459 update_engine[1135]: I0702 08:46:28.371229 1135 update_check_scheduler.cc:74] Next update check in 5m15s
Jul 2 08:46:28.374708 systemd[1]: Started locksmithd.service.
Jul 2 08:46:28.389795 env[1140]: time="2024-07-02T08:46:28.389676105Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 2 08:46:28.390459 env[1140]: time="2024-07-02T08:46:28.390440048Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 2 08:46:28.395654 env[1140]: time="2024-07-02T08:46:28.395585128Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.161-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 2 08:46:28.395707 env[1140]: time="2024-07-02T08:46:28.395651743Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 2 08:46:28.396003 env[1140]: time="2024-07-02T08:46:28.395952877Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 08:46:28.396003 env[1140]: time="2024-07-02T08:46:28.395990668Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 2 08:46:28.396085 env[1140]: time="2024-07-02T08:46:28.396007700Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 2 08:46:28.396085 env[1140]: time="2024-07-02T08:46:28.396020935Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 2 08:46:28.396141 env[1140]: time="2024-07-02T08:46:28.396105444Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 2 08:46:28.396384 env[1140]: time="2024-07-02T08:46:28.396359881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 2 08:46:28.396515 env[1140]: time="2024-07-02T08:46:28.396487280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 08:46:28.396515 env[1140]: time="2024-07-02T08:46:28.396510834Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 2 08:46:28.396639 env[1140]: time="2024-07-02T08:46:28.396566548Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 2 08:46:28.396639 env[1140]: time="2024-07-02T08:46:28.396608046Z" level=info msg="metadata content store policy set" policy=shared
Jul 2 08:46:28.485949 env[1140]: time="2024-07-02T08:46:28.485774007Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 2 08:46:28.485949 env[1140]: time="2024-07-02T08:46:28.485851081Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 2 08:46:28.485949 env[1140]: time="2024-07-02T08:46:28.485868584Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 2 08:46:28.486635 env[1140]: time="2024-07-02T08:46:28.486255610Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 2 08:46:28.486635 env[1140]: time="2024-07-02T08:46:28.486296827Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 2 08:46:28.486635 env[1140]: time="2024-07-02T08:46:28.486315042Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 2 08:46:28.486635 env[1140]: time="2024-07-02T08:46:28.486332094Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 2 08:46:28.486635 env[1140]: time="2024-07-02T08:46:28.486349085Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 2 08:46:28.486635 env[1140]: time="2024-07-02T08:46:28.486365386Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Jul 2 08:46:28.486635 env[1140]: time="2024-07-02T08:46:28.486382087Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 2 08:46:28.486635 env[1140]: time="2024-07-02T08:46:28.486398388Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 2 08:46:28.486635 env[1140]: time="2024-07-02T08:46:28.486414037Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 2 08:46:28.486635 env[1140]: time="2024-07-02T08:46:28.486575169Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 2 08:46:28.487198 env[1140]: time="2024-07-02T08:46:28.487023130Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 2 08:46:28.488661 env[1140]: time="2024-07-02T08:46:28.487526183Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 2 08:46:28.488661 env[1140]: time="2024-07-02T08:46:28.487567992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 2 08:46:28.488661 env[1140]: time="2024-07-02T08:46:28.487585514Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 2 08:46:28.488661 env[1140]: time="2024-07-02T08:46:28.487672688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 2 08:46:28.488661 env[1140]: time="2024-07-02T08:46:28.487691213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 2 08:46:28.488661 env[1140]: time="2024-07-02T08:46:28.487708565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 2 08:46:28.488661 env[1140]: time="2024-07-02T08:46:28.487723573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 2 08:46:28.488661 env[1140]: time="2024-07-02T08:46:28.487738642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 2 08:46:28.488661 env[1140]: time="2024-07-02T08:46:28.487754411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 2 08:46:28.488661 env[1140]: time="2024-07-02T08:46:28.487767856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 2 08:46:28.488661 env[1140]: time="2024-07-02T08:46:28.487781752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 2 08:46:28.488661 env[1140]: time="2024-07-02T08:46:28.487799025Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 2 08:46:28.488661 env[1140]: time="2024-07-02T08:46:28.487939759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 2 08:46:28.488661 env[1140]: time="2024-07-02T08:46:28.487960007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 2 08:46:28.488661 env[1140]: time="2024-07-02T08:46:28.487991736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 2 08:46:28.489074 env[1140]: time="2024-07-02T08:46:28.488007856Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 2 08:46:28.489074 env[1140]: time="2024-07-02T08:46:28.488027874Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Jul 2 08:46:28.489074 env[1140]: time="2024-07-02T08:46:28.488042491Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 2 08:46:28.489074 env[1140]: time="2024-07-02T08:46:28.488063160Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Jul 2 08:46:28.489074 env[1140]: time="2024-07-02T08:46:28.488110709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 2 08:46:28.489282 env[1140]: time="2024-07-02T08:46:28.488350539Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 2 08:46:28.489282 env[1140]: time="2024-07-02T08:46:28.488422003Z" level=info msg="Connect containerd service"
Jul 2 08:46:28.489282 env[1140]: time="2024-07-02T08:46:28.488460375Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 2 08:46:28.575229 kernel: EXT4-fs (vda9): resized filesystem to 4635643
Jul 2 08:46:28.579285 env[1140]: time="2024-07-02T08:46:28.578573853Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 08:46:28.579285 env[1140]: time="2024-07-02T08:46:28.578796270Z" level=info msg="Start subscribing containerd event"
Jul 2 08:46:28.579285 env[1140]: time="2024-07-02T08:46:28.578866692Z" level=info msg="Start recovering state"
Jul 2 08:46:28.579285 env[1140]: time="2024-07-02T08:46:28.578947273Z" level=info msg="Start event monitor"
Jul 2 08:46:28.579285 env[1140]: time="2024-07-02T08:46:28.578963424Z" level=info msg="Start snapshots syncer"
Jul 2 08:46:28.579285 env[1140]: time="2024-07-02T08:46:28.578977260Z" level=info msg="Start cni network conf syncer for default"
Jul 2 08:46:28.579285 env[1140]: time="2024-07-02T08:46:28.578986467Z" level=info msg="Start streaming server"
Jul 2 08:46:28.579285 env[1140]: time="2024-07-02T08:46:28.579073901Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 2 08:46:28.579285 env[1140]: time="2024-07-02T08:46:28.579132581Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 2 08:46:28.579285 env[1140]: time="2024-07-02T08:46:28.579262384Z" level=info msg="containerd successfully booted in 0.303704s"
Jul 2 08:46:28.579356 systemd[1]: Started containerd.service.
Jul 2 08:46:28.580623 extend-filesystems[1176]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 2 08:46:28.580623 extend-filesystems[1176]: old_desc_blocks = 1, new_desc_blocks = 3
Jul 2 08:46:28.580623 extend-filesystems[1176]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long.
Jul 2 08:46:28.583811 extend-filesystems[1125]: Resized filesystem in /dev/vda9
Jul 2 08:46:28.582287 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 2 08:46:28.582503 systemd[1]: Finished extend-filesystems.service.
Jul 2 08:46:29.002539 tar[1139]: linux-amd64/LICENSE
Jul 2 08:46:29.002906 tar[1139]: linux-amd64/README.md
Jul 2 08:46:29.009817 systemd[1]: Finished prepare-helm.service.
Jul 2 08:46:29.036364 locksmithd[1181]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 2 08:46:29.406084 sshd_keygen[1151]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 2 08:46:29.440214 systemd[1]: Finished sshd-keygen.service.
Jul 2 08:46:29.442357 systemd[1]: Starting issuegen.service...
Jul 2 08:46:29.452271 systemd[1]: issuegen.service: Deactivated successfully.
Jul 2 08:46:29.452459 systemd[1]: Finished issuegen.service.
Jul 2 08:46:29.454689 systemd[1]: Starting systemd-user-sessions.service...
Jul 2 08:46:29.463726 systemd[1]: Finished systemd-user-sessions.service.
Jul 2 08:46:29.465985 systemd[1]: Started getty@tty1.service.
Jul 2 08:46:29.468013 systemd[1]: Started serial-getty@ttyS0.service.
Jul 2 08:46:29.469072 systemd[1]: Reached target getty.target.
Jul 2 08:46:30.124037 systemd[1]: Started kubelet.service.
Jul 2 08:46:31.870199 kubelet[1206]: E0702 08:46:31.870132 1206 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 08:46:31.875021 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 08:46:31.875376 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 08:46:31.876142 systemd[1]: kubelet.service: Consumed 2.056s CPU time.
Jul 2 08:46:35.225148 coreos-metadata[1120]: Jul 02 08:46:35.225 WARN failed to locate config-drive, using the metadata service API instead
Jul 2 08:46:35.318634 coreos-metadata[1120]: Jul 02 08:46:35.318 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jul 2 08:46:35.636037 coreos-metadata[1120]: Jul 02 08:46:35.635 INFO Fetch successful
Jul 2 08:46:35.636037 coreos-metadata[1120]: Jul 02 08:46:35.635 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jul 2 08:46:35.652806 coreos-metadata[1120]: Jul 02 08:46:35.652 INFO Fetch successful
Jul 2 08:46:35.659244 unknown[1120]: wrote ssh authorized keys file for user: core
Jul 2 08:46:35.706683 update-ssh-keys[1216]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 08:46:35.708322 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Jul 2 08:46:35.709234 systemd[1]: Reached target multi-user.target.
Jul 2 08:46:35.712129 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Jul 2 08:46:35.728430 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jul 2 08:46:35.728823 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Jul 2 08:46:35.730052 systemd[1]: Startup finished in 948ms (kernel) + 9.291s (initrd) + 15.792s (userspace) = 26.031s.
Jul 2 08:46:38.009146 systemd[1]: Created slice system-sshd.slice.
Jul 2 08:46:38.013106 systemd[1]: Started sshd@0-172.24.4.86:22-172.24.4.1:46048.service.
Jul 2 08:46:39.065847 sshd[1219]: Accepted publickey for core from 172.24.4.1 port 46048 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:46:39.070837 sshd[1219]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:46:39.107851 systemd-logind[1134]: New session 1 of user core.
Jul 2 08:46:39.113719 systemd[1]: Created slice user-500.slice.
Jul 2 08:46:39.117119 systemd[1]: Starting user-runtime-dir@500.service...
Jul 2 08:46:39.143111 systemd[1]: Finished user-runtime-dir@500.service.
Jul 2 08:46:39.147368 systemd[1]: Starting user@500.service...
Jul 2 08:46:39.156340 (systemd)[1222]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:46:39.291814 systemd[1222]: Queued start job for default target default.target.
Jul 2 08:46:39.292440 systemd[1222]: Reached target paths.target.
Jul 2 08:46:39.292464 systemd[1222]: Reached target sockets.target.
Jul 2 08:46:39.292481 systemd[1222]: Reached target timers.target.
Jul 2 08:46:39.292496 systemd[1222]: Reached target basic.target.
Jul 2 08:46:39.292645 systemd[1]: Started user@500.service.
Jul 2 08:46:39.293829 systemd[1]: Started session-1.scope.
Jul 2 08:46:39.294418 systemd[1222]: Reached target default.target.
Jul 2 08:46:39.294629 systemd[1222]: Startup finished in 124ms.
Jul 2 08:46:39.890314 systemd[1]: Started sshd@1-172.24.4.86:22-172.24.4.1:46060.service.
Jul 2 08:46:41.934728 sshd[1231]: Accepted publickey for core from 172.24.4.1 port 46060 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:46:41.938590 sshd[1231]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:46:41.940579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 2 08:46:41.941214 systemd[1]: Stopped kubelet.service.
Jul 2 08:46:41.941298 systemd[1]: kubelet.service: Consumed 2.056s CPU time.
Jul 2 08:46:41.944250 systemd[1]: Starting kubelet.service...
Jul 2 08:46:41.956726 systemd-logind[1134]: New session 2 of user core.
Jul 2 08:46:41.958793 systemd[1]: Started session-2.scope.
Jul 2 08:46:42.120119 systemd[1]: Started kubelet.service.
Jul 2 08:46:42.570365 kubelet[1238]: E0702 08:46:42.570221 1238 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 08:46:42.579348 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 08:46:42.579820 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 08:46:42.609384 sshd[1231]: pam_unix(sshd:session): session closed for user core
Jul 2 08:46:42.617209 systemd[1]: Started sshd@2-172.24.4.86:22-172.24.4.1:46062.service.
Jul 2 08:46:42.618419 systemd[1]: sshd@1-172.24.4.86:22-172.24.4.1:46060.service: Deactivated successfully.
Jul 2 08:46:42.624081 systemd[1]: session-2.scope: Deactivated successfully.
Jul 2 08:46:42.626919 systemd-logind[1134]: Session 2 logged out. Waiting for processes to exit.
Jul 2 08:46:42.630505 systemd-logind[1134]: Removed session 2.
Jul 2 08:46:43.900246 sshd[1246]: Accepted publickey for core from 172.24.4.1 port 46062 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:46:43.903526 sshd[1246]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:46:43.914902 systemd[1]: Started session-3.scope.
Jul 2 08:46:43.917165 systemd-logind[1134]: New session 3 of user core.
Jul 2 08:46:44.686946 sshd[1246]: pam_unix(sshd:session): session closed for user core
Jul 2 08:46:44.692876 systemd[1]: Started sshd@3-172.24.4.86:22-172.24.4.1:52004.service.
Jul 2 08:46:44.697538 systemd[1]: sshd@2-172.24.4.86:22-172.24.4.1:46062.service: Deactivated successfully.
Jul 2 08:46:44.699010 systemd[1]: session-3.scope: Deactivated successfully.
Jul 2 08:46:44.702023 systemd-logind[1134]: Session 3 logged out. Waiting for processes to exit.
Jul 2 08:46:44.704458 systemd-logind[1134]: Removed session 3.
Jul 2 08:46:46.482917 sshd[1253]: Accepted publickey for core from 172.24.4.1 port 52004 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:46:46.486471 sshd[1253]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:46:46.494372 systemd-logind[1134]: New session 4 of user core.
Jul 2 08:46:46.494569 systemd[1]: Started session-4.scope.
Jul 2 08:46:47.124481 sshd[1253]: pam_unix(sshd:session): session closed for user core
Jul 2 08:46:47.130818 systemd[1]: Started sshd@4-172.24.4.86:22-172.24.4.1:52006.service.
Jul 2 08:46:47.136298 systemd[1]: sshd@3-172.24.4.86:22-172.24.4.1:52004.service: Deactivated successfully.
Jul 2 08:46:47.137939 systemd[1]: session-4.scope: Deactivated successfully.
Jul 2 08:46:47.141353 systemd-logind[1134]: Session 4 logged out. Waiting for processes to exit.
Jul 2 08:46:47.143862 systemd-logind[1134]: Removed session 4.
Jul 2 08:46:48.722226 sshd[1259]: Accepted publickey for core from 172.24.4.1 port 52006 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:46:48.725729 sshd[1259]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:46:48.736355 systemd[1]: Started session-5.scope.
Jul 2 08:46:48.737557 systemd-logind[1134]: New session 5 of user core.
Jul 2 08:46:49.353647 sudo[1263]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 2 08:46:49.354877 sudo[1263]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 08:46:49.428200 systemd[1]: Starting docker.service...
Jul 2 08:46:49.484172 env[1273]: time="2024-07-02T08:46:49.484126341Z" level=info msg="Starting up"
Jul 2 08:46:49.486393 env[1273]: time="2024-07-02T08:46:49.486328982Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 2 08:46:49.486454 env[1273]: time="2024-07-02T08:46:49.486390137Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 2 08:46:49.486485 env[1273]: time="2024-07-02T08:46:49.486438498Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Jul 2 08:46:49.486485 env[1273]: time="2024-07-02T08:46:49.486468474Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 2 08:46:49.492437 env[1273]: time="2024-07-02T08:46:49.492374120Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 2 08:46:49.492437 env[1273]: time="2024-07-02T08:46:49.492420878Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 2 08:46:49.492676 env[1273]: time="2024-07-02T08:46:49.492453940Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Jul 2 08:46:49.492676 env[1273]: time="2024-07-02T08:46:49.492478115Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 2 08:46:49.557966 env[1273]: time="2024-07-02T08:46:49.557840622Z" level=info msg="Loading containers: start."
Jul 2 08:46:49.773664 kernel: Initializing XFRM netlink socket
Jul 2 08:46:49.853927 env[1273]: time="2024-07-02T08:46:49.853859856Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jul 2 08:46:49.955803 systemd-networkd[973]: docker0: Link UP
Jul 2 08:46:49.969840 env[1273]: time="2024-07-02T08:46:49.969802327Z" level=info msg="Loading containers: done."
Jul 2 08:46:49.990067 env[1273]: time="2024-07-02T08:46:49.990030845Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 2 08:46:49.990412 env[1273]: time="2024-07-02T08:46:49.990393415Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Jul 2 08:46:49.990589 env[1273]: time="2024-07-02T08:46:49.990572802Z" level=info msg="Daemon has completed initialization"
Jul 2 08:46:50.017519 systemd[1]: Started docker.service.
Jul 2 08:46:50.036848 env[1273]: time="2024-07-02T08:46:50.036366678Z" level=info msg="API listen on /run/docker.sock"
Jul 2 08:46:52.260270 env[1140]: time="2024-07-02T08:46:52.260181993Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\""
Jul 2 08:46:52.722419 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 2 08:46:52.722929 systemd[1]: Stopped kubelet.service.
Jul 2 08:46:52.726126 systemd[1]: Starting kubelet.service...
Jul 2 08:46:52.938284 systemd[1]: Started kubelet.service.
Jul 2 08:46:53.151765 kubelet[1405]: E0702 08:46:53.151259 1405 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 08:46:53.155927 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 08:46:53.156216 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 08:46:54.175639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3177793843.mount: Deactivated successfully.
Jul 2 08:46:58.038730 env[1140]: time="2024-07-02T08:46:58.038474669Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:46:58.045864 env[1140]: time="2024-07-02T08:46:58.045809546Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:46:58.050338 env[1140]: time="2024-07-02T08:46:58.050286152Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:46:58.054933 env[1140]: time="2024-07-02T08:46:58.054841335Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:46:58.058662 env[1140]: time="2024-07-02T08:46:58.058551163Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\" returns image reference \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\""
Jul 2 08:46:58.088505 env[1140]: time="2024-07-02T08:46:58.088433167Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\""
Jul 2 08:47:01.508378 env[1140]: time="2024-07-02T08:47:01.508252575Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:47:01.514121 env[1140]: time="2024-07-02T08:47:01.514051639Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:47:01.519889 env[1140]: time="2024-07-02T08:47:01.519822030Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:47:01.524298 env[1140]: time="2024-07-02T08:47:01.524243556Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:47:01.526630 env[1140]: time="2024-07-02T08:47:01.526500030Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\" returns image reference \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\""
Jul 2 08:47:01.553757 env[1140]: time="2024-07-02T08:47:01.553653408Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\""
Jul 2 08:47:03.222543 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 2 08:47:03.222998 systemd[1]: Stopped kubelet.service.
Jul 2 08:47:03.225899 systemd[1]: Starting kubelet.service...
Jul 2 08:47:03.348791 systemd[1]: Started kubelet.service.
Jul 2 08:47:03.798339 kubelet[1426]: E0702 08:47:03.798271 1426 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 08:47:03.801649 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 08:47:03.801953 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 08:47:04.489554 env[1140]: time="2024-07-02T08:47:04.489460959Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:47:04.494306 env[1140]: time="2024-07-02T08:47:04.494246307Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:47:04.498664 env[1140]: time="2024-07-02T08:47:04.498537071Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:47:04.504041 env[1140]: time="2024-07-02T08:47:04.503970035Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:47:04.507834 env[1140]: time="2024-07-02T08:47:04.507144055Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\" returns image reference \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\""
Jul 2 08:47:04.530577 env[1140]: time="2024-07-02T08:47:04.530506867Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\""
Jul 2 08:47:06.162267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount943378542.mount: Deactivated successfully.
Jul 2 08:47:07.342058 env[1140]: time="2024-07-02T08:47:07.341929816Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:47:07.346095 env[1140]: time="2024-07-02T08:47:07.345999642Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:47:07.349261 env[1140]: time="2024-07-02T08:47:07.349206677Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:47:07.352046 env[1140]: time="2024-07-02T08:47:07.351969967Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:47:07.353374 env[1140]: time="2024-07-02T08:47:07.353268078Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\" returns image reference \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\""
Jul 2 08:47:07.375334 env[1140]: time="2024-07-02T08:47:07.375247967Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jul 2 08:47:08.099410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1517453581.mount: Deactivated successfully.
Jul 2 08:47:09.959229 env[1140]: time="2024-07-02T08:47:09.959110859Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:47:09.963108 env[1140]: time="2024-07-02T08:47:09.963035055Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:47:09.967564 env[1140]: time="2024-07-02T08:47:09.967477357Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:47:09.972249 env[1140]: time="2024-07-02T08:47:09.972191634Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:47:09.974517 env[1140]: time="2024-07-02T08:47:09.974417303Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jul 2 08:47:10.000507 env[1140]: time="2024-07-02T08:47:10.000411696Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jul 2 08:47:10.595851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2245648221.mount: Deactivated successfully.
Jul 2 08:47:10.613130 env[1140]: time="2024-07-02T08:47:10.613053155Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:47:10.619322 env[1140]: time="2024-07-02T08:47:10.619282141Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:47:10.623027 env[1140]: time="2024-07-02T08:47:10.622942851Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:47:10.626008 env[1140]: time="2024-07-02T08:47:10.625937988Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:47:10.629506 env[1140]: time="2024-07-02T08:47:10.629438594Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jul 2 08:47:10.658135 env[1140]: time="2024-07-02T08:47:10.658061176Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jul 2 08:47:11.352187 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2153171966.mount: Deactivated successfully.
Jul 2 08:47:13.724811 update_engine[1135]: I0702 08:47:13.724516 1135 update_attempter.cc:509] Updating boot flags...
Jul 2 08:47:13.807028 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jul 2 08:47:13.807263 systemd[1]: Stopped kubelet.service.
Jul 2 08:47:13.808824 systemd[1]: Starting kubelet.service...
Jul 2 08:47:14.211221 systemd[1]: Started kubelet.service.
Jul 2 08:47:14.299883 kubelet[1473]: E0702 08:47:14.299844 1473 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 08:47:14.301949 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 08:47:14.302086 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 08:47:15.891478 env[1140]: time="2024-07-02T08:47:15.891399899Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:47:15.895919 env[1140]: time="2024-07-02T08:47:15.895894144Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:47:15.900761 env[1140]: time="2024-07-02T08:47:15.900725385Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:47:15.904302 env[1140]: time="2024-07-02T08:47:15.904264121Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:47:15.905440 env[1140]: time="2024-07-02T08:47:15.905399560Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Jul 2 08:47:20.584569 systemd[1]: Stopped kubelet.service.
Jul 2 08:47:20.587039 systemd[1]: Starting kubelet.service...
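The restart counter climbing through this stretch (1, 2, 3, 4 at roughly ten-second intervals) is ordinary systemd restart behavior: the kubelet unit asks to be restarted after each failure. A drop-in of the following shape governs that loop; the exact values shipped on this host are not visible in the log, so these are illustrative:

```ini
# /etc/systemd/system/kubelet.service.d/10-restart.conf (illustrative)
[Service]
Restart=always
RestartSec=10
```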
Jul 2 08:47:20.616785 systemd[1]: Reloading.
Jul 2 08:47:20.758886 /usr/lib/systemd/system-generators/torcx-generator[1566]: time="2024-07-02T08:47:20Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]"
Jul 2 08:47:20.758922 /usr/lib/systemd/system-generators/torcx-generator[1566]: time="2024-07-02T08:47:20Z" level=info msg="torcx already run"
Jul 2 08:47:20.858988 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 2 08:47:20.859007 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 2 08:47:20.882816 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 08:47:20.994961 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 2 08:47:20.995039 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 2 08:47:20.995230 systemd[1]: Stopped kubelet.service.
Jul 2 08:47:20.997018 systemd[1]: Starting kubelet.service...
Jul 2 08:47:21.217511 systemd[1]: Started kubelet.service.
Jul 2 08:47:21.645816 kubelet[1617]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 08:47:21.645816 kubelet[1617]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 08:47:21.645816 kubelet[1617]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 08:47:21.649113 kubelet[1617]: I0702 08:47:21.645996 1617 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 08:47:22.230280 kubelet[1617]: I0702 08:47:22.230248 1617 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jul 2 08:47:22.230468 kubelet[1617]: I0702 08:47:22.230457 1617 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 08:47:22.231267 kubelet[1617]: I0702 08:47:22.231232 1617 server.go:927] "Client rotation is on, will bootstrap in background"
Jul 2 08:47:22.251424 kubelet[1617]: I0702 08:47:22.251399 1617 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 08:47:22.254691 kubelet[1617]: E0702 08:47:22.254516 1617 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.86:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.86:6443: connect: connection refused
Jul 2 08:47:22.275071 kubelet[1617]: I0702 08:47:22.275035 1617 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 08:47:22.275771 kubelet[1617]: I0702 08:47:22.275721 1617 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 08:47:22.276295 kubelet[1617]: I0702 08:47:22.275932 1617 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-5-a-cacadfe6a6.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 08:47:22.276574 kubelet[1617]: I0702 08:47:22.276546 1617 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 08:47:22.276771 kubelet[1617]: I0702 08:47:22.276750 1617 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 08:47:22.277077 kubelet[1617]: I0702 08:47:22.277051 1617 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 08:47:22.279102 kubelet[1617]: I0702 08:47:22.279076 1617 kubelet.go:400] "Attempting to sync node with API server"
Jul 2 08:47:22.279324 kubelet[1617]: I0702 08:47:22.279301 1617 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 08:47:22.279488 kubelet[1617]: I0702 08:47:22.279468 1617 kubelet.go:312] "Adding apiserver pod source"
Jul 2 08:47:22.279664 kubelet[1617]: I0702 08:47:22.279640 1617 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 08:47:22.288828 kubelet[1617]: W0702 08:47:22.288705 1617 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-5-a-cacadfe6a6.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.86:6443: connect: connection refused
Jul 2 08:47:22.289011 kubelet[1617]: E0702 08:47:22.288836 1617 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-5-a-cacadfe6a6.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.86:6443: connect: connection refused
Jul 2 08:47:22.300160 kubelet[1617]: W0702 08:47:22.300044 1617 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.86:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.86:6443: connect: connection refused
Jul 2 08:47:22.300427 kubelet[1617]: E0702 08:47:22.300399 1617 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.86:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.86:6443: connect: connection refused
Jul 2 08:47:22.301241 kubelet[1617]: I0702 08:47:22.301208 1617 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Jul 2 08:47:22.305504 kubelet[1617]: I0702 08:47:22.305473 1617 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 2 08:47:22.305855 kubelet[1617]: W0702 08:47:22.305829 1617 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 2 08:47:22.307289 kubelet[1617]: I0702 08:47:22.307258 1617 server.go:1264] "Started kubelet"
Jul 2 08:47:22.334098 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Jul 2 08:47:22.334519 kubelet[1617]: I0702 08:47:22.334483 1617 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 08:47:22.343475 kubelet[1617]: E0702 08:47:22.343200 1617 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.86:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.86:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510-3-5-a-cacadfe6a6.novalocal.17de59122d5942d8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-5-a-cacadfe6a6.novalocal,UID:ci-3510-3-5-a-cacadfe6a6.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-5-a-cacadfe6a6.novalocal,},FirstTimestamp:2024-07-02 08:47:22.307216088 +0000 UTC m=+1.081050140,LastTimestamp:2024-07-02 08:47:22.307216088 +0000 UTC m=+1.081050140,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-5-a-cacadfe6a6.novalocal,}"
Jul 2 08:47:22.346480 kubelet[1617]: I0702 08:47:22.346393 1617 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 08:47:22.348338 kubelet[1617]: I0702 08:47:22.348310 1617 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 08:47:22.348583 kubelet[1617]: I0702 08:47:22.348494 1617 server.go:455] "Adding debug handlers to kubelet server"
Jul 2 08:47:22.351653 kubelet[1617]: I0702 08:47:22.351557 1617 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jul 2 08:47:22.352555 kubelet[1617]: I0702 08:47:22.352526 1617 reconciler.go:26] "Reconciler: start to sync state"
Jul 2 08:47:22.355355 kubelet[1617]: I0702 08:47:22.355248 1617 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 2 08:47:22.355763 kubelet[1617]: I0702 08:47:22.355717 1617 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 08:47:22.360263 kubelet[1617]: E0702 08:47:22.360192 1617 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-5-a-cacadfe6a6.novalocal?timeout=10s\": dial tcp 172.24.4.86:6443: connect: connection refused" interval="200ms"
Jul 2 08:47:22.360966 kubelet[1617]: W0702 08:47:22.360839 1617 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.86:6443: connect: connection refused
Jul 2 08:47:22.361323 kubelet[1617]: E0702 08:47:22.361255 1617 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.86:6443: connect: connection refused
Jul 2 08:47:22.365447 kubelet[1617]: I0702 08:47:22.365403 1617 factory.go:221] Registration of the systemd container factory successfully
Jul 2 08:47:22.374350 kubelet[1617]: I0702 08:47:22.374230 1617 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 2 08:47:22.381873 kubelet[1617]: E0702 08:47:22.381694 1617 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 08:47:22.383394 kubelet[1617]: I0702 08:47:22.383373 1617 factory.go:221] Registration of the containerd container factory successfully
Jul 2 08:47:22.394405 kubelet[1617]: I0702 08:47:22.394370 1617 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 08:47:22.396728 kubelet[1617]: I0702 08:47:22.396701 1617 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 08:47:22.396846 kubelet[1617]: I0702 08:47:22.396835 1617 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 08:47:22.397228 kubelet[1617]: I0702 08:47:22.397217 1617 kubelet.go:2337] "Starting kubelet main sync loop"
Jul 2 08:47:22.397333 kubelet[1617]: E0702 08:47:22.397316 1617 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 08:47:22.397887 kubelet[1617]: W0702 08:47:22.397851 1617 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.86:6443: connect: connection refused
Jul 2 08:47:22.397980 kubelet[1617]: E0702 08:47:22.397969 1617 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.86:6443: connect: connection refused
Jul 2 08:47:22.409519 kubelet[1617]: I0702 08:47:22.409502 1617 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 08:47:22.409642 kubelet[1617]: I0702 08:47:22.409630 1617 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 08:47:22.409728 kubelet[1617]: I0702 08:47:22.409719 1617 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 08:47:22.415913 kubelet[1617]: I0702 08:47:22.415898 1617 policy_none.go:49] "None policy: Start"
Jul 2 08:47:22.416829 kubelet[1617]: I0702 08:47:22.416816 1617 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 2 08:47:22.416914 kubelet[1617]: I0702 08:47:22.416904 1617 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 08:47:22.426679 systemd[1]: Created slice kubepods.slice.
Jul 2 08:47:22.431510 systemd[1]: Created slice kubepods-burstable.slice.
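Every client-go reflector in this stretch fails with `dial tcp 172.24.4.86:6443: connect: connection refused`: nothing is listening on the API server port yet, which is consistent with the control-plane static pods only now being admitted. The same reachability check can be reproduced outside the kubelet with a plain TCP probe (a sketch; the host and port below mirror the log, but any endpoint works):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers ConnectionRefusedError, timeouts, DNS failures
        return False

# e.g. can_connect("172.24.4.86", 6443) would return False while the
# kube-apiserver static pod is still coming up.
```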
Jul 2 08:47:22.436108 systemd[1]: Created slice kubepods-besteffort.slice.
Jul 2 08:47:22.446546 kubelet[1617]: I0702 08:47:22.446527 1617 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 08:47:22.447728 kubelet[1617]: I0702 08:47:22.447651 1617 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 2 08:47:22.448726 kubelet[1617]: I0702 08:47:22.448714 1617 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 08:47:22.450944 kubelet[1617]: E0702 08:47:22.450826 1617 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-5-a-cacadfe6a6.novalocal\" not found"
Jul 2 08:47:22.452122 kubelet[1617]: I0702 08:47:22.452107 1617 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-5-a-cacadfe6a6.novalocal"
Jul 2 08:47:22.452547 kubelet[1617]: E0702 08:47:22.452529 1617 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.86:6443/api/v1/nodes\": dial tcp 172.24.4.86:6443: connect: connection refused" node="ci-3510-3-5-a-cacadfe6a6.novalocal"
Jul 2 08:47:22.498411 kubelet[1617]: I0702 08:47:22.498263 1617 topology_manager.go:215] "Topology Admit Handler" podUID="ba36d22ec825510607717f91abb645b0" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-5-a-cacadfe6a6.novalocal"
Jul 2 08:47:22.504349 kubelet[1617]: I0702 08:47:22.504292 1617 topology_manager.go:215] "Topology Admit Handler" podUID="74489d34a7fe774fb5777c3df995559f" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-5-a-cacadfe6a6.novalocal"
Jul 2 08:47:22.506691 kubelet[1617]: I0702 08:47:22.506549 1617 topology_manager.go:215] "Topology Admit Handler" podUID="5639d9ba9225e19976ca8861c43e66ca" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-5-a-cacadfe6a6.novalocal"
Jul 2 08:47:22.521314 systemd[1]: Created slice kubepods-burstable-podba36d22ec825510607717f91abb645b0.slice.
Jul 2 08:47:22.536081 systemd[1]: Created slice kubepods-burstable-pod74489d34a7fe774fb5777c3df995559f.slice.
Jul 2 08:47:22.540511 kubelet[1617]: W0702 08:47:22.540426 1617 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74489d34a7fe774fb5777c3df995559f.slice/cpuset.cpus.effective": open /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74489d34a7fe774fb5777c3df995559f.slice/cpuset.cpus.effective: no such device
Jul 2 08:47:22.550239 systemd[1]: Created slice kubepods-burstable-pod5639d9ba9225e19976ca8861c43e66ca.slice.
Jul 2 08:47:22.554230 kubelet[1617]: I0702 08:47:22.554183 1617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ba36d22ec825510607717f91abb645b0-ca-certs\") pod \"kube-controller-manager-ci-3510-3-5-a-cacadfe6a6.novalocal\" (UID: \"ba36d22ec825510607717f91abb645b0\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-a-cacadfe6a6.novalocal"
Jul 2 08:47:22.555037 kubelet[1617]: I0702 08:47:22.554963 1617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ba36d22ec825510607717f91abb645b0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-5-a-cacadfe6a6.novalocal\" (UID: \"ba36d22ec825510607717f91abb645b0\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-a-cacadfe6a6.novalocal"
Jul 2 08:47:22.555162 kubelet[1617]: I0702 08:47:22.555054 1617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5639d9ba9225e19976ca8861c43e66ca-ca-certs\") pod \"kube-apiserver-ci-3510-3-5-a-cacadfe6a6.novalocal\" (UID: \"5639d9ba9225e19976ca8861c43e66ca\") " pod="kube-system/kube-apiserver-ci-3510-3-5-a-cacadfe6a6.novalocal"
Jul 2 08:47:22.555162 kubelet[1617]: I0702 08:47:22.555107 1617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5639d9ba9225e19976ca8861c43e66ca-k8s-certs\") pod \"kube-apiserver-ci-3510-3-5-a-cacadfe6a6.novalocal\" (UID: \"5639d9ba9225e19976ca8861c43e66ca\") " pod="kube-system/kube-apiserver-ci-3510-3-5-a-cacadfe6a6.novalocal"
Jul 2 08:47:22.555318 kubelet[1617]: I0702 08:47:22.555156 1617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ba36d22ec825510607717f91abb645b0-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-5-a-cacadfe6a6.novalocal\" (UID: \"ba36d22ec825510607717f91abb645b0\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-a-cacadfe6a6.novalocal"
Jul 2 08:47:22.555318 kubelet[1617]: I0702 08:47:22.555234 1617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ba36d22ec825510607717f91abb645b0-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-5-a-cacadfe6a6.novalocal\" (UID: \"ba36d22ec825510607717f91abb645b0\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-a-cacadfe6a6.novalocal"
Jul 2 08:47:22.555318 kubelet[1617]: I0702 08:47:22.555291 1617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ba36d22ec825510607717f91abb645b0-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-5-a-cacadfe6a6.novalocal\" (UID: \"ba36d22ec825510607717f91abb645b0\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-a-cacadfe6a6.novalocal"
Jul 2 08:47:22.555517 kubelet[1617]: I0702 08:47:22.555336 1617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/74489d34a7fe774fb5777c3df995559f-kubeconfig\") pod \"kube-scheduler-ci-3510-3-5-a-cacadfe6a6.novalocal\" (UID: \"74489d34a7fe774fb5777c3df995559f\") " pod="kube-system/kube-scheduler-ci-3510-3-5-a-cacadfe6a6.novalocal"
Jul 2 08:47:22.555517 kubelet[1617]: I0702 08:47:22.555385 1617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5639d9ba9225e19976ca8861c43e66ca-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-5-a-cacadfe6a6.novalocal\" (UID: \"5639d9ba9225e19976ca8861c43e66ca\") " pod="kube-system/kube-apiserver-ci-3510-3-5-a-cacadfe6a6.novalocal"
Jul 2 08:47:22.561573 kubelet[1617]: E0702 08:47:22.561424 1617 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-5-a-cacadfe6a6.novalocal?timeout=10s\": dial tcp 172.24.4.86:6443: connect: connection refused" interval="400ms"
Jul 2 08:47:22.659160 kubelet[1617]: I0702 08:47:22.659120 1617 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-5-a-cacadfe6a6.novalocal"
Jul 2 08:47:22.660720 kubelet[1617]: E0702 08:47:22.660674 1617 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.86:6443/api/v1/nodes\": dial tcp 172.24.4.86:6443: connect: connection refused" node="ci-3510-3-5-a-cacadfe6a6.novalocal"
Jul 2 08:47:22.833195 env[1140]: time="2024-07-02T08:47:22.832978122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-5-a-cacadfe6a6.novalocal,Uid:ba36d22ec825510607717f91abb645b0,Namespace:kube-system,Attempt:0,}"
Jul 2 08:47:22.842138 env[1140]: time="2024-07-02T08:47:22.842044943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-5-a-cacadfe6a6.novalocal,Uid:74489d34a7fe774fb5777c3df995559f,Namespace:kube-system,Attempt:0,}"
Jul 2 08:47:22.857742 env[1140]: time="2024-07-02T08:47:22.857652822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-5-a-cacadfe6a6.novalocal,Uid:5639d9ba9225e19976ca8861c43e66ca,Namespace:kube-system,Attempt:0,}"
Jul 2 08:47:22.962975 kubelet[1617]: E0702 08:47:22.962846 1617 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-5-a-cacadfe6a6.novalocal?timeout=10s\": dial tcp 172.24.4.86:6443: connect: connection refused" interval="800ms"
Jul 2 08:47:23.064376 kubelet[1617]: I0702 08:47:23.063702 1617 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-5-a-cacadfe6a6.novalocal"
Jul 2 08:47:23.064376 kubelet[1617]: E0702 08:47:23.064288 1617 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.86:6443/api/v1/nodes\": dial tcp 172.24.4.86:6443: connect: connection refused" node="ci-3510-3-5-a-cacadfe6a6.novalocal"
Jul 2 08:47:23.183422 kubelet[1617]: W0702 08:47:23.182182 1617 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-5-a-cacadfe6a6.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.86:6443: connect: connection refused
Jul 2 08:47:23.183422 kubelet[1617]: E0702 08:47:23.182366 1617 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-5-a-cacadfe6a6.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.86:6443: connect: connection refused
Jul 2 08:47:23.428771 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2416210141.mount: Deactivated successfully.
Jul 2 08:47:23.446132 env[1140]: time="2024-07-02T08:47:23.445938779Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:47:23.452272 env[1140]: time="2024-07-02T08:47:23.452104334Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:47:23.458705 env[1140]: time="2024-07-02T08:47:23.458647121Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:47:23.465885 env[1140]: time="2024-07-02T08:47:23.465793464Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:47:23.467014 kubelet[1617]: W0702 08:47:23.466866 1617 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.86:6443: connect: connection refused
Jul 2 08:47:23.467014 kubelet[1617]: E0702 08:47:23.466946 1617 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.86:6443: connect: connection refused
Jul 2 08:47:23.475503 env[1140]: time="2024-07-02T08:47:23.475413744Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:47:23.487284 env[1140]: time="2024-07-02T08:47:23.487195819Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:47:23.489822 env[1140]: time="2024-07-02T08:47:23.489744825Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:47:23.491583 env[1140]: time="2024-07-02T08:47:23.491535722Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:47:23.495262 env[1140]: time="2024-07-02T08:47:23.495141420Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:47:23.508954 env[1140]: time="2024-07-02T08:47:23.508870076Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:47:23.513439 env[1140]: time="2024-07-02T08:47:23.513405256Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:47:23.514639 env[1140]: time="2024-07-02T08:47:23.514618804Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:47:23.541215 env[1140]: time="2024-07-02T08:47:23.541075809Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 08:47:23.541469 env[1140]: time="2024-07-02T08:47:23.541168474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 08:47:23.541469 env[1140]: time="2024-07-02T08:47:23.541398668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 08:47:23.541832 env[1140]: time="2024-07-02T08:47:23.541773214Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/681984f0168586f5b002b6a2ae9f89467dd161c3704653f1132b9d978551263f pid=1655 runtime=io.containerd.runc.v2
Jul 2 08:47:23.563114 env[1140]: time="2024-07-02T08:47:23.562946896Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 08:47:23.563114 env[1140]: time="2024-07-02T08:47:23.562998694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 08:47:23.563114 env[1140]: time="2024-07-02T08:47:23.563014183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 08:47:23.563351 env[1140]: time="2024-07-02T08:47:23.563220602Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e885af696120b02cbd4a8932d5670cc91b2a50cc6671fcc634203581121d4f63 pid=1682 runtime=io.containerd.runc.v2
Jul 2 08:47:23.572403 systemd[1]: Started cri-containerd-681984f0168586f5b002b6a2ae9f89467dd161c3704653f1132b9d978551263f.scope.
Jul 2 08:47:23.578850 env[1140]: time="2024-07-02T08:47:23.578757005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 08:47:23.579110 env[1140]: time="2024-07-02T08:47:23.579083462Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 08:47:23.579232 env[1140]: time="2024-07-02T08:47:23.579208287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 08:47:23.579517 env[1140]: time="2024-07-02T08:47:23.579489247Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fd845de0f30ac8779e30064baf1d5652f96a3ab1908139f6a086b2b83c57df7e pid=1703 runtime=io.containerd.runc.v2
Jul 2 08:47:23.602523 systemd[1]: Started cri-containerd-e885af696120b02cbd4a8932d5670cc91b2a50cc6671fcc634203581121d4f63.scope.
Jul 2 08:47:23.611888 systemd[1]: Started cri-containerd-fd845de0f30ac8779e30064baf1d5652f96a3ab1908139f6a086b2b83c57df7e.scope.
Jul 2 08:47:23.682585 env[1140]: time="2024-07-02T08:47:23.682534977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-5-a-cacadfe6a6.novalocal,Uid:ba36d22ec825510607717f91abb645b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"e885af696120b02cbd4a8932d5670cc91b2a50cc6671fcc634203581121d4f63\""
Jul 2 08:47:23.688926 env[1140]: time="2024-07-02T08:47:23.688880973Z" level=info msg="CreateContainer within sandbox \"e885af696120b02cbd4a8932d5670cc91b2a50cc6671fcc634203581121d4f63\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 2 08:47:23.691210 env[1140]: time="2024-07-02T08:47:23.691166332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-5-a-cacadfe6a6.novalocal,Uid:74489d34a7fe774fb5777c3df995559f,Namespace:kube-system,Attempt:0,} returns sandbox id \"681984f0168586f5b002b6a2ae9f89467dd161c3704653f1132b9d978551263f\""
Jul 2 08:47:23.695917 env[1140]: time="2024-07-02T08:47:23.695851184Z" level=info msg="CreateContainer within sandbox \"681984f0168586f5b002b6a2ae9f89467dd161c3704653f1132b9d978551263f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 2 08:47:23.701316 env[1140]: time="2024-07-02T08:47:23.699765476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-5-a-cacadfe6a6.novalocal,Uid:5639d9ba9225e19976ca8861c43e66ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd845de0f30ac8779e30064baf1d5652f96a3ab1908139f6a086b2b83c57df7e\""
Jul 2 08:47:23.707171 env[1140]: time="2024-07-02T08:47:23.707124530Z" level=info msg="CreateContainer within sandbox \"fd845de0f30ac8779e30064baf1d5652f96a3ab1908139f6a086b2b83c57df7e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 2 08:47:23.735624 env[1140]: time="2024-07-02T08:47:23.735555646Z" level=info msg="CreateContainer within sandbox \"e885af696120b02cbd4a8932d5670cc91b2a50cc6671fcc634203581121d4f63\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"56c91a337dab3989a00d3c24380e40668646ff6eda97d2fe87f4ba45796fcded\""
Jul 2 08:47:23.736349 env[1140]: time="2024-07-02T08:47:23.736293037Z" level=info msg="StartContainer for \"56c91a337dab3989a00d3c24380e40668646ff6eda97d2fe87f4ba45796fcded\""
Jul 2 08:47:23.751192 env[1140]: time="2024-07-02T08:47:23.751150019Z" level=info msg="CreateContainer within sandbox \"fd845de0f30ac8779e30064baf1d5652f96a3ab1908139f6a086b2b83c57df7e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"130678843d9d7640c10dd992ebf35337d359f5956909d39414353696111e1fe9\""
Jul 2 08:47:23.751913 env[1140]: time="2024-07-02T08:47:23.751885647Z" level=info msg="StartContainer for \"130678843d9d7640c10dd992ebf35337d359f5956909d39414353696111e1fe9\""
Jul 2 08:47:23.757444 env[1140]: time="2024-07-02T08:47:23.755986599Z" level=info msg="CreateContainer within sandbox \"681984f0168586f5b002b6a2ae9f89467dd161c3704653f1132b9d978551263f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dc9ecc76249e63bedfcfa3430b306321bf1d5d1b118ad6878ad8473f75b6f84c\""
Jul 2 08:47:23.757253 systemd[1]: Started cri-containerd-56c91a337dab3989a00d3c24380e40668646ff6eda97d2fe87f4ba45796fcded.scope.
Jul 2 08:47:23.758529 env[1140]: time="2024-07-02T08:47:23.758503124Z" level=info msg="StartContainer for \"dc9ecc76249e63bedfcfa3430b306321bf1d5d1b118ad6878ad8473f75b6f84c\""
Jul 2 08:47:23.763590 kubelet[1617]: E0702 08:47:23.763492 1617 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-5-a-cacadfe6a6.novalocal?timeout=10s\": dial tcp 172.24.4.86:6443: connect: connection refused" interval="1.6s"
Jul 2 08:47:23.797001 systemd[1]: Started cri-containerd-130678843d9d7640c10dd992ebf35337d359f5956909d39414353696111e1fe9.scope.
Jul 2 08:47:23.804759 systemd[1]: Started cri-containerd-dc9ecc76249e63bedfcfa3430b306321bf1d5d1b118ad6878ad8473f75b6f84c.scope.
Jul 2 08:47:23.828892 kubelet[1617]: W0702 08:47:23.828755 1617 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.86:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.86:6443: connect: connection refused
Jul 2 08:47:23.828892 kubelet[1617]: E0702 08:47:23.828854 1617 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.86:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.86:6443: connect: connection refused
Jul 2 08:47:23.837239 kubelet[1617]: W0702 08:47:23.837102 1617 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.86:6443: connect: connection refused
Jul 2 08:47:23.837239 kubelet[1617]: E0702 08:47:23.837190 1617 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.86:6443: connect: connection refused
Jul 2 08:47:23.845273 env[1140]: time="2024-07-02T08:47:23.845185993Z" level=info msg="StartContainer for \"56c91a337dab3989a00d3c24380e40668646ff6eda97d2fe87f4ba45796fcded\" returns successfully"
Jul 2 08:47:23.867440 kubelet[1617]: I0702 08:47:23.867041 1617 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-5-a-cacadfe6a6.novalocal"
Jul 2 08:47:23.867440 kubelet[1617]: E0702 08:47:23.867395 1617 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.86:6443/api/v1/nodes\": dial tcp 172.24.4.86:6443: connect: connection refused" node="ci-3510-3-5-a-cacadfe6a6.novalocal"
Jul 2 08:47:23.891543 env[1140]: time="2024-07-02T08:47:23.891477875Z" level=info msg="StartContainer for \"dc9ecc76249e63bedfcfa3430b306321bf1d5d1b118ad6878ad8473f75b6f84c\" returns successfully"
Jul 2 08:47:23.915850 env[1140]: time="2024-07-02T08:47:23.915776821Z" level=info msg="StartContainer for \"130678843d9d7640c10dd992ebf35337d359f5956909d39414353696111e1fe9\" returns successfully"
Jul 2 08:47:24.387369 kubelet[1617]: E0702 08:47:24.387331 1617 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.86:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.86:6443: connect: connection refused
Jul 2 08:47:25.470693 kubelet[1617]: I0702 08:47:25.470548 1617 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-5-a-cacadfe6a6.novalocal"
Jul 2 08:47:26.666221 kubelet[1617]: E0702 08:47:26.666183 1617 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-5-a-cacadfe6a6.novalocal\" not found" node="ci-3510-3-5-a-cacadfe6a6.novalocal"
Jul 2 08:47:26.723134 kubelet[1617]: I0702 08:47:26.723103 1617 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510-3-5-a-cacadfe6a6.novalocal"
Jul 2 08:47:27.292586 kubelet[1617]: I0702 08:47:27.292540 1617 apiserver.go:52] "Watching apiserver"
Jul 2 08:47:27.352910 kubelet[1617]: I0702 08:47:27.352864 1617 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jul 2 08:47:29.116557 systemd[1]: Reloading.
Jul 2 08:47:29.298417 /usr/lib/systemd/system-generators/torcx-generator[1909]: time="2024-07-02T08:47:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]"
Jul 2 08:47:29.298451 /usr/lib/systemd/system-generators/torcx-generator[1909]: time="2024-07-02T08:47:29Z" level=info msg="torcx already run"
Jul 2 08:47:29.379193 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 2 08:47:29.379213 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 2 08:47:29.402284 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 08:47:29.526189 systemd[1]: Stopping kubelet.service...
Jul 2 08:47:29.540059 systemd[1]: kubelet.service: Deactivated successfully.
Jul 2 08:47:29.540331 systemd[1]: Stopped kubelet.service.
Jul 2 08:47:29.540391 systemd[1]: kubelet.service: Consumed 1.279s CPU time.
Jul 2 08:47:29.542141 systemd[1]: Starting kubelet.service...
Jul 2 08:47:32.197004 systemd[1]: Started kubelet.service.
Jul 2 08:47:32.322370 kubelet[1960]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 08:47:32.322758 kubelet[1960]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 08:47:32.322819 kubelet[1960]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 08:47:32.324930 kubelet[1960]: I0702 08:47:32.324866 1960 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 08:47:32.330576 kubelet[1960]: I0702 08:47:32.330553 1960 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jul 2 08:47:32.330920 kubelet[1960]: I0702 08:47:32.330908 1960 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 08:47:32.331491 kubelet[1960]: I0702 08:47:32.331476 1960 server.go:927] "Client rotation is on, will bootstrap in background"
Jul 2 08:47:32.333339 kubelet[1960]: I0702 08:47:32.333324 1960 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 2 08:47:32.337830 kubelet[1960]: I0702 08:47:32.337804 1960 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 08:47:32.345855 sudo[1973]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jul 2 08:47:32.346129 sudo[1973]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jul 2 08:47:32.350328 kubelet[1960]: I0702 08:47:32.349741 1960 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 08:47:32.350328 kubelet[1960]: I0702 08:47:32.349955 1960 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 08:47:32.350328 kubelet[1960]: I0702 08:47:32.349986 1960 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-5-a-cacadfe6a6.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 08:47:32.350328 kubelet[1960]: I0702 08:47:32.350323 1960 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 08:47:32.350592 kubelet[1960]: I0702 08:47:32.350335 1960 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 08:47:32.350592 kubelet[1960]: I0702 08:47:32.350372 1960 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 08:47:32.350592 kubelet[1960]: I0702 08:47:32.350457 1960 kubelet.go:400] "Attempting to sync node with API server"
Jul 2 08:47:32.350592 kubelet[1960]: I0702 08:47:32.350470 1960 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 08:47:32.350592 kubelet[1960]: I0702 08:47:32.350492 1960 kubelet.go:312] "Adding apiserver pod source"
Jul 2 08:47:32.350592 kubelet[1960]: I0702 08:47:32.350507 1960 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 08:47:32.360447 kubelet[1960]: I0702 08:47:32.360417 1960 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Jul 2 08:47:32.361868 kubelet[1960]: I0702 08:47:32.361714 1960 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 2 08:47:32.378639 kubelet[1960]: I0702 08:47:32.374953 1960 server.go:1264] "Started kubelet"
Jul 2 08:47:32.389278 kubelet[1960]: I0702 08:47:32.388688 1960 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 08:47:32.394741 kubelet[1960]: I0702 08:47:32.394694 1960 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 08:47:32.395687 kubelet[1960]: I0702 08:47:32.395659 1960 server.go:455] "Adding debug handlers to kubelet server"
Jul 2 08:47:32.400188 kubelet[1960]: I0702 08:47:32.399990 1960 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 2 08:47:32.400306 kubelet[1960]: I0702 08:47:32.400233 1960 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 08:47:32.405967 kubelet[1960]: I0702 08:47:32.405949 1960 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 08:47:32.408584 kubelet[1960]: I0702 08:47:32.406326 1960 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jul 2 08:47:32.408584 kubelet[1960]: I0702 08:47:32.406441 1960 reconciler.go:26] "Reconciler: start to sync state"
Jul 2 08:47:32.410583 kubelet[1960]: I0702 08:47:32.410552 1960 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 2 08:47:32.420634 kubelet[1960]: I0702 08:47:32.419919 1960 factory.go:221] Registration of the containerd container factory successfully
Jul 2 08:47:32.420634 kubelet[1960]: I0702 08:47:32.419942 1960 factory.go:221] Registration of the systemd container factory successfully
Jul 2 08:47:32.438532 kubelet[1960]: I0702 08:47:32.438471 1960 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 08:47:32.440720 kubelet[1960]: I0702 08:47:32.440702 1960 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 08:47:32.440897 kubelet[1960]: I0702 08:47:32.440885 1960 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 08:47:32.441050 kubelet[1960]: I0702 08:47:32.441039 1960 kubelet.go:2337] "Starting kubelet main sync loop"
Jul 2 08:47:32.441180 kubelet[1960]: E0702 08:47:32.441163 1960 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 08:47:32.510500 kubelet[1960]: I0702 08:47:32.510477 1960 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-5-a-cacadfe6a6.novalocal"
Jul 2 08:47:32.521993 kubelet[1960]: I0702 08:47:32.521940 1960 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510-3-5-a-cacadfe6a6.novalocal"
Jul 2 08:47:32.522841 kubelet[1960]: I0702 08:47:32.522827 1960 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510-3-5-a-cacadfe6a6.novalocal"
Jul 2 08:47:32.538311 kubelet[1960]: I0702 08:47:32.538289 1960 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 08:47:32.538903 kubelet[1960]: I0702 08:47:32.538890 1960 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 08:47:32.539035 kubelet[1960]: I0702 08:47:32.539025 1960 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 08:47:32.539361 kubelet[1960]: I0702 08:47:32.539347 1960 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 2 08:47:32.539453 kubelet[1960]: I0702 08:47:32.539425 1960 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 2 08:47:32.539680 kubelet[1960]: I0702 08:47:32.539670 1960 policy_none.go:49] "None policy: Start"
Jul 2 08:47:32.540743 kubelet[1960]: I0702 08:47:32.540729 1960 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 2 08:47:32.540908 kubelet[1960]: I0702 08:47:32.540898 1960 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 08:47:32.541492 kubelet[1960]: I0702 08:47:32.541409 1960 state_mem.go:75] "Updated machine memory state"
Jul 2 08:47:32.544548 kubelet[1960]: E0702 08:47:32.544526 1960 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 2 08:47:32.554852 kubelet[1960]: I0702 08:47:32.554830 1960 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 08:47:32.556258 kubelet[1960]: I0702 08:47:32.556222 1960 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 2 08:47:32.558665 kubelet[1960]: I0702 08:47:32.558647 1960 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 08:47:32.745054 kubelet[1960]: I0702 08:47:32.745012 1960 topology_manager.go:215] "Topology Admit Handler" podUID="5639d9ba9225e19976ca8861c43e66ca" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-5-a-cacadfe6a6.novalocal"
Jul 2 08:47:32.745314 kubelet[1960]: I0702 08:47:32.745298 1960 topology_manager.go:215] "Topology Admit Handler" podUID="ba36d22ec825510607717f91abb645b0" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-5-a-cacadfe6a6.novalocal"
Jul 2 08:47:32.745433 kubelet[1960]: I0702 08:47:32.745419 1960 topology_manager.go:215] "Topology Admit Handler" podUID="74489d34a7fe774fb5777c3df995559f" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-5-a-cacadfe6a6.novalocal"
Jul 2 08:47:32.763666 kubelet[1960]: W0702 08:47:32.763549 1960 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 2 08:47:32.771987 kubelet[1960]: W0702 08:47:32.771946 1960 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 2 08:47:32.772831 kubelet[1960]: W0702 08:47:32.772816 1960 warnings.go:70]
metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 08:47:32.808405 kubelet[1960]: I0702 08:47:32.808379 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5639d9ba9225e19976ca8861c43e66ca-ca-certs\") pod \"kube-apiserver-ci-3510-3-5-a-cacadfe6a6.novalocal\" (UID: \"5639d9ba9225e19976ca8861c43e66ca\") " pod="kube-system/kube-apiserver-ci-3510-3-5-a-cacadfe6a6.novalocal" Jul 2 08:47:32.808592 kubelet[1960]: I0702 08:47:32.808574 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5639d9ba9225e19976ca8861c43e66ca-k8s-certs\") pod \"kube-apiserver-ci-3510-3-5-a-cacadfe6a6.novalocal\" (UID: \"5639d9ba9225e19976ca8861c43e66ca\") " pod="kube-system/kube-apiserver-ci-3510-3-5-a-cacadfe6a6.novalocal" Jul 2 08:47:32.808730 kubelet[1960]: I0702 08:47:32.808711 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5639d9ba9225e19976ca8861c43e66ca-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-5-a-cacadfe6a6.novalocal\" (UID: \"5639d9ba9225e19976ca8861c43e66ca\") " pod="kube-system/kube-apiserver-ci-3510-3-5-a-cacadfe6a6.novalocal" Jul 2 08:47:32.808848 kubelet[1960]: I0702 08:47:32.808834 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ba36d22ec825510607717f91abb645b0-ca-certs\") pod \"kube-controller-manager-ci-3510-3-5-a-cacadfe6a6.novalocal\" (UID: \"ba36d22ec825510607717f91abb645b0\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-a-cacadfe6a6.novalocal" Jul 2 08:47:32.808987 kubelet[1960]: I0702 08:47:32.808972 1960 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ba36d22ec825510607717f91abb645b0-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-5-a-cacadfe6a6.novalocal\" (UID: \"ba36d22ec825510607717f91abb645b0\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-a-cacadfe6a6.novalocal" Jul 2 08:47:32.809123 kubelet[1960]: I0702 08:47:32.809108 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/74489d34a7fe774fb5777c3df995559f-kubeconfig\") pod \"kube-scheduler-ci-3510-3-5-a-cacadfe6a6.novalocal\" (UID: \"74489d34a7fe774fb5777c3df995559f\") " pod="kube-system/kube-scheduler-ci-3510-3-5-a-cacadfe6a6.novalocal" Jul 2 08:47:32.809248 kubelet[1960]: I0702 08:47:32.809234 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ba36d22ec825510607717f91abb645b0-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-5-a-cacadfe6a6.novalocal\" (UID: \"ba36d22ec825510607717f91abb645b0\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-a-cacadfe6a6.novalocal" Jul 2 08:47:32.809384 kubelet[1960]: I0702 08:47:32.809367 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ba36d22ec825510607717f91abb645b0-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-5-a-cacadfe6a6.novalocal\" (UID: \"ba36d22ec825510607717f91abb645b0\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-a-cacadfe6a6.novalocal" Jul 2 08:47:32.809513 kubelet[1960]: I0702 08:47:32.809497 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ba36d22ec825510607717f91abb645b0-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-3510-3-5-a-cacadfe6a6.novalocal\" (UID: \"ba36d22ec825510607717f91abb645b0\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-a-cacadfe6a6.novalocal" Jul 2 08:47:33.146799 sudo[1973]: pam_unix(sudo:session): session closed for user root Jul 2 08:47:33.360866 kubelet[1960]: I0702 08:47:33.360818 1960 apiserver.go:52] "Watching apiserver" Jul 2 08:47:33.406977 kubelet[1960]: I0702 08:47:33.406898 1960 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 08:47:33.536140 kubelet[1960]: W0702 08:47:33.535932 1960 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 08:47:33.536140 kubelet[1960]: E0702 08:47:33.536066 1960 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510-3-5-a-cacadfe6a6.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-3510-3-5-a-cacadfe6a6.novalocal" Jul 2 08:47:33.587346 kubelet[1960]: I0702 08:47:33.587271 1960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-5-a-cacadfe6a6.novalocal" podStartSLOduration=1.587254756 podStartE2EDuration="1.587254756s" podCreationTimestamp="2024-07-02 08:47:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:47:33.574678834 +0000 UTC m=+1.345683264" watchObservedRunningTime="2024-07-02 08:47:33.587254756 +0000 UTC m=+1.358259206" Jul 2 08:47:33.587539 kubelet[1960]: I0702 08:47:33.587410 1960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-5-a-cacadfe6a6.novalocal" podStartSLOduration=1.5874032850000002 podStartE2EDuration="1.587403285s" podCreationTimestamp="2024-07-02 08:47:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:47:33.584697876 +0000 UTC m=+1.355702316" watchObservedRunningTime="2024-07-02 08:47:33.587403285 +0000 UTC m=+1.358407715" Jul 2 08:47:35.343984 sudo[1263]: pam_unix(sudo:session): session closed for user root Jul 2 08:47:35.523947 sshd[1259]: pam_unix(sshd:session): session closed for user core Jul 2 08:47:35.529309 systemd[1]: sshd@4-172.24.4.86:22-172.24.4.1:52006.service: Deactivated successfully. Jul 2 08:47:35.530946 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 08:47:35.531312 systemd[1]: session-5.scope: Consumed 7.909s CPU time. Jul 2 08:47:35.532234 systemd-logind[1134]: Session 5 logged out. Waiting for processes to exit. Jul 2 08:47:35.533581 systemd-logind[1134]: Removed session 5. Jul 2 08:47:37.674758 kubelet[1960]: I0702 08:47:37.674638 1960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-5-a-cacadfe6a6.novalocal" podStartSLOduration=5.674576549 podStartE2EDuration="5.674576549s" podCreationTimestamp="2024-07-02 08:47:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:47:33.597495426 +0000 UTC m=+1.368499866" watchObservedRunningTime="2024-07-02 08:47:37.674576549 +0000 UTC m=+5.445581029" Jul 2 08:47:43.701966 kubelet[1960]: I0702 08:47:43.701940 1960 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 08:47:43.702932 env[1140]: time="2024-07-02T08:47:43.702890756Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 2 08:47:43.703409 kubelet[1960]: I0702 08:47:43.703392 1960 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 2 08:47:44.589004 kubelet[1960]: I0702 08:47:44.588925 1960 topology_manager.go:215] "Topology Admit Handler" podUID="8fcb03b0-c88c-4a28-b335-1fa3ed500c7c" podNamespace="kube-system" podName="kube-proxy-dg6w6"
Jul 2 08:47:44.599400 systemd[1]: Created slice kubepods-besteffort-pod8fcb03b0_c88c_4a28_b335_1fa3ed500c7c.slice.
Jul 2 08:47:44.626036 kubelet[1960]: I0702 08:47:44.625972 1960 topology_manager.go:215] "Topology Admit Handler" podUID="2d06b33d-1667-4cbc-a3e8-6dc3e332dd25" podNamespace="kube-system" podName="cilium-rlrq9"
Jul 2 08:47:44.633526 systemd[1]: Created slice kubepods-burstable-pod2d06b33d_1667_4cbc_a3e8_6dc3e332dd25.slice.
Jul 2 08:47:44.701528 kubelet[1960]: I0702 08:47:44.701488 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8fcb03b0-c88c-4a28-b335-1fa3ed500c7c-kube-proxy\") pod \"kube-proxy-dg6w6\" (UID: \"8fcb03b0-c88c-4a28-b335-1fa3ed500c7c\") " pod="kube-system/kube-proxy-dg6w6"
Jul 2 08:47:44.701690 kubelet[1960]: I0702 08:47:44.701535 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8fcb03b0-c88c-4a28-b335-1fa3ed500c7c-lib-modules\") pod \"kube-proxy-dg6w6\" (UID: \"8fcb03b0-c88c-4a28-b335-1fa3ed500c7c\") " pod="kube-system/kube-proxy-dg6w6"
Jul 2 08:47:44.701690 kubelet[1960]: I0702 08:47:44.701560 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bghbc\" (UniqueName: \"kubernetes.io/projected/8fcb03b0-c88c-4a28-b335-1fa3ed500c7c-kube-api-access-bghbc\") pod \"kube-proxy-dg6w6\" (UID: \"8fcb03b0-c88c-4a28-b335-1fa3ed500c7c\") " pod="kube-system/kube-proxy-dg6w6"
Jul 2 08:47:44.701690 kubelet[1960]: I0702 08:47:44.701584 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8fcb03b0-c88c-4a28-b335-1fa3ed500c7c-xtables-lock\") pod \"kube-proxy-dg6w6\" (UID: \"8fcb03b0-c88c-4a28-b335-1fa3ed500c7c\") " pod="kube-system/kube-proxy-dg6w6"
Jul 2 08:47:44.785802 kubelet[1960]: I0702 08:47:44.785735 1960 topology_manager.go:215] "Topology Admit Handler" podUID="509bc215-5550-4aad-9cde-2a1717d70a67" podNamespace="kube-system" podName="cilium-operator-599987898-56jjr"
Jul 2 08:47:44.793393 systemd[1]: Created slice kubepods-besteffort-pod509bc215_5550_4aad_9cde_2a1717d70a67.slice.
Jul 2 08:47:44.802260 kubelet[1960]: I0702 08:47:44.802205 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-cni-path\") pod \"cilium-rlrq9\" (UID: \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\") " pod="kube-system/cilium-rlrq9"
Jul 2 08:47:44.802497 kubelet[1960]: I0702 08:47:44.802272 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-host-proc-sys-kernel\") pod \"cilium-rlrq9\" (UID: \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\") " pod="kube-system/cilium-rlrq9"
Jul 2 08:47:44.802497 kubelet[1960]: I0702 08:47:44.802319 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-cilium-cgroup\") pod \"cilium-rlrq9\" (UID: \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\") " pod="kube-system/cilium-rlrq9"
Jul 2 08:47:44.802497 kubelet[1960]: I0702 08:47:44.802362 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcx6x\" (UniqueName: \"kubernetes.io/projected/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-kube-api-access-tcx6x\") pod \"cilium-rlrq9\" (UID: \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\") " pod="kube-system/cilium-rlrq9"
Jul 2 08:47:44.802497 kubelet[1960]: I0702 08:47:44.802384 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-clustermesh-secrets\") pod \"cilium-rlrq9\" (UID: \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\") " pod="kube-system/cilium-rlrq9"
Jul 2 08:47:44.802497 kubelet[1960]: I0702 08:47:44.802402 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-cilium-config-path\") pod \"cilium-rlrq9\" (UID: \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\") " pod="kube-system/cilium-rlrq9"
Jul 2 08:47:44.802975 kubelet[1960]: I0702 08:47:44.802444 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-host-proc-sys-net\") pod \"cilium-rlrq9\" (UID: \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\") " pod="kube-system/cilium-rlrq9"
Jul 2 08:47:44.802975 kubelet[1960]: I0702 08:47:44.802463 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-hostproc\") pod \"cilium-rlrq9\" (UID: \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\") " pod="kube-system/cilium-rlrq9"
Jul 2 08:47:44.802975 kubelet[1960]: I0702 08:47:44.802482 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-etc-cni-netd\") pod \"cilium-rlrq9\" (UID: \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\") " pod="kube-system/cilium-rlrq9"
Jul 2 08:47:44.802975 kubelet[1960]: I0702 08:47:44.802524 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-lib-modules\") pod \"cilium-rlrq9\" (UID: \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\") " pod="kube-system/cilium-rlrq9"
Jul 2 08:47:44.802975 kubelet[1960]: I0702 08:47:44.802547 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-cilium-run\") pod \"cilium-rlrq9\" (UID: \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\") " pod="kube-system/cilium-rlrq9"
Jul 2 08:47:44.802975 kubelet[1960]: I0702 08:47:44.802567 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-bpf-maps\") pod \"cilium-rlrq9\" (UID: \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\") " pod="kube-system/cilium-rlrq9"
Jul 2 08:47:44.803425 kubelet[1960]: I0702 08:47:44.802648 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-xtables-lock\") pod \"cilium-rlrq9\" (UID: \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\") " pod="kube-system/cilium-rlrq9"
Jul 2 08:47:44.803425 kubelet[1960]: I0702 08:47:44.802702 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-hubble-tls\") pod \"cilium-rlrq9\" (UID: \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\") " pod="kube-system/cilium-rlrq9"
Jul 2 08:47:44.905114 kubelet[1960]: I0702 08:47:44.905012 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/509bc215-5550-4aad-9cde-2a1717d70a67-cilium-config-path\") pod \"cilium-operator-599987898-56jjr\" (UID: \"509bc215-5550-4aad-9cde-2a1717d70a67\") " pod="kube-system/cilium-operator-599987898-56jjr"
Jul 2 08:47:44.905328 kubelet[1960]: I0702 08:47:44.905305 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cc9q\" (UniqueName: \"kubernetes.io/projected/509bc215-5550-4aad-9cde-2a1717d70a67-kube-api-access-7cc9q\") pod \"cilium-operator-599987898-56jjr\" (UID: \"509bc215-5550-4aad-9cde-2a1717d70a67\") " pod="kube-system/cilium-operator-599987898-56jjr"
Jul 2 08:47:44.922394 env[1140]: time="2024-07-02T08:47:44.921986058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dg6w6,Uid:8fcb03b0-c88c-4a28-b335-1fa3ed500c7c,Namespace:kube-system,Attempt:0,}"
Jul 2 08:47:44.937286 env[1140]: time="2024-07-02T08:47:44.937237591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rlrq9,Uid:2d06b33d-1667-4cbc-a3e8-6dc3e332dd25,Namespace:kube-system,Attempt:0,}"
Jul 2 08:47:44.973861 env[1140]: time="2024-07-02T08:47:44.973756817Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 08:47:44.973861 env[1140]: time="2024-07-02T08:47:44.973831256Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 08:47:44.974120 env[1140]: time="2024-07-02T08:47:44.974063172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 08:47:44.974427 env[1140]: time="2024-07-02T08:47:44.974385267Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/38edd6b52df28707813508c3f49462e502010fb61e99e9d165c780e0889ce738 pid=2051 runtime=io.containerd.runc.v2
Jul 2 08:47:44.978145 env[1140]: time="2024-07-02T08:47:44.978047615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 08:47:44.978253 env[1140]: time="2024-07-02T08:47:44.978118549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 08:47:44.978253 env[1140]: time="2024-07-02T08:47:44.978160718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 08:47:44.978450 env[1140]: time="2024-07-02T08:47:44.978388074Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f6bb889e30d1008f7015550e7aaf895aeb727d3ebc63371725880ff322e40a65 pid=2053 runtime=io.containerd.runc.v2
Jul 2 08:47:44.992122 systemd[1]: Started cri-containerd-38edd6b52df28707813508c3f49462e502010fb61e99e9d165c780e0889ce738.scope.
Jul 2 08:47:45.032040 systemd[1]: Started cri-containerd-f6bb889e30d1008f7015550e7aaf895aeb727d3ebc63371725880ff322e40a65.scope.
Jul 2 08:47:45.067586 env[1140]: time="2024-07-02T08:47:45.067529147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rlrq9,Uid:2d06b33d-1667-4cbc-a3e8-6dc3e332dd25,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6bb889e30d1008f7015550e7aaf895aeb727d3ebc63371725880ff322e40a65\""
Jul 2 08:47:45.070918 env[1140]: time="2024-07-02T08:47:45.069693813Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jul 2 08:47:45.073374 env[1140]: time="2024-07-02T08:47:45.073342584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dg6w6,Uid:8fcb03b0-c88c-4a28-b335-1fa3ed500c7c,Namespace:kube-system,Attempt:0,} returns sandbox id \"38edd6b52df28707813508c3f49462e502010fb61e99e9d165c780e0889ce738\""
Jul 2 08:47:45.081690 env[1140]: time="2024-07-02T08:47:45.081662168Z" level=info msg="CreateContainer within sandbox \"38edd6b52df28707813508c3f49462e502010fb61e99e9d165c780e0889ce738\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 2 08:47:45.097827 env[1140]: time="2024-07-02T08:47:45.097777551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-56jjr,Uid:509bc215-5550-4aad-9cde-2a1717d70a67,Namespace:kube-system,Attempt:0,}"
Jul 2 08:47:45.128384 env[1140]: time="2024-07-02T08:47:45.128340405Z" level=info msg="CreateContainer within sandbox \"38edd6b52df28707813508c3f49462e502010fb61e99e9d165c780e0889ce738\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d94127b9157a417e485b440a92be574e522cdf40fc6eedfaea47cca76e0173be\""
Jul 2 08:47:45.132261 env[1140]: time="2024-07-02T08:47:45.132220833Z" level=info msg="StartContainer for \"d94127b9157a417e485b440a92be574e522cdf40fc6eedfaea47cca76e0173be\""
Jul 2 08:47:45.149499 env[1140]: time="2024-07-02T08:47:45.149299715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 08:47:45.149499 env[1140]: time="2024-07-02T08:47:45.149339861Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 08:47:45.149499 env[1140]: time="2024-07-02T08:47:45.149353907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 08:47:45.150721 env[1140]: time="2024-07-02T08:47:45.150654099Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a46ce9fcffa723c083ff230aa9d0f3393786052e52d88d254f406064eb699dcf pid=2131 runtime=io.containerd.runc.v2
Jul 2 08:47:45.162381 systemd[1]: Started cri-containerd-d94127b9157a417e485b440a92be574e522cdf40fc6eedfaea47cca76e0173be.scope.
Jul 2 08:47:45.179270 systemd[1]: Started cri-containerd-a46ce9fcffa723c083ff230aa9d0f3393786052e52d88d254f406064eb699dcf.scope.
Jul 2 08:47:45.228329 env[1140]: time="2024-07-02T08:47:45.228061513Z" level=info msg="StartContainer for \"d94127b9157a417e485b440a92be574e522cdf40fc6eedfaea47cca76e0173be\" returns successfully"
Jul 2 08:47:45.241296 env[1140]: time="2024-07-02T08:47:45.241255189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-56jjr,Uid:509bc215-5550-4aad-9cde-2a1717d70a67,Namespace:kube-system,Attempt:0,} returns sandbox id \"a46ce9fcffa723c083ff230aa9d0f3393786052e52d88d254f406064eb699dcf\""
Jul 2 08:47:45.547171 kubelet[1960]: I0702 08:47:45.547074 1960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dg6w6" podStartSLOduration=1.547019519 podStartE2EDuration="1.547019519s" podCreationTimestamp="2024-07-02 08:47:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:47:45.546907098 +0000 UTC m=+13.317911548" watchObservedRunningTime="2024-07-02 08:47:45.547019519 +0000 UTC m=+13.318023979"
Jul 2 08:47:54.223185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3688655168.mount: Deactivated successfully.
Jul 2 08:47:59.017714 env[1140]: time="2024-07-02T08:47:59.017322364Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:47:59.024085 env[1140]: time="2024-07-02T08:47:59.024000856Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:47:59.028083 env[1140]: time="2024-07-02T08:47:59.028022072Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:47:59.029815 env[1140]: time="2024-07-02T08:47:59.029715199Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jul 2 08:47:59.040979 env[1140]: time="2024-07-02T08:47:59.040748873Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jul 2 08:47:59.041631 env[1140]: time="2024-07-02T08:47:59.040973855Z" level=info msg="CreateContainer within sandbox \"f6bb889e30d1008f7015550e7aaf895aeb727d3ebc63371725880ff322e40a65\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 2 08:47:59.082049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2999414144.mount: Deactivated successfully.
Jul 2 08:47:59.102777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1830736455.mount: Deactivated successfully.
Jul 2 08:47:59.111222 env[1140]: time="2024-07-02T08:47:59.111088324Z" level=info msg="CreateContainer within sandbox \"f6bb889e30d1008f7015550e7aaf895aeb727d3ebc63371725880ff322e40a65\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"aeb242994bb58661431677f331429d94aa8c3042be7b9949cbddd3fb87b3ac77\""
Jul 2 08:47:59.116431 env[1140]: time="2024-07-02T08:47:59.113801866Z" level=info msg="StartContainer for \"aeb242994bb58661431677f331429d94aa8c3042be7b9949cbddd3fb87b3ac77\""
Jul 2 08:47:59.146316 systemd[1]: Started cri-containerd-aeb242994bb58661431677f331429d94aa8c3042be7b9949cbddd3fb87b3ac77.scope.
Jul 2 08:47:59.189118 env[1140]: time="2024-07-02T08:47:59.189058823Z" level=info msg="StartContainer for \"aeb242994bb58661431677f331429d94aa8c3042be7b9949cbddd3fb87b3ac77\" returns successfully"
Jul 2 08:47:59.196261 systemd[1]: cri-containerd-aeb242994bb58661431677f331429d94aa8c3042be7b9949cbddd3fb87b3ac77.scope: Deactivated successfully.
Jul 2 08:47:59.659683 env[1140]: time="2024-07-02T08:47:59.659486981Z" level=info msg="shim disconnected" id=aeb242994bb58661431677f331429d94aa8c3042be7b9949cbddd3fb87b3ac77
Jul 2 08:47:59.660375 env[1140]: time="2024-07-02T08:47:59.659691976Z" level=warning msg="cleaning up after shim disconnected" id=aeb242994bb58661431677f331429d94aa8c3042be7b9949cbddd3fb87b3ac77 namespace=k8s.io
Jul 2 08:47:59.660375 env[1140]: time="2024-07-02T08:47:59.659728554Z" level=info msg="cleaning up dead shim"
Jul 2 08:47:59.684312 env[1140]: time="2024-07-02T08:47:59.684220733Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:47:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2368 runtime=io.containerd.runc.v2\n"
Jul 2 08:47:59.953357 env[1140]: time="2024-07-02T08:47:59.953144583Z" level=info msg="CreateContainer within sandbox \"f6bb889e30d1008f7015550e7aaf895aeb727d3ebc63371725880ff322e40a65\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 2 08:47:59.999705 env[1140]: time="2024-07-02T08:47:59.999559520Z" level=info msg="CreateContainer within sandbox \"f6bb889e30d1008f7015550e7aaf895aeb727d3ebc63371725880ff322e40a65\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cb3c282c748aacbc9a5ebbe3623d5d4a9a0af7eb65f376f41d14236d52847c89\""
Jul 2 08:48:00.001148 env[1140]: time="2024-07-02T08:48:00.001042403Z" level=info msg="StartContainer for \"cb3c282c748aacbc9a5ebbe3623d5d4a9a0af7eb65f376f41d14236d52847c89\""
Jul 2 08:48:00.033041 systemd[1]: Started cri-containerd-cb3c282c748aacbc9a5ebbe3623d5d4a9a0af7eb65f376f41d14236d52847c89.scope.
Jul 2 08:48:00.073873 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aeb242994bb58661431677f331429d94aa8c3042be7b9949cbddd3fb87b3ac77-rootfs.mount: Deactivated successfully.
Jul 2 08:48:00.101151 env[1140]: time="2024-07-02T08:48:00.101082348Z" level=info msg="StartContainer for \"cb3c282c748aacbc9a5ebbe3623d5d4a9a0af7eb65f376f41d14236d52847c89\" returns successfully"
Jul 2 08:48:00.111764 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 08:48:00.112081 systemd[1]: Stopped systemd-sysctl.service.
Jul 2 08:48:00.113797 systemd[1]: Stopping systemd-sysctl.service...
Jul 2 08:48:00.115242 systemd[1]: Starting systemd-sysctl.service...
Jul 2 08:48:00.118095 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 2 08:48:00.124645 systemd[1]: cri-containerd-cb3c282c748aacbc9a5ebbe3623d5d4a9a0af7eb65f376f41d14236d52847c89.scope: Deactivated successfully.
Jul 2 08:48:00.148977 systemd[1]: Finished systemd-sysctl.service.
Jul 2 08:48:00.156116 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb3c282c748aacbc9a5ebbe3623d5d4a9a0af7eb65f376f41d14236d52847c89-rootfs.mount: Deactivated successfully.
Jul 2 08:48:00.165861 env[1140]: time="2024-07-02T08:48:00.165801910Z" level=info msg="shim disconnected" id=cb3c282c748aacbc9a5ebbe3623d5d4a9a0af7eb65f376f41d14236d52847c89
Jul 2 08:48:00.165861 env[1140]: time="2024-07-02T08:48:00.165861061Z" level=warning msg="cleaning up after shim disconnected" id=cb3c282c748aacbc9a5ebbe3623d5d4a9a0af7eb65f376f41d14236d52847c89 namespace=k8s.io
Jul 2 08:48:00.166133 env[1140]: time="2024-07-02T08:48:00.165873163Z" level=info msg="cleaning up dead shim"
Jul 2 08:48:00.173949 env[1140]: time="2024-07-02T08:48:00.173897961Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:48:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2431 runtime=io.containerd.runc.v2\n"
Jul 2 08:48:01.070243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1840272885.mount: Deactivated successfully.
Jul 2 08:48:01.465760 env[1140]: time="2024-07-02T08:48:01.227787604Z" level=info msg="CreateContainer within sandbox \"f6bb889e30d1008f7015550e7aaf895aeb727d3ebc63371725880ff322e40a65\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 2 08:48:01.939170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3980713098.mount: Deactivated successfully.
Jul 2 08:48:01.954543 env[1140]: time="2024-07-02T08:48:01.954454552Z" level=info msg="CreateContainer within sandbox \"f6bb889e30d1008f7015550e7aaf895aeb727d3ebc63371725880ff322e40a65\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3c56c8ffbea707fb0d1d7420c21e7c457b879f0c55ddf0cbc04ba58c36e349f8\""
Jul 2 08:48:01.958287 env[1140]: time="2024-07-02T08:48:01.958218986Z" level=info msg="StartContainer for \"3c56c8ffbea707fb0d1d7420c21e7c457b879f0c55ddf0cbc04ba58c36e349f8\""
Jul 2 08:48:02.017746 systemd[1]: Started cri-containerd-3c56c8ffbea707fb0d1d7420c21e7c457b879f0c55ddf0cbc04ba58c36e349f8.scope.
Jul 2 08:48:02.059006 env[1140]: time="2024-07-02T08:48:02.058942697Z" level=info msg="StartContainer for \"3c56c8ffbea707fb0d1d7420c21e7c457b879f0c55ddf0cbc04ba58c36e349f8\" returns successfully"
Jul 2 08:48:02.062492 systemd[1]: cri-containerd-3c56c8ffbea707fb0d1d7420c21e7c457b879f0c55ddf0cbc04ba58c36e349f8.scope: Deactivated successfully.
Jul 2 08:48:02.085031 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c56c8ffbea707fb0d1d7420c21e7c457b879f0c55ddf0cbc04ba58c36e349f8-rootfs.mount: Deactivated successfully.
Jul 2 08:48:02.118750 env[1140]: time="2024-07-02T08:48:02.118692596Z" level=info msg="shim disconnected" id=3c56c8ffbea707fb0d1d7420c21e7c457b879f0c55ddf0cbc04ba58c36e349f8
Jul 2 08:48:02.118999 env[1140]: time="2024-07-02T08:48:02.118753420Z" level=warning msg="cleaning up after shim disconnected" id=3c56c8ffbea707fb0d1d7420c21e7c457b879f0c55ddf0cbc04ba58c36e349f8 namespace=k8s.io
Jul 2 08:48:02.118999 env[1140]: time="2024-07-02T08:48:02.118764581Z" level=info msg="cleaning up dead shim"
Jul 2 08:48:02.136475 env[1140]: time="2024-07-02T08:48:02.136422793Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:48:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2488 runtime=io.containerd.runc.v2\n"
Jul 2 08:48:02.968138 env[1140]: time="2024-07-02T08:48:02.968099463Z" level=info msg="CreateContainer within sandbox \"f6bb889e30d1008f7015550e7aaf895aeb727d3ebc63371725880ff322e40a65\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 2 08:48:02.995230 env[1140]: time="2024-07-02T08:48:02.995138565Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:48:03.003677 env[1140]: time="2024-07-02T08:48:03.003645698Z" level=info msg="ImageCreate event
&ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:48:03.009974 env[1140]: time="2024-07-02T08:48:03.009947772Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:48:03.010428 env[1140]: time="2024-07-02T08:48:03.010386565Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 2 08:48:03.013590 env[1140]: time="2024-07-02T08:48:03.013532428Z" level=info msg="CreateContainer within sandbox \"a46ce9fcffa723c083ff230aa9d0f3393786052e52d88d254f406064eb699dcf\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 2 08:48:03.022838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4272311496.mount: Deactivated successfully. 
Jul 2 08:48:03.034111 env[1140]: time="2024-07-02T08:48:03.034066575Z" level=info msg="CreateContainer within sandbox \"f6bb889e30d1008f7015550e7aaf895aeb727d3ebc63371725880ff322e40a65\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1af2126173e0f692b1ffd273ccf1c1b552c4d01d8a688fd287924b2714a70773\"" Jul 2 08:48:03.037801 env[1140]: time="2024-07-02T08:48:03.037734918Z" level=info msg="StartContainer for \"1af2126173e0f692b1ffd273ccf1c1b552c4d01d8a688fd287924b2714a70773\"" Jul 2 08:48:03.055698 env[1140]: time="2024-07-02T08:48:03.055620225Z" level=info msg="CreateContainer within sandbox \"a46ce9fcffa723c083ff230aa9d0f3393786052e52d88d254f406064eb699dcf\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ad5ccc8789bfd7d083e584a97de8dec01002375e60b38acb111e7baebaa9b617\"" Jul 2 08:48:03.056556 env[1140]: time="2024-07-02T08:48:03.056520424Z" level=info msg="StartContainer for \"ad5ccc8789bfd7d083e584a97de8dec01002375e60b38acb111e7baebaa9b617\"" Jul 2 08:48:03.061621 systemd[1]: Started cri-containerd-1af2126173e0f692b1ffd273ccf1c1b552c4d01d8a688fd287924b2714a70773.scope. Jul 2 08:48:03.068739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2578610995.mount: Deactivated successfully. Jul 2 08:48:03.106948 systemd[1]: run-containerd-runc-k8s.io-ad5ccc8789bfd7d083e584a97de8dec01002375e60b38acb111e7baebaa9b617-runc.CdACzo.mount: Deactivated successfully. Jul 2 08:48:03.107720 systemd[1]: cri-containerd-1af2126173e0f692b1ffd273ccf1c1b552c4d01d8a688fd287924b2714a70773.scope: Deactivated successfully. Jul 2 08:48:03.109727 systemd[1]: Started cri-containerd-ad5ccc8789bfd7d083e584a97de8dec01002375e60b38acb111e7baebaa9b617.scope. 
Jul 2 08:48:03.114331 env[1140]: time="2024-07-02T08:48:03.114287931Z" level=info msg="StartContainer for \"1af2126173e0f692b1ffd273ccf1c1b552c4d01d8a688fd287924b2714a70773\" returns successfully" Jul 2 08:48:03.142080 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1af2126173e0f692b1ffd273ccf1c1b552c4d01d8a688fd287924b2714a70773-rootfs.mount: Deactivated successfully. Jul 2 08:48:03.525736 env[1140]: time="2024-07-02T08:48:03.525553941Z" level=info msg="StartContainer for \"ad5ccc8789bfd7d083e584a97de8dec01002375e60b38acb111e7baebaa9b617\" returns successfully" Jul 2 08:48:03.528571 env[1140]: time="2024-07-02T08:48:03.528499587Z" level=info msg="shim disconnected" id=1af2126173e0f692b1ffd273ccf1c1b552c4d01d8a688fd287924b2714a70773 Jul 2 08:48:03.529655 env[1140]: time="2024-07-02T08:48:03.529528628Z" level=warning msg="cleaning up after shim disconnected" id=1af2126173e0f692b1ffd273ccf1c1b552c4d01d8a688fd287924b2714a70773 namespace=k8s.io Jul 2 08:48:03.529852 env[1140]: time="2024-07-02T08:48:03.529814304Z" level=info msg="cleaning up dead shim" Jul 2 08:48:03.545908 env[1140]: time="2024-07-02T08:48:03.545834271Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:48:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2580 runtime=io.containerd.runc.v2\n" Jul 2 08:48:03.965027 env[1140]: time="2024-07-02T08:48:03.964927488Z" level=info msg="CreateContainer within sandbox \"f6bb889e30d1008f7015550e7aaf895aeb727d3ebc63371725880ff322e40a65\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 08:48:03.992731 env[1140]: time="2024-07-02T08:48:03.992669539Z" level=info msg="CreateContainer within sandbox \"f6bb889e30d1008f7015550e7aaf895aeb727d3ebc63371725880ff322e40a65\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"495694a021be8b607e2c3349502f669d3f79204f5e63e703dd059a45405df173\"" Jul 2 08:48:03.993707 env[1140]: time="2024-07-02T08:48:03.993677449Z" level=info 
msg="StartContainer for \"495694a021be8b607e2c3349502f669d3f79204f5e63e703dd059a45405df173\"" Jul 2 08:48:04.020428 systemd[1]: Started cri-containerd-495694a021be8b607e2c3349502f669d3f79204f5e63e703dd059a45405df173.scope. Jul 2 08:48:04.200699 env[1140]: time="2024-07-02T08:48:04.200628703Z" level=info msg="StartContainer for \"495694a021be8b607e2c3349502f669d3f79204f5e63e703dd059a45405df173\" returns successfully" Jul 2 08:48:04.223036 systemd[1]: run-containerd-runc-k8s.io-495694a021be8b607e2c3349502f669d3f79204f5e63e703dd059a45405df173-runc.LMXjwY.mount: Deactivated successfully. Jul 2 08:48:04.559273 kubelet[1960]: I0702 08:48:04.559172 1960 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 08:48:04.598836 kubelet[1960]: I0702 08:48:04.597829 1960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-56jjr" podStartSLOduration=2.8285976440000002 podStartE2EDuration="20.597807884s" podCreationTimestamp="2024-07-02 08:47:44 +0000 UTC" firstStartedPulling="2024-07-02 08:47:45.242664126 +0000 UTC m=+13.013668556" lastFinishedPulling="2024-07-02 08:48:03.011874366 +0000 UTC m=+30.782878796" observedRunningTime="2024-07-02 08:48:04.199744194 +0000 UTC m=+31.970748624" watchObservedRunningTime="2024-07-02 08:48:04.597807884 +0000 UTC m=+32.368812314" Jul 2 08:48:04.598836 kubelet[1960]: I0702 08:48:04.598252 1960 topology_manager.go:215] "Topology Admit Handler" podUID="c80debf0-7d75-455a-af03-61ca08969acd" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jrnjf" Jul 2 08:48:04.604112 systemd[1]: Created slice kubepods-burstable-podc80debf0_7d75_455a_af03_61ca08969acd.slice. 
Jul 2 08:48:04.610722 kubelet[1960]: I0702 08:48:04.610695 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjtwq\" (UniqueName: \"kubernetes.io/projected/c80debf0-7d75-455a-af03-61ca08969acd-kube-api-access-hjtwq\") pod \"coredns-7db6d8ff4d-jrnjf\" (UID: \"c80debf0-7d75-455a-af03-61ca08969acd\") " pod="kube-system/coredns-7db6d8ff4d-jrnjf" Jul 2 08:48:04.610940 kubelet[1960]: I0702 08:48:04.610924 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c80debf0-7d75-455a-af03-61ca08969acd-config-volume\") pod \"coredns-7db6d8ff4d-jrnjf\" (UID: \"c80debf0-7d75-455a-af03-61ca08969acd\") " pod="kube-system/coredns-7db6d8ff4d-jrnjf" Jul 2 08:48:04.618482 kubelet[1960]: I0702 08:48:04.618446 1960 topology_manager.go:215] "Topology Admit Handler" podUID="4ef87d98-f8eb-4e28-b53a-db3ee0e0acc5" podNamespace="kube-system" podName="coredns-7db6d8ff4d-2k9jw" Jul 2 08:48:04.623666 systemd[1]: Created slice kubepods-burstable-pod4ef87d98_f8eb_4e28_b53a_db3ee0e0acc5.slice. 
Jul 2 08:48:04.711649 kubelet[1960]: I0702 08:48:04.711592 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4fn7\" (UniqueName: \"kubernetes.io/projected/4ef87d98-f8eb-4e28-b53a-db3ee0e0acc5-kube-api-access-b4fn7\") pod \"coredns-7db6d8ff4d-2k9jw\" (UID: \"4ef87d98-f8eb-4e28-b53a-db3ee0e0acc5\") " pod="kube-system/coredns-7db6d8ff4d-2k9jw" Jul 2 08:48:04.711816 kubelet[1960]: I0702 08:48:04.711657 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ef87d98-f8eb-4e28-b53a-db3ee0e0acc5-config-volume\") pod \"coredns-7db6d8ff4d-2k9jw\" (UID: \"4ef87d98-f8eb-4e28-b53a-db3ee0e0acc5\") " pod="kube-system/coredns-7db6d8ff4d-2k9jw" Jul 2 08:48:04.907958 env[1140]: time="2024-07-02T08:48:04.907767032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jrnjf,Uid:c80debf0-7d75-455a-af03-61ca08969acd,Namespace:kube-system,Attempt:0,}" Jul 2 08:48:04.929547 env[1140]: time="2024-07-02T08:48:04.929478035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2k9jw,Uid:4ef87d98-f8eb-4e28-b53a-db3ee0e0acc5,Namespace:kube-system,Attempt:0,}" Jul 2 08:48:08.026937 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Jul 2 08:48:08.030250 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 2 08:48:08.031423 systemd-networkd[973]: cilium_host: Link UP Jul 2 08:48:08.033307 systemd-networkd[973]: cilium_net: Link UP Jul 2 08:48:08.034713 systemd-networkd[973]: cilium_net: Gained carrier Jul 2 08:48:08.036915 systemd-networkd[973]: cilium_host: Gained carrier Jul 2 08:48:08.209808 systemd-networkd[973]: cilium_vxlan: Link UP Jul 2 08:48:08.209837 systemd-networkd[973]: cilium_vxlan: Gained carrier Jul 2 08:48:08.398328 systemd-networkd[973]: cilium_net: Gained IPv6LL Jul 2 08:48:09.084759 systemd-networkd[973]: 
cilium_host: Gained IPv6LL Jul 2 08:48:09.497645 kernel: NET: Registered PF_ALG protocol family Jul 2 08:48:10.158940 systemd-networkd[973]: cilium_vxlan: Gained IPv6LL Jul 2 08:48:10.702272 systemd-networkd[973]: lxc_health: Link UP Jul 2 08:48:10.712344 systemd-networkd[973]: lxc_health: Gained carrier Jul 2 08:48:10.712665 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 08:48:10.992592 kubelet[1960]: I0702 08:48:10.992527 1960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rlrq9" podStartSLOduration=13.028630999 podStartE2EDuration="26.992512329s" podCreationTimestamp="2024-07-02 08:47:44 +0000 UTC" firstStartedPulling="2024-07-02 08:47:45.069251903 +0000 UTC m=+12.840256333" lastFinishedPulling="2024-07-02 08:47:59.033133183 +0000 UTC m=+26.804137663" observedRunningTime="2024-07-02 08:48:05.010465767 +0000 UTC m=+32.781470248" watchObservedRunningTime="2024-07-02 08:48:10.992512329 +0000 UTC m=+38.763516779" Jul 2 08:48:11.245553 systemd-networkd[973]: lxc0bf85dbd32f4: Link UP Jul 2 08:48:11.252917 kernel: eth0: renamed from tmpe1ad6 Jul 2 08:48:11.269758 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc0bf85dbd32f4: link becomes ready Jul 2 08:48:11.269235 systemd-networkd[973]: lxc0bf85dbd32f4: Gained carrier Jul 2 08:48:11.270750 systemd-networkd[973]: lxc6695bdd913f8: Link UP Jul 2 08:48:11.274632 kernel: eth0: renamed from tmpddcbd Jul 2 08:48:11.285642 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6695bdd913f8: link becomes ready Jul 2 08:48:11.285871 systemd-networkd[973]: lxc6695bdd913f8: Gained carrier Jul 2 08:48:11.897909 systemd-networkd[973]: lxc_health: Gained IPv6LL Jul 2 08:48:12.551012 systemd-networkd[973]: lxc0bf85dbd32f4: Gained IPv6LL Jul 2 08:48:12.973343 systemd-networkd[973]: lxc6695bdd913f8: Gained IPv6LL Jul 2 08:48:15.776336 env[1140]: time="2024-07-02T08:48:15.776226256Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:48:15.776816 env[1140]: time="2024-07-02T08:48:15.776274425Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:48:15.776816 env[1140]: time="2024-07-02T08:48:15.776314941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:48:15.776816 env[1140]: time="2024-07-02T08:48:15.776505227Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ddcbdede66d5473d510e1263522e0fccd6e13cec3c915a05ab897ba7c86b545b pid=3132 runtime=io.containerd.runc.v2 Jul 2 08:48:15.799479 systemd[1]: run-containerd-runc-k8s.io-ddcbdede66d5473d510e1263522e0fccd6e13cec3c915a05ab897ba7c86b545b-runc.MFNgOH.mount: Deactivated successfully. Jul 2 08:48:15.804548 systemd[1]: Started cri-containerd-ddcbdede66d5473d510e1263522e0fccd6e13cec3c915a05ab897ba7c86b545b.scope. Jul 2 08:48:15.814583 env[1140]: time="2024-07-02T08:48:15.814059614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:48:15.814583 env[1140]: time="2024-07-02T08:48:15.814111130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:48:15.814583 env[1140]: time="2024-07-02T08:48:15.814128924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:48:15.819562 env[1140]: time="2024-07-02T08:48:15.815001675Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e1ad6a28c902b3a3e47c1b1b961235f5e0f44bb830395cdffe69c3bcb57a146a pid=3137 runtime=io.containerd.runc.v2 Jul 2 08:48:15.843688 systemd[1]: Started cri-containerd-e1ad6a28c902b3a3e47c1b1b961235f5e0f44bb830395cdffe69c3bcb57a146a.scope. Jul 2 08:48:15.899013 env[1140]: time="2024-07-02T08:48:15.898954475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jrnjf,Uid:c80debf0-7d75-455a-af03-61ca08969acd,Namespace:kube-system,Attempt:0,} returns sandbox id \"ddcbdede66d5473d510e1263522e0fccd6e13cec3c915a05ab897ba7c86b545b\"" Jul 2 08:48:15.904714 env[1140]: time="2024-07-02T08:48:15.904634761Z" level=info msg="CreateContainer within sandbox \"ddcbdede66d5473d510e1263522e0fccd6e13cec3c915a05ab897ba7c86b545b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 08:48:15.921324 env[1140]: time="2024-07-02T08:48:15.921280496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2k9jw,Uid:4ef87d98-f8eb-4e28-b53a-db3ee0e0acc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1ad6a28c902b3a3e47c1b1b961235f5e0f44bb830395cdffe69c3bcb57a146a\"" Jul 2 08:48:15.926529 env[1140]: time="2024-07-02T08:48:15.926473299Z" level=info msg="CreateContainer within sandbox \"e1ad6a28c902b3a3e47c1b1b961235f5e0f44bb830395cdffe69c3bcb57a146a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 08:48:15.940240 env[1140]: time="2024-07-02T08:48:15.940195522Z" level=info msg="CreateContainer within sandbox \"ddcbdede66d5473d510e1263522e0fccd6e13cec3c915a05ab897ba7c86b545b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a4a74632af600eb271f89a6a3cbf0d0ead04bde17d21645860ca738ac49a28c2\"" Jul 2 08:48:15.942513 env[1140]: 
time="2024-07-02T08:48:15.941416042Z" level=info msg="StartContainer for \"a4a74632af600eb271f89a6a3cbf0d0ead04bde17d21645860ca738ac49a28c2\"" Jul 2 08:48:15.950515 env[1140]: time="2024-07-02T08:48:15.950470141Z" level=info msg="CreateContainer within sandbox \"e1ad6a28c902b3a3e47c1b1b961235f5e0f44bb830395cdffe69c3bcb57a146a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c3d2e57161ad54ce4655da524629ebee5e74905a828629d2f38e2f5cb989e371\"" Jul 2 08:48:15.958781 env[1140]: time="2024-07-02T08:48:15.958738533Z" level=info msg="StartContainer for \"c3d2e57161ad54ce4655da524629ebee5e74905a828629d2f38e2f5cb989e371\"" Jul 2 08:48:15.977208 systemd[1]: Started cri-containerd-a4a74632af600eb271f89a6a3cbf0d0ead04bde17d21645860ca738ac49a28c2.scope. Jul 2 08:48:16.012447 systemd[1]: Started cri-containerd-c3d2e57161ad54ce4655da524629ebee5e74905a828629d2f38e2f5cb989e371.scope. Jul 2 08:48:16.085162 env[1140]: time="2024-07-02T08:48:16.085038412Z" level=info msg="StartContainer for \"a4a74632af600eb271f89a6a3cbf0d0ead04bde17d21645860ca738ac49a28c2\" returns successfully" Jul 2 08:48:16.090240 env[1140]: time="2024-07-02T08:48:16.090203084Z" level=info msg="StartContainer for \"c3d2e57161ad54ce4655da524629ebee5e74905a828629d2f38e2f5cb989e371\" returns successfully" Jul 2 08:48:17.063758 kubelet[1960]: I0702 08:48:17.063565 1960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-2k9jw" podStartSLOduration=33.063501938 podStartE2EDuration="33.063501938s" podCreationTimestamp="2024-07-02 08:47:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:48:17.032118982 +0000 UTC m=+44.803123492" watchObservedRunningTime="2024-07-02 08:48:17.063501938 +0000 UTC m=+44.834506418" Jul 2 08:48:18.049932 kubelet[1960]: I0702 08:48:18.049799 1960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-7db6d8ff4d-jrnjf" podStartSLOduration=34.049731147 podStartE2EDuration="34.049731147s" podCreationTimestamp="2024-07-02 08:47:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:48:17.066070819 +0000 UTC m=+44.837075329" watchObservedRunningTime="2024-07-02 08:48:18.049731147 +0000 UTC m=+45.820735627" Jul 2 08:48:40.261154 systemd[1]: Started sshd@5-172.24.4.86:22-172.24.4.1:38892.service. Jul 2 08:48:41.522370 sshd[3292]: Accepted publickey for core from 172.24.4.1 port 38892 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q Jul 2 08:48:41.527135 sshd[3292]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:48:41.539728 systemd-logind[1134]: New session 6 of user core. Jul 2 08:48:41.543175 systemd[1]: Started session-6.scope. Jul 2 08:48:42.455367 sshd[3292]: pam_unix(sshd:session): session closed for user core Jul 2 08:48:42.463778 systemd[1]: sshd@5-172.24.4.86:22-172.24.4.1:38892.service: Deactivated successfully. Jul 2 08:48:42.465359 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 08:48:42.466898 systemd-logind[1134]: Session 6 logged out. Waiting for processes to exit. Jul 2 08:48:42.469809 systemd-logind[1134]: Removed session 6. Jul 2 08:48:47.463898 systemd[1]: Started sshd@6-172.24.4.86:22-172.24.4.1:54148.service. Jul 2 08:48:49.026593 sshd[3308]: Accepted publickey for core from 172.24.4.1 port 54148 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q Jul 2 08:48:49.029816 sshd[3308]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:48:49.040941 systemd-logind[1134]: New session 7 of user core. Jul 2 08:48:49.041383 systemd[1]: Started session-7.scope. 
Jul 2 08:48:49.944422 sshd[3308]: pam_unix(sshd:session): session closed for user core Jul 2 08:48:49.950121 systemd[1]: sshd@6-172.24.4.86:22-172.24.4.1:54148.service: Deactivated successfully. Jul 2 08:48:49.951782 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 08:48:49.953208 systemd-logind[1134]: Session 7 logged out. Waiting for processes to exit. Jul 2 08:48:49.956511 systemd-logind[1134]: Removed session 7. Jul 2 08:48:54.954731 systemd[1]: Started sshd@7-172.24.4.86:22-172.24.4.1:54156.service. Jul 2 08:48:56.163956 sshd[3321]: Accepted publickey for core from 172.24.4.1 port 54156 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q Jul 2 08:48:56.170816 sshd[3321]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:48:56.186793 systemd-logind[1134]: New session 8 of user core. Jul 2 08:48:56.189536 systemd[1]: Started session-8.scope. Jul 2 08:48:56.930824 sshd[3321]: pam_unix(sshd:session): session closed for user core Jul 2 08:48:56.936270 systemd[1]: sshd@7-172.24.4.86:22-172.24.4.1:54156.service: Deactivated successfully. Jul 2 08:48:56.938700 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 08:48:56.940821 systemd-logind[1134]: Session 8 logged out. Waiting for processes to exit. Jul 2 08:48:56.944553 systemd-logind[1134]: Removed session 8. Jul 2 08:49:01.927789 systemd[1]: Started sshd@8-172.24.4.86:22-172.24.4.1:54158.service. Jul 2 08:49:03.462766 sshd[3333]: Accepted publickey for core from 172.24.4.1 port 54158 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q Jul 2 08:49:03.465149 sshd[3333]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:49:03.476904 systemd-logind[1134]: New session 9 of user core. Jul 2 08:49:03.478140 systemd[1]: Started session-9.scope. Jul 2 08:49:04.306146 sshd[3333]: pam_unix(sshd:session): session closed for user core Jul 2 08:49:04.313559 systemd[1]: Started sshd@9-172.24.4.86:22-172.24.4.1:54170.service. 
Jul 2 08:49:04.317574 systemd[1]: sshd@8-172.24.4.86:22-172.24.4.1:54158.service: Deactivated successfully. Jul 2 08:49:04.319462 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 08:49:04.324835 systemd-logind[1134]: Session 9 logged out. Waiting for processes to exit. Jul 2 08:49:04.327569 systemd-logind[1134]: Removed session 9. Jul 2 08:49:05.577917 sshd[3345]: Accepted publickey for core from 172.24.4.1 port 54170 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q Jul 2 08:49:05.580351 sshd[3345]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:49:05.589148 systemd[1]: Started session-10.scope. Jul 2 08:49:05.589773 systemd-logind[1134]: New session 10 of user core. Jul 2 08:49:06.400885 sshd[3345]: pam_unix(sshd:session): session closed for user core Jul 2 08:49:06.408464 systemd[1]: sshd@9-172.24.4.86:22-172.24.4.1:54170.service: Deactivated successfully. Jul 2 08:49:06.410549 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 08:49:06.414150 systemd-logind[1134]: Session 10 logged out. Waiting for processes to exit. Jul 2 08:49:06.417800 systemd[1]: Started sshd@10-172.24.4.86:22-172.24.4.1:41280.service. Jul 2 08:49:06.430458 systemd-logind[1134]: Removed session 10. Jul 2 08:49:07.766350 sshd[3356]: Accepted publickey for core from 172.24.4.1 port 41280 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q Jul 2 08:49:07.769213 sshd[3356]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:49:07.780684 systemd-logind[1134]: New session 11 of user core. Jul 2 08:49:07.781828 systemd[1]: Started session-11.scope. Jul 2 08:49:08.595538 sshd[3356]: pam_unix(sshd:session): session closed for user core Jul 2 08:49:08.601937 systemd[1]: sshd@10-172.24.4.86:22-172.24.4.1:41280.service: Deactivated successfully. Jul 2 08:49:08.604192 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 08:49:08.606244 systemd-logind[1134]: Session 11 logged out. 
Waiting for processes to exit. Jul 2 08:49:08.609030 systemd-logind[1134]: Removed session 11. Jul 2 08:49:13.602946 systemd[1]: Started sshd@11-172.24.4.86:22-172.24.4.1:41296.service. Jul 2 08:49:15.206922 sshd[3368]: Accepted publickey for core from 172.24.4.1 port 41296 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q Jul 2 08:49:15.211774 sshd[3368]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:49:15.224113 systemd-logind[1134]: New session 12 of user core. Jul 2 08:49:15.225004 systemd[1]: Started session-12.scope. Jul 2 08:49:15.986574 sshd[3368]: pam_unix(sshd:session): session closed for user core Jul 2 08:49:15.992534 systemd[1]: sshd@11-172.24.4.86:22-172.24.4.1:41296.service: Deactivated successfully. Jul 2 08:49:15.993016 systemd-logind[1134]: Session 12 logged out. Waiting for processes to exit. Jul 2 08:49:15.994110 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 08:49:15.996076 systemd-logind[1134]: Removed session 12. Jul 2 08:49:20.997877 systemd[1]: Started sshd@12-172.24.4.86:22-172.24.4.1:40844.service. Jul 2 08:49:22.375664 sshd[3382]: Accepted publickey for core from 172.24.4.1 port 40844 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q Jul 2 08:49:22.378383 sshd[3382]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:49:22.389845 systemd-logind[1134]: New session 13 of user core. Jul 2 08:49:22.390524 systemd[1]: Started session-13.scope. Jul 2 08:49:23.311975 sshd[3382]: pam_unix(sshd:session): session closed for user core Jul 2 08:49:23.318739 systemd[1]: sshd@12-172.24.4.86:22-172.24.4.1:40844.service: Deactivated successfully. Jul 2 08:49:23.320159 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 08:49:23.322275 systemd-logind[1134]: Session 13 logged out. Waiting for processes to exit. Jul 2 08:49:23.325513 systemd[1]: Started sshd@13-172.24.4.86:22-172.24.4.1:40858.service. 
Jul 2 08:49:23.330022 systemd-logind[1134]: Removed session 13. Jul 2 08:49:24.626825 sshd[3394]: Accepted publickey for core from 172.24.4.1 port 40858 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q Jul 2 08:49:24.629588 sshd[3394]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:49:24.642043 systemd-logind[1134]: New session 14 of user core. Jul 2 08:49:24.644205 systemd[1]: Started session-14.scope. Jul 2 08:49:26.213302 sshd[3394]: pam_unix(sshd:session): session closed for user core Jul 2 08:49:26.226933 systemd[1]: Started sshd@14-172.24.4.86:22-172.24.4.1:54704.service. Jul 2 08:49:26.232333 systemd[1]: sshd@13-172.24.4.86:22-172.24.4.1:40858.service: Deactivated successfully. Jul 2 08:49:26.234131 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 08:49:26.235524 systemd-logind[1134]: Session 14 logged out. Waiting for processes to exit. Jul 2 08:49:26.238424 systemd-logind[1134]: Removed session 14. Jul 2 08:49:27.684725 sshd[3403]: Accepted publickey for core from 172.24.4.1 port 54704 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q Jul 2 08:49:27.688549 sshd[3403]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:49:27.704768 systemd-logind[1134]: New session 15 of user core. Jul 2 08:49:27.708408 systemd[1]: Started session-15.scope. Jul 2 08:49:30.263959 sshd[3403]: pam_unix(sshd:session): session closed for user core Jul 2 08:49:30.274197 systemd[1]: Started sshd@15-172.24.4.86:22-172.24.4.1:54706.service. Jul 2 08:49:30.277840 systemd[1]: sshd@14-172.24.4.86:22-172.24.4.1:54704.service: Deactivated successfully. Jul 2 08:49:30.280076 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 08:49:30.285963 systemd-logind[1134]: Session 15 logged out. Waiting for processes to exit. Jul 2 08:49:30.288328 systemd-logind[1134]: Removed session 15. 
Jul 2 08:49:31.968362 sshd[3420]: Accepted publickey for core from 172.24.4.1 port 54706 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q Jul 2 08:49:31.970240 sshd[3420]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:49:31.977727 systemd-logind[1134]: New session 16 of user core. Jul 2 08:49:31.979587 systemd[1]: Started session-16.scope. Jul 2 08:49:33.875092 sshd[3420]: pam_unix(sshd:session): session closed for user core Jul 2 08:49:33.881675 systemd[1]: sshd@15-172.24.4.86:22-172.24.4.1:54706.service: Deactivated successfully. Jul 2 08:49:33.884423 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 08:49:33.887334 systemd-logind[1134]: Session 16 logged out. Waiting for processes to exit. Jul 2 08:49:33.889922 systemd[1]: Started sshd@16-172.24.4.86:22-172.24.4.1:54722.service. Jul 2 08:49:33.893955 systemd-logind[1134]: Removed session 16. Jul 2 08:49:35.188016 sshd[3433]: Accepted publickey for core from 172.24.4.1 port 54722 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q Jul 2 08:49:35.191489 sshd[3433]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:49:35.203253 systemd[1]: Started session-17.scope. Jul 2 08:49:35.204427 systemd-logind[1134]: New session 17 of user core. Jul 2 08:49:36.018168 sshd[3433]: pam_unix(sshd:session): session closed for user core Jul 2 08:49:36.024344 systemd[1]: sshd@16-172.24.4.86:22-172.24.4.1:54722.service: Deactivated successfully. Jul 2 08:49:36.025957 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 08:49:36.026955 systemd-logind[1134]: Session 17 logged out. Waiting for processes to exit. Jul 2 08:49:36.028848 systemd-logind[1134]: Removed session 17. Jul 2 08:49:41.027061 systemd[1]: Started sshd@17-172.24.4.86:22-172.24.4.1:56622.service. 
Jul 2 08:49:42.533006 sshd[3448]: Accepted publickey for core from 172.24.4.1 port 56622 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q Jul 2 08:49:42.535081 sshd[3448]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:49:42.546217 systemd-logind[1134]: New session 18 of user core. Jul 2 08:49:42.547027 systemd[1]: Started session-18.scope. Jul 2 08:49:43.355473 sshd[3448]: pam_unix(sshd:session): session closed for user core Jul 2 08:49:43.362969 systemd[1]: sshd@17-172.24.4.86:22-172.24.4.1:56622.service: Deactivated successfully. Jul 2 08:49:43.365944 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 08:49:43.369744 systemd-logind[1134]: Session 18 logged out. Waiting for processes to exit. Jul 2 08:49:43.372453 systemd-logind[1134]: Removed session 18. Jul 2 08:49:48.362334 systemd[1]: Started sshd@18-172.24.4.86:22-172.24.4.1:54338.service. Jul 2 08:49:49.532735 sshd[3462]: Accepted publickey for core from 172.24.4.1 port 54338 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q Jul 2 08:49:49.536138 sshd[3462]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:49:49.551142 systemd-logind[1134]: New session 19 of user core. Jul 2 08:49:49.552084 systemd[1]: Started session-19.scope. Jul 2 08:49:50.451728 sshd[3462]: pam_unix(sshd:session): session closed for user core Jul 2 08:49:50.457004 systemd[1]: Started sshd@19-172.24.4.86:22-172.24.4.1:54340.service. Jul 2 08:49:50.460758 systemd[1]: sshd@18-172.24.4.86:22-172.24.4.1:54338.service: Deactivated successfully. Jul 2 08:49:50.461567 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 08:49:50.465791 systemd-logind[1134]: Session 19 logged out. Waiting for processes to exit. Jul 2 08:49:50.468125 systemd-logind[1134]: Removed session 19. 
Jul 2 08:49:52.252310 sshd[3473]: Accepted publickey for core from 172.24.4.1 port 54340 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q Jul 2 08:49:52.255293 sshd[3473]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:49:52.267804 systemd-logind[1134]: New session 20 of user core. Jul 2 08:49:52.268431 systemd[1]: Started session-20.scope. Jul 2 08:49:54.640232 systemd[1]: run-containerd-runc-k8s.io-495694a021be8b607e2c3349502f669d3f79204f5e63e703dd059a45405df173-runc.YoVrim.mount: Deactivated successfully. Jul 2 08:49:54.678327 env[1140]: time="2024-07-02T08:49:54.678232295Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 08:49:54.680640 env[1140]: time="2024-07-02T08:49:54.680391021Z" level=info msg="StopContainer for \"ad5ccc8789bfd7d083e584a97de8dec01002375e60b38acb111e7baebaa9b617\" with timeout 30 (s)" Jul 2 08:49:54.681767 env[1140]: time="2024-07-02T08:49:54.681579508Z" level=info msg="Stop container \"ad5ccc8789bfd7d083e584a97de8dec01002375e60b38acb111e7baebaa9b617\" with signal terminated" Jul 2 08:49:54.693868 env[1140]: time="2024-07-02T08:49:54.693823286Z" level=info msg="StopContainer for \"495694a021be8b607e2c3349502f669d3f79204f5e63e703dd059a45405df173\" with timeout 2 (s)" Jul 2 08:49:54.694528 env[1140]: time="2024-07-02T08:49:54.694480097Z" level=info msg="Stop container \"495694a021be8b607e2c3349502f669d3f79204f5e63e703dd059a45405df173\" with signal terminated" Jul 2 08:49:54.702398 systemd[1]: cri-containerd-ad5ccc8789bfd7d083e584a97de8dec01002375e60b38acb111e7baebaa9b617.scope: Deactivated successfully. 
Jul 2 08:49:54.708265 systemd-networkd[973]: lxc_health: Link DOWN Jul 2 08:49:54.708279 systemd-networkd[973]: lxc_health: Lost carrier Jul 2 08:49:54.743097 systemd[1]: cri-containerd-495694a021be8b607e2c3349502f669d3f79204f5e63e703dd059a45405df173.scope: Deactivated successfully. Jul 2 08:49:54.743434 systemd[1]: cri-containerd-495694a021be8b607e2c3349502f669d3f79204f5e63e703dd059a45405df173.scope: Consumed 8.835s CPU time. Jul 2 08:49:54.762998 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad5ccc8789bfd7d083e584a97de8dec01002375e60b38acb111e7baebaa9b617-rootfs.mount: Deactivated successfully. Jul 2 08:49:54.774623 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-495694a021be8b607e2c3349502f669d3f79204f5e63e703dd059a45405df173-rootfs.mount: Deactivated successfully. Jul 2 08:49:54.791562 env[1140]: time="2024-07-02T08:49:54.791509735Z" level=info msg="shim disconnected" id=ad5ccc8789bfd7d083e584a97de8dec01002375e60b38acb111e7baebaa9b617 Jul 2 08:49:54.791946 env[1140]: time="2024-07-02T08:49:54.791925404Z" level=warning msg="cleaning up after shim disconnected" id=ad5ccc8789bfd7d083e584a97de8dec01002375e60b38acb111e7baebaa9b617 namespace=k8s.io Jul 2 08:49:54.792035 env[1140]: time="2024-07-02T08:49:54.792018418Z" level=info msg="cleaning up dead shim" Jul 2 08:49:54.797397 env[1140]: time="2024-07-02T08:49:54.797348167Z" level=info msg="shim disconnected" id=495694a021be8b607e2c3349502f669d3f79204f5e63e703dd059a45405df173 Jul 2 08:49:54.797665 env[1140]: time="2024-07-02T08:49:54.797587857Z" level=warning msg="cleaning up after shim disconnected" id=495694a021be8b607e2c3349502f669d3f79204f5e63e703dd059a45405df173 namespace=k8s.io Jul 2 08:49:54.797784 env[1140]: time="2024-07-02T08:49:54.797766863Z" level=info msg="cleaning up dead shim" Jul 2 08:49:54.802530 env[1140]: time="2024-07-02T08:49:54.802498361Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:49:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io 
pid=3545 runtime=io.containerd.runc.v2\n" Jul 2 08:49:54.810154 env[1140]: time="2024-07-02T08:49:54.810106029Z" level=info msg="StopContainer for \"ad5ccc8789bfd7d083e584a97de8dec01002375e60b38acb111e7baebaa9b617\" returns successfully" Jul 2 08:49:54.810496 env[1140]: time="2024-07-02T08:49:54.810345819Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:49:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3554 runtime=io.containerd.runc.v2\n" Jul 2 08:49:54.811581 env[1140]: time="2024-07-02T08:49:54.811518577Z" level=info msg="StopPodSandbox for \"a46ce9fcffa723c083ff230aa9d0f3393786052e52d88d254f406064eb699dcf\"" Jul 2 08:49:54.811677 env[1140]: time="2024-07-02T08:49:54.811639794Z" level=info msg="Container to stop \"ad5ccc8789bfd7d083e584a97de8dec01002375e60b38acb111e7baebaa9b617\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 08:49:54.815191 env[1140]: time="2024-07-02T08:49:54.815149883Z" level=info msg="StopContainer for \"495694a021be8b607e2c3349502f669d3f79204f5e63e703dd059a45405df173\" returns successfully" Jul 2 08:49:54.815806 env[1140]: time="2024-07-02T08:49:54.815771287Z" level=info msg="StopPodSandbox for \"f6bb889e30d1008f7015550e7aaf895aeb727d3ebc63371725880ff322e40a65\"" Jul 2 08:49:54.815895 env[1140]: time="2024-07-02T08:49:54.815832933Z" level=info msg="Container to stop \"aeb242994bb58661431677f331429d94aa8c3042be7b9949cbddd3fb87b3ac77\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 08:49:54.815895 env[1140]: time="2024-07-02T08:49:54.815851067Z" level=info msg="Container to stop \"cb3c282c748aacbc9a5ebbe3623d5d4a9a0af7eb65f376f41d14236d52847c89\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 08:49:54.815895 env[1140]: time="2024-07-02T08:49:54.815864993Z" level=info msg="Container to stop \"3c56c8ffbea707fb0d1d7420c21e7c457b879f0c55ddf0cbc04ba58c36e349f8\" must be in running or unknown state, current state 
\"CONTAINER_EXITED\"" Jul 2 08:49:54.815895 env[1140]: time="2024-07-02T08:49:54.815881764Z" level=info msg="Container to stop \"1af2126173e0f692b1ffd273ccf1c1b552c4d01d8a688fd287924b2714a70773\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 08:49:54.816057 env[1140]: time="2024-07-02T08:49:54.815895871Z" level=info msg="Container to stop \"495694a021be8b607e2c3349502f669d3f79204f5e63e703dd059a45405df173\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 08:49:54.823912 systemd[1]: cri-containerd-a46ce9fcffa723c083ff230aa9d0f3393786052e52d88d254f406064eb699dcf.scope: Deactivated successfully. Jul 2 08:49:54.825294 systemd[1]: cri-containerd-f6bb889e30d1008f7015550e7aaf895aeb727d3ebc63371725880ff322e40a65.scope: Deactivated successfully. Jul 2 08:49:54.876390 env[1140]: time="2024-07-02T08:49:54.876214453Z" level=info msg="shim disconnected" id=a46ce9fcffa723c083ff230aa9d0f3393786052e52d88d254f406064eb699dcf Jul 2 08:49:54.876390 env[1140]: time="2024-07-02T08:49:54.876320402Z" level=warning msg="cleaning up after shim disconnected" id=a46ce9fcffa723c083ff230aa9d0f3393786052e52d88d254f406064eb699dcf namespace=k8s.io Jul 2 08:49:54.876390 env[1140]: time="2024-07-02T08:49:54.876334488Z" level=info msg="cleaning up dead shim" Jul 2 08:49:54.876853 env[1140]: time="2024-07-02T08:49:54.876558978Z" level=info msg="shim disconnected" id=f6bb889e30d1008f7015550e7aaf895aeb727d3ebc63371725880ff322e40a65 Jul 2 08:49:54.876853 env[1140]: time="2024-07-02T08:49:54.876619372Z" level=warning msg="cleaning up after shim disconnected" id=f6bb889e30d1008f7015550e7aaf895aeb727d3ebc63371725880ff322e40a65 namespace=k8s.io Jul 2 08:49:54.876853 env[1140]: time="2024-07-02T08:49:54.876632937Z" level=info msg="cleaning up dead shim" Jul 2 08:49:54.886656 env[1140]: time="2024-07-02T08:49:54.886577566Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:49:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io 
pid=3607 runtime=io.containerd.runc.v2\n" Jul 2 08:49:54.887113 env[1140]: time="2024-07-02T08:49:54.887073105Z" level=info msg="TearDown network for sandbox \"f6bb889e30d1008f7015550e7aaf895aeb727d3ebc63371725880ff322e40a65\" successfully" Jul 2 08:49:54.887172 env[1140]: time="2024-07-02T08:49:54.887113641Z" level=info msg="StopPodSandbox for \"f6bb889e30d1008f7015550e7aaf895aeb727d3ebc63371725880ff322e40a65\" returns successfully" Jul 2 08:49:54.893837 env[1140]: time="2024-07-02T08:49:54.893741924Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:49:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3606 runtime=io.containerd.runc.v2\n" Jul 2 08:49:54.894238 env[1140]: time="2024-07-02T08:49:54.894212145Z" level=info msg="TearDown network for sandbox \"a46ce9fcffa723c083ff230aa9d0f3393786052e52d88d254f406064eb699dcf\" successfully" Jul 2 08:49:54.894513 env[1140]: time="2024-07-02T08:49:54.894494675Z" level=info msg="StopPodSandbox for \"a46ce9fcffa723c083ff230aa9d0f3393786052e52d88d254f406064eb699dcf\" returns successfully" Jul 2 08:49:54.988815 kubelet[1960]: I0702 08:49:54.988736 1960 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-host-proc-sys-kernel\") pod \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\" (UID: \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\") " Jul 2 08:49:54.989274 kubelet[1960]: I0702 08:49:54.989256 1960 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-cilium-config-path\") pod \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\" (UID: \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\") " Jul 2 08:49:54.989376 kubelet[1960]: I0702 08:49:54.989361 1960 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-cilium-run\") pod \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\" (UID: \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\") " Jul 2 08:49:54.993506 kubelet[1960]: I0702 08:49:54.989470 1960 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-cni-path\") pod \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\" (UID: \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\") " Jul 2 08:49:54.993506 kubelet[1960]: I0702 08:49:54.989496 1960 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/509bc215-5550-4aad-9cde-2a1717d70a67-cilium-config-path\") pod \"509bc215-5550-4aad-9cde-2a1717d70a67\" (UID: \"509bc215-5550-4aad-9cde-2a1717d70a67\") " Jul 2 08:49:54.993506 kubelet[1960]: I0702 08:49:54.989517 1960 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-bpf-maps\") pod \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\" (UID: \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\") " Jul 2 08:49:54.993506 kubelet[1960]: I0702 08:49:54.989538 1960 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cc9q\" (UniqueName: \"kubernetes.io/projected/509bc215-5550-4aad-9cde-2a1717d70a67-kube-api-access-7cc9q\") pod \"509bc215-5550-4aad-9cde-2a1717d70a67\" (UID: \"509bc215-5550-4aad-9cde-2a1717d70a67\") " Jul 2 08:49:54.993506 kubelet[1960]: I0702 08:49:54.989558 1960 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-etc-cni-netd\") pod \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\" (UID: \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\") " Jul 2 08:49:54.993506 kubelet[1960]: I0702 08:49:54.989615 1960 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-clustermesh-secrets\") pod \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\" (UID: \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\") " Jul 2 08:49:54.993723 kubelet[1960]: I0702 08:49:54.989636 1960 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-cilium-cgroup\") pod \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\" (UID: \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\") " Jul 2 08:49:54.993723 kubelet[1960]: I0702 08:49:54.989652 1960 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-host-proc-sys-net\") pod \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\" (UID: \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\") " Jul 2 08:49:54.993723 kubelet[1960]: I0702 08:49:54.989672 1960 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-hostproc\") pod \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\" (UID: \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\") " Jul 2 08:49:54.993723 kubelet[1960]: I0702 08:49:54.989690 1960 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-xtables-lock\") pod \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\" (UID: \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\") " Jul 2 08:49:54.993723 kubelet[1960]: I0702 08:49:54.989708 1960 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-lib-modules\") pod \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\" (UID: 
\"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\") " Jul 2 08:49:54.993723 kubelet[1960]: I0702 08:49:54.989729 1960 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-hubble-tls\") pod \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\" (UID: \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\") " Jul 2 08:49:54.993926 kubelet[1960]: I0702 08:49:54.989753 1960 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcx6x\" (UniqueName: \"kubernetes.io/projected/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-kube-api-access-tcx6x\") pod \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\" (UID: \"2d06b33d-1667-4cbc-a3e8-6dc3e332dd25\") " Jul 2 08:49:54.997950 kubelet[1960]: I0702 08:49:54.997918 1960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2d06b33d-1667-4cbc-a3e8-6dc3e332dd25" (UID: "2d06b33d-1667-4cbc-a3e8-6dc3e332dd25"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 08:49:54.998073 kubelet[1960]: I0702 08:49:54.998058 1960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2d06b33d-1667-4cbc-a3e8-6dc3e332dd25" (UID: "2d06b33d-1667-4cbc-a3e8-6dc3e332dd25"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:49:54.998170 kubelet[1960]: I0702 08:49:54.998155 1960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-cni-path" (OuterVolumeSpecName: "cni-path") pod "2d06b33d-1667-4cbc-a3e8-6dc3e332dd25" (UID: "2d06b33d-1667-4cbc-a3e8-6dc3e332dd25"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:49:55.000538 kubelet[1960]: I0702 08:49:55.000516 1960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/509bc215-5550-4aad-9cde-2a1717d70a67-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "509bc215-5550-4aad-9cde-2a1717d70a67" (UID: "509bc215-5550-4aad-9cde-2a1717d70a67"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 08:49:55.000686 kubelet[1960]: I0702 08:49:55.000669 1960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2d06b33d-1667-4cbc-a3e8-6dc3e332dd25" (UID: "2d06b33d-1667-4cbc-a3e8-6dc3e332dd25"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:49:55.003402 kubelet[1960]: I0702 08:49:55.003332 1960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-kube-api-access-tcx6x" (OuterVolumeSpecName: "kube-api-access-tcx6x") pod "2d06b33d-1667-4cbc-a3e8-6dc3e332dd25" (UID: "2d06b33d-1667-4cbc-a3e8-6dc3e332dd25"). InnerVolumeSpecName "kube-api-access-tcx6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 08:49:55.003514 kubelet[1960]: I0702 08:49:54.988729 1960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2d06b33d-1667-4cbc-a3e8-6dc3e332dd25" (UID: "2d06b33d-1667-4cbc-a3e8-6dc3e332dd25"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:49:55.003514 kubelet[1960]: I0702 08:49:55.003458 1960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2d06b33d-1667-4cbc-a3e8-6dc3e332dd25" (UID: "2d06b33d-1667-4cbc-a3e8-6dc3e332dd25"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:49:55.003514 kubelet[1960]: I0702 08:49:55.003481 1960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2d06b33d-1667-4cbc-a3e8-6dc3e332dd25" (UID: "2d06b33d-1667-4cbc-a3e8-6dc3e332dd25"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:49:55.004177 kubelet[1960]: I0702 08:49:55.004019 1960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/509bc215-5550-4aad-9cde-2a1717d70a67-kube-api-access-7cc9q" (OuterVolumeSpecName: "kube-api-access-7cc9q") pod "509bc215-5550-4aad-9cde-2a1717d70a67" (UID: "509bc215-5550-4aad-9cde-2a1717d70a67"). InnerVolumeSpecName "kube-api-access-7cc9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 08:49:55.004177 kubelet[1960]: I0702 08:49:55.004069 1960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-hostproc" (OuterVolumeSpecName: "hostproc") pod "2d06b33d-1667-4cbc-a3e8-6dc3e332dd25" (UID: "2d06b33d-1667-4cbc-a3e8-6dc3e332dd25"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:49:55.004177 kubelet[1960]: I0702 08:49:55.004091 1960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2d06b33d-1667-4cbc-a3e8-6dc3e332dd25" (UID: "2d06b33d-1667-4cbc-a3e8-6dc3e332dd25"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:49:55.005100 kubelet[1960]: I0702 08:49:55.004264 1960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2d06b33d-1667-4cbc-a3e8-6dc3e332dd25" (UID: "2d06b33d-1667-4cbc-a3e8-6dc3e332dd25"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:49:55.006432 kubelet[1960]: I0702 08:49:55.006393 1960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2d06b33d-1667-4cbc-a3e8-6dc3e332dd25" (UID: "2d06b33d-1667-4cbc-a3e8-6dc3e332dd25"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 08:49:55.006552 kubelet[1960]: I0702 08:49:55.006456 1960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2d06b33d-1667-4cbc-a3e8-6dc3e332dd25" (UID: "2d06b33d-1667-4cbc-a3e8-6dc3e332dd25"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:49:55.007391 kubelet[1960]: I0702 08:49:55.007361 1960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2d06b33d-1667-4cbc-a3e8-6dc3e332dd25" (UID: "2d06b33d-1667-4cbc-a3e8-6dc3e332dd25"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 08:49:55.094334 kubelet[1960]: I0702 08:49:55.094219 1960 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-cni-path\") on node \"ci-3510-3-5-a-cacadfe6a6.novalocal\" DevicePath \"\"" Jul 2 08:49:55.094799 kubelet[1960]: I0702 08:49:55.094758 1960 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/509bc215-5550-4aad-9cde-2a1717d70a67-cilium-config-path\") on node \"ci-3510-3-5-a-cacadfe6a6.novalocal\" DevicePath \"\"" Jul 2 08:49:55.095048 kubelet[1960]: I0702 08:49:55.095004 1960 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-etc-cni-netd\") on node \"ci-3510-3-5-a-cacadfe6a6.novalocal\" DevicePath \"\"" Jul 2 08:49:55.095413 kubelet[1960]: I0702 08:49:55.095378 1960 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-clustermesh-secrets\") on node \"ci-3510-3-5-a-cacadfe6a6.novalocal\" DevicePath \"\"" Jul 2 08:49:55.095664 kubelet[1960]: I0702 08:49:55.095582 1960 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-bpf-maps\") on node \"ci-3510-3-5-a-cacadfe6a6.novalocal\" DevicePath \"\"" Jul 2 08:49:55.095895 kubelet[1960]: I0702 
08:49:55.095859 1960 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-7cc9q\" (UniqueName: \"kubernetes.io/projected/509bc215-5550-4aad-9cde-2a1717d70a67-kube-api-access-7cc9q\") on node \"ci-3510-3-5-a-cacadfe6a6.novalocal\" DevicePath \"\"" Jul 2 08:49:55.096111 kubelet[1960]: I0702 08:49:55.096067 1960 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-cilium-cgroup\") on node \"ci-3510-3-5-a-cacadfe6a6.novalocal\" DevicePath \"\"" Jul 2 08:49:55.096307 kubelet[1960]: I0702 08:49:55.096275 1960 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-host-proc-sys-net\") on node \"ci-3510-3-5-a-cacadfe6a6.novalocal\" DevicePath \"\"" Jul 2 08:49:55.096530 kubelet[1960]: I0702 08:49:55.096489 1960 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-hostproc\") on node \"ci-3510-3-5-a-cacadfe6a6.novalocal\" DevicePath \"\"" Jul 2 08:49:55.096780 kubelet[1960]: I0702 08:49:55.096749 1960 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-lib-modules\") on node \"ci-3510-3-5-a-cacadfe6a6.novalocal\" DevicePath \"\"" Jul 2 08:49:55.096974 kubelet[1960]: I0702 08:49:55.096937 1960 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-xtables-lock\") on node \"ci-3510-3-5-a-cacadfe6a6.novalocal\" DevicePath \"\"" Jul 2 08:49:55.097188 kubelet[1960]: I0702 08:49:55.097156 1960 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-hubble-tls\") on node \"ci-3510-3-5-a-cacadfe6a6.novalocal\" 
DevicePath \"\"" Jul 2 08:49:55.097388 kubelet[1960]: I0702 08:49:55.097355 1960 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-tcx6x\" (UniqueName: \"kubernetes.io/projected/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-kube-api-access-tcx6x\") on node \"ci-3510-3-5-a-cacadfe6a6.novalocal\" DevicePath \"\"" Jul 2 08:49:55.097682 kubelet[1960]: I0702 08:49:55.097568 1960 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-host-proc-sys-kernel\") on node \"ci-3510-3-5-a-cacadfe6a6.novalocal\" DevicePath \"\"" Jul 2 08:49:55.097892 kubelet[1960]: I0702 08:49:55.097857 1960 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-cilium-config-path\") on node \"ci-3510-3-5-a-cacadfe6a6.novalocal\" DevicePath \"\"" Jul 2 08:49:55.098117 kubelet[1960]: I0702 08:49:55.098068 1960 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25-cilium-run\") on node \"ci-3510-3-5-a-cacadfe6a6.novalocal\" DevicePath \"\"" Jul 2 08:49:55.376051 kubelet[1960]: I0702 08:49:55.375953 1960 scope.go:117] "RemoveContainer" containerID="495694a021be8b607e2c3349502f669d3f79204f5e63e703dd059a45405df173" Jul 2 08:49:55.412105 systemd[1]: Removed slice kubepods-burstable-pod2d06b33d_1667_4cbc_a3e8_6dc3e332dd25.slice. Jul 2 08:49:55.412337 systemd[1]: kubepods-burstable-pod2d06b33d_1667_4cbc_a3e8_6dc3e332dd25.slice: Consumed 8.946s CPU time. Jul 2 08:49:55.418776 env[1140]: time="2024-07-02T08:49:55.417476206Z" level=info msg="RemoveContainer for \"495694a021be8b607e2c3349502f669d3f79204f5e63e703dd059a45405df173\"" Jul 2 08:49:55.430196 systemd[1]: Removed slice kubepods-besteffort-pod509bc215_5550_4aad_9cde_2a1717d70a67.slice. 
Jul 2 08:49:55.435309 env[1140]: time="2024-07-02T08:49:55.435145412Z" level=info msg="RemoveContainer for \"495694a021be8b607e2c3349502f669d3f79204f5e63e703dd059a45405df173\" returns successfully" Jul 2 08:49:55.436158 kubelet[1960]: I0702 08:49:55.436120 1960 scope.go:117] "RemoveContainer" containerID="1af2126173e0f692b1ffd273ccf1c1b552c4d01d8a688fd287924b2714a70773" Jul 2 08:49:55.439155 env[1140]: time="2024-07-02T08:49:55.439083313Z" level=info msg="RemoveContainer for \"1af2126173e0f692b1ffd273ccf1c1b552c4d01d8a688fd287924b2714a70773\"" Jul 2 08:49:55.446190 env[1140]: time="2024-07-02T08:49:55.446113850Z" level=info msg="RemoveContainer for \"1af2126173e0f692b1ffd273ccf1c1b552c4d01d8a688fd287924b2714a70773\" returns successfully" Jul 2 08:49:55.446745 kubelet[1960]: I0702 08:49:55.446694 1960 scope.go:117] "RemoveContainer" containerID="3c56c8ffbea707fb0d1d7420c21e7c457b879f0c55ddf0cbc04ba58c36e349f8" Jul 2 08:49:55.450178 env[1140]: time="2024-07-02T08:49:55.449292237Z" level=info msg="RemoveContainer for \"3c56c8ffbea707fb0d1d7420c21e7c457b879f0c55ddf0cbc04ba58c36e349f8\"" Jul 2 08:49:55.455265 env[1140]: time="2024-07-02T08:49:55.454894807Z" level=info msg="RemoveContainer for \"3c56c8ffbea707fb0d1d7420c21e7c457b879f0c55ddf0cbc04ba58c36e349f8\" returns successfully" Jul 2 08:49:55.455571 kubelet[1960]: I0702 08:49:55.455485 1960 scope.go:117] "RemoveContainer" containerID="cb3c282c748aacbc9a5ebbe3623d5d4a9a0af7eb65f376f41d14236d52847c89" Jul 2 08:49:55.459810 env[1140]: time="2024-07-02T08:49:55.459015361Z" level=info msg="RemoveContainer for \"cb3c282c748aacbc9a5ebbe3623d5d4a9a0af7eb65f376f41d14236d52847c89\"" Jul 2 08:49:55.468886 env[1140]: time="2024-07-02T08:49:55.468841177Z" level=info msg="RemoveContainer for \"cb3c282c748aacbc9a5ebbe3623d5d4a9a0af7eb65f376f41d14236d52847c89\" returns successfully" Jul 2 08:49:55.469235 kubelet[1960]: I0702 08:49:55.469215 1960 scope.go:117] "RemoveContainer" 
containerID="aeb242994bb58661431677f331429d94aa8c3042be7b9949cbddd3fb87b3ac77" Jul 2 08:49:55.472529 env[1140]: time="2024-07-02T08:49:55.472231921Z" level=info msg="RemoveContainer for \"aeb242994bb58661431677f331429d94aa8c3042be7b9949cbddd3fb87b3ac77\"" Jul 2 08:49:55.479482 env[1140]: time="2024-07-02T08:49:55.479426616Z" level=info msg="RemoveContainer for \"aeb242994bb58661431677f331429d94aa8c3042be7b9949cbddd3fb87b3ac77\" returns successfully" Jul 2 08:49:55.479911 kubelet[1960]: I0702 08:49:55.479857 1960 scope.go:117] "RemoveContainer" containerID="495694a021be8b607e2c3349502f669d3f79204f5e63e703dd059a45405df173" Jul 2 08:49:55.480370 env[1140]: time="2024-07-02T08:49:55.480250159Z" level=error msg="ContainerStatus for \"495694a021be8b607e2c3349502f669d3f79204f5e63e703dd059a45405df173\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"495694a021be8b607e2c3349502f669d3f79204f5e63e703dd059a45405df173\": not found" Jul 2 08:49:55.484078 kubelet[1960]: E0702 08:49:55.484030 1960 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"495694a021be8b607e2c3349502f669d3f79204f5e63e703dd059a45405df173\": not found" containerID="495694a021be8b607e2c3349502f669d3f79204f5e63e703dd059a45405df173" Jul 2 08:49:55.486742 kubelet[1960]: I0702 08:49:55.486639 1960 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"495694a021be8b607e2c3349502f669d3f79204f5e63e703dd059a45405df173"} err="failed to get container status \"495694a021be8b607e2c3349502f669d3f79204f5e63e703dd059a45405df173\": rpc error: code = NotFound desc = an error occurred when try to find container \"495694a021be8b607e2c3349502f669d3f79204f5e63e703dd059a45405df173\": not found" Jul 2 08:49:55.489652 kubelet[1960]: I0702 08:49:55.486856 1960 scope.go:117] "RemoveContainer" 
containerID="1af2126173e0f692b1ffd273ccf1c1b552c4d01d8a688fd287924b2714a70773" Jul 2 08:49:55.489700 env[1140]: time="2024-07-02T08:49:55.487140995Z" level=error msg="ContainerStatus for \"1af2126173e0f692b1ffd273ccf1c1b552c4d01d8a688fd287924b2714a70773\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1af2126173e0f692b1ffd273ccf1c1b552c4d01d8a688fd287924b2714a70773\": not found" Jul 2 08:49:55.489846 kubelet[1960]: E0702 08:49:55.489826 1960 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1af2126173e0f692b1ffd273ccf1c1b552c4d01d8a688fd287924b2714a70773\": not found" containerID="1af2126173e0f692b1ffd273ccf1c1b552c4d01d8a688fd287924b2714a70773" Jul 2 08:49:55.489937 kubelet[1960]: I0702 08:49:55.489917 1960 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1af2126173e0f692b1ffd273ccf1c1b552c4d01d8a688fd287924b2714a70773"} err="failed to get container status \"1af2126173e0f692b1ffd273ccf1c1b552c4d01d8a688fd287924b2714a70773\": rpc error: code = NotFound desc = an error occurred when try to find container \"1af2126173e0f692b1ffd273ccf1c1b552c4d01d8a688fd287924b2714a70773\": not found" Jul 2 08:49:55.490009 kubelet[1960]: I0702 08:49:55.489998 1960 scope.go:117] "RemoveContainer" containerID="3c56c8ffbea707fb0d1d7420c21e7c457b879f0c55ddf0cbc04ba58c36e349f8" Jul 2 08:49:55.490846 env[1140]: time="2024-07-02T08:49:55.490353807Z" level=error msg="ContainerStatus for \"3c56c8ffbea707fb0d1d7420c21e7c457b879f0c55ddf0cbc04ba58c36e349f8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3c56c8ffbea707fb0d1d7420c21e7c457b879f0c55ddf0cbc04ba58c36e349f8\": not found" Jul 2 08:49:55.490906 kubelet[1960]: E0702 08:49:55.490672 1960 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = an error occurred when try to find container \"3c56c8ffbea707fb0d1d7420c21e7c457b879f0c55ddf0cbc04ba58c36e349f8\": not found" containerID="3c56c8ffbea707fb0d1d7420c21e7c457b879f0c55ddf0cbc04ba58c36e349f8" Jul 2 08:49:55.490906 kubelet[1960]: I0702 08:49:55.490735 1960 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3c56c8ffbea707fb0d1d7420c21e7c457b879f0c55ddf0cbc04ba58c36e349f8"} err="failed to get container status \"3c56c8ffbea707fb0d1d7420c21e7c457b879f0c55ddf0cbc04ba58c36e349f8\": rpc error: code = NotFound desc = an error occurred when try to find container \"3c56c8ffbea707fb0d1d7420c21e7c457b879f0c55ddf0cbc04ba58c36e349f8\": not found" Jul 2 08:49:55.490906 kubelet[1960]: I0702 08:49:55.490771 1960 scope.go:117] "RemoveContainer" containerID="cb3c282c748aacbc9a5ebbe3623d5d4a9a0af7eb65f376f41d14236d52847c89" Jul 2 08:49:55.491082 env[1140]: time="2024-07-02T08:49:55.491024454Z" level=error msg="ContainerStatus for \"cb3c282c748aacbc9a5ebbe3623d5d4a9a0af7eb65f376f41d14236d52847c89\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cb3c282c748aacbc9a5ebbe3623d5d4a9a0af7eb65f376f41d14236d52847c89\": not found" Jul 2 08:49:55.491281 kubelet[1960]: E0702 08:49:55.491251 1960 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cb3c282c748aacbc9a5ebbe3623d5d4a9a0af7eb65f376f41d14236d52847c89\": not found" containerID="cb3c282c748aacbc9a5ebbe3623d5d4a9a0af7eb65f376f41d14236d52847c89" Jul 2 08:49:55.491332 kubelet[1960]: I0702 08:49:55.491282 1960 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cb3c282c748aacbc9a5ebbe3623d5d4a9a0af7eb65f376f41d14236d52847c89"} err="failed to get container status \"cb3c282c748aacbc9a5ebbe3623d5d4a9a0af7eb65f376f41d14236d52847c89\": rpc error: code = NotFound desc = an error 
occurred when try to find container \"cb3c282c748aacbc9a5ebbe3623d5d4a9a0af7eb65f376f41d14236d52847c89\": not found" Jul 2 08:49:55.491332 kubelet[1960]: I0702 08:49:55.491321 1960 scope.go:117] "RemoveContainer" containerID="aeb242994bb58661431677f331429d94aa8c3042be7b9949cbddd3fb87b3ac77" Jul 2 08:49:55.491573 env[1140]: time="2024-07-02T08:49:55.491513800Z" level=error msg="ContainerStatus for \"aeb242994bb58661431677f331429d94aa8c3042be7b9949cbddd3fb87b3ac77\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aeb242994bb58661431677f331429d94aa8c3042be7b9949cbddd3fb87b3ac77\": not found" Jul 2 08:49:55.491741 kubelet[1960]: E0702 08:49:55.491684 1960 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aeb242994bb58661431677f331429d94aa8c3042be7b9949cbddd3fb87b3ac77\": not found" containerID="aeb242994bb58661431677f331429d94aa8c3042be7b9949cbddd3fb87b3ac77" Jul 2 08:49:55.491800 kubelet[1960]: I0702 08:49:55.491741 1960 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aeb242994bb58661431677f331429d94aa8c3042be7b9949cbddd3fb87b3ac77"} err="failed to get container status \"aeb242994bb58661431677f331429d94aa8c3042be7b9949cbddd3fb87b3ac77\": rpc error: code = NotFound desc = an error occurred when try to find container \"aeb242994bb58661431677f331429d94aa8c3042be7b9949cbddd3fb87b3ac77\": not found" Jul 2 08:49:55.491800 kubelet[1960]: I0702 08:49:55.491760 1960 scope.go:117] "RemoveContainer" containerID="ad5ccc8789bfd7d083e584a97de8dec01002375e60b38acb111e7baebaa9b617" Jul 2 08:49:55.493699 env[1140]: time="2024-07-02T08:49:55.493665534Z" level=info msg="RemoveContainer for \"ad5ccc8789bfd7d083e584a97de8dec01002375e60b38acb111e7baebaa9b617\"" Jul 2 08:49:55.498711 env[1140]: time="2024-07-02T08:49:55.498669191Z" level=info msg="RemoveContainer for 
\"ad5ccc8789bfd7d083e584a97de8dec01002375e60b38acb111e7baebaa9b617\" returns successfully" Jul 2 08:49:55.498890 kubelet[1960]: I0702 08:49:55.498872 1960 scope.go:117] "RemoveContainer" containerID="ad5ccc8789bfd7d083e584a97de8dec01002375e60b38acb111e7baebaa9b617" Jul 2 08:49:55.499291 env[1140]: time="2024-07-02T08:49:55.499231426Z" level=error msg="ContainerStatus for \"ad5ccc8789bfd7d083e584a97de8dec01002375e60b38acb111e7baebaa9b617\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ad5ccc8789bfd7d083e584a97de8dec01002375e60b38acb111e7baebaa9b617\": not found" Jul 2 08:49:55.499515 kubelet[1960]: E0702 08:49:55.499490 1960 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ad5ccc8789bfd7d083e584a97de8dec01002375e60b38acb111e7baebaa9b617\": not found" containerID="ad5ccc8789bfd7d083e584a97de8dec01002375e60b38acb111e7baebaa9b617" Jul 2 08:49:55.499652 kubelet[1960]: I0702 08:49:55.499588 1960 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ad5ccc8789bfd7d083e584a97de8dec01002375e60b38acb111e7baebaa9b617"} err="failed to get container status \"ad5ccc8789bfd7d083e584a97de8dec01002375e60b38acb111e7baebaa9b617\": rpc error: code = NotFound desc = an error occurred when try to find container \"ad5ccc8789bfd7d083e584a97de8dec01002375e60b38acb111e7baebaa9b617\": not found" Jul 2 08:49:55.634428 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a46ce9fcffa723c083ff230aa9d0f3393786052e52d88d254f406064eb699dcf-rootfs.mount: Deactivated successfully. Jul 2 08:49:55.636652 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a46ce9fcffa723c083ff230aa9d0f3393786052e52d88d254f406064eb699dcf-shm.mount: Deactivated successfully. 
Jul 2 08:49:55.636860 systemd[1]: var-lib-kubelet-pods-509bc215\x2d5550\x2d4aad\x2d9cde\x2d2a1717d70a67-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7cc9q.mount: Deactivated successfully. Jul 2 08:49:55.637074 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6bb889e30d1008f7015550e7aaf895aeb727d3ebc63371725880ff322e40a65-rootfs.mount: Deactivated successfully. Jul 2 08:49:55.637260 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f6bb889e30d1008f7015550e7aaf895aeb727d3ebc63371725880ff322e40a65-shm.mount: Deactivated successfully. Jul 2 08:49:55.637472 systemd[1]: var-lib-kubelet-pods-2d06b33d\x2d1667\x2d4cbc\x2da3e8\x2d6dc3e332dd25-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtcx6x.mount: Deactivated successfully. Jul 2 08:49:55.637708 systemd[1]: var-lib-kubelet-pods-2d06b33d\x2d1667\x2d4cbc\x2da3e8\x2d6dc3e332dd25-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 08:49:55.637901 systemd[1]: var-lib-kubelet-pods-2d06b33d\x2d1667\x2d4cbc\x2da3e8\x2d6dc3e332dd25-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 08:49:56.446925 kubelet[1960]: I0702 08:49:56.446779 1960 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d06b33d-1667-4cbc-a3e8-6dc3e332dd25" path="/var/lib/kubelet/pods/2d06b33d-1667-4cbc-a3e8-6dc3e332dd25/volumes" Jul 2 08:49:56.456125 kubelet[1960]: I0702 08:49:56.456056 1960 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="509bc215-5550-4aad-9cde-2a1717d70a67" path="/var/lib/kubelet/pods/509bc215-5550-4aad-9cde-2a1717d70a67/volumes" Jul 2 08:49:56.715575 sshd[3473]: pam_unix(sshd:session): session closed for user core Jul 2 08:49:56.725368 systemd[1]: Started sshd@20-172.24.4.86:22-172.24.4.1:50478.service. Jul 2 08:49:56.726900 systemd[1]: sshd@19-172.24.4.86:22-172.24.4.1:54340.service: Deactivated successfully. 
Jul 2 08:49:56.730171 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 08:49:56.730553 systemd[1]: session-20.scope: Consumed 1.067s CPU time. Jul 2 08:49:56.733964 systemd-logind[1134]: Session 20 logged out. Waiting for processes to exit. Jul 2 08:49:56.736829 systemd-logind[1134]: Removed session 20. Jul 2 08:49:57.663580 kubelet[1960]: E0702 08:49:57.663446 1960 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 08:49:58.131113 sshd[3637]: Accepted publickey for core from 172.24.4.1 port 50478 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q Jul 2 08:49:58.134015 sshd[3637]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:49:58.148854 systemd-logind[1134]: New session 21 of user core. Jul 2 08:49:58.149514 systemd[1]: Started session-21.scope. Jul 2 08:49:59.520164 kubelet[1960]: I0702 08:49:59.515065 1960 topology_manager.go:215] "Topology Admit Handler" podUID="99108173-7867-45eb-b9d3-342ef812bbc5" podNamespace="kube-system" podName="cilium-kwchv" Jul 2 08:49:59.521089 kubelet[1960]: E0702 08:49:59.521057 1960 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2d06b33d-1667-4cbc-a3e8-6dc3e332dd25" containerName="mount-cgroup" Jul 2 08:49:59.521089 kubelet[1960]: E0702 08:49:59.521082 1960 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2d06b33d-1667-4cbc-a3e8-6dc3e332dd25" containerName="clean-cilium-state" Jul 2 08:49:59.521089 kubelet[1960]: E0702 08:49:59.521091 1960 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2d06b33d-1667-4cbc-a3e8-6dc3e332dd25" containerName="mount-bpf-fs" Jul 2 08:49:59.521242 kubelet[1960]: E0702 08:49:59.521099 1960 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="509bc215-5550-4aad-9cde-2a1717d70a67" containerName="cilium-operator" Jul 2 08:49:59.521242 
kubelet[1960]: E0702 08:49:59.521107 1960 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2d06b33d-1667-4cbc-a3e8-6dc3e332dd25" containerName="cilium-agent" Jul 2 08:49:59.521242 kubelet[1960]: E0702 08:49:59.521116 1960 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2d06b33d-1667-4cbc-a3e8-6dc3e332dd25" containerName="apply-sysctl-overwrites" Jul 2 08:49:59.521242 kubelet[1960]: I0702 08:49:59.521143 1960 memory_manager.go:354] "RemoveStaleState removing state" podUID="509bc215-5550-4aad-9cde-2a1717d70a67" containerName="cilium-operator" Jul 2 08:49:59.521242 kubelet[1960]: I0702 08:49:59.521150 1960 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d06b33d-1667-4cbc-a3e8-6dc3e332dd25" containerName="cilium-agent" Jul 2 08:49:59.545044 systemd[1]: Created slice kubepods-burstable-pod99108173_7867_45eb_b9d3_342ef812bbc5.slice. Jul 2 08:49:59.658447 kubelet[1960]: I0702 08:49:59.658361 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/99108173-7867-45eb-b9d3-342ef812bbc5-etc-cni-netd\") pod \"cilium-kwchv\" (UID: \"99108173-7867-45eb-b9d3-342ef812bbc5\") " pod="kube-system/cilium-kwchv" Jul 2 08:49:59.658447 kubelet[1960]: I0702 08:49:59.658434 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/99108173-7867-45eb-b9d3-342ef812bbc5-cilium-ipsec-secrets\") pod \"cilium-kwchv\" (UID: \"99108173-7867-45eb-b9d3-342ef812bbc5\") " pod="kube-system/cilium-kwchv" Jul 2 08:49:59.658658 kubelet[1960]: I0702 08:49:59.658476 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/99108173-7867-45eb-b9d3-342ef812bbc5-host-proc-sys-net\") pod \"cilium-kwchv\" (UID: \"99108173-7867-45eb-b9d3-342ef812bbc5\") " 
pod="kube-system/cilium-kwchv" Jul 2 08:49:59.658658 kubelet[1960]: I0702 08:49:59.658509 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/99108173-7867-45eb-b9d3-342ef812bbc5-hubble-tls\") pod \"cilium-kwchv\" (UID: \"99108173-7867-45eb-b9d3-342ef812bbc5\") " pod="kube-system/cilium-kwchv" Jul 2 08:49:59.658658 kubelet[1960]: I0702 08:49:59.658544 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/99108173-7867-45eb-b9d3-342ef812bbc5-cilium-run\") pod \"cilium-kwchv\" (UID: \"99108173-7867-45eb-b9d3-342ef812bbc5\") " pod="kube-system/cilium-kwchv" Jul 2 08:49:59.658658 kubelet[1960]: I0702 08:49:59.658575 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p282b\" (UniqueName: \"kubernetes.io/projected/99108173-7867-45eb-b9d3-342ef812bbc5-kube-api-access-p282b\") pod \"cilium-kwchv\" (UID: \"99108173-7867-45eb-b9d3-342ef812bbc5\") " pod="kube-system/cilium-kwchv" Jul 2 08:49:59.658658 kubelet[1960]: I0702 08:49:59.658640 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/99108173-7867-45eb-b9d3-342ef812bbc5-xtables-lock\") pod \"cilium-kwchv\" (UID: \"99108173-7867-45eb-b9d3-342ef812bbc5\") " pod="kube-system/cilium-kwchv" Jul 2 08:49:59.658814 kubelet[1960]: I0702 08:49:59.658674 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/99108173-7867-45eb-b9d3-342ef812bbc5-host-proc-sys-kernel\") pod \"cilium-kwchv\" (UID: \"99108173-7867-45eb-b9d3-342ef812bbc5\") " pod="kube-system/cilium-kwchv" Jul 2 08:49:59.658814 kubelet[1960]: I0702 08:49:59.658705 1960 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/99108173-7867-45eb-b9d3-342ef812bbc5-hostproc\") pod \"cilium-kwchv\" (UID: \"99108173-7867-45eb-b9d3-342ef812bbc5\") " pod="kube-system/cilium-kwchv" Jul 2 08:49:59.658814 kubelet[1960]: I0702 08:49:59.658737 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/99108173-7867-45eb-b9d3-342ef812bbc5-cni-path\") pod \"cilium-kwchv\" (UID: \"99108173-7867-45eb-b9d3-342ef812bbc5\") " pod="kube-system/cilium-kwchv" Jul 2 08:49:59.658814 kubelet[1960]: I0702 08:49:59.658770 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/99108173-7867-45eb-b9d3-342ef812bbc5-bpf-maps\") pod \"cilium-kwchv\" (UID: \"99108173-7867-45eb-b9d3-342ef812bbc5\") " pod="kube-system/cilium-kwchv" Jul 2 08:49:59.658814 kubelet[1960]: I0702 08:49:59.658799 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/99108173-7867-45eb-b9d3-342ef812bbc5-cilium-cgroup\") pod \"cilium-kwchv\" (UID: \"99108173-7867-45eb-b9d3-342ef812bbc5\") " pod="kube-system/cilium-kwchv" Jul 2 08:49:59.658955 kubelet[1960]: I0702 08:49:59.658828 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/99108173-7867-45eb-b9d3-342ef812bbc5-lib-modules\") pod \"cilium-kwchv\" (UID: \"99108173-7867-45eb-b9d3-342ef812bbc5\") " pod="kube-system/cilium-kwchv" Jul 2 08:49:59.658955 kubelet[1960]: I0702 08:49:59.658858 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/99108173-7867-45eb-b9d3-342ef812bbc5-cilium-config-path\") pod \"cilium-kwchv\" (UID: \"99108173-7867-45eb-b9d3-342ef812bbc5\") " pod="kube-system/cilium-kwchv" Jul 2 08:49:59.658955 kubelet[1960]: I0702 08:49:59.658894 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/99108173-7867-45eb-b9d3-342ef812bbc5-clustermesh-secrets\") pod \"cilium-kwchv\" (UID: \"99108173-7867-45eb-b9d3-342ef812bbc5\") " pod="kube-system/cilium-kwchv" Jul 2 08:49:59.737699 sshd[3637]: pam_unix(sshd:session): session closed for user core Jul 2 08:49:59.743734 systemd[1]: Started sshd@21-172.24.4.86:22-172.24.4.1:50494.service. Jul 2 08:49:59.747576 systemd[1]: sshd@20-172.24.4.86:22-172.24.4.1:50478.service: Deactivated successfully. Jul 2 08:49:59.749631 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 08:49:59.753392 systemd-logind[1134]: Session 21 logged out. Waiting for processes to exit. Jul 2 08:49:59.756742 systemd-logind[1134]: Removed session 21. Jul 2 08:49:59.849689 env[1140]: time="2024-07-02T08:49:59.848030203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kwchv,Uid:99108173-7867-45eb-b9d3-342ef812bbc5,Namespace:kube-system,Attempt:0,}" Jul 2 08:49:59.885378 env[1140]: time="2024-07-02T08:49:59.885279648Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:49:59.885695 env[1140]: time="2024-07-02T08:49:59.885362143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:49:59.885695 env[1140]: time="2024-07-02T08:49:59.885380738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:49:59.885931 env[1140]: time="2024-07-02T08:49:59.885708262Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/23e0e7534b46c07a01ea1950249df6222351e7e19b637575fb877025704ef30f pid=3662 runtime=io.containerd.runc.v2 Jul 2 08:49:59.903840 systemd[1]: Started cri-containerd-23e0e7534b46c07a01ea1950249df6222351e7e19b637575fb877025704ef30f.scope. Jul 2 08:49:59.941465 env[1140]: time="2024-07-02T08:49:59.941403861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kwchv,Uid:99108173-7867-45eb-b9d3-342ef812bbc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"23e0e7534b46c07a01ea1950249df6222351e7e19b637575fb877025704ef30f\"" Jul 2 08:49:59.945653 env[1140]: time="2024-07-02T08:49:59.945616257Z" level=info msg="CreateContainer within sandbox \"23e0e7534b46c07a01ea1950249df6222351e7e19b637575fb877025704ef30f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 08:49:59.962718 env[1140]: time="2024-07-02T08:49:59.962664510Z" level=info msg="CreateContainer within sandbox \"23e0e7534b46c07a01ea1950249df6222351e7e19b637575fb877025704ef30f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7af46f9ad383151d2b59ac3235a642baf75b253acaaab16f88c4173590ddf53b\"" Jul 2 08:49:59.964777 env[1140]: time="2024-07-02T08:49:59.964743567Z" level=info msg="StartContainer for \"7af46f9ad383151d2b59ac3235a642baf75b253acaaab16f88c4173590ddf53b\"" Jul 2 08:49:59.982237 systemd[1]: Started cri-containerd-7af46f9ad383151d2b59ac3235a642baf75b253acaaab16f88c4173590ddf53b.scope. Jul 2 08:49:59.998099 systemd[1]: cri-containerd-7af46f9ad383151d2b59ac3235a642baf75b253acaaab16f88c4173590ddf53b.scope: Deactivated successfully. 
Jul 2 08:50:00.020834 env[1140]: time="2024-07-02T08:50:00.020774034Z" level=info msg="shim disconnected" id=7af46f9ad383151d2b59ac3235a642baf75b253acaaab16f88c4173590ddf53b Jul 2 08:50:00.021042 env[1140]: time="2024-07-02T08:50:00.020839686Z" level=warning msg="cleaning up after shim disconnected" id=7af46f9ad383151d2b59ac3235a642baf75b253acaaab16f88c4173590ddf53b namespace=k8s.io Jul 2 08:50:00.021042 env[1140]: time="2024-07-02T08:50:00.020853522Z" level=info msg="cleaning up dead shim" Jul 2 08:50:00.032923 env[1140]: time="2024-07-02T08:50:00.032865807Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:50:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3723 runtime=io.containerd.runc.v2\ntime=\"2024-07-02T08:50:00Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/7af46f9ad383151d2b59ac3235a642baf75b253acaaab16f88c4173590ddf53b/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Jul 2 08:50:00.033247 env[1140]: time="2024-07-02T08:50:00.033139090Z" level=error msg="copy shim log" error="read /proc/self/fd/40: file already closed" Jul 2 08:50:00.033720 env[1140]: time="2024-07-02T08:50:00.033678891Z" level=error msg="Failed to pipe stderr of container \"7af46f9ad383151d2b59ac3235a642baf75b253acaaab16f88c4173590ddf53b\"" error="reading from a closed fifo" Jul 2 08:50:00.034210 env[1140]: time="2024-07-02T08:50:00.034165383Z" level=error msg="Failed to pipe stdout of container \"7af46f9ad383151d2b59ac3235a642baf75b253acaaab16f88c4173590ddf53b\"" error="reading from a closed fifo" Jul 2 08:50:00.038972 env[1140]: time="2024-07-02T08:50:00.038917571Z" level=error msg="StartContainer for \"7af46f9ad383151d2b59ac3235a642baf75b253acaaab16f88c4173590ddf53b\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Jul 2 08:50:00.039670 kubelet[1960]: E0702 08:50:00.039157 1960 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="7af46f9ad383151d2b59ac3235a642baf75b253acaaab16f88c4173590ddf53b" Jul 2 08:50:00.046101 kubelet[1960]: E0702 08:50:00.045929 1960 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Jul 2 08:50:00.046101 kubelet[1960]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Jul 2 08:50:00.046101 kubelet[1960]: rm /hostbin/cilium-mount Jul 2 08:50:00.046419 kubelet[1960]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p282b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-kwchv_kube-system(99108173-7867-45eb-b9d3-342ef812bbc5): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Jul 2 08:50:00.049086 kubelet[1960]: E0702 08:50:00.049014 1960 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-kwchv" podUID="99108173-7867-45eb-b9d3-342ef812bbc5" Jul 2 08:50:00.458500 env[1140]: time="2024-07-02T08:50:00.457106789Z" level=info msg="CreateContainer within sandbox \"23e0e7534b46c07a01ea1950249df6222351e7e19b637575fb877025704ef30f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Jul 2 08:50:00.496496 env[1140]: time="2024-07-02T08:50:00.496414142Z" level=info msg="CreateContainer within sandbox \"23e0e7534b46c07a01ea1950249df6222351e7e19b637575fb877025704ef30f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"c52912e9dc0a18342139f91579506a30aed3bf552534db663ea0c269611639c1\"" Jul 2 08:50:00.498153 env[1140]: time="2024-07-02T08:50:00.498065788Z" level=info msg="StartContainer for \"c52912e9dc0a18342139f91579506a30aed3bf552534db663ea0c269611639c1\"" Jul 2 08:50:00.540774 systemd[1]: Started cri-containerd-c52912e9dc0a18342139f91579506a30aed3bf552534db663ea0c269611639c1.scope. Jul 2 08:50:00.552345 systemd[1]: cri-containerd-c52912e9dc0a18342139f91579506a30aed3bf552534db663ea0c269611639c1.scope: Deactivated successfully. 
Jul 2 08:50:00.562854 env[1140]: time="2024-07-02T08:50:00.562798576Z" level=info msg="shim disconnected" id=c52912e9dc0a18342139f91579506a30aed3bf552534db663ea0c269611639c1 Jul 2 08:50:00.563016 env[1140]: time="2024-07-02T08:50:00.562859020Z" level=warning msg="cleaning up after shim disconnected" id=c52912e9dc0a18342139f91579506a30aed3bf552534db663ea0c269611639c1 namespace=k8s.io Jul 2 08:50:00.563016 env[1140]: time="2024-07-02T08:50:00.562871954Z" level=info msg="cleaning up dead shim" Jul 2 08:50:00.570751 env[1140]: time="2024-07-02T08:50:00.570706910Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:50:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3759 runtime=io.containerd.runc.v2\ntime=\"2024-07-02T08:50:00Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/c52912e9dc0a18342139f91579506a30aed3bf552534db663ea0c269611639c1/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Jul 2 08:50:00.571006 env[1140]: time="2024-07-02T08:50:00.570952018Z" level=error msg="copy shim log" error="read /proc/self/fd/40: file already closed" Jul 2 08:50:00.572652 env[1140]: time="2024-07-02T08:50:00.572609575Z" level=error msg="Failed to pipe stderr of container \"c52912e9dc0a18342139f91579506a30aed3bf552534db663ea0c269611639c1\"" error="reading from a closed fifo" Jul 2 08:50:00.572717 env[1140]: time="2024-07-02T08:50:00.572689194Z" level=error msg="Failed to pipe stdout of container \"c52912e9dc0a18342139f91579506a30aed3bf552534db663ea0c269611639c1\"" error="reading from a closed fifo" Jul 2 08:50:00.575787 env[1140]: time="2024-07-02T08:50:00.575743909Z" level=error msg="StartContainer for \"c52912e9dc0a18342139f91579506a30aed3bf552534db663ea0c269611639c1\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Jul 2 08:50:00.576813 kubelet[1960]: E0702 08:50:00.575984 1960 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="c52912e9dc0a18342139f91579506a30aed3bf552534db663ea0c269611639c1" Jul 2 08:50:00.576813 kubelet[1960]: E0702 08:50:00.576155 1960 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Jul 2 08:50:00.576813 kubelet[1960]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Jul 2 08:50:00.576813 kubelet[1960]: rm /hostbin/cilium-mount Jul 2 08:50:00.576813 kubelet[1960]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p282b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-kwchv_kube-system(99108173-7867-45eb-b9d3-342ef812bbc5): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Jul 2 08:50:00.576813 kubelet[1960]: E0702 08:50:00.576187 1960 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-kwchv" podUID="99108173-7867-45eb-b9d3-342ef812bbc5" Jul 2 08:50:01.327391 sshd[3647]: Accepted publickey for core from 172.24.4.1 port 50494 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q Jul 2 08:50:01.330018 sshd[3647]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:50:01.340803 systemd-logind[1134]: New session 22 of user core. Jul 2 08:50:01.341547 systemd[1]: Started session-22.scope. Jul 2 08:50:01.454997 kubelet[1960]: I0702 08:50:01.454919 1960 scope.go:117] "RemoveContainer" containerID="7af46f9ad383151d2b59ac3235a642baf75b253acaaab16f88c4173590ddf53b" Jul 2 08:50:01.455798 kubelet[1960]: I0702 08:50:01.455753 1960 scope.go:117] "RemoveContainer" containerID="7af46f9ad383151d2b59ac3235a642baf75b253acaaab16f88c4173590ddf53b" Jul 2 08:50:01.458442 env[1140]: time="2024-07-02T08:50:01.458335418Z" level=info msg="RemoveContainer for \"7af46f9ad383151d2b59ac3235a642baf75b253acaaab16f88c4173590ddf53b\"" Jul 2 08:50:01.460547 env[1140]: time="2024-07-02T08:50:01.460488633Z" level=info msg="RemoveContainer for \"7af46f9ad383151d2b59ac3235a642baf75b253acaaab16f88c4173590ddf53b\"" Jul 2 08:50:01.461045 env[1140]: time="2024-07-02T08:50:01.460975426Z" level=error msg="RemoveContainer for \"7af46f9ad383151d2b59ac3235a642baf75b253acaaab16f88c4173590ddf53b\" failed" error="failed to set removing state for container \"7af46f9ad383151d2b59ac3235a642baf75b253acaaab16f88c4173590ddf53b\": container is already in removing state" Jul 2 08:50:01.462226 kubelet[1960]: E0702 08:50:01.462140 1960 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for 
container \"7af46f9ad383151d2b59ac3235a642baf75b253acaaab16f88c4173590ddf53b\": container is already in removing state" containerID="7af46f9ad383151d2b59ac3235a642baf75b253acaaab16f88c4173590ddf53b" Jul 2 08:50:01.462575 kubelet[1960]: E0702 08:50:01.462536 1960 kuberuntime_container.go:867] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "7af46f9ad383151d2b59ac3235a642baf75b253acaaab16f88c4173590ddf53b": container is already in removing state; Skipping pod "cilium-kwchv_kube-system(99108173-7867-45eb-b9d3-342ef812bbc5)" Jul 2 08:50:01.463788 kubelet[1960]: E0702 08:50:01.463545 1960 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-kwchv_kube-system(99108173-7867-45eb-b9d3-342ef812bbc5)\"" pod="kube-system/cilium-kwchv" podUID="99108173-7867-45eb-b9d3-342ef812bbc5" Jul 2 08:50:01.467889 env[1140]: time="2024-07-02T08:50:01.467798554Z" level=info msg="RemoveContainer for \"7af46f9ad383151d2b59ac3235a642baf75b253acaaab16f88c4173590ddf53b\" returns successfully" Jul 2 08:50:02.165473 sshd[3647]: pam_unix(sshd:session): session closed for user core Jul 2 08:50:02.173056 systemd[1]: sshd@21-172.24.4.86:22-172.24.4.1:50494.service: Deactivated successfully. Jul 2 08:50:02.177108 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 08:50:02.180176 systemd-logind[1134]: Session 22 logged out. Waiting for processes to exit. Jul 2 08:50:02.184146 systemd[1]: Started sshd@22-172.24.4.86:22-172.24.4.1:50498.service. Jul 2 08:50:02.187650 systemd-logind[1134]: Removed session 22. 
Jul 2 08:50:02.462552 env[1140]: time="2024-07-02T08:50:02.462328834Z" level=info msg="StopPodSandbox for \"23e0e7534b46c07a01ea1950249df6222351e7e19b637575fb877025704ef30f\"" Jul 2 08:50:02.463769 env[1140]: time="2024-07-02T08:50:02.463550324Z" level=info msg="Container to stop \"c52912e9dc0a18342139f91579506a30aed3bf552534db663ea0c269611639c1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 08:50:02.471768 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-23e0e7534b46c07a01ea1950249df6222351e7e19b637575fb877025704ef30f-shm.mount: Deactivated successfully. Jul 2 08:50:02.515280 systemd[1]: cri-containerd-23e0e7534b46c07a01ea1950249df6222351e7e19b637575fb877025704ef30f.scope: Deactivated successfully. Jul 2 08:50:02.568041 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-23e0e7534b46c07a01ea1950249df6222351e7e19b637575fb877025704ef30f-rootfs.mount: Deactivated successfully. Jul 2 08:50:02.582918 env[1140]: time="2024-07-02T08:50:02.582838767Z" level=info msg="shim disconnected" id=23e0e7534b46c07a01ea1950249df6222351e7e19b637575fb877025704ef30f Jul 2 08:50:02.582918 env[1140]: time="2024-07-02T08:50:02.582916583Z" level=warning msg="cleaning up after shim disconnected" id=23e0e7534b46c07a01ea1950249df6222351e7e19b637575fb877025704ef30f namespace=k8s.io Jul 2 08:50:02.583140 env[1140]: time="2024-07-02T08:50:02.582929206Z" level=info msg="cleaning up dead shim" Jul 2 08:50:02.592647 env[1140]: time="2024-07-02T08:50:02.592537656Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:50:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3803 runtime=io.containerd.runc.v2\n" Jul 2 08:50:02.593149 env[1140]: time="2024-07-02T08:50:02.593110760Z" level=info msg="TearDown network for sandbox \"23e0e7534b46c07a01ea1950249df6222351e7e19b637575fb877025704ef30f\" successfully" Jul 2 08:50:02.593149 env[1140]: time="2024-07-02T08:50:02.593142700Z" level=info msg="StopPodSandbox for 
\"23e0e7534b46c07a01ea1950249df6222351e7e19b637575fb877025704ef30f\" returns successfully" Jul 2 08:50:02.664808 kubelet[1960]: E0702 08:50:02.664750 1960 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 08:50:02.696618 kubelet[1960]: I0702 08:50:02.693885 1960 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/99108173-7867-45eb-b9d3-342ef812bbc5-clustermesh-secrets\") pod \"99108173-7867-45eb-b9d3-342ef812bbc5\" (UID: \"99108173-7867-45eb-b9d3-342ef812bbc5\") " Jul 2 08:50:02.696618 kubelet[1960]: I0702 08:50:02.693998 1960 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/99108173-7867-45eb-b9d3-342ef812bbc5-host-proc-sys-net\") pod \"99108173-7867-45eb-b9d3-342ef812bbc5\" (UID: \"99108173-7867-45eb-b9d3-342ef812bbc5\") " Jul 2 08:50:02.696618 kubelet[1960]: I0702 08:50:02.694079 1960 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/99108173-7867-45eb-b9d3-342ef812bbc5-hostproc\") pod \"99108173-7867-45eb-b9d3-342ef812bbc5\" (UID: \"99108173-7867-45eb-b9d3-342ef812bbc5\") " Jul 2 08:50:02.696618 kubelet[1960]: I0702 08:50:02.694122 1960 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/99108173-7867-45eb-b9d3-342ef812bbc5-etc-cni-netd\") pod \"99108173-7867-45eb-b9d3-342ef812bbc5\" (UID: \"99108173-7867-45eb-b9d3-342ef812bbc5\") " Jul 2 08:50:02.696618 kubelet[1960]: I0702 08:50:02.694198 1960 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/99108173-7867-45eb-b9d3-342ef812bbc5-lib-modules\") pod 
\"99108173-7867-45eb-b9d3-342ef812bbc5\" (UID: \"99108173-7867-45eb-b9d3-342ef812bbc5\") " Jul 2 08:50:02.696618 kubelet[1960]: I0702 08:50:02.694277 1960 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/99108173-7867-45eb-b9d3-342ef812bbc5-host-proc-sys-kernel\") pod \"99108173-7867-45eb-b9d3-342ef812bbc5\" (UID: \"99108173-7867-45eb-b9d3-342ef812bbc5\") " Jul 2 08:50:02.696618 kubelet[1960]: I0702 08:50:02.694354 1960 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/99108173-7867-45eb-b9d3-342ef812bbc5-bpf-maps\") pod \"99108173-7867-45eb-b9d3-342ef812bbc5\" (UID: \"99108173-7867-45eb-b9d3-342ef812bbc5\") " Jul 2 08:50:02.696618 kubelet[1960]: I0702 08:50:02.694404 1960 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/99108173-7867-45eb-b9d3-342ef812bbc5-hubble-tls\") pod \"99108173-7867-45eb-b9d3-342ef812bbc5\" (UID: \"99108173-7867-45eb-b9d3-342ef812bbc5\") " Jul 2 08:50:02.696618 kubelet[1960]: I0702 08:50:02.694484 1960 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p282b\" (UniqueName: \"kubernetes.io/projected/99108173-7867-45eb-b9d3-342ef812bbc5-kube-api-access-p282b\") pod \"99108173-7867-45eb-b9d3-342ef812bbc5\" (UID: \"99108173-7867-45eb-b9d3-342ef812bbc5\") " Jul 2 08:50:02.696618 kubelet[1960]: I0702 08:50:02.694555 1960 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/99108173-7867-45eb-b9d3-342ef812bbc5-cilium-cgroup\") pod \"99108173-7867-45eb-b9d3-342ef812bbc5\" (UID: \"99108173-7867-45eb-b9d3-342ef812bbc5\") " Jul 2 08:50:02.696618 kubelet[1960]: I0702 08:50:02.694645 1960 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/99108173-7867-45eb-b9d3-342ef812bbc5-cilium-config-path\") pod \"99108173-7867-45eb-b9d3-342ef812bbc5\" (UID: \"99108173-7867-45eb-b9d3-342ef812bbc5\") " Jul 2 08:50:02.696618 kubelet[1960]: I0702 08:50:02.694691 1960 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/99108173-7867-45eb-b9d3-342ef812bbc5-cni-path\") pod \"99108173-7867-45eb-b9d3-342ef812bbc5\" (UID: \"99108173-7867-45eb-b9d3-342ef812bbc5\") " Jul 2 08:50:02.696618 kubelet[1960]: I0702 08:50:02.694788 1960 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/99108173-7867-45eb-b9d3-342ef812bbc5-cilium-run\") pod \"99108173-7867-45eb-b9d3-342ef812bbc5\" (UID: \"99108173-7867-45eb-b9d3-342ef812bbc5\") " Jul 2 08:50:02.696618 kubelet[1960]: I0702 08:50:02.694879 1960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99108173-7867-45eb-b9d3-342ef812bbc5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "99108173-7867-45eb-b9d3-342ef812bbc5" (UID: "99108173-7867-45eb-b9d3-342ef812bbc5"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:50:02.696618 kubelet[1960]: I0702 08:50:02.694885 1960 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/99108173-7867-45eb-b9d3-342ef812bbc5-cilium-ipsec-secrets\") pod \"99108173-7867-45eb-b9d3-342ef812bbc5\" (UID: \"99108173-7867-45eb-b9d3-342ef812bbc5\") " Jul 2 08:50:02.696618 kubelet[1960]: I0702 08:50:02.694948 1960 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/99108173-7867-45eb-b9d3-342ef812bbc5-xtables-lock\") pod \"99108173-7867-45eb-b9d3-342ef812bbc5\" (UID: \"99108173-7867-45eb-b9d3-342ef812bbc5\") " Jul 2 08:50:02.697273 kubelet[1960]: I0702 08:50:02.694992 1960 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/99108173-7867-45eb-b9d3-342ef812bbc5-bpf-maps\") on node \"ci-3510-3-5-a-cacadfe6a6.novalocal\" DevicePath \"\"" Jul 2 08:50:02.697273 kubelet[1960]: I0702 08:50:02.695011 1960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99108173-7867-45eb-b9d3-342ef812bbc5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "99108173-7867-45eb-b9d3-342ef812bbc5" (UID: "99108173-7867-45eb-b9d3-342ef812bbc5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:50:02.697273 kubelet[1960]: I0702 08:50:02.695029 1960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99108173-7867-45eb-b9d3-342ef812bbc5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "99108173-7867-45eb-b9d3-342ef812bbc5" (UID: "99108173-7867-45eb-b9d3-342ef812bbc5"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:50:02.697273 kubelet[1960]: I0702 08:50:02.695045 1960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99108173-7867-45eb-b9d3-342ef812bbc5-hostproc" (OuterVolumeSpecName: "hostproc") pod "99108173-7867-45eb-b9d3-342ef812bbc5" (UID: "99108173-7867-45eb-b9d3-342ef812bbc5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:50:02.697273 kubelet[1960]: I0702 08:50:02.695061 1960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99108173-7867-45eb-b9d3-342ef812bbc5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "99108173-7867-45eb-b9d3-342ef812bbc5" (UID: "99108173-7867-45eb-b9d3-342ef812bbc5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:50:02.697273 kubelet[1960]: I0702 08:50:02.695075 1960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99108173-7867-45eb-b9d3-342ef812bbc5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "99108173-7867-45eb-b9d3-342ef812bbc5" (UID: "99108173-7867-45eb-b9d3-342ef812bbc5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:50:02.697273 kubelet[1960]: I0702 08:50:02.695090 1960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99108173-7867-45eb-b9d3-342ef812bbc5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "99108173-7867-45eb-b9d3-342ef812bbc5" (UID: "99108173-7867-45eb-b9d3-342ef812bbc5"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:50:02.697273 kubelet[1960]: I0702 08:50:02.695106 1960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99108173-7867-45eb-b9d3-342ef812bbc5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "99108173-7867-45eb-b9d3-342ef812bbc5" (UID: "99108173-7867-45eb-b9d3-342ef812bbc5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:50:02.701618 systemd[1]: var-lib-kubelet-pods-99108173\x2d7867\x2d45eb\x2db9d3\x2d342ef812bbc5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 08:50:02.702969 kubelet[1960]: I0702 08:50:02.702941 1960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99108173-7867-45eb-b9d3-342ef812bbc5-cni-path" (OuterVolumeSpecName: "cni-path") pod "99108173-7867-45eb-b9d3-342ef812bbc5" (UID: "99108173-7867-45eb-b9d3-342ef812bbc5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:50:02.703182 kubelet[1960]: I0702 08:50:02.703151 1960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99108173-7867-45eb-b9d3-342ef812bbc5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "99108173-7867-45eb-b9d3-342ef812bbc5" (UID: "99108173-7867-45eb-b9d3-342ef812bbc5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:50:02.703285 kubelet[1960]: I0702 08:50:02.703268 1960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99108173-7867-45eb-b9d3-342ef812bbc5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "99108173-7867-45eb-b9d3-342ef812bbc5" (UID: "99108173-7867-45eb-b9d3-342ef812bbc5"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 08:50:02.705406 kubelet[1960]: I0702 08:50:02.705336 1960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99108173-7867-45eb-b9d3-342ef812bbc5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "99108173-7867-45eb-b9d3-342ef812bbc5" (UID: "99108173-7867-45eb-b9d3-342ef812bbc5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 08:50:02.709569 systemd[1]: var-lib-kubelet-pods-99108173\x2d7867\x2d45eb\x2db9d3\x2d342ef812bbc5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 08:50:02.712242 kubelet[1960]: I0702 08:50:02.712170 1960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99108173-7867-45eb-b9d3-342ef812bbc5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "99108173-7867-45eb-b9d3-342ef812bbc5" (UID: "99108173-7867-45eb-b9d3-342ef812bbc5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 08:50:02.716967 systemd[1]: var-lib-kubelet-pods-99108173\x2d7867\x2d45eb\x2db9d3\x2d342ef812bbc5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp282b.mount: Deactivated successfully. Jul 2 08:50:02.718215 kubelet[1960]: I0702 08:50:02.716993 1960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99108173-7867-45eb-b9d3-342ef812bbc5-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "99108173-7867-45eb-b9d3-342ef812bbc5" (UID: "99108173-7867-45eb-b9d3-342ef812bbc5"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 08:50:02.717070 systemd[1]: var-lib-kubelet-pods-99108173\x2d7867\x2d45eb\x2db9d3\x2d342ef812bbc5-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Jul 2 08:50:02.719729 kubelet[1960]: I0702 08:50:02.719697 1960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99108173-7867-45eb-b9d3-342ef812bbc5-kube-api-access-p282b" (OuterVolumeSpecName: "kube-api-access-p282b") pod "99108173-7867-45eb-b9d3-342ef812bbc5" (UID: "99108173-7867-45eb-b9d3-342ef812bbc5"). InnerVolumeSpecName "kube-api-access-p282b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 08:50:02.795469 kubelet[1960]: I0702 08:50:02.795437 1960 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/99108173-7867-45eb-b9d3-342ef812bbc5-etc-cni-netd\") on node \"ci-3510-3-5-a-cacadfe6a6.novalocal\" DevicePath \"\"" Jul 2 08:50:02.795678 kubelet[1960]: I0702 08:50:02.795663 1960 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/99108173-7867-45eb-b9d3-342ef812bbc5-lib-modules\") on node \"ci-3510-3-5-a-cacadfe6a6.novalocal\" DevicePath \"\"" Jul 2 08:50:02.795784 kubelet[1960]: I0702 08:50:02.795768 1960 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/99108173-7867-45eb-b9d3-342ef812bbc5-host-proc-sys-kernel\") on node \"ci-3510-3-5-a-cacadfe6a6.novalocal\" DevicePath \"\"" Jul 2 08:50:02.795902 kubelet[1960]: I0702 08:50:02.795889 1960 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/99108173-7867-45eb-b9d3-342ef812bbc5-hubble-tls\") on node \"ci-3510-3-5-a-cacadfe6a6.novalocal\" DevicePath \"\"" Jul 2 08:50:02.796003 kubelet[1960]: I0702 08:50:02.795990 1960 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-p282b\" (UniqueName: \"kubernetes.io/projected/99108173-7867-45eb-b9d3-342ef812bbc5-kube-api-access-p282b\") on node \"ci-3510-3-5-a-cacadfe6a6.novalocal\" DevicePath \"\"" Jul 2 08:50:02.796088 kubelet[1960]: I0702 
08:50:02.796076 1960 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/99108173-7867-45eb-b9d3-342ef812bbc5-cilium-cgroup\") on node \"ci-3510-3-5-a-cacadfe6a6.novalocal\" DevicePath \"\"" Jul 2 08:50:02.796201 kubelet[1960]: I0702 08:50:02.796187 1960 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/99108173-7867-45eb-b9d3-342ef812bbc5-cni-path\") on node \"ci-3510-3-5-a-cacadfe6a6.novalocal\" DevicePath \"\"" Jul 2 08:50:02.796295 kubelet[1960]: I0702 08:50:02.796282 1960 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/99108173-7867-45eb-b9d3-342ef812bbc5-cilium-config-path\") on node \"ci-3510-3-5-a-cacadfe6a6.novalocal\" DevicePath \"\"" Jul 2 08:50:02.796387 kubelet[1960]: I0702 08:50:02.796375 1960 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/99108173-7867-45eb-b9d3-342ef812bbc5-cilium-ipsec-secrets\") on node \"ci-3510-3-5-a-cacadfe6a6.novalocal\" DevicePath \"\"" Jul 2 08:50:02.796483 kubelet[1960]: I0702 08:50:02.796470 1960 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/99108173-7867-45eb-b9d3-342ef812bbc5-cilium-run\") on node \"ci-3510-3-5-a-cacadfe6a6.novalocal\" DevicePath \"\"" Jul 2 08:50:02.796572 kubelet[1960]: I0702 08:50:02.796560 1960 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/99108173-7867-45eb-b9d3-342ef812bbc5-xtables-lock\") on node \"ci-3510-3-5-a-cacadfe6a6.novalocal\" DevicePath \"\"" Jul 2 08:50:02.796673 kubelet[1960]: I0702 08:50:02.796659 1960 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/99108173-7867-45eb-b9d3-342ef812bbc5-host-proc-sys-net\") on node 
\"ci-3510-3-5-a-cacadfe6a6.novalocal\" DevicePath \"\"" Jul 2 08:50:02.796760 kubelet[1960]: I0702 08:50:02.796748 1960 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/99108173-7867-45eb-b9d3-342ef812bbc5-hostproc\") on node \"ci-3510-3-5-a-cacadfe6a6.novalocal\" DevicePath \"\"" Jul 2 08:50:02.796853 kubelet[1960]: I0702 08:50:02.796840 1960 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/99108173-7867-45eb-b9d3-342ef812bbc5-clustermesh-secrets\") on node \"ci-3510-3-5-a-cacadfe6a6.novalocal\" DevicePath \"\"" Jul 2 08:50:03.140128 kubelet[1960]: W0702 08:50:03.140028 1960 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99108173_7867_45eb_b9d3_342ef812bbc5.slice/cri-containerd-7af46f9ad383151d2b59ac3235a642baf75b253acaaab16f88c4173590ddf53b.scope WatchSource:0}: container "7af46f9ad383151d2b59ac3235a642baf75b253acaaab16f88c4173590ddf53b" in namespace "k8s.io": not found Jul 2 08:50:03.468315 kubelet[1960]: I0702 08:50:03.465943 1960 scope.go:117] "RemoveContainer" containerID="c52912e9dc0a18342139f91579506a30aed3bf552534db663ea0c269611639c1" Jul 2 08:50:03.472834 env[1140]: time="2024-07-02T08:50:03.472495689Z" level=info msg="RemoveContainer for \"c52912e9dc0a18342139f91579506a30aed3bf552534db663ea0c269611639c1\"" Jul 2 08:50:03.477814 env[1140]: time="2024-07-02T08:50:03.477702799Z" level=info msg="RemoveContainer for \"c52912e9dc0a18342139f91579506a30aed3bf552534db663ea0c269611639c1\" returns successfully" Jul 2 08:50:03.480475 systemd[1]: Removed slice kubepods-burstable-pod99108173_7867_45eb_b9d3_342ef812bbc5.slice. 
Jul 2 08:50:03.533150 kubelet[1960]: I0702 08:50:03.533093 1960 topology_manager.go:215] "Topology Admit Handler" podUID="63b3f90f-24a6-4099-940c-e69e3134a829" podNamespace="kube-system" podName="cilium-tnjjz" Jul 2 08:50:03.533402 kubelet[1960]: E0702 08:50:03.533182 1960 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="99108173-7867-45eb-b9d3-342ef812bbc5" containerName="mount-cgroup" Jul 2 08:50:03.533402 kubelet[1960]: E0702 08:50:03.533199 1960 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="99108173-7867-45eb-b9d3-342ef812bbc5" containerName="mount-cgroup" Jul 2 08:50:03.533402 kubelet[1960]: I0702 08:50:03.533228 1960 memory_manager.go:354] "RemoveStaleState removing state" podUID="99108173-7867-45eb-b9d3-342ef812bbc5" containerName="mount-cgroup" Jul 2 08:50:03.533402 kubelet[1960]: I0702 08:50:03.533237 1960 memory_manager.go:354] "RemoveStaleState removing state" podUID="99108173-7867-45eb-b9d3-342ef812bbc5" containerName="mount-cgroup" Jul 2 08:50:03.544883 systemd[1]: Created slice kubepods-burstable-pod63b3f90f_24a6_4099_940c_e69e3134a829.slice. 
Jul 2 08:50:03.602812 kubelet[1960]: I0702 08:50:03.602744 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/63b3f90f-24a6-4099-940c-e69e3134a829-bpf-maps\") pod \"cilium-tnjjz\" (UID: \"63b3f90f-24a6-4099-940c-e69e3134a829\") " pod="kube-system/cilium-tnjjz" Jul 2 08:50:03.602812 kubelet[1960]: I0702 08:50:03.602816 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/63b3f90f-24a6-4099-940c-e69e3134a829-lib-modules\") pod \"cilium-tnjjz\" (UID: \"63b3f90f-24a6-4099-940c-e69e3134a829\") " pod="kube-system/cilium-tnjjz" Jul 2 08:50:03.603029 kubelet[1960]: I0702 08:50:03.602840 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/63b3f90f-24a6-4099-940c-e69e3134a829-host-proc-sys-net\") pod \"cilium-tnjjz\" (UID: \"63b3f90f-24a6-4099-940c-e69e3134a829\") " pod="kube-system/cilium-tnjjz" Jul 2 08:50:03.603029 kubelet[1960]: I0702 08:50:03.602878 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/63b3f90f-24a6-4099-940c-e69e3134a829-hostproc\") pod \"cilium-tnjjz\" (UID: \"63b3f90f-24a6-4099-940c-e69e3134a829\") " pod="kube-system/cilium-tnjjz" Jul 2 08:50:03.603029 kubelet[1960]: I0702 08:50:03.602900 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/63b3f90f-24a6-4099-940c-e69e3134a829-clustermesh-secrets\") pod \"cilium-tnjjz\" (UID: \"63b3f90f-24a6-4099-940c-e69e3134a829\") " pod="kube-system/cilium-tnjjz" Jul 2 08:50:03.603029 kubelet[1960]: I0702 08:50:03.602919 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/63b3f90f-24a6-4099-940c-e69e3134a829-cilium-config-path\") pod \"cilium-tnjjz\" (UID: \"63b3f90f-24a6-4099-940c-e69e3134a829\") " pod="kube-system/cilium-tnjjz" Jul 2 08:50:03.603029 kubelet[1960]: I0702 08:50:03.602953 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/63b3f90f-24a6-4099-940c-e69e3134a829-hubble-tls\") pod \"cilium-tnjjz\" (UID: \"63b3f90f-24a6-4099-940c-e69e3134a829\") " pod="kube-system/cilium-tnjjz" Jul 2 08:50:03.603029 kubelet[1960]: I0702 08:50:03.602976 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/63b3f90f-24a6-4099-940c-e69e3134a829-host-proc-sys-kernel\") pod \"cilium-tnjjz\" (UID: \"63b3f90f-24a6-4099-940c-e69e3134a829\") " pod="kube-system/cilium-tnjjz" Jul 2 08:50:03.603029 kubelet[1960]: I0702 08:50:03.602995 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/63b3f90f-24a6-4099-940c-e69e3134a829-cilium-run\") pod \"cilium-tnjjz\" (UID: \"63b3f90f-24a6-4099-940c-e69e3134a829\") " pod="kube-system/cilium-tnjjz" Jul 2 08:50:03.603029 kubelet[1960]: I0702 08:50:03.603012 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/63b3f90f-24a6-4099-940c-e69e3134a829-xtables-lock\") pod \"cilium-tnjjz\" (UID: \"63b3f90f-24a6-4099-940c-e69e3134a829\") " pod="kube-system/cilium-tnjjz" Jul 2 08:50:03.603364 kubelet[1960]: I0702 08:50:03.603050 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/63b3f90f-24a6-4099-940c-e69e3134a829-cilium-ipsec-secrets\") pod \"cilium-tnjjz\" (UID: \"63b3f90f-24a6-4099-940c-e69e3134a829\") " pod="kube-system/cilium-tnjjz" Jul 2 08:50:03.603364 kubelet[1960]: I0702 08:50:03.603070 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/63b3f90f-24a6-4099-940c-e69e3134a829-etc-cni-netd\") pod \"cilium-tnjjz\" (UID: \"63b3f90f-24a6-4099-940c-e69e3134a829\") " pod="kube-system/cilium-tnjjz" Jul 2 08:50:03.603364 kubelet[1960]: I0702 08:50:03.603087 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/63b3f90f-24a6-4099-940c-e69e3134a829-cilium-cgroup\") pod \"cilium-tnjjz\" (UID: \"63b3f90f-24a6-4099-940c-e69e3134a829\") " pod="kube-system/cilium-tnjjz" Jul 2 08:50:03.603364 kubelet[1960]: I0702 08:50:03.603104 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/63b3f90f-24a6-4099-940c-e69e3134a829-cni-path\") pod \"cilium-tnjjz\" (UID: \"63b3f90f-24a6-4099-940c-e69e3134a829\") " pod="kube-system/cilium-tnjjz" Jul 2 08:50:03.603364 kubelet[1960]: I0702 08:50:03.603143 1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsvzn\" (UniqueName: \"kubernetes.io/projected/63b3f90f-24a6-4099-940c-e69e3134a829-kube-api-access-nsvzn\") pod \"cilium-tnjjz\" (UID: \"63b3f90f-24a6-4099-940c-e69e3134a829\") " pod="kube-system/cilium-tnjjz" Jul 2 08:50:03.817712 sshd[3782]: Accepted publickey for core from 172.24.4.1 port 50498 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q Jul 2 08:50:03.820850 sshd[3782]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:50:03.831614 systemd-logind[1134]: New session 23 of 
user core. Jul 2 08:50:03.833167 systemd[1]: Started session-23.scope. Jul 2 08:50:03.851034 env[1140]: time="2024-07-02T08:50:03.850939101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tnjjz,Uid:63b3f90f-24a6-4099-940c-e69e3134a829,Namespace:kube-system,Attempt:0,}" Jul 2 08:50:03.988050 env[1140]: time="2024-07-02T08:50:03.987860203Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:50:03.988350 env[1140]: time="2024-07-02T08:50:03.987949431Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:50:03.988350 env[1140]: time="2024-07-02T08:50:03.987982382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:50:03.988350 env[1140]: time="2024-07-02T08:50:03.988227151Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/48baf71b3aabda0f0ad3cfb92a06fb0e78d74272025560720120e67321ae8919 pid=3832 runtime=io.containerd.runc.v2 Jul 2 08:50:04.020142 systemd[1]: Started cri-containerd-48baf71b3aabda0f0ad3cfb92a06fb0e78d74272025560720120e67321ae8919.scope. 
Jul 2 08:50:04.085064 env[1140]: time="2024-07-02T08:50:04.084835155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tnjjz,Uid:63b3f90f-24a6-4099-940c-e69e3134a829,Namespace:kube-system,Attempt:0,} returns sandbox id \"48baf71b3aabda0f0ad3cfb92a06fb0e78d74272025560720120e67321ae8919\""
Jul 2 08:50:04.092345 env[1140]: time="2024-07-02T08:50:04.091477715Z" level=info msg="CreateContainer within sandbox \"48baf71b3aabda0f0ad3cfb92a06fb0e78d74272025560720120e67321ae8919\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 2 08:50:04.128695 env[1140]: time="2024-07-02T08:50:04.128564528Z" level=info msg="CreateContainer within sandbox \"48baf71b3aabda0f0ad3cfb92a06fb0e78d74272025560720120e67321ae8919\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"335e887709374a3d7dd51e4eec60456dddb0390f165bb7795b40bf5e20cdba49\""
Jul 2 08:50:04.134517 env[1140]: time="2024-07-02T08:50:04.134397120Z" level=info msg="StartContainer for \"335e887709374a3d7dd51e4eec60456dddb0390f165bb7795b40bf5e20cdba49\""
Jul 2 08:50:04.166756 systemd[1]: Started cri-containerd-335e887709374a3d7dd51e4eec60456dddb0390f165bb7795b40bf5e20cdba49.scope.
Jul 2 08:50:04.268911 env[1140]: time="2024-07-02T08:50:04.268853043Z" level=info msg="StartContainer for \"335e887709374a3d7dd51e4eec60456dddb0390f165bb7795b40bf5e20cdba49\" returns successfully"
Jul 2 08:50:04.296784 systemd[1]: cri-containerd-335e887709374a3d7dd51e4eec60456dddb0390f165bb7795b40bf5e20cdba49.scope: Deactivated successfully.
Jul 2 08:50:04.350495 env[1140]: time="2024-07-02T08:50:04.350244686Z" level=info msg="shim disconnected" id=335e887709374a3d7dd51e4eec60456dddb0390f165bb7795b40bf5e20cdba49
Jul 2 08:50:04.351037 env[1140]: time="2024-07-02T08:50:04.350989453Z" level=warning msg="cleaning up after shim disconnected" id=335e887709374a3d7dd51e4eec60456dddb0390f165bb7795b40bf5e20cdba49 namespace=k8s.io
Jul 2 08:50:04.351291 env[1140]: time="2024-07-02T08:50:04.351252936Z" level=info msg="cleaning up dead shim"
Jul 2 08:50:04.360308 env[1140]: time="2024-07-02T08:50:04.360254408Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:50:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3921 runtime=io.containerd.runc.v2\n"
Jul 2 08:50:04.445212 kubelet[1960]: I0702 08:50:04.445161 1960 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99108173-7867-45eb-b9d3-342ef812bbc5" path="/var/lib/kubelet/pods/99108173-7867-45eb-b9d3-342ef812bbc5/volumes"
Jul 2 08:50:04.480549 env[1140]: time="2024-07-02T08:50:04.480510436Z" level=info msg="CreateContainer within sandbox \"48baf71b3aabda0f0ad3cfb92a06fb0e78d74272025560720120e67321ae8919\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 2 08:50:04.506813 env[1140]: time="2024-07-02T08:50:04.506771417Z" level=info msg="CreateContainer within sandbox \"48baf71b3aabda0f0ad3cfb92a06fb0e78d74272025560720120e67321ae8919\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7aebf03d787b99da0dc8b4547e69b9b5c9b0cad528bdec2ad5af56bf2c1d244e\""
Jul 2 08:50:04.508615 env[1140]: time="2024-07-02T08:50:04.508011703Z" level=info msg="StartContainer for \"7aebf03d787b99da0dc8b4547e69b9b5c9b0cad528bdec2ad5af56bf2c1d244e\""
Jul 2 08:50:04.545287 systemd[1]: Started cri-containerd-7aebf03d787b99da0dc8b4547e69b9b5c9b0cad528bdec2ad5af56bf2c1d244e.scope.
Jul 2 08:50:04.581495 env[1140]: time="2024-07-02T08:50:04.581429451Z" level=info msg="StartContainer for \"7aebf03d787b99da0dc8b4547e69b9b5c9b0cad528bdec2ad5af56bf2c1d244e\" returns successfully"
Jul 2 08:50:04.598296 systemd[1]: cri-containerd-7aebf03d787b99da0dc8b4547e69b9b5c9b0cad528bdec2ad5af56bf2c1d244e.scope: Deactivated successfully.
Jul 2 08:50:04.632863 env[1140]: time="2024-07-02T08:50:04.632757075Z" level=info msg="shim disconnected" id=7aebf03d787b99da0dc8b4547e69b9b5c9b0cad528bdec2ad5af56bf2c1d244e
Jul 2 08:50:04.633095 env[1140]: time="2024-07-02T08:50:04.633073948Z" level=warning msg="cleaning up after shim disconnected" id=7aebf03d787b99da0dc8b4547e69b9b5c9b0cad528bdec2ad5af56bf2c1d244e namespace=k8s.io
Jul 2 08:50:04.633205 env[1140]: time="2024-07-02T08:50:04.633188262Z" level=info msg="cleaning up dead shim"
Jul 2 08:50:04.641467 env[1140]: time="2024-07-02T08:50:04.641421044Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:50:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3983 runtime=io.containerd.runc.v2\n"
Jul 2 08:50:05.469072 systemd[1]: run-containerd-runc-k8s.io-7aebf03d787b99da0dc8b4547e69b9b5c9b0cad528bdec2ad5af56bf2c1d244e-runc.yLI5sI.mount: Deactivated successfully.
Jul 2 08:50:05.469200 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7aebf03d787b99da0dc8b4547e69b9b5c9b0cad528bdec2ad5af56bf2c1d244e-rootfs.mount: Deactivated successfully.
Jul 2 08:50:05.483881 env[1140]: time="2024-07-02T08:50:05.483802257Z" level=info msg="CreateContainer within sandbox \"48baf71b3aabda0f0ad3cfb92a06fb0e78d74272025560720120e67321ae8919\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 2 08:50:05.529294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1293140359.mount: Deactivated successfully.
Jul 2 08:50:05.544083 env[1140]: time="2024-07-02T08:50:05.543648839Z" level=info msg="CreateContainer within sandbox \"48baf71b3aabda0f0ad3cfb92a06fb0e78d74272025560720120e67321ae8919\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c868b8957c2d22ad2446f2495ad91e2232ecb1c0548cbac9d8a1bcddfc08bf50\""
Jul 2 08:50:05.546893 env[1140]: time="2024-07-02T08:50:05.546842405Z" level=info msg="StartContainer for \"c868b8957c2d22ad2446f2495ad91e2232ecb1c0548cbac9d8a1bcddfc08bf50\""
Jul 2 08:50:05.594467 systemd[1]: Started cri-containerd-c868b8957c2d22ad2446f2495ad91e2232ecb1c0548cbac9d8a1bcddfc08bf50.scope.
Jul 2 08:50:05.645534 env[1140]: time="2024-07-02T08:50:05.645466278Z" level=info msg="StartContainer for \"c868b8957c2d22ad2446f2495ad91e2232ecb1c0548cbac9d8a1bcddfc08bf50\" returns successfully"
Jul 2 08:50:05.651263 systemd[1]: cri-containerd-c868b8957c2d22ad2446f2495ad91e2232ecb1c0548cbac9d8a1bcddfc08bf50.scope: Deactivated successfully.
Jul 2 08:50:05.686408 env[1140]: time="2024-07-02T08:50:05.686318828Z" level=info msg="shim disconnected" id=c868b8957c2d22ad2446f2495ad91e2232ecb1c0548cbac9d8a1bcddfc08bf50
Jul 2 08:50:05.686408 env[1140]: time="2024-07-02T08:50:05.686392115Z" level=warning msg="cleaning up after shim disconnected" id=c868b8957c2d22ad2446f2495ad91e2232ecb1c0548cbac9d8a1bcddfc08bf50 namespace=k8s.io
Jul 2 08:50:05.686408 env[1140]: time="2024-07-02T08:50:05.686407023Z" level=info msg="cleaning up dead shim"
Jul 2 08:50:05.696729 env[1140]: time="2024-07-02T08:50:05.696680509Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:50:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4042 runtime=io.containerd.runc.v2\n"
Jul 2 08:50:06.252768 kubelet[1960]: W0702 08:50:06.252681 1960 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99108173_7867_45eb_b9d3_342ef812bbc5.slice/cri-containerd-c52912e9dc0a18342139f91579506a30aed3bf552534db663ea0c269611639c1.scope WatchSource:0}: container "c52912e9dc0a18342139f91579506a30aed3bf552534db663ea0c269611639c1" in namespace "k8s.io": not found
Jul 2 08:50:06.471620 systemd[1]: run-containerd-runc-k8s.io-c868b8957c2d22ad2446f2495ad91e2232ecb1c0548cbac9d8a1bcddfc08bf50-runc.0niNm3.mount: Deactivated successfully.
Jul 2 08:50:06.472117 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c868b8957c2d22ad2446f2495ad91e2232ecb1c0548cbac9d8a1bcddfc08bf50-rootfs.mount: Deactivated successfully.
Jul 2 08:50:06.506868 env[1140]: time="2024-07-02T08:50:06.506720410Z" level=info msg="CreateContainer within sandbox \"48baf71b3aabda0f0ad3cfb92a06fb0e78d74272025560720120e67321ae8919\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 2 08:50:06.540293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2447039361.mount: Deactivated successfully.
Jul 2 08:50:06.548262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3691740399.mount: Deactivated successfully.
Jul 2 08:50:06.555786 env[1140]: time="2024-07-02T08:50:06.555692419Z" level=info msg="CreateContainer within sandbox \"48baf71b3aabda0f0ad3cfb92a06fb0e78d74272025560720120e67321ae8919\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6b3c17cb45fcbefb2ff1a6cec5a32d3670c9e4f8f18d8199de212336111a1641\""
Jul 2 08:50:06.557005 env[1140]: time="2024-07-02T08:50:06.556747677Z" level=info msg="StartContainer for \"6b3c17cb45fcbefb2ff1a6cec5a32d3670c9e4f8f18d8199de212336111a1641\""
Jul 2 08:50:06.578837 systemd[1]: Started cri-containerd-6b3c17cb45fcbefb2ff1a6cec5a32d3670c9e4f8f18d8199de212336111a1641.scope.
Jul 2 08:50:06.614891 systemd[1]: cri-containerd-6b3c17cb45fcbefb2ff1a6cec5a32d3670c9e4f8f18d8199de212336111a1641.scope: Deactivated successfully.
Jul 2 08:50:06.617271 env[1140]: time="2024-07-02T08:50:06.616997755Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63b3f90f_24a6_4099_940c_e69e3134a829.slice/cri-containerd-6b3c17cb45fcbefb2ff1a6cec5a32d3670c9e4f8f18d8199de212336111a1641.scope/memory.events\": no such file or directory"
Jul 2 08:50:06.625863 env[1140]: time="2024-07-02T08:50:06.625755200Z" level=info msg="StartContainer for \"6b3c17cb45fcbefb2ff1a6cec5a32d3670c9e4f8f18d8199de212336111a1641\" returns successfully"
Jul 2 08:50:06.666541 env[1140]: time="2024-07-02T08:50:06.666466516Z" level=info msg="shim disconnected" id=6b3c17cb45fcbefb2ff1a6cec5a32d3670c9e4f8f18d8199de212336111a1641
Jul 2 08:50:06.666541 env[1140]: time="2024-07-02T08:50:06.666533702Z" level=warning msg="cleaning up after shim disconnected" id=6b3c17cb45fcbefb2ff1a6cec5a32d3670c9e4f8f18d8199de212336111a1641 namespace=k8s.io
Jul 2 08:50:06.666541 env[1140]: time="2024-07-02T08:50:06.666551395Z" level=info msg="cleaning up dead shim"
Jul 2 08:50:06.678052 env[1140]: time="2024-07-02T08:50:06.677987370Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:50:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4097 runtime=io.containerd.runc.v2\n"
Jul 2 08:50:06.990473 kubelet[1960]: I0702 08:50:06.990343 1960 setters.go:580] "Node became not ready" node="ci-3510-3-5-a-cacadfe6a6.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T08:50:06Z","lastTransitionTime":"2024-07-02T08:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 2 08:50:07.537803 env[1140]: time="2024-07-02T08:50:07.537682425Z" level=info msg="CreateContainer within sandbox \"48baf71b3aabda0f0ad3cfb92a06fb0e78d74272025560720120e67321ae8919\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 08:50:07.598870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3793551197.mount: Deactivated successfully.
Jul 2 08:50:07.614439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount155784172.mount: Deactivated successfully.
Jul 2 08:50:07.628402 env[1140]: time="2024-07-02T08:50:07.628361017Z" level=info msg="CreateContainer within sandbox \"48baf71b3aabda0f0ad3cfb92a06fb0e78d74272025560720120e67321ae8919\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c6f3ae2f9419d95f470a151a329e32084cd59346aa12193e9696f77230fc6a99\""
Jul 2 08:50:07.629369 env[1140]: time="2024-07-02T08:50:07.629342086Z" level=info msg="StartContainer for \"c6f3ae2f9419d95f470a151a329e32084cd59346aa12193e9696f77230fc6a99\""
Jul 2 08:50:07.648952 systemd[1]: Started cri-containerd-c6f3ae2f9419d95f470a151a329e32084cd59346aa12193e9696f77230fc6a99.scope.
Jul 2 08:50:07.665623 kubelet[1960]: E0702 08:50:07.665556 1960 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 2 08:50:07.704968 env[1140]: time="2024-07-02T08:50:07.704880741Z" level=info msg="StartContainer for \"c6f3ae2f9419d95f470a151a329e32084cd59346aa12193e9696f77230fc6a99\" returns successfully"
Jul 2 08:50:08.803686 kernel: cryptd: max_cpu_qlen set to 1000
Jul 2 08:50:08.856656 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
Jul 2 08:50:09.377264 kubelet[1960]: W0702 08:50:09.377189 1960 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63b3f90f_24a6_4099_940c_e69e3134a829.slice/cri-containerd-335e887709374a3d7dd51e4eec60456dddb0390f165bb7795b40bf5e20cdba49.scope WatchSource:0}: task 335e887709374a3d7dd51e4eec60456dddb0390f165bb7795b40bf5e20cdba49 not found: not found
Jul 2 08:50:10.811496 systemd[1]: run-containerd-runc-k8s.io-c6f3ae2f9419d95f470a151a329e32084cd59346aa12193e9696f77230fc6a99-runc.2j8XFo.mount: Deactivated successfully.
Jul 2 08:50:11.977151 systemd-networkd[973]: lxc_health: Link UP
Jul 2 08:50:11.983068 systemd-networkd[973]: lxc_health: Gained carrier
Jul 2 08:50:11.983807 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Jul 2 08:50:12.492189 kubelet[1960]: W0702 08:50:12.492143 1960 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63b3f90f_24a6_4099_940c_e69e3134a829.slice/cri-containerd-7aebf03d787b99da0dc8b4547e69b9b5c9b0cad528bdec2ad5af56bf2c1d244e.scope WatchSource:0}: task 7aebf03d787b99da0dc8b4547e69b9b5c9b0cad528bdec2ad5af56bf2c1d244e not found: not found
Jul 2 08:50:12.999553 systemd[1]: run-containerd-runc-k8s.io-c6f3ae2f9419d95f470a151a329e32084cd59346aa12193e9696f77230fc6a99-runc.JqQD1A.mount: Deactivated successfully.
Jul 2 08:50:13.755703 systemd-networkd[973]: lxc_health: Gained IPv6LL
Jul 2 08:50:13.868775 kubelet[1960]: I0702 08:50:13.868712 1960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tnjjz" podStartSLOduration=10.868689606 podStartE2EDuration="10.868689606s" podCreationTimestamp="2024-07-02 08:50:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:50:08.593280873 +0000 UTC m=+156.364285313" watchObservedRunningTime="2024-07-02 08:50:13.868689606 +0000 UTC m=+161.639694046"
Jul 2 08:50:15.281263 systemd[1]: run-containerd-runc-k8s.io-c6f3ae2f9419d95f470a151a329e32084cd59346aa12193e9696f77230fc6a99-runc.tEXxXI.mount: Deactivated successfully.
Jul 2 08:50:15.600835 kubelet[1960]: W0702 08:50:15.600681 1960 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63b3f90f_24a6_4099_940c_e69e3134a829.slice/cri-containerd-c868b8957c2d22ad2446f2495ad91e2232ecb1c0548cbac9d8a1bcddfc08bf50.scope WatchSource:0}: task c868b8957c2d22ad2446f2495ad91e2232ecb1c0548cbac9d8a1bcddfc08bf50 not found: not found
Jul 2 08:50:17.552094 systemd[1]: run-containerd-runc-k8s.io-c6f3ae2f9419d95f470a151a329e32084cd59346aa12193e9696f77230fc6a99-runc.JY8vDA.mount: Deactivated successfully.
Jul 2 08:50:18.709775 kubelet[1960]: W0702 08:50:18.709697 1960 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63b3f90f_24a6_4099_940c_e69e3134a829.slice/cri-containerd-6b3c17cb45fcbefb2ff1a6cec5a32d3670c9e4f8f18d8199de212336111a1641.scope WatchSource:0}: task 6b3c17cb45fcbefb2ff1a6cec5a32d3670c9e4f8f18d8199de212336111a1641 not found: not found
Jul 2 08:50:19.801419 systemd[1]: run-containerd-runc-k8s.io-c6f3ae2f9419d95f470a151a329e32084cd59346aa12193e9696f77230fc6a99-runc.74kZiZ.mount: Deactivated successfully.
Jul 2 08:50:20.144227 sshd[3782]: pam_unix(sshd:session): session closed for user core
Jul 2 08:50:20.151184 systemd[1]: sshd@22-172.24.4.86:22-172.24.4.1:50498.service: Deactivated successfully.
Jul 2 08:50:20.153074 systemd[1]: session-23.scope: Deactivated successfully.
Jul 2 08:50:20.154840 systemd-logind[1134]: Session 23 logged out. Waiting for processes to exit.
Jul 2 08:50:20.158089 systemd-logind[1134]: Removed session 23.