Jul 2 08:50:55.027169 kernel: Linux version 5.15.161-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Jul 1 23:45:21 -00 2024
Jul 2 08:50:55.027218 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82
Jul 2 08:50:55.027246 kernel: BIOS-provided physical RAM map:
Jul 2 08:50:55.027264 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 2 08:50:55.027280 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 2 08:50:55.027297 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 2 08:50:55.027317 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Jul 2 08:50:55.027334 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Jul 2 08:50:55.027355 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 2 08:50:55.027371 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 2 08:50:55.027388 kernel: NX (Execute Disable) protection: active
Jul 2 08:50:55.027404 kernel: SMBIOS 2.8 present.
Jul 2 08:50:55.027420 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jul 2 08:50:55.027437 kernel: Hypervisor detected: KVM
Jul 2 08:50:55.027457 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 2 08:50:55.027479 kernel: kvm-clock: cpu 0, msr 4e192001, primary cpu clock
Jul 2 08:50:55.027497 kernel: kvm-clock: using sched offset of 5685727398 cycles
Jul 2 08:50:55.027516 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 2 08:50:55.027535 kernel: tsc: Detected 1996.249 MHz processor
Jul 2 08:50:55.027554 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 2 08:50:55.027573 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 2 08:50:55.027592 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Jul 2 08:50:55.027610 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 2 08:50:55.027634 kernel: ACPI: Early table checksum verification disabled
Jul 2 08:50:55.027652 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS )
Jul 2 08:50:55.027670 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:50:55.027688 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:50:55.027707 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:50:55.027724 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jul 2 08:50:55.027743 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:50:55.027761 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:50:55.027779 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f]
Jul 2 08:50:55.027859 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b]
Jul 2 08:50:55.027877 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jul 2 08:50:55.027895 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f]
Jul 2 08:50:55.027913 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847]
Jul 2 08:50:55.027931 kernel: No NUMA configuration found
Jul 2 08:50:55.027948 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff]
Jul 2 08:50:55.027966 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff]
Jul 2 08:50:55.027984 kernel: Zone ranges:
Jul 2 08:50:55.028015 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 2 08:50:55.028034 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff]
Jul 2 08:50:55.028053 kernel: Normal empty
Jul 2 08:50:55.028072 kernel: Movable zone start for each node
Jul 2 08:50:55.028090 kernel: Early memory node ranges
Jul 2 08:50:55.028109 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 2 08:50:55.028132 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Jul 2 08:50:55.028151 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff]
Jul 2 08:50:55.028170 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 2 08:50:55.028189 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 2 08:50:55.028207 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges
Jul 2 08:50:55.028226 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 2 08:50:55.028244 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 2 08:50:55.028263 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 2 08:50:55.028282 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 2 08:50:55.028306 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 2 08:50:55.028326 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 2 08:50:55.028345 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 2 08:50:55.028364 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 2 08:50:55.028383 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 2 08:50:55.028401 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jul 2 08:50:55.028420 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jul 2 08:50:55.028439 kernel: Booting paravirtualized kernel on KVM
Jul 2 08:50:55.028458 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 2 08:50:55.028477 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Jul 2 08:50:55.028501 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Jul 2 08:50:55.028520 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Jul 2 08:50:55.028538 kernel: pcpu-alloc: [0] 0 1
Jul 2 08:50:55.028557 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
Jul 2 08:50:55.028575 kernel: kvm-guest: PV spinlocks disabled, no host support
Jul 2 08:50:55.028594 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805
Jul 2 08:50:55.028613 kernel: Policy zone: DMA32
Jul 2 08:50:55.028635 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82
Jul 2 08:50:55.028658 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 08:50:55.028677 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 08:50:55.028696 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 2 08:50:55.028715 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 08:50:55.028736 kernel: Memory: 1973284K/2096620K available (12294K kernel code, 2276K rwdata, 13712K rodata, 47444K init, 4144K bss, 123076K reserved, 0K cma-reserved)
Jul 2 08:50:55.028755 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 2 08:50:55.028774 kernel: ftrace: allocating 34514 entries in 135 pages
Jul 2 08:50:55.031847 kernel: ftrace: allocated 135 pages with 4 groups
Jul 2 08:50:55.031875 kernel: rcu: Hierarchical RCU implementation.
Jul 2 08:50:55.031891 kernel: rcu: RCU event tracing is enabled.
Jul 2 08:50:55.031906 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 2 08:50:55.031921 kernel: Rude variant of Tasks RCU enabled.
Jul 2 08:50:55.031936 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 08:50:55.031950 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 08:50:55.031965 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 2 08:50:55.031979 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 2 08:50:55.031993 kernel: Console: colour VGA+ 80x25
Jul 2 08:50:55.032010 kernel: printk: console [tty0] enabled
Jul 2 08:50:55.032024 kernel: printk: console [ttyS0] enabled
Jul 2 08:50:55.032039 kernel: ACPI: Core revision 20210730
Jul 2 08:50:55.032053 kernel: APIC: Switch to symmetric I/O mode setup
Jul 2 08:50:55.032067 kernel: x2apic enabled
Jul 2 08:50:55.032081 kernel: Switched APIC routing to physical x2apic.
Jul 2 08:50:55.032095 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 2 08:50:55.032109 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 2 08:50:55.032124 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Jul 2 08:50:55.032138 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jul 2 08:50:55.032156 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jul 2 08:50:55.032171 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 2 08:50:55.032186 kernel: Spectre V2 : Mitigation: Retpolines
Jul 2 08:50:55.032200 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jul 2 08:50:55.032214 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jul 2 08:50:55.032228 kernel: Speculative Store Bypass: Vulnerable
Jul 2 08:50:55.032242 kernel: x86/fpu: x87 FPU will use FXSAVE
Jul 2 08:50:55.032256 kernel: Freeing SMP alternatives memory: 32K
Jul 2 08:50:55.032270 kernel: pid_max: default: 32768 minimum: 301
Jul 2 08:50:55.032289 kernel: LSM: Security Framework initializing
Jul 2 08:50:55.032303 kernel: SELinux: Initializing.
Jul 2 08:50:55.032317 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 2 08:50:55.032331 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 2 08:50:55.032346 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Jul 2 08:50:55.032360 kernel: Performance Events: AMD PMU driver.
Jul 2 08:50:55.032374 kernel: ... version:                0
Jul 2 08:50:55.032388 kernel: ... bit width:              48
Jul 2 08:50:55.032402 kernel: ... generic registers:      4
Jul 2 08:50:55.032429 kernel: ... value mask:             0000ffffffffffff
Jul 2 08:50:55.032444 kernel: ... max period:             00007fffffffffff
Jul 2 08:50:55.032461 kernel: ... fixed-purpose events:   0
Jul 2 08:50:55.032475 kernel: ... event mask:             000000000000000f
Jul 2 08:50:55.032490 kernel: signal: max sigframe size: 1440
Jul 2 08:50:55.032504 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 08:50:55.032519 kernel: smp: Bringing up secondary CPUs ...
Jul 2 08:50:55.032534 kernel: x86: Booting SMP configuration:
Jul 2 08:50:55.032551 kernel: .... node #0, CPUs: #1
Jul 2 08:50:55.032566 kernel: kvm-clock: cpu 1, msr 4e192041, secondary cpu clock
Jul 2 08:50:55.032581 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
Jul 2 08:50:55.032596 kernel: smp: Brought up 1 node, 2 CPUs
Jul 2 08:50:55.032610 kernel: smpboot: Max logical packages: 2
Jul 2 08:50:55.032625 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Jul 2 08:50:55.032640 kernel: devtmpfs: initialized
Jul 2 08:50:55.032655 kernel: x86/mm: Memory block size: 128MB
Jul 2 08:50:55.032670 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 08:50:55.032688 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 2 08:50:55.032702 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 08:50:55.032717 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 08:50:55.032732 kernel: audit: initializing netlink subsys (disabled)
Jul 2 08:50:55.032747 kernel: audit: type=2000 audit(1719910254.520:1): state=initialized audit_enabled=0 res=1
Jul 2 08:50:55.032762 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 08:50:55.032777 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 2 08:50:55.032814 kernel: cpuidle: using governor menu
Jul 2 08:50:55.032830 kernel: ACPI: bus type PCI registered
Jul 2 08:50:55.032848 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 08:50:55.032863 kernel: dca service started, version 1.12.1
Jul 2 08:50:55.032878 kernel: PCI: Using configuration type 1 for base access
Jul 2 08:50:55.032906 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 2 08:50:55.032922 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 08:50:55.032937 kernel: ACPI: Added _OSI(Module Device)
Jul 2 08:50:55.032951 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 08:50:55.032966 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 08:50:55.032981 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 08:50:55.033000 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 2 08:50:55.033014 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 2 08:50:55.033029 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 2 08:50:55.033044 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 08:50:55.033059 kernel: ACPI: Interpreter enabled
Jul 2 08:50:55.033073 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 2 08:50:55.033088 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 2 08:50:55.033103 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 2 08:50:55.033118 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 2 08:50:55.033136 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 2 08:50:55.033357 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 08:50:55.033517 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Jul 2 08:50:55.033541 kernel: acpiphp: Slot [3] registered
Jul 2 08:50:55.033556 kernel: acpiphp: Slot [4] registered
Jul 2 08:50:55.033571 kernel: acpiphp: Slot [5] registered
Jul 2 08:50:55.033586 kernel: acpiphp: Slot [6] registered
Jul 2 08:50:55.033605 kernel: acpiphp: Slot [7] registered
Jul 2 08:50:55.033620 kernel: acpiphp: Slot [8] registered
Jul 2 08:50:55.033635 kernel: acpiphp: Slot [9] registered
Jul 2 08:50:55.033650 kernel: acpiphp: Slot [10] registered
Jul 2 08:50:55.033665 kernel: acpiphp: Slot [11] registered
Jul 2 08:50:55.033678 kernel: acpiphp: Slot [12] registered
Jul 2 08:50:55.033688 kernel: acpiphp: Slot [13] registered
Jul 2 08:50:55.033698 kernel: acpiphp: Slot [14] registered
Jul 2 08:50:55.033707 kernel: acpiphp: Slot [15] registered
Jul 2 08:50:55.033717 kernel: acpiphp: Slot [16] registered
Jul 2 08:50:55.033729 kernel: acpiphp: Slot [17] registered
Jul 2 08:50:55.033738 kernel: acpiphp: Slot [18] registered
Jul 2 08:50:55.033747 kernel: acpiphp: Slot [19] registered
Jul 2 08:50:55.033755 kernel: acpiphp: Slot [20] registered
Jul 2 08:50:55.033763 kernel: acpiphp: Slot [21] registered
Jul 2 08:50:55.033770 kernel: acpiphp: Slot [22] registered
Jul 2 08:50:55.033778 kernel: acpiphp: Slot [23] registered
Jul 2 08:50:55.035824 kernel: acpiphp: Slot [24] registered
Jul 2 08:50:55.035835 kernel: acpiphp: Slot [25] registered
Jul 2 08:50:55.035846 kernel: acpiphp: Slot [26] registered
Jul 2 08:50:55.035854 kernel: acpiphp: Slot [27] registered
Jul 2 08:50:55.035862 kernel: acpiphp: Slot [28] registered
Jul 2 08:50:55.035870 kernel: acpiphp: Slot [29] registered
Jul 2 08:50:55.035878 kernel: acpiphp: Slot [30] registered
Jul 2 08:50:55.035886 kernel: acpiphp: Slot [31] registered
Jul 2 08:50:55.035894 kernel: PCI host bridge to bus 0000:00
Jul 2 08:50:55.035997 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 2 08:50:55.036073 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 2 08:50:55.036150 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 2 08:50:55.036222 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jul 2 08:50:55.036293 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jul 2 08:50:55.036364 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 2 08:50:55.036467 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jul 2 08:50:55.036560 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jul 2 08:50:55.036659 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jul 2 08:50:55.036742 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Jul 2 08:50:55.036845 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jul 2 08:50:55.036940 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jul 2 08:50:55.037022 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jul 2 08:50:55.037103 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jul 2 08:50:55.037194 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jul 2 08:50:55.037282 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jul 2 08:50:55.037364 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jul 2 08:50:55.037457 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jul 2 08:50:55.037541 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jul 2 08:50:55.037626 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jul 2 08:50:55.037706 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Jul 2 08:50:55.037809 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Jul 2 08:50:55.037896 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 2 08:50:55.037988 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jul 2 08:50:55.038072 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Jul 2 08:50:55.038154 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Jul 2 08:50:55.038237 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jul 2 08:50:55.038319 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Jul 2 08:50:55.038438 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jul 2 08:50:55.038525 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jul 2 08:50:55.038608 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Jul 2 08:50:55.038697 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jul 2 08:50:55.038804 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Jul 2 08:50:55.038894 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Jul 2 08:50:55.038977 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jul 2 08:50:55.039077 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Jul 2 08:50:55.039160 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Jul 2 08:50:55.039242 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jul 2 08:50:55.039254 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 2 08:50:55.039263 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 2 08:50:55.039271 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 2 08:50:55.039279 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 2 08:50:55.039288 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 2 08:50:55.039300 kernel: iommu: Default domain type: Translated
Jul 2 08:50:55.039308 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 2 08:50:55.039390 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jul 2 08:50:55.039474 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 2 08:50:55.039557 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jul 2 08:50:55.039569 kernel: vgaarb: loaded
Jul 2 08:50:55.039577 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 2 08:50:55.039585 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 2 08:50:55.039593 kernel: PTP clock support registered
Jul 2 08:50:55.039606 kernel: PCI: Using ACPI for IRQ routing
Jul 2 08:50:55.039614 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 2 08:50:55.039622 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 2 08:50:55.039630 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Jul 2 08:50:55.039637 kernel: clocksource: Switched to clocksource kvm-clock
Jul 2 08:50:55.039645 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 08:50:55.039654 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 08:50:55.039661 kernel: pnp: PnP ACPI init
Jul 2 08:50:55.039746 kernel: pnp 00:03: [dma 2]
Jul 2 08:50:55.039762 kernel: pnp: PnP ACPI: found 5 devices
Jul 2 08:50:55.039771 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 2 08:50:55.039779 kernel: NET: Registered PF_INET protocol family
Jul 2 08:50:55.042817 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 08:50:55.042829 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jul 2 08:50:55.042838 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 08:50:55.042846 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 2 08:50:55.042854 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Jul 2 08:50:55.042866 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jul 2 08:50:55.042874 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 2 08:50:55.042882 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 2 08:50:55.042891 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 08:50:55.042899 kernel: NET: Registered PF_XDP protocol family
Jul 2 08:50:55.042981 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 2 08:50:55.043060 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 2 08:50:55.043134 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 2 08:50:55.043207 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jul 2 08:50:55.043285 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jul 2 08:50:55.043369 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jul 2 08:50:55.043452 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 2 08:50:55.043535 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Jul 2 08:50:55.043547 kernel: PCI: CLS 0 bytes, default 64
Jul 2 08:50:55.043555 kernel: Initialise system trusted keyrings
Jul 2 08:50:55.043563 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jul 2 08:50:55.043574 kernel: Key type asymmetric registered
Jul 2 08:50:55.043582 kernel: Asymmetric key parser 'x509' registered
Jul 2 08:50:55.043590 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 2 08:50:55.043598 kernel: io scheduler mq-deadline registered
Jul 2 08:50:55.043606 kernel: io scheduler kyber registered
Jul 2 08:50:55.043614 kernel: io scheduler bfq registered
Jul 2 08:50:55.043622 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 2 08:50:55.043631 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jul 2 08:50:55.043640 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jul 2 08:50:55.043650 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jul 2 08:50:55.043661 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jul 2 08:50:55.043670 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 08:50:55.043678 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 2 08:50:55.043686 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 2 08:50:55.043694 kernel: random: crng init done
Jul 2 08:50:55.043702 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 2 08:50:55.043710 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 2 08:50:55.043718 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 2 08:50:55.043830 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 2 08:50:55.043916 kernel: rtc_cmos 00:04: registered as rtc0
Jul 2 08:50:55.043990 kernel: rtc_cmos 00:04: setting system clock to 2024-07-02T08:50:54 UTC (1719910254)
Jul 2 08:50:55.044063 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jul 2 08:50:55.044075 kernel: NET: Registered PF_INET6 protocol family
Jul 2 08:50:55.044083 kernel: Segment Routing with IPv6
Jul 2 08:50:55.044091 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 08:50:55.044099 kernel: NET: Registered PF_PACKET protocol family
Jul 2 08:50:55.044107 kernel: Key type dns_resolver registered
Jul 2 08:50:55.044118 kernel: IPI shorthand broadcast: enabled
Jul 2 08:50:55.044126 kernel: sched_clock: Marking stable (696091177, 119927577)->(843697184, -27678430)
Jul 2 08:50:55.044134 kernel: registered taskstats version 1
Jul 2 08:50:55.044142 kernel: Loading compiled-in X.509 certificates
Jul 2 08:50:55.044150 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.161-flatcar: a1ce693884775675566f1ed116e36d15950b9a42'
Jul 2 08:50:55.044158 kernel: Key type .fscrypt registered
Jul 2 08:50:55.044166 kernel: Key type fscrypt-provisioning registered
Jul 2 08:50:55.044174 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 08:50:55.044184 kernel: ima: Allocated hash algorithm: sha1
Jul 2 08:50:55.044192 kernel: ima: No architecture policies found
Jul 2 08:50:55.044200 kernel: clk: Disabling unused clocks
Jul 2 08:50:55.044208 kernel: Freeing unused kernel image (initmem) memory: 47444K
Jul 2 08:50:55.044216 kernel: Write protecting the kernel read-only data: 28672k
Jul 2 08:50:55.044225 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Jul 2 08:50:55.044233 kernel: Freeing unused kernel image (rodata/data gap) memory: 624K
Jul 2 08:50:55.044241 kernel: Run /init as init process
Jul 2 08:50:55.044249 kernel: with arguments:
Jul 2 08:50:55.044258 kernel: /init
Jul 2 08:50:55.044266 kernel: with environment:
Jul 2 08:50:55.044274 kernel: HOME=/
Jul 2 08:50:55.044281 kernel: TERM=linux
Jul 2 08:50:55.044289 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 08:50:55.044301 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 2 08:50:55.044311 systemd[1]: Detected virtualization kvm.
Jul 2 08:50:55.044322 systemd[1]: Detected architecture x86-64.
Jul 2 08:50:55.044335 systemd[1]: Running in initrd.
Jul 2 08:50:55.044343 systemd[1]: No hostname configured, using default hostname.
Jul 2 08:50:55.044354 systemd[1]: Hostname set to .
Jul 2 08:50:55.044368 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 08:50:55.044380 systemd[1]: Queued start job for default target initrd.target.
Jul 2 08:50:55.044393 systemd[1]: Started systemd-ask-password-console.path.
Jul 2 08:50:55.044403 systemd[1]: Reached target cryptsetup.target.
Jul 2 08:50:55.044413 systemd[1]: Reached target paths.target.
Jul 2 08:50:55.044428 systemd[1]: Reached target slices.target.
Jul 2 08:50:55.044438 systemd[1]: Reached target swap.target.
Jul 2 08:50:55.044447 systemd[1]: Reached target timers.target.
Jul 2 08:50:55.044457 systemd[1]: Listening on iscsid.socket.
Jul 2 08:50:55.044466 systemd[1]: Listening on iscsiuio.socket.
Jul 2 08:50:55.044476 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 2 08:50:55.044486 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 2 08:50:55.044495 systemd[1]: Listening on systemd-journald.socket.
Jul 2 08:50:55.044507 systemd[1]: Listening on systemd-networkd.socket.
Jul 2 08:50:55.044516 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 2 08:50:55.044526 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 2 08:50:55.044535 systemd[1]: Reached target sockets.target.
Jul 2 08:50:55.044563 systemd[1]: Starting kmod-static-nodes.service...
Jul 2 08:50:55.044577 systemd[1]: Finished network-cleanup.service.
Jul 2 08:50:55.044588 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 08:50:55.044598 systemd[1]: Starting systemd-journald.service...
Jul 2 08:50:55.044608 systemd[1]: Starting systemd-modules-load.service...
Jul 2 08:50:55.044617 systemd[1]: Starting systemd-resolved.service...
Jul 2 08:50:55.044627 systemd[1]: Starting systemd-vconsole-setup.service...
Jul 2 08:50:55.044637 systemd[1]: Finished kmod-static-nodes.service.
Jul 2 08:50:55.044647 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 08:50:55.044660 systemd-journald[185]: Journal started
Jul 2 08:50:55.044710 systemd-journald[185]: Runtime Journal (/run/log/journal/6bac03a0fe1b4f0f89e73d7b98e8bc71) is 4.9M, max 39.5M, 34.5M free.
Jul 2 08:50:55.006750 systemd-modules-load[186]: Inserted module 'overlay'
Jul 2 08:50:55.069710 systemd[1]: Started systemd-journald.service.
Jul 2 08:50:55.069746 kernel: audit: type=1130 audit(1719910255.063:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:50:55.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:50:55.056767 systemd-resolved[187]: Positive Trust Anchors:
Jul 2 08:50:55.076118 kernel: audit: type=1130 audit(1719910255.069:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:50:55.076136 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 08:50:55.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:50:55.056780 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 08:50:55.081102 kernel: audit: type=1130 audit(1719910255.075:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:50:55.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:50:55.056830 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 2 08:50:55.087910 kernel: audit: type=1130 audit(1719910255.080:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:50:55.087926 kernel: Bridge firewalling registered
Jul 2 08:50:55.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:50:55.059600 systemd-resolved[187]: Defaulting to hostname 'linux'.
Jul 2 08:50:55.070282 systemd[1]: Started systemd-resolved.service.
Jul 2 08:50:55.076685 systemd[1]: Finished systemd-vconsole-setup.service.
Jul 2 08:50:55.081682 systemd[1]: Reached target nss-lookup.target.
Jul 2 08:50:55.082715 systemd-modules-load[186]: Inserted module 'br_netfilter'
Jul 2 08:50:55.089092 systemd[1]: Starting dracut-cmdline-ask.service...
Jul 2 08:50:55.090261 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 2 08:50:55.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:50:55.098307 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 2 08:50:55.103143 kernel: audit: type=1130 audit(1719910255.097:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:50:55.110961 systemd[1]: Finished dracut-cmdline-ask.service. Jul 2 08:50:55.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:50:55.115828 kernel: audit: type=1130 audit(1719910255.110:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:50:55.115972 systemd[1]: Starting dracut-cmdline.service... Jul 2 08:50:55.125794 dracut-cmdline[203]: dracut-dracut-053 Jul 2 08:50:55.126420 kernel: SCSI subsystem initialized Jul 2 08:50:55.128086 dracut-cmdline[203]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 08:50:55.139231 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jul 2 08:50:55.139257 kernel: device-mapper: uevent: version 1.0.3 Jul 2 08:50:55.140825 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 2 08:50:55.144066 systemd-modules-load[186]: Inserted module 'dm_multipath' Jul 2 08:50:55.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:50:55.144881 systemd[1]: Finished systemd-modules-load.service. Jul 2 08:50:55.150173 kernel: audit: type=1130 audit(1719910255.144:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:50:55.146051 systemd[1]: Starting systemd-sysctl.service... Jul 2 08:50:55.155998 systemd[1]: Finished systemd-sysctl.service. Jul 2 08:50:55.160495 kernel: audit: type=1130 audit(1719910255.155:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:50:55.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:50:55.196845 kernel: Loading iSCSI transport class v2.0-870. Jul 2 08:50:55.217940 kernel: iscsi: registered transport (tcp) Jul 2 08:50:55.244133 kernel: iscsi: registered transport (qla4xxx) Jul 2 08:50:55.244193 kernel: QLogic iSCSI HBA Driver Jul 2 08:50:55.296702 systemd[1]: Finished dracut-cmdline.service. Jul 2 08:50:55.306188 kernel: audit: type=1130 audit(1719910255.296:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:50:55.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:50:55.298103 systemd[1]: Starting dracut-pre-udev.service... Jul 2 08:50:55.354922 kernel: raid6: sse2x4 gen() 13092 MB/s Jul 2 08:50:55.371858 kernel: raid6: sse2x4 xor() 7322 MB/s Jul 2 08:50:55.388917 kernel: raid6: sse2x2 gen() 14459 MB/s Jul 2 08:50:55.405861 kernel: raid6: sse2x2 xor() 8842 MB/s Jul 2 08:50:55.422882 kernel: raid6: sse2x1 gen() 11443 MB/s Jul 2 08:50:55.440651 kernel: raid6: sse2x1 xor() 6982 MB/s Jul 2 08:50:55.440719 kernel: raid6: using algorithm sse2x2 gen() 14459 MB/s Jul 2 08:50:55.440746 kernel: raid6: .... xor() 8842 MB/s, rmw enabled Jul 2 08:50:55.441539 kernel: raid6: using ssse3x2 recovery algorithm Jul 2 08:50:55.455883 kernel: xor: measuring software checksum speed Jul 2 08:50:55.458636 kernel: prefetch64-sse : 18492 MB/sec Jul 2 08:50:55.458694 kernel: generic_sse : 15627 MB/sec Jul 2 08:50:55.458722 kernel: xor: using function: prefetch64-sse (18492 MB/sec) Jul 2 08:50:55.571873 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 2 08:50:55.587859 systemd[1]: Finished dracut-pre-udev.service. Jul 2 08:50:55.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:50:55.591000 audit: BPF prog-id=7 op=LOAD Jul 2 08:50:55.591000 audit: BPF prog-id=8 op=LOAD Jul 2 08:50:55.592820 systemd[1]: Starting systemd-udevd.service... Jul 2 08:50:55.606556 systemd-udevd[386]: Using default interface naming scheme 'v252'. Jul 2 08:50:55.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:50:55.611502 systemd[1]: Started systemd-udevd.service. Jul 2 08:50:55.614611 systemd[1]: Starting dracut-pre-trigger.service... Jul 2 08:50:55.641249 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation Jul 2 08:50:55.687763 systemd[1]: Finished dracut-pre-trigger.service. Jul 2 08:50:55.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:50:55.689125 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 08:50:55.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:50:55.748623 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 08:50:55.804819 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB) Jul 2 08:50:55.815462 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 08:50:55.815505 kernel: GPT:17805311 != 41943039 Jul 2 08:50:55.815517 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 08:50:55.815527 kernel: GPT:17805311 != 41943039 Jul 2 08:50:55.815537 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 08:50:55.815547 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 08:50:55.840821 kernel: libata version 3.00 loaded. Jul 2 08:50:55.851823 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (437) Jul 2 08:50:55.853087 kernel: ata_piix 0000:00:01.1: version 2.13 Jul 2 08:50:55.857535 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. 
Jul 2 08:50:55.900721 kernel: scsi host0: ata_piix Jul 2 08:50:55.900925 kernel: scsi host1: ata_piix Jul 2 08:50:55.901032 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Jul 2 08:50:55.901046 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Jul 2 08:50:55.907524 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 2 08:50:55.908088 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 2 08:50:55.912865 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 08:50:55.917154 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 2 08:50:55.919854 systemd[1]: Starting disk-uuid.service... Jul 2 08:50:55.930970 disk-uuid[462]: Primary Header is updated. Jul 2 08:50:55.930970 disk-uuid[462]: Secondary Entries is updated. Jul 2 08:50:55.930970 disk-uuid[462]: Secondary Header is updated. Jul 2 08:50:55.938836 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 08:50:56.961840 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 08:50:56.962354 disk-uuid[463]: The operation has completed successfully. Jul 2 08:50:57.039155 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 08:50:57.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:50:57.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:50:57.039400 systemd[1]: Finished disk-uuid.service. Jul 2 08:50:57.049105 systemd[1]: Starting verity-setup.service... Jul 2 08:50:57.088850 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Jul 2 08:50:57.195428 systemd[1]: Found device dev-mapper-usr.device. 
Jul 2 08:50:57.201953 systemd[1]: Mounting sysusr-usr.mount... Jul 2 08:50:57.207503 systemd[1]: Finished verity-setup.service. Jul 2 08:50:57.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:50:57.338889 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 2 08:50:57.338964 systemd[1]: Mounted sysusr-usr.mount. Jul 2 08:50:57.339539 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 2 08:50:57.340278 systemd[1]: Starting ignition-setup.service... Jul 2 08:50:57.347183 systemd[1]: Starting parse-ip-for-networkd.service... Jul 2 08:50:57.356853 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 08:50:57.356969 kernel: BTRFS info (device vda6): using free space tree Jul 2 08:50:57.356999 kernel: BTRFS info (device vda6): has skinny extents Jul 2 08:50:57.381876 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 2 08:50:57.395213 systemd[1]: Finished ignition-setup.service. Jul 2 08:50:57.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:50:57.396547 systemd[1]: Starting ignition-fetch-offline.service... Jul 2 08:50:57.501659 systemd[1]: Finished parse-ip-for-networkd.service. Jul 2 08:50:57.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:50:57.503000 audit: BPF prog-id=9 op=LOAD Jul 2 08:50:57.506060 systemd[1]: Starting systemd-networkd.service... 
Jul 2 08:50:57.530231 systemd-networkd[637]: lo: Link UP Jul 2 08:50:57.530983 systemd-networkd[637]: lo: Gained carrier Jul 2 08:50:57.531947 systemd-networkd[637]: Enumeration completed Jul 2 08:50:57.532556 systemd[1]: Started systemd-networkd.service. Jul 2 08:50:57.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:50:57.532994 systemd-networkd[637]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 08:50:57.533358 systemd[1]: Reached target network.target. Jul 2 08:50:57.534716 systemd-networkd[637]: eth0: Link UP Jul 2 08:50:57.534721 systemd-networkd[637]: eth0: Gained carrier Jul 2 08:50:57.536378 systemd[1]: Starting iscsiuio.service... Jul 2 08:50:57.543889 systemd[1]: Started iscsiuio.service. Jul 2 08:50:57.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:50:57.545323 systemd[1]: Starting iscsid.service... Jul 2 08:50:57.549075 iscsid[642]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 2 08:50:57.549075 iscsid[642]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 2 08:50:57.549075 iscsid[642]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 2 08:50:57.549075 iscsid[642]: If using hardware iscsi like qla4xxx this message can be ignored. 
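The iscsid warning above spells out its own fix: create /etc/iscsi/initiatorname.iscsi containing a single InitiatorName= line in IQN form. A minimal sketch of that fix — the IQN value is an illustrative assumption (this log does not show one), and the target directory is parameterized so the sketch runs unprivileged; on a real host it would be /etc/iscsi:

```shell
# Sketch of the file iscsid asks for above. The IQN is a made-up example in
# the documented iqn.yyyy-mm.<reversed domain name>[:identifier] shape;
# ISCSI_DIR defaults to a local path so this runs without root.
ISCSI_DIR="${ISCSI_DIR:-./etc/iscsi}"
mkdir -p "$ISCSI_DIR"
printf 'InitiatorName=iqn.2024-07.io.flatcar:ci-node1\n' > "$ISCSI_DIR/initiatorname.iscsi"
cat "$ISCSI_DIR/initiatorname.iscsi"
```

With the file in place, iscsid starts without the InitiatorName warning; software-iscsi logins (iscsi_tcp, ib_iser) depend on it being present and well-formed.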
Jul 2 08:50:57.549075 iscsid[642]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 2 08:50:57.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:50:57.556641 iscsid[642]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 2 08:50:57.550931 systemd[1]: Started iscsid.service. Jul 2 08:50:57.553952 systemd[1]: Starting dracut-initqueue.service... Jul 2 08:50:57.554838 systemd-networkd[637]: eth0: DHCPv4 address 172.24.4.4/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jul 2 08:50:57.568297 systemd[1]: Finished dracut-initqueue.service. Jul 2 08:50:57.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:50:57.568998 systemd[1]: Reached target remote-fs-pre.target. Jul 2 08:50:57.569939 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 08:50:57.570466 systemd[1]: Reached target remote-fs.target. Jul 2 08:50:57.572972 systemd[1]: Starting dracut-pre-mount.service... Jul 2 08:50:57.583552 systemd[1]: Finished dracut-pre-mount.service. Jul 2 08:50:57.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:50:57.697471 ignition[548]: Ignition 2.14.0 Jul 2 08:50:57.698662 ignition[548]: Stage: fetch-offline Jul 2 08:50:57.698891 ignition[548]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 08:50:57.698942 ignition[548]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 2 08:50:57.701381 ignition[548]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 08:50:57.701630 ignition[548]: parsed url from cmdline: "" Jul 2 08:50:57.701640 ignition[548]: no config URL provided Jul 2 08:50:57.701654 ignition[548]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 08:50:57.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:50:57.704241 systemd[1]: Finished ignition-fetch-offline.service. Jul 2 08:50:57.701674 ignition[548]: no config at "/usr/lib/ignition/user.ign" Jul 2 08:50:57.708490 systemd[1]: Starting ignition-fetch.service... 
Jul 2 08:50:57.701694 ignition[548]: failed to fetch config: resource requires networking Jul 2 08:50:57.702229 ignition[548]: Ignition finished successfully Jul 2 08:50:57.729108 ignition[656]: Ignition 2.14.0 Jul 2 08:50:57.729136 ignition[656]: Stage: fetch Jul 2 08:50:57.729369 ignition[656]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 08:50:57.729413 ignition[656]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 2 08:50:57.731536 ignition[656]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 08:50:57.731743 ignition[656]: parsed url from cmdline: "" Jul 2 08:50:57.731753 ignition[656]: no config URL provided Jul 2 08:50:57.731767 ignition[656]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 08:50:57.731819 ignition[656]: no config at "/usr/lib/ignition/user.ign" Jul 2 08:50:57.739552 ignition[656]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jul 2 08:50:57.739609 ignition[656]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jul 2 08:50:57.739842 ignition[656]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jul 2 08:50:58.089162 ignition[656]: GET result: OK Jul 2 08:50:58.089356 ignition[656]: parsing config with SHA512: 98c4cb3c8d7cb28d1a190d27290fd81fb6b20efdac2ad0c756c117b7db5c5a7e69a93847536e0dec25b47596a9bae323ac4dc6cb3387e15a9e39c101d03c89b4 Jul 2 08:50:58.105391 unknown[656]: fetched base config from "system" Jul 2 08:50:58.106911 unknown[656]: fetched base config from "system" Jul 2 08:50:58.108229 unknown[656]: fetched user config from "openstack" Jul 2 08:50:58.110186 ignition[656]: fetch: fetch complete Jul 2 08:50:58.110215 ignition[656]: fetch: fetch passed Jul 2 08:50:58.110313 ignition[656]: Ignition finished successfully Jul 2 08:50:58.113406 systemd[1]: Finished ignition-fetch.service. 
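The fetch stage above walks Ignition's config sources in order: a cmdline-supplied URL (none given), the system file /usr/lib/ignition/user.ign (absent), a config drive labeled config-2/CONFIG-2 (not found), and finally the OpenStack metadata service at http://169.254.169.254, which succeeds. For reference, a minimal user config of the kind it is looking for might look like the fragment below — illustrative only; the spec version and SSH key are assumptions, not the config this node actually fetched:

```json
{
  "ignition": { "version": "3.3.0" },
  "passwd": {
    "users": [
      {
        "name": "core",
        "sshAuthorizedKeys": ["ssh-ed25519 AAAAC3... example-key"]
      }
    ]
  }
}
```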
Jul 2 08:50:58.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:50:58.116580 systemd[1]: Starting ignition-kargs.service... Jul 2 08:50:58.136557 ignition[662]: Ignition 2.14.0 Jul 2 08:50:58.136588 ignition[662]: Stage: kargs Jul 2 08:50:58.136881 ignition[662]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 08:50:58.136952 ignition[662]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 2 08:50:58.139254 ignition[662]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 08:50:58.142043 ignition[662]: kargs: kargs passed Jul 2 08:50:58.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:50:58.152418 systemd[1]: Finished ignition-kargs.service. Jul 2 08:50:58.142140 ignition[662]: Ignition finished successfully Jul 2 08:50:58.156916 systemd[1]: Starting ignition-disks.service... 
Jul 2 08:50:58.182245 ignition[668]: Ignition 2.14.0 Jul 2 08:50:58.182282 ignition[668]: Stage: disks Jul 2 08:50:58.182631 ignition[668]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 08:50:58.182697 ignition[668]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 2 08:50:58.186260 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 08:50:58.190543 ignition[668]: disks: disks passed Jul 2 08:50:58.190685 ignition[668]: Ignition finished successfully Jul 2 08:50:58.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:50:58.192406 systemd[1]: Finished ignition-disks.service. Jul 2 08:50:58.193967 systemd[1]: Reached target initrd-root-device.target. Jul 2 08:50:58.196371 systemd[1]: Reached target local-fs-pre.target. Jul 2 08:50:58.198659 systemd[1]: Reached target local-fs.target. Jul 2 08:50:58.200869 systemd[1]: Reached target sysinit.target. Jul 2 08:50:58.202944 systemd[1]: Reached target basic.target. Jul 2 08:50:58.206729 systemd[1]: Starting systemd-fsck-root.service... Jul 2 08:50:58.236045 systemd-fsck[675]: ROOT: clean, 614/1628000 files, 124057/1617920 blocks Jul 2 08:50:58.245559 systemd[1]: Finished systemd-fsck-root.service. Jul 2 08:50:58.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:50:58.248340 systemd[1]: Mounting sysroot.mount... Jul 2 08:50:58.267835 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 2 08:50:58.269401 systemd[1]: Mounted sysroot.mount. 
Jul 2 08:50:58.270599 systemd[1]: Reached target initrd-root-fs.target. Jul 2 08:50:58.274675 systemd[1]: Mounting sysroot-usr.mount... Jul 2 08:50:58.276569 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 2 08:50:58.277987 systemd[1]: Starting flatcar-openstack-hostname.service... Jul 2 08:50:58.283099 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 08:50:58.283166 systemd[1]: Reached target ignition-diskful.target. Jul 2 08:50:58.288641 systemd[1]: Mounted sysroot-usr.mount. Jul 2 08:50:58.297984 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 2 08:50:58.306231 systemd[1]: Starting initrd-setup-root.service... Jul 2 08:50:58.329847 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (682) Jul 2 08:50:58.330718 initrd-setup-root[687]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 08:50:58.335738 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 08:50:58.335771 kernel: BTRFS info (device vda6): using free space tree Jul 2 08:50:58.335796 kernel: BTRFS info (device vda6): has skinny extents Jul 2 08:50:58.339907 initrd-setup-root[711]: cut: /sysroot/etc/group: No such file or directory Jul 2 08:50:58.345756 initrd-setup-root[719]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 08:50:58.350849 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 2 08:50:58.353408 initrd-setup-root[729]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 08:50:58.417449 systemd[1]: Finished initrd-setup-root.service. Jul 2 08:50:58.418645 systemd[1]: Starting ignition-mount.service... Jul 2 08:50:58.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:50:58.422855 systemd[1]: Starting sysroot-boot.service... Jul 2 08:50:58.428881 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Jul 2 08:50:58.429007 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Jul 2 08:50:58.456562 ignition[750]: INFO : Ignition 2.14.0 Jul 2 08:50:58.456562 ignition[750]: INFO : Stage: mount Jul 2 08:50:58.458431 ignition[750]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 08:50:58.458431 ignition[750]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 2 08:50:58.466661 ignition[750]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 08:50:58.466661 ignition[750]: INFO : mount: mount passed Jul 2 08:50:58.466661 ignition[750]: INFO : Ignition finished successfully Jul 2 08:50:58.466935 systemd[1]: Finished ignition-mount.service. Jul 2 08:50:58.470111 systemd[1]: Finished sysroot-boot.service. Jul 2 08:50:58.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:50:58.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:50:58.486102 coreos-metadata[681]: Jul 02 08:50:58.486 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jul 2 08:50:58.505298 coreos-metadata[681]: Jul 02 08:50:58.505 INFO Fetch successful Jul 2 08:50:58.505971 coreos-metadata[681]: Jul 02 08:50:58.505 INFO wrote hostname ci-3510-3-5-3-17f1331597.novalocal to /sysroot/etc/hostname Jul 2 08:50:58.510205 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. 
Jul 2 08:50:58.510309 systemd[1]: Finished flatcar-openstack-hostname.service. Jul 2 08:50:58.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:50:58.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:50:58.513116 systemd[1]: Starting ignition-files.service... Jul 2 08:50:58.522996 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 2 08:50:58.532918 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (758) Jul 2 08:50:58.536002 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 08:50:58.536041 kernel: BTRFS info (device vda6): using free space tree Jul 2 08:50:58.536066 kernel: BTRFS info (device vda6): has skinny extents Jul 2 08:50:58.544187 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Jul 2 08:50:58.559340 ignition[777]: INFO : Ignition 2.14.0 Jul 2 08:50:58.560779 ignition[777]: INFO : Stage: files Jul 2 08:50:58.562086 ignition[777]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 08:50:58.563645 ignition[777]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 2 08:50:58.567908 ignition[777]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 08:50:58.573155 ignition[777]: DEBUG : files: compiled without relabeling support, skipping Jul 2 08:50:58.575469 ignition[777]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 08:50:58.577345 ignition[777]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 08:50:58.582486 ignition[777]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 08:50:58.584319 ignition[777]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 08:50:58.587127 unknown[777]: wrote ssh authorized keys file for user: core Jul 2 08:50:58.588464 ignition[777]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 08:50:58.588464 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 08:50:58.590149 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 2 08:50:58.643950 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 2 08:50:58.934096 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 08:50:58.936624 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] 
writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 2 08:50:58.936624 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 2 08:50:58.937122 systemd-networkd[637]: eth0: Gained IPv6LL Jul 2 08:50:59.483860 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 2 08:50:59.948039 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 2 08:50:59.948039 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 2 08:50:59.952380 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 08:50:59.952380 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 2 08:50:59.952380 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 2 08:50:59.952380 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 08:50:59.952380 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 08:50:59.952380 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 08:50:59.952380 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 08:50:59.952380 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 08:50:59.952380 ignition[777]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 08:50:59.952380 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 08:50:59.952380 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 08:50:59.952380 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 08:50:59.952380 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jul 2 08:51:00.446371 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 2 08:51:02.143346 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 08:51:02.144749 ignition[777]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service" Jul 2 08:51:02.145492 ignition[777]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service" Jul 2 08:51:02.146224 ignition[777]: INFO : files: op(d): [started] processing unit "prepare-helm.service" Jul 2 08:51:02.172555 ignition[777]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 08:51:02.179070 ignition[777]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 08:51:02.180034 ignition[777]: INFO : files: op(d): [finished] processing unit 
"prepare-helm.service" Jul 2 08:51:02.180691 ignition[777]: INFO : files: op(f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Jul 2 08:51:02.180691 ignition[777]: INFO : files: op(f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Jul 2 08:51:02.180691 ignition[777]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jul 2 08:51:02.180691 ignition[777]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jul 2 08:51:02.203219 ignition[777]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 08:51:02.203219 ignition[777]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 08:51:02.203219 ignition[777]: INFO : files: files passed Jul 2 08:51:02.203219 ignition[777]: INFO : Ignition finished successfully Jul 2 08:51:02.226628 kernel: kauditd_printk_skb: 27 callbacks suppressed Jul 2 08:51:02.226680 kernel: audit: type=1130 audit(1719910262.210:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.205297 systemd[1]: Finished ignition-files.service. Jul 2 08:51:02.215886 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 2 08:51:02.225508 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 2 08:51:02.227153 systemd[1]: Starting ignition-quench.service... 
Jul 2 08:51:02.248566 initrd-setup-root-after-ignition[802]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 08:51:02.252086 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 2 08:51:02.265351 kernel: audit: type=1130 audit(1719910262.252:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.253636 systemd[1]: Reached target ignition-complete.target. Jul 2 08:51:02.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.268145 systemd[1]: Starting initrd-parse-etc.service... Jul 2 08:51:02.288476 kernel: audit: type=1130 audit(1719910262.270:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.288529 kernel: audit: type=1131 audit(1719910262.277:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.269836 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 08:51:02.270036 systemd[1]: Finished ignition-quench.service. 
Jul 2 08:51:02.302301 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 08:51:02.312356 kernel: audit: type=1130 audit(1719910262.302:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.312389 kernel: audit: type=1131 audit(1719910262.302:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.302489 systemd[1]: Finished initrd-parse-etc.service. Jul 2 08:51:02.303978 systemd[1]: Reached target initrd-fs.target. Jul 2 08:51:02.313404 systemd[1]: Reached target initrd.target. Jul 2 08:51:02.315110 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 2 08:51:02.317062 systemd[1]: Starting dracut-pre-pivot.service... Jul 2 08:51:02.334302 systemd[1]: Finished dracut-pre-pivot.service. Jul 2 08:51:02.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.341973 systemd[1]: Starting initrd-cleanup.service... Jul 2 08:51:02.347087 kernel: audit: type=1130 audit(1719910262.334:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:51:02.351355 systemd[1]: Stopped target nss-lookup.target. Jul 2 08:51:02.352861 systemd[1]: Stopped target remote-cryptsetup.target. Jul 2 08:51:02.354570 systemd[1]: Stopped target timers.target. Jul 2 08:51:02.356084 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 08:51:02.356349 systemd[1]: Stopped dracut-pre-pivot.service. Jul 2 08:51:02.358280 systemd[1]: Stopped target initrd.target. Jul 2 08:51:02.368255 kernel: audit: type=1131 audit(1719910262.357:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.367999 systemd[1]: Stopped target basic.target. Jul 2 08:51:02.369508 systemd[1]: Stopped target ignition-complete.target. Jul 2 08:51:02.371052 systemd[1]: Stopped target ignition-diskful.target. Jul 2 08:51:02.372612 systemd[1]: Stopped target initrd-root-device.target. Jul 2 08:51:02.374204 systemd[1]: Stopped target remote-fs.target. Jul 2 08:51:02.375703 systemd[1]: Stopped target remote-fs-pre.target. Jul 2 08:51:02.377350 systemd[1]: Stopped target sysinit.target. Jul 2 08:51:02.378885 systemd[1]: Stopped target local-fs.target. Jul 2 08:51:02.380341 systemd[1]: Stopped target local-fs-pre.target. Jul 2 08:51:02.381929 systemd[1]: Stopped target swap.target. Jul 2 08:51:02.383278 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 08:51:02.389282 kernel: audit: type=1131 audit(1719910262.384:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:51:02.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.383538 systemd[1]: Stopped dracut-pre-mount.service. Jul 2 08:51:02.385085 systemd[1]: Stopped target cryptsetup.target. Jul 2 08:51:02.395304 kernel: audit: type=1131 audit(1719910262.389:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.389744 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 08:51:02.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.389865 systemd[1]: Stopped dracut-initqueue.service. Jul 2 08:51:02.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.390834 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 08:51:02.390951 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 2 08:51:02.395881 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 08:51:02.395981 systemd[1]: Stopped ignition-files.service. Jul 2 08:51:02.397813 systemd[1]: Stopping ignition-mount.service... Jul 2 08:51:02.404831 iscsid[642]: iscsid shutting down. 
Jul 2 08:51:02.406743 ignition[815]: INFO : Ignition 2.14.0 Jul 2 08:51:02.406743 ignition[815]: INFO : Stage: umount Jul 2 08:51:02.406743 ignition[815]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 08:51:02.406743 ignition[815]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 2 08:51:02.409010 systemd[1]: Stopping iscsid.service... Jul 2 08:51:02.411482 ignition[815]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 08:51:02.411482 ignition[815]: INFO : umount: umount passed Jul 2 08:51:02.411482 ignition[815]: INFO : Ignition finished successfully Jul 2 08:51:02.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.409555 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 08:51:02.409710 systemd[1]: Stopped kmod-static-nodes.service. Jul 2 08:51:02.411983 systemd[1]: Stopping sysroot-boot.service... Jul 2 08:51:02.418840 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 08:51:02.419017 systemd[1]: Stopped systemd-udev-trigger.service. Jul 2 08:51:02.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:51:02.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.419654 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 08:51:02.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.419849 systemd[1]: Stopped dracut-pre-trigger.service. Jul 2 08:51:02.423253 systemd[1]: iscsid.service: Deactivated successfully. Jul 2 08:51:02.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.423911 systemd[1]: Stopped iscsid.service. Jul 2 08:51:02.426377 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 08:51:02.426460 systemd[1]: Stopped ignition-mount.service. Jul 2 08:51:02.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.427767 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 08:51:02.427938 systemd[1]: Stopped ignition-disks.service. Jul 2 08:51:02.429745 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 08:51:02.429923 systemd[1]: Stopped ignition-kargs.service. Jul 2 08:51:02.430985 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 2 08:51:02.431095 systemd[1]: Stopped ignition-fetch.service. 
Jul 2 08:51:02.432659 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 08:51:02.433642 systemd[1]: Stopped ignition-fetch-offline.service. Jul 2 08:51:02.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.434763 systemd[1]: Stopped target paths.target. Jul 2 08:51:02.435987 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 08:51:02.439937 systemd[1]: Stopped systemd-ask-password-console.path. Jul 2 08:51:02.440472 systemd[1]: Stopped target slices.target. Jul 2 08:51:02.441160 systemd[1]: Stopped target sockets.target. Jul 2 08:51:02.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.441658 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 08:51:02.441753 systemd[1]: Closed iscsid.socket. Jul 2 08:51:02.442263 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 08:51:02.442379 systemd[1]: Stopped ignition-setup.service. Jul 2 08:51:02.443089 systemd[1]: Stopping iscsiuio.service... Jul 2 08:51:02.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.447770 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 08:51:02.449513 systemd[1]: iscsiuio.service: Deactivated successfully. 
Jul 2 08:51:02.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.449592 systemd[1]: Stopped iscsiuio.service. Jul 2 08:51:02.451404 systemd[1]: Stopped target network.target. Jul 2 08:51:02.452519 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 08:51:02.452550 systemd[1]: Closed iscsiuio.socket. Jul 2 08:51:02.453070 systemd[1]: Stopping systemd-networkd.service... Jul 2 08:51:02.454203 systemd[1]: Stopping systemd-resolved.service... Jul 2 08:51:02.455408 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 08:51:02.455492 systemd[1]: Finished initrd-cleanup.service. Jul 2 08:51:02.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.456924 systemd-networkd[637]: eth0: DHCPv6 lease lost Jul 2 08:51:02.468000 audit: BPF prog-id=9 op=UNLOAD Jul 2 08:51:02.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.457996 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 08:51:02.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.458088 systemd[1]: Stopped systemd-networkd.service. Jul 2 08:51:02.461208 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 08:51:02.461260 systemd[1]: Closed systemd-networkd.socket. Jul 2 08:51:02.462873 systemd[1]: Stopping network-cleanup.service... 
Jul 2 08:51:02.467419 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 08:51:02.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.481000 audit: BPF prog-id=6 op=UNLOAD Jul 2 08:51:02.467491 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 2 08:51:02.468509 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 08:51:02.468555 systemd[1]: Stopped systemd-sysctl.service. Jul 2 08:51:02.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.469644 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 08:51:02.469689 systemd[1]: Stopped systemd-modules-load.service. Jul 2 08:51:02.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.470557 systemd[1]: Stopping systemd-udevd.service... Jul 2 08:51:02.477618 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 2 08:51:02.478197 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 08:51:02.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.478322 systemd[1]: Stopped systemd-resolved.service. Jul 2 08:51:02.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:51:02.483676 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 08:51:02.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.483877 systemd[1]: Stopped systemd-udevd.service. Jul 2 08:51:02.486124 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 08:51:02.486218 systemd[1]: Stopped network-cleanup.service. Jul 2 08:51:02.487315 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 08:51:02.487352 systemd[1]: Closed systemd-udevd-control.socket. Jul 2 08:51:02.488111 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 08:51:02.488151 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 2 08:51:02.489048 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 08:51:02.489101 systemd[1]: Stopped dracut-pre-udev.service. Jul 2 08:51:02.490392 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 08:51:02.490433 systemd[1]: Stopped dracut-cmdline.service. Jul 2 08:51:02.491604 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 08:51:02.491652 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 2 08:51:02.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.493419 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 2 08:51:02.499499 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 08:51:02.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:51:02.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.499556 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 2 08:51:02.500893 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 08:51:02.500986 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 2 08:51:02.679148 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 08:51:02.679372 systemd[1]: Stopped sysroot-boot.service. Jul 2 08:51:02.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.682857 systemd[1]: Reached target initrd-switch-root.target. Jul 2 08:51:02.684641 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 08:51:02.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:02.684745 systemd[1]: Stopped initrd-setup-root.service. Jul 2 08:51:02.688423 systemd[1]: Starting initrd-switch-root.service... Jul 2 08:51:02.732824 systemd[1]: Switching root. Jul 2 08:51:02.766104 systemd-journald[185]: Journal stopped Jul 2 08:51:07.501235 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). Jul 2 08:51:07.501282 kernel: SELinux: Class mctp_socket not defined in policy. Jul 2 08:51:07.501300 kernel: SELinux: Class anon_inode not defined in policy. 
Jul 2 08:51:07.501330 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 2 08:51:07.501347 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 08:51:07.501359 kernel: SELinux: policy capability open_perms=1 Jul 2 08:51:07.501371 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 08:51:07.501382 kernel: SELinux: policy capability always_check_network=0 Jul 2 08:51:07.501393 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 08:51:07.501404 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 08:51:07.501416 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 08:51:07.501430 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 08:51:07.501445 systemd[1]: Successfully loaded SELinux policy in 90.702ms. Jul 2 08:51:07.501466 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.257ms. Jul 2 08:51:07.501480 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 08:51:07.501493 systemd[1]: Detected virtualization kvm. Jul 2 08:51:07.501505 systemd[1]: Detected architecture x86-64. Jul 2 08:51:07.501517 systemd[1]: Detected first boot. Jul 2 08:51:07.501530 systemd[1]: Hostname set to . Jul 2 08:51:07.501543 systemd[1]: Initializing machine ID from VM UUID. Jul 2 08:51:07.501560 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 2 08:51:07.501612 systemd[1]: Populated /etc with preset unit settings. Jul 2 08:51:07.501628 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Jul 2 08:51:07.501641 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 08:51:07.501655 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:51:07.501668 kernel: kauditd_printk_skb: 48 callbacks suppressed Jul 2 08:51:07.501682 kernel: audit: type=1334 audit(1719910267.252:89): prog-id=12 op=LOAD Jul 2 08:51:07.501693 kernel: audit: type=1334 audit(1719910267.252:90): prog-id=3 op=UNLOAD Jul 2 08:51:07.501705 kernel: audit: type=1334 audit(1719910267.252:91): prog-id=13 op=LOAD Jul 2 08:51:07.501716 kernel: audit: type=1334 audit(1719910267.252:92): prog-id=14 op=LOAD Jul 2 08:51:07.501727 kernel: audit: type=1334 audit(1719910267.252:93): prog-id=4 op=UNLOAD Jul 2 08:51:07.501742 kernel: audit: type=1334 audit(1719910267.252:94): prog-id=5 op=UNLOAD Jul 2 08:51:07.501753 kernel: audit: type=1334 audit(1719910267.253:95): prog-id=15 op=LOAD Jul 2 08:51:07.501765 kernel: audit: type=1334 audit(1719910267.253:96): prog-id=12 op=UNLOAD Jul 2 08:51:07.501776 kernel: audit: type=1334 audit(1719910267.253:97): prog-id=16 op=LOAD Jul 2 08:51:07.501804 kernel: audit: type=1334 audit(1719910267.253:98): prog-id=17 op=LOAD Jul 2 08:51:07.501817 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 08:51:07.501830 systemd[1]: Stopped initrd-switch-root.service. Jul 2 08:51:07.501842 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 08:51:07.501855 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 2 08:51:07.501867 systemd[1]: Created slice system-addon\x2drun.slice. Jul 2 08:51:07.501882 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Jul 2 08:51:07.501895 systemd[1]: Created slice system-getty.slice. 
Jul 2 08:51:07.501907 systemd[1]: Created slice system-modprobe.slice. Jul 2 08:51:07.501919 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 2 08:51:07.501934 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 2 08:51:07.501951 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 2 08:51:07.501965 systemd[1]: Created slice user.slice. Jul 2 08:51:07.501977 systemd[1]: Started systemd-ask-password-console.path. Jul 2 08:51:07.501989 systemd[1]: Started systemd-ask-password-wall.path. Jul 2 08:51:07.502003 systemd[1]: Set up automount boot.automount. Jul 2 08:51:07.502015 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 2 08:51:07.502027 systemd[1]: Stopped target initrd-switch-root.target. Jul 2 08:51:07.502040 systemd[1]: Stopped target initrd-fs.target. Jul 2 08:51:07.502052 systemd[1]: Stopped target initrd-root-fs.target. Jul 2 08:51:07.502064 systemd[1]: Reached target integritysetup.target. Jul 2 08:51:07.502076 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 08:51:07.502088 systemd[1]: Reached target remote-fs.target. Jul 2 08:51:07.502100 systemd[1]: Reached target slices.target. Jul 2 08:51:07.502113 systemd[1]: Reached target swap.target. Jul 2 08:51:07.502125 systemd[1]: Reached target torcx.target. Jul 2 08:51:07.502137 systemd[1]: Reached target veritysetup.target. Jul 2 08:51:07.502150 systemd[1]: Listening on systemd-coredump.socket. Jul 2 08:51:07.502162 systemd[1]: Listening on systemd-initctl.socket. Jul 2 08:51:07.502174 systemd[1]: Listening on systemd-networkd.socket. Jul 2 08:51:07.502186 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 08:51:07.502198 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 08:51:07.502210 systemd[1]: Listening on systemd-userdbd.socket. Jul 2 08:51:07.502222 systemd[1]: Mounting dev-hugepages.mount... Jul 2 08:51:07.502237 systemd[1]: Mounting dev-mqueue.mount... Jul 2 08:51:07.502249 systemd[1]: Mounting media.mount... 
Jul 2 08:51:07.502261 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:51:07.502273 systemd[1]: Mounting sys-kernel-debug.mount... Jul 2 08:51:07.502285 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 2 08:51:07.502297 systemd[1]: Mounting tmp.mount... Jul 2 08:51:07.502309 systemd[1]: Starting flatcar-tmpfiles.service... Jul 2 08:51:07.502322 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 08:51:07.502336 systemd[1]: Starting kmod-static-nodes.service... Jul 2 08:51:07.502348 systemd[1]: Starting modprobe@configfs.service... Jul 2 08:51:07.502360 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 08:51:07.502372 systemd[1]: Starting modprobe@drm.service... Jul 2 08:51:07.502384 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 08:51:07.502396 systemd[1]: Starting modprobe@fuse.service... Jul 2 08:51:07.502408 systemd[1]: Starting modprobe@loop.service... Jul 2 08:51:07.502421 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 08:51:07.502434 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 08:51:07.502448 systemd[1]: Stopped systemd-fsck-root.service. Jul 2 08:51:07.502460 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 08:51:07.502472 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 08:51:07.502484 systemd[1]: Stopped systemd-journald.service. Jul 2 08:51:07.502496 systemd[1]: Starting systemd-journald.service... Jul 2 08:51:07.502508 systemd[1]: Starting systemd-modules-load.service... Jul 2 08:51:07.502520 systemd[1]: Starting systemd-network-generator.service... Jul 2 08:51:07.502532 systemd[1]: Starting systemd-remount-fs.service... Jul 2 08:51:07.502544 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 08:51:07.502557 systemd[1]: verity-setup.service: Deactivated successfully. 
Jul 2 08:51:07.502571 systemd[1]: Stopped verity-setup.service. Jul 2 08:51:07.502584 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:51:07.502596 systemd[1]: Mounted dev-hugepages.mount. Jul 2 08:51:07.502608 systemd[1]: Mounted dev-mqueue.mount. Jul 2 08:51:07.502619 systemd[1]: Mounted media.mount. Jul 2 08:51:07.502631 systemd[1]: Mounted sys-kernel-debug.mount. Jul 2 08:51:07.502643 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 2 08:51:07.502656 systemd[1]: Mounted tmp.mount. Jul 2 08:51:07.502668 systemd[1]: Finished kmod-static-nodes.service. Jul 2 08:51:07.502682 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 08:51:07.502695 systemd[1]: Finished modprobe@configfs.service. Jul 2 08:51:07.504522 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 08:51:07.504541 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 08:51:07.504554 kernel: fuse: init (API version 7.34) Jul 2 08:51:07.504570 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 08:51:07.504583 systemd[1]: Finished modprobe@drm.service. Jul 2 08:51:07.505090 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 08:51:07.505108 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 08:51:07.505121 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 08:51:07.505134 systemd[1]: Finished modprobe@fuse.service. Jul 2 08:51:07.505146 systemd[1]: Finished systemd-modules-load.service. Jul 2 08:51:07.505158 systemd[1]: Finished systemd-network-generator.service. Jul 2 08:51:07.505174 kernel: loop: module loaded Jul 2 08:51:07.505191 systemd-journald[914]: Journal started Jul 2 08:51:07.505241 systemd-journald[914]: Runtime Journal (/run/log/journal/6bac03a0fe1b4f0f89e73d7b98e8bc71) is 4.9M, max 39.5M, 34.5M free. 
Jul 2 08:51:03.259000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 08:51:03.392000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 08:51:03.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 08:51:03.392000 audit: BPF prog-id=10 op=LOAD Jul 2 08:51:03.392000 audit: BPF prog-id=10 op=UNLOAD Jul 2 08:51:03.392000 audit: BPF prog-id=11 op=LOAD Jul 2 08:51:03.392000 audit: BPF prog-id=11 op=UNLOAD Jul 2 08:51:03.550000 audit[848]: AVC avc: denied { associate } for pid=848 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 2 08:51:03.550000 audit[848]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8a2 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=831 pid=848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 08:51:03.550000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 08:51:03.553000 audit[848]: AVC avc: denied { associate } for pid=848 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 2 08:51:03.553000 audit[848]: SYSCALL arch=c000003e 
syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d979 a2=1ed a3=0 items=2 ppid=831 pid=848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 08:51:03.553000 audit: CWD cwd="/" Jul 2 08:51:03.553000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:03.553000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:03.553000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 08:51:07.506753 systemd[1]: Started systemd-journald.service. 
Jul 2 08:51:07.252000 audit: BPF prog-id=12 op=LOAD Jul 2 08:51:07.252000 audit: BPF prog-id=3 op=UNLOAD Jul 2 08:51:07.252000 audit: BPF prog-id=13 op=LOAD Jul 2 08:51:07.252000 audit: BPF prog-id=14 op=LOAD Jul 2 08:51:07.252000 audit: BPF prog-id=4 op=UNLOAD Jul 2 08:51:07.252000 audit: BPF prog-id=5 op=UNLOAD Jul 2 08:51:07.253000 audit: BPF prog-id=15 op=LOAD Jul 2 08:51:07.253000 audit: BPF prog-id=12 op=UNLOAD Jul 2 08:51:07.253000 audit: BPF prog-id=16 op=LOAD Jul 2 08:51:07.253000 audit: BPF prog-id=17 op=LOAD Jul 2 08:51:07.253000 audit: BPF prog-id=13 op=UNLOAD Jul 2 08:51:07.253000 audit: BPF prog-id=14 op=UNLOAD Jul 2 08:51:07.264000 audit: BPF prog-id=18 op=LOAD Jul 2 08:51:07.264000 audit: BPF prog-id=15 op=UNLOAD Jul 2 08:51:07.266000 audit: BPF prog-id=19 op=LOAD Jul 2 08:51:07.267000 audit: BPF prog-id=20 op=LOAD Jul 2 08:51:07.267000 audit: BPF prog-id=16 op=UNLOAD Jul 2 08:51:07.267000 audit: BPF prog-id=17 op=UNLOAD Jul 2 08:51:07.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:07.276000 audit: BPF prog-id=18 op=UNLOAD Jul 2 08:51:07.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:07.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:07.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:51:07.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:07.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:07.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:07.428000 audit: BPF prog-id=21 op=LOAD Jul 2 08:51:07.428000 audit: BPF prog-id=22 op=LOAD Jul 2 08:51:07.428000 audit: BPF prog-id=23 op=LOAD Jul 2 08:51:07.428000 audit: BPF prog-id=19 op=UNLOAD Jul 2 08:51:07.428000 audit: BPF prog-id=20 op=UNLOAD Jul 2 08:51:07.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:07.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:07.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:07.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:51:07.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:07.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:07.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:07.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:07.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:07.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:51:07.499000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 08:51:07.499000 audit[914]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7fffaf4f7150 a2=4000 a3=7fffaf4f71ec items=0 ppid=1 pid=914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 08:51:07.499000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 2 08:51:07.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:07.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:07.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:07.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:51:03.546708 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-07-02T08:51:03Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 08:51:07.251450 systemd[1]: Queued start job for default target multi-user.target. Jul 2 08:51:03.547698 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-07-02T08:51:03Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 08:51:07.251462 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 2 08:51:07.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:03.547720 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-07-02T08:51:03Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 08:51:07.269506 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 2 08:51:07.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:07.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:51:03.547752 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-07-02T08:51:03Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 2 08:51:07.508407 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:51:03.547764 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-07-02T08:51:03Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 2 08:51:07.511928 systemd[1]: Finished modprobe@loop.service. Jul 2 08:51:03.547817 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-07-02T08:51:03Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 2 08:51:03.547833 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-07-02T08:51:03Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 2 08:51:03.548042 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-07-02T08:51:03Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 2 08:51:03.548084 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-07-02T08:51:03Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 08:51:03.548100 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-07-02T08:51:03Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 08:51:03.550219 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-07-02T08:51:03Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 2 08:51:03.550257 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-07-02T08:51:03Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker 
path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 2 08:51:03.550278 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-07-02T08:51:03Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.5: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.5 Jul 2 08:51:03.550295 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-07-02T08:51:03Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 2 08:51:03.550314 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-07-02T08:51:03Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.5: no such file or directory" path=/var/lib/torcx/store/3510.3.5 Jul 2 08:51:03.550330 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-07-02T08:51:03Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 2 08:51:06.828568 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-07-02T08:51:06Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 08:51:06.828859 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-07-02T08:51:06Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 08:51:06.829018 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-07-02T08:51:06Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 08:51:06.829342 
/usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-07-02T08:51:06Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 08:51:06.830205 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-07-02T08:51:06Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 2 08:51:06.830290 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-07-02T08:51:06Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 2 08:51:07.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:07.514677 systemd[1]: Finished systemd-remount-fs.service. Jul 2 08:51:07.515966 systemd[1]: Reached target network-pre.target. Jul 2 08:51:07.519617 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 2 08:51:07.520970 systemd[1]: Mounting sys-kernel-config.mount... Jul 2 08:51:07.521457 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 08:51:07.524733 systemd[1]: Starting systemd-hwdb-update.service... Jul 2 08:51:07.543051 systemd-journald[914]: Time spent on flushing to /var/log/journal/6bac03a0fe1b4f0f89e73d7b98e8bc71 is 35.749ms for 1105 entries. Jul 2 08:51:07.543051 systemd-journald[914]: System Journal (/var/log/journal/6bac03a0fe1b4f0f89e73d7b98e8bc71) is 8.0M, max 584.8M, 576.8M free. 
Jul 2 08:51:07.625439 systemd-journald[914]: Received client request to flush runtime journal. Jul 2 08:51:07.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:07.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:07.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:07.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:07.526461 systemd[1]: Starting systemd-journal-flush.service... Jul 2 08:51:07.526982 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 08:51:07.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:07.527964 systemd[1]: Starting systemd-random-seed.service... Jul 2 08:51:07.528558 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 08:51:07.530066 systemd[1]: Starting systemd-sysctl.service... Jul 2 08:51:07.627879 udevadm[950]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. 
Jul 2 08:51:07.535136 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 2 08:51:07.535730 systemd[1]: Mounted sys-kernel-config.mount. Jul 2 08:51:07.550831 systemd[1]: Finished systemd-random-seed.service. Jul 2 08:51:07.555711 systemd[1]: Reached target first-boot-complete.target. Jul 2 08:51:07.567459 systemd[1]: Finished systemd-sysctl.service. Jul 2 08:51:07.571634 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 08:51:07.573247 systemd[1]: Starting systemd-udev-settle.service... Jul 2 08:51:07.594070 systemd[1]: Finished flatcar-tmpfiles.service. Jul 2 08:51:07.595632 systemd[1]: Starting systemd-sysusers.service... Jul 2 08:51:07.626511 systemd[1]: Finished systemd-journal-flush.service. Jul 2 08:51:07.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:07.664322 systemd[1]: Finished systemd-sysusers.service. Jul 2 08:51:08.507880 systemd[1]: Finished systemd-hwdb-update.service. Jul 2 08:51:08.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:08.509000 audit: BPF prog-id=24 op=LOAD Jul 2 08:51:08.510000 audit: BPF prog-id=25 op=LOAD Jul 2 08:51:08.510000 audit: BPF prog-id=7 op=UNLOAD Jul 2 08:51:08.510000 audit: BPF prog-id=8 op=UNLOAD Jul 2 08:51:08.512726 systemd[1]: Starting systemd-udevd.service... Jul 2 08:51:08.555016 systemd-udevd[962]: Using default interface naming scheme 'v252'. Jul 2 08:51:08.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:51:08.614000 audit: BPF prog-id=26 op=LOAD Jul 2 08:51:08.611653 systemd[1]: Started systemd-udevd.service. Jul 2 08:51:08.620317 systemd[1]: Starting systemd-networkd.service... Jul 2 08:51:08.645000 audit: BPF prog-id=27 op=LOAD Jul 2 08:51:08.645000 audit: BPF prog-id=28 op=LOAD Jul 2 08:51:08.646000 audit: BPF prog-id=29 op=LOAD Jul 2 08:51:08.649272 systemd[1]: Starting systemd-userdbd.service... Jul 2 08:51:08.699888 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Jul 2 08:51:08.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:08.700183 systemd[1]: Started systemd-userdbd.service. Jul 2 08:51:08.792500 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 08:51:08.800278 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 2 08:51:08.804825 kernel: ACPI: button: Power Button [PWRF] Jul 2 08:51:08.796000 audit[965]: AVC avc: denied { confidentiality } for pid=965 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 2 08:51:08.822339 systemd-networkd[972]: lo: Link UP Jul 2 08:51:08.822347 systemd-networkd[972]: lo: Gained carrier Jul 2 08:51:08.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:08.823676 systemd-networkd[972]: Enumeration completed Jul 2 08:51:08.823769 systemd[1]: Started systemd-networkd.service. Jul 2 08:51:08.825683 systemd-networkd[972]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 2 08:51:08.796000 audit[965]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55d6cf02c310 a1=3207c a2=7fc7b32e0bc5 a3=5 items=108 ppid=962 pid=965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 08:51:08.796000 audit: CWD cwd="/" Jul 2 08:51:08.796000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=1 name=(null) inode=13913 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=2 name=(null) inode=13913 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=3 name=(null) inode=13914 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=4 name=(null) inode=13913 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=5 name=(null) inode=13915 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=6 name=(null) inode=13913 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.827994 systemd-networkd[972]: eth0: Link UP Jul 2 08:51:08.828002 systemd-networkd[972]: 
eth0: Gained carrier Jul 2 08:51:08.796000 audit: PATH item=7 name=(null) inode=13916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=8 name=(null) inode=13916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=9 name=(null) inode=13917 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=10 name=(null) inode=13916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=11 name=(null) inode=13918 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=12 name=(null) inode=13916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=13 name=(null) inode=13919 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=14 name=(null) inode=13916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=15 name=(null) inode=13920 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH 
item=16 name=(null) inode=13916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=17 name=(null) inode=13921 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=18 name=(null) inode=13913 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=19 name=(null) inode=13922 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=20 name=(null) inode=13922 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=21 name=(null) inode=13923 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=22 name=(null) inode=13922 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=23 name=(null) inode=13924 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=24 name=(null) inode=13922 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=25 name=(null) inode=13925 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=26 name=(null) inode=13922 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=27 name=(null) inode=13926 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=28 name=(null) inode=13922 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=29 name=(null) inode=13927 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=30 name=(null) inode=13913 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=31 name=(null) inode=13928 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=32 name=(null) inode=13928 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=33 name=(null) inode=13929 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=34 name=(null) inode=13928 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=35 name=(null) inode=13930 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=36 name=(null) inode=13928 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=37 name=(null) inode=13931 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=38 name=(null) inode=13928 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=39 name=(null) inode=13932 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=40 name=(null) inode=13928 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=41 name=(null) inode=13933 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=42 name=(null) inode=13913 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=43 name=(null) inode=13934 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=44 name=(null) inode=13934 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=45 name=(null) inode=13935 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=46 name=(null) inode=13934 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=47 name=(null) inode=13936 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=48 name=(null) inode=13934 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=49 name=(null) inode=13937 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=50 name=(null) inode=13934 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=51 name=(null) inode=13938 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=52 name=(null) inode=13934 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Jul 2 08:51:08.796000 audit: PATH item=53 name=(null) inode=13939 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=55 name=(null) inode=13940 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=56 name=(null) inode=13940 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=57 name=(null) inode=13941 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=58 name=(null) inode=13940 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=59 name=(null) inode=13942 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=60 name=(null) inode=13940 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=61 name=(null) inode=13943 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=62 name=(null) 
inode=13943 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=63 name=(null) inode=13944 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=64 name=(null) inode=13943 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=65 name=(null) inode=13945 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=66 name=(null) inode=13943 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=67 name=(null) inode=13946 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=68 name=(null) inode=13943 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=69 name=(null) inode=13947 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=70 name=(null) inode=13943 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=71 name=(null) inode=13948 dev=00:0b mode=0100440 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=72 name=(null) inode=13940 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=73 name=(null) inode=13949 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=74 name=(null) inode=13949 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=75 name=(null) inode=13950 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=76 name=(null) inode=13949 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=77 name=(null) inode=13951 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=78 name=(null) inode=13949 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=79 name=(null) inode=13952 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=80 name=(null) inode=13949 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=81 name=(null) inode=13953 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=82 name=(null) inode=13949 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=83 name=(null) inode=13954 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=84 name=(null) inode=13940 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=85 name=(null) inode=13955 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=86 name=(null) inode=13955 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=87 name=(null) inode=13956 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=88 name=(null) inode=13955 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.836973 systemd-networkd[972]: eth0: DHCPv4 address 172.24.4.4/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jul 2 08:51:08.796000 audit: PATH item=89 name=(null) inode=13957 dev=00:0b 
mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=90 name=(null) inode=13955 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=91 name=(null) inode=13958 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=92 name=(null) inode=13955 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=93 name=(null) inode=13959 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=94 name=(null) inode=13955 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=95 name=(null) inode=13960 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=96 name=(null) inode=13940 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=97 name=(null) inode=13961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=98 name=(null) inode=13961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=99 name=(null) inode=13962 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=100 name=(null) inode=13961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=101 name=(null) inode=13963 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=102 name=(null) inode=13961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=103 name=(null) inode=13964 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=104 name=(null) inode=13961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=105 name=(null) inode=13965 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=106 name=(null) inode=13961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PATH item=107 name=(null) inode=13966 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:51:08.796000 audit: PROCTITLE proctitle="(udev-worker)" Jul 2 08:51:08.846877 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 2 08:51:08.848839 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jul 2 08:51:08.853823 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 08:51:08.902195 systemd[1]: Finished systemd-udev-settle.service. Jul 2 08:51:08.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:08.903945 systemd[1]: Starting lvm2-activation-early.service... Jul 2 08:51:08.935853 lvm[991]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 08:51:08.976624 systemd[1]: Finished lvm2-activation-early.service. Jul 2 08:51:08.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:08.978099 systemd[1]: Reached target cryptsetup.target. Jul 2 08:51:08.981562 systemd[1]: Starting lvm2-activation.service... Jul 2 08:51:08.990022 lvm[992]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 08:51:09.035736 systemd[1]: Finished lvm2-activation.service. Jul 2 08:51:09.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:09.037127 systemd[1]: Reached target local-fs-pre.target. 
Jul 2 08:51:09.038261 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 08:51:09.038320 systemd[1]: Reached target local-fs.target. Jul 2 08:51:09.039417 systemd[1]: Reached target machines.target. Jul 2 08:51:09.043027 systemd[1]: Starting ldconfig.service... Jul 2 08:51:09.097028 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 08:51:09.097130 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 08:51:09.099282 systemd[1]: Starting systemd-boot-update.service... Jul 2 08:51:09.103133 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 2 08:51:09.106538 systemd[1]: Starting systemd-machine-id-commit.service... Jul 2 08:51:09.110056 systemd[1]: Starting systemd-sysext.service... Jul 2 08:51:09.248412 systemd[1]: boot.automount: Got automount request for /boot, triggered by 994 (bootctl) Jul 2 08:51:09.251001 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 2 08:51:09.356456 systemd[1]: Unmounting usr-share-oem.mount... Jul 2 08:51:09.395314 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 2 08:51:09.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:09.410421 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 2 08:51:09.410899 systemd[1]: Unmounted usr-share-oem.mount. Jul 2 08:51:09.660914 kernel: loop0: detected capacity change from 0 to 211296 Jul 2 08:51:09.729411 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Jul 2 08:51:09.730662 systemd[1]: Finished systemd-machine-id-commit.service. Jul 2 08:51:09.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:09.783873 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 08:51:09.825859 kernel: loop1: detected capacity change from 0 to 211296 Jul 2 08:51:09.870164 (sd-sysext)[1006]: Using extensions 'kubernetes'. Jul 2 08:51:09.871091 (sd-sysext)[1006]: Merged extensions into '/usr'. Jul 2 08:51:09.896970 systemd-fsck[1003]: fsck.fat 4.2 (2021-01-31) Jul 2 08:51:09.896970 systemd-fsck[1003]: /dev/vda1: 789 files, 119238/258078 clusters Jul 2 08:51:09.925396 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 2 08:51:09.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:09.931568 systemd[1]: Mounting boot.mount... Jul 2 08:51:09.933128 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:51:09.943034 systemd[1]: Mounting usr-share-oem.mount... Jul 2 08:51:09.944242 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 08:51:09.945925 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 08:51:09.947464 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 08:51:09.949748 systemd[1]: Starting modprobe@loop.service... Jul 2 08:51:09.950609 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Jul 2 08:51:09.950723 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 08:51:09.951171 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:51:09.952271 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 08:51:09.952400 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 08:51:09.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:09.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:09.956116 systemd[1]: Mounted usr-share-oem.mount. Jul 2 08:51:09.957331 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 08:51:09.957444 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 08:51:09.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:09.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:09.959031 systemd[1]: Finished systemd-sysext.service. Jul 2 08:51:09.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:51:09.959876 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:51:09.959985 systemd[1]: Finished modprobe@loop.service. Jul 2 08:51:09.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:09.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:09.964006 systemd[1]: Starting ensure-sysext.service... Jul 2 08:51:09.964499 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 08:51:09.964542 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 08:51:09.965522 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 2 08:51:09.971634 systemd[1]: Reloading. Jul 2 08:51:10.007589 systemd-tmpfiles[1014]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 2 08:51:10.022199 systemd-tmpfiles[1014]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 08:51:10.038194 systemd-tmpfiles[1014]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Jul 2 08:51:10.067459 /usr/lib/systemd/system-generators/torcx-generator[1033]: time="2024-07-02T08:51:10Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 08:51:10.068199 /usr/lib/systemd/system-generators/torcx-generator[1033]: time="2024-07-02T08:51:10Z" level=info msg="torcx already run" Jul 2 08:51:10.180755 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 08:51:10.180776 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 08:51:10.209766 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 2 08:51:10.269000 audit: BPF prog-id=30 op=LOAD Jul 2 08:51:10.270000 audit: BPF prog-id=31 op=LOAD Jul 2 08:51:10.270000 audit: BPF prog-id=24 op=UNLOAD Jul 2 08:51:10.270000 audit: BPF prog-id=25 op=UNLOAD Jul 2 08:51:10.270000 audit: BPF prog-id=32 op=LOAD Jul 2 08:51:10.271000 audit: BPF prog-id=27 op=UNLOAD Jul 2 08:51:10.271000 audit: BPF prog-id=33 op=LOAD Jul 2 08:51:10.271000 audit: BPF prog-id=34 op=LOAD Jul 2 08:51:10.271000 audit: BPF prog-id=28 op=UNLOAD Jul 2 08:51:10.271000 audit: BPF prog-id=29 op=UNLOAD Jul 2 08:51:10.272000 audit: BPF prog-id=35 op=LOAD Jul 2 08:51:10.272000 audit: BPF prog-id=21 op=UNLOAD Jul 2 08:51:10.272000 audit: BPF prog-id=36 op=LOAD Jul 2 08:51:10.272000 audit: BPF prog-id=37 op=LOAD Jul 2 08:51:10.272000 audit: BPF prog-id=22 op=UNLOAD Jul 2 08:51:10.272000 audit: BPF prog-id=23 op=UNLOAD Jul 2 08:51:10.275000 audit: BPF prog-id=38 op=LOAD Jul 2 08:51:10.275000 audit: BPF prog-id=26 op=UNLOAD Jul 2 08:51:10.280603 systemd[1]: Mounted boot.mount. Jul 2 08:51:10.305827 systemd[1]: Finished ensure-sysext.service. Jul 2 08:51:10.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:10.307872 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 08:51:10.310533 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 08:51:10.313850 systemd[1]: Starting modprobe@drm.service... Jul 2 08:51:10.318379 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 08:51:10.322845 systemd[1]: Starting modprobe@loop.service... Jul 2 08:51:10.323776 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Jul 2 08:51:10.323837 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 08:51:10.325772 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 2 08:51:10.327576 systemd[1]: Finished systemd-boot-update.service. Jul 2 08:51:10.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:10.328556 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 08:51:10.328823 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 08:51:10.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:10.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:10.330087 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 08:51:10.330189 systemd[1]: Finished modprobe@drm.service. Jul 2 08:51:10.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:10.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:10.331647 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jul 2 08:51:10.331939 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 08:51:10.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:10.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:10.333553 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:51:10.334262 systemd[1]: Finished modprobe@loop.service. Jul 2 08:51:10.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:10.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:10.335517 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 08:51:10.335557 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 08:51:10.359833 ldconfig[993]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 08:51:10.369907 systemd[1]: Finished ldconfig.service. Jul 2 08:51:10.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:10.385914 systemd[1]: Finished systemd-tmpfiles-setup.service. 
Jul 2 08:51:10.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:10.387511 systemd[1]: Starting audit-rules.service... Jul 2 08:51:10.388973 systemd[1]: Starting clean-ca-certificates.service... Jul 2 08:51:10.390508 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 2 08:51:10.395000 audit: BPF prog-id=39 op=LOAD Jul 2 08:51:10.397403 systemd[1]: Starting systemd-resolved.service... Jul 2 08:51:10.398000 audit: BPF prog-id=40 op=LOAD Jul 2 08:51:10.400957 systemd[1]: Starting systemd-timesyncd.service... Jul 2 08:51:10.403706 systemd[1]: Starting systemd-update-utmp.service... Jul 2 08:51:10.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:10.405157 systemd[1]: Finished clean-ca-certificates.service. Jul 2 08:51:10.405916 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 08:51:10.408448 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:51:10.408471 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:51:10.410000 audit[1092]: SYSTEM_BOOT pid=1092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 2 08:51:10.414319 systemd[1]: Finished systemd-update-utmp.service. 
Jul 2 08:51:10.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:10.432354 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 2 08:51:10.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:10.434688 systemd[1]: Starting systemd-update-done.service... Jul 2 08:51:10.441078 systemd[1]: Finished systemd-update-done.service. Jul 2 08:51:10.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:51:10.474000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 2 08:51:10.474000 audit[1107]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd55302500 a2=420 a3=0 items=0 ppid=1086 pid=1107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 08:51:10.474000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 2 08:51:10.476665 augenrules[1107]: No rules Jul 2 08:51:10.476926 systemd[1]: Finished audit-rules.service. Jul 2 08:51:10.492729 systemd-resolved[1090]: Positive Trust Anchors: Jul 2 08:51:10.493124 systemd-resolved[1090]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 08:51:10.493226 systemd-resolved[1090]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 08:51:10.496296 systemd[1]: Started systemd-timesyncd.service. Jul 2 08:51:10.496879 systemd[1]: Reached target time-set.target. Jul 2 08:51:10.500514 systemd-resolved[1090]: Using system hostname 'ci-3510-3-5-3-17f1331597.novalocal'. Jul 2 08:51:10.502193 systemd[1]: Started systemd-resolved.service. Jul 2 08:51:10.502746 systemd[1]: Reached target network.target. Jul 2 08:51:10.503208 systemd[1]: Reached target nss-lookup.target. Jul 2 08:51:10.503655 systemd[1]: Reached target sysinit.target. Jul 2 08:51:10.504198 systemd[1]: Started motdgen.path. Jul 2 08:51:10.504606 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 2 08:51:10.505259 systemd[1]: Started logrotate.timer. Jul 2 08:51:10.505845 systemd[1]: Started mdadm.timer. Jul 2 08:51:10.506258 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 2 08:51:10.506701 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 08:51:10.506733 systemd[1]: Reached target paths.target. Jul 2 08:51:10.507147 systemd[1]: Reached target timers.target. Jul 2 08:51:10.507909 systemd[1]: Listening on dbus.socket. Jul 2 08:51:10.509617 systemd[1]: Starting docker.socket... Jul 2 08:51:10.513079 systemd[1]: Listening on sshd.socket. 
Jul 2 08:51:10.513641 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 08:51:10.514063 systemd[1]: Listening on docker.socket.
Jul 2 08:51:10.514587 systemd[1]: Reached target sockets.target.
Jul 2 08:51:10.515014 systemd[1]: Reached target basic.target.
Jul 2 08:51:10.515456 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Jul 2 08:51:10.515483 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Jul 2 08:51:10.516332 systemd[1]: Starting containerd.service...
Jul 2 08:51:10.517622 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Jul 2 08:51:10.519414 systemd[1]: Starting dbus.service...
Jul 2 08:51:10.522160 systemd[1]: Starting enable-oem-cloudinit.service...
Jul 2 08:51:10.524247 systemd[1]: Starting extend-filesystems.service...
Jul 2 08:51:10.525482 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Jul 2 08:51:10.529507 systemd[1]: Starting motdgen.service...
Jul 2 08:51:10.532771 systemd[1]: Starting prepare-helm.service...
Jul 2 08:51:10.534057 jq[1120]: false
Jul 2 08:51:10.535680 systemd[1]: Starting ssh-key-proc-cmdline.service...
Jul 2 08:51:10.540068 systemd[1]: Starting sshd-keygen.service...
Jul 2 08:51:10.546217 systemd[1]: Starting systemd-logind.service...
Jul 2 08:51:10.546749 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 08:51:10.546825 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 2 08:51:10.547904 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 2 08:51:10.548662 systemd[1]: Starting update-engine.service...
Jul 2 08:51:10.550285 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Jul 2 08:51:10.554521 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 2 08:51:10.554700 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Jul 2 08:51:10.569106 jq[1129]: true
Jul 2 08:51:10.564240 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 2 08:51:10.564396 systemd[1]: Finished ssh-key-proc-cmdline.service.
Jul 2 08:51:10.578812 tar[1131]: linux-amd64/helm
Jul 2 08:51:11.226655 systemd-timesyncd[1091]: Contacted time server 129.151.225.244:123 (0.flatcar.pool.ntp.org).
Jul 2 08:51:11.226775 systemd-timesyncd[1091]: Initial clock synchronization to Tue 2024-07-02 08:51:11.226305 UTC.
Jul 2 08:51:11.227309 systemd-resolved[1090]: Clock change detected. Flushing caches.
Jul 2 08:51:11.236640 jq[1134]: true
Jul 2 08:51:11.239697 extend-filesystems[1121]: Found loop1
Jul 2 08:51:11.243113 extend-filesystems[1121]: Found vda
Jul 2 08:51:11.243113 extend-filesystems[1121]: Found vda1
Jul 2 08:51:11.243113 extend-filesystems[1121]: Found vda2
Jul 2 08:51:11.243113 extend-filesystems[1121]: Found vda3
Jul 2 08:51:11.243113 extend-filesystems[1121]: Found usr
Jul 2 08:51:11.243113 extend-filesystems[1121]: Found vda4
Jul 2 08:51:11.243113 extend-filesystems[1121]: Found vda6
Jul 2 08:51:11.243113 extend-filesystems[1121]: Found vda7
Jul 2 08:51:11.243113 extend-filesystems[1121]: Found vda9
Jul 2 08:51:11.243113 extend-filesystems[1121]: Checking size of /dev/vda9
Jul 2 08:51:11.298648 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks
Jul 2 08:51:11.286473 dbus-daemon[1117]: [system] SELinux support is enabled
Jul 2 08:51:11.279824 systemd[1]: motdgen.service: Deactivated successfully.
Jul 2 08:51:11.299049 extend-filesystems[1121]: Resized partition /dev/vda9
Jul 2 08:51:11.279970 systemd[1]: Finished motdgen.service.
Jul 2 08:51:11.307399 extend-filesystems[1157]: resize2fs 1.46.5 (30-Dec-2021)
Jul 2 08:51:11.286657 systemd[1]: Started dbus.service.
Jul 2 08:51:11.289168 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 2 08:51:11.289202 systemd[1]: Reached target system-config.target.
Jul 2 08:51:11.289701 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 2 08:51:11.289717 systemd[1]: Reached target user-config.target.
Jul 2 08:51:11.292093 systemd-networkd[972]: eth0: Gained IPv6LL
Jul 2 08:51:11.294445 systemd[1]: Finished systemd-networkd-wait-online.service.
Jul 2 08:51:11.298171 systemd[1]: Reached target network-online.target.
Jul 2 08:51:11.302747 systemd[1]: Starting kubelet.service...
Jul 2 08:51:11.365108 update_engine[1128]: I0702 08:51:11.363862 1128 main.cc:92] Flatcar Update Engine starting
Jul 2 08:51:11.371947 systemd[1]: Started update-engine.service.
Jul 2 08:51:11.428125 update_engine[1128]: I0702 08:51:11.372026 1128 update_check_scheduler.cc:74] Next update check in 8m12s
Jul 2 08:51:11.374644 systemd[1]: Started locksmithd.service.
Jul 2 08:51:11.377808 systemd[1]: Created slice system-sshd.slice.
Jul 2 08:51:11.429144 env[1132]: time="2024-07-02T08:51:11.428744563Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Jul 2 08:51:11.431790 systemd-logind[1127]: Watching system buttons on /dev/input/event1 (Power Button)
Jul 2 08:51:11.431829 systemd-logind[1127]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 2 08:51:11.432390 systemd-logind[1127]: New seat seat0.
Jul 2 08:51:11.437448 systemd[1]: Started systemd-logind.service.
Jul 2 08:51:11.460641 kernel: EXT4-fs (vda9): resized filesystem to 4635643
Jul 2 08:51:11.463005 env[1132]: time="2024-07-02T08:51:11.462964643Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 2 08:51:11.926216 extend-filesystems[1157]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 2 08:51:11.926216 extend-filesystems[1157]: old_desc_blocks = 1, new_desc_blocks = 3
Jul 2 08:51:11.926216 extend-filesystems[1157]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long.
Jul 2 08:51:11.932317 extend-filesystems[1121]: Resized filesystem in /dev/vda9
Jul 2 08:51:11.927441 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 2 08:51:11.934413 env[1132]: time="2024-07-02T08:51:11.932448620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 2 08:51:11.927782 systemd[1]: Finished extend-filesystems.service.
Jul 2 08:51:11.934807 env[1132]: time="2024-07-02T08:51:11.934712195Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.161-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 2 08:51:11.934881 env[1132]: time="2024-07-02T08:51:11.934812624Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 2 08:51:11.935497 env[1132]: time="2024-07-02T08:51:11.935412739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 08:51:11.935724 env[1132]: time="2024-07-02T08:51:11.935500043Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 2 08:51:11.935724 env[1132]: time="2024-07-02T08:51:11.935576877Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 2 08:51:11.935724 env[1132]: time="2024-07-02T08:51:11.935605791Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 2 08:51:11.938078 env[1132]: time="2024-07-02T08:51:11.938033134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 2 08:51:11.939062 env[1132]: time="2024-07-02T08:51:11.938982184Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 2 08:51:11.939555 env[1132]: time="2024-07-02T08:51:11.939461683Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 08:51:11.939636 env[1132]: time="2024-07-02T08:51:11.939557302Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 2 08:51:11.939912 env[1132]: time="2024-07-02T08:51:11.939867855Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 2 08:51:11.939978 env[1132]: time="2024-07-02T08:51:11.939912859Z" level=info msg="metadata content store policy set" policy=shared
Jul 2 08:51:11.947653 bash[1177]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 08:51:11.949723 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Jul 2 08:51:11.959619 env[1132]: time="2024-07-02T08:51:11.959508260Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 2 08:51:11.959781 env[1132]: time="2024-07-02T08:51:11.959678359Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 2 08:51:11.959816 env[1132]: time="2024-07-02T08:51:11.959795479Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 2 08:51:11.960032 env[1132]: time="2024-07-02T08:51:11.959912398Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 2 08:51:11.960085 env[1132]: time="2024-07-02T08:51:11.960001125Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 2 08:51:11.960158 env[1132]: time="2024-07-02T08:51:11.960118795Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 2 08:51:11.960242 env[1132]: time="2024-07-02T08:51:11.960205909Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 2 08:51:11.960330 env[1132]: time="2024-07-02T08:51:11.960258948Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 2 08:51:11.960416 env[1132]: time="2024-07-02T08:51:11.960344940Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Jul 2 08:51:11.960467 env[1132]: time="2024-07-02T08:51:11.960429929Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 2 08:51:11.960555 env[1132]: time="2024-07-02T08:51:11.960518505Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 2 08:51:11.960667 env[1132]: time="2024-07-02T08:51:11.960595860Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 2 08:51:11.961106 env[1132]: time="2024-07-02T08:51:11.961055663Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 2 08:51:11.961447 env[1132]: time="2024-07-02T08:51:11.961393707Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 2 08:51:11.962420 env[1132]: time="2024-07-02T08:51:11.962375989Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 2 08:51:11.964608 env[1132]: time="2024-07-02T08:51:11.963757580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 2 08:51:11.964608 env[1132]: time="2024-07-02T08:51:11.963879539Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 2 08:51:11.964705 env[1132]: time="2024-07-02T08:51:11.964649533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 2 08:51:11.964739 env[1132]: time="2024-07-02T08:51:11.964703314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 2 08:51:11.964767 env[1132]: time="2024-07-02T08:51:11.964738570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 2 08:51:11.964794 env[1132]: time="2024-07-02T08:51:11.964772233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 2 08:51:11.965521 env[1132]: time="2024-07-02T08:51:11.965018305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 2 08:51:11.965521 env[1132]: time="2024-07-02T08:51:11.965093155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 2 08:51:11.965521 env[1132]: time="2024-07-02T08:51:11.965130856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 2 08:51:11.965521 env[1132]: time="2024-07-02T08:51:11.965162916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 2 08:51:11.965521 env[1132]: time="2024-07-02T08:51:11.965198883Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 2 08:51:11.967775 env[1132]: time="2024-07-02T08:51:11.967727476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 2 08:51:11.967833 env[1132]: time="2024-07-02T08:51:11.967789853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 2 08:51:11.967865 env[1132]: time="2024-07-02T08:51:11.967832392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 2 08:51:11.967893 env[1132]: time="2024-07-02T08:51:11.967867248Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 2 08:51:11.969248 env[1132]: time="2024-07-02T08:51:11.967907223Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Jul 2 08:51:11.969248 env[1132]: time="2024-07-02T08:51:11.967947899Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 2 08:51:11.969248 env[1132]: time="2024-07-02T08:51:11.968001560Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Jul 2 08:51:11.969248 env[1132]: time="2024-07-02T08:51:11.968365332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 2 08:51:11.970970 env[1132]: time="2024-07-02T08:51:11.970818773Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 2 08:51:11.973630 env[1132]: time="2024-07-02T08:51:11.970998380Z" level=info msg="Connect containerd service"
Jul 2 08:51:11.973630 env[1132]: time="2024-07-02T08:51:11.971072880Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 2 08:51:11.980586 env[1132]: time="2024-07-02T08:51:11.980469273Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 08:51:11.982474 env[1132]: time="2024-07-02T08:51:11.980838826Z" level=info msg="Start subscribing containerd event"
Jul 2 08:51:11.984011 env[1132]: time="2024-07-02T08:51:11.983952035Z" level=info msg="Start recovering state"
Jul 2 08:51:11.984209 env[1132]: time="2024-07-02T08:51:11.984169543Z" level=info msg="Start event monitor"
Jul 2 08:51:11.984265 env[1132]: time="2024-07-02T08:51:11.984232841Z" level=info msg="Start snapshots syncer"
Jul 2 08:51:11.984297 env[1132]: time="2024-07-02T08:51:11.984280641Z" level=info msg="Start cni network conf syncer for default"
Jul 2 08:51:11.984324 env[1132]: time="2024-07-02T08:51:11.984303213Z" level=info msg="Start streaming server"
Jul 2 08:51:11.991135 env[1132]: time="2024-07-02T08:51:11.991052852Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 2 08:51:11.991524 env[1132]: time="2024-07-02T08:51:11.991484452Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 2 08:51:11.992092 env[1132]: time="2024-07-02T08:51:11.992053960Z" level=info msg="containerd successfully booted in 0.659641s"
Jul 2 08:51:11.992233 systemd[1]: Started containerd.service.
Jul 2 08:51:12.156142 locksmithd[1178]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 2 08:51:12.160733 sshd_keygen[1153]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 2 08:51:12.202735 systemd[1]: Finished sshd-keygen.service.
Jul 2 08:51:12.204896 systemd[1]: Starting issuegen.service...
Jul 2 08:51:12.206465 systemd[1]: Started sshd@0-172.24.4.4:22-172.24.4.1:32816.service.
Jul 2 08:51:12.214068 systemd[1]: issuegen.service: Deactivated successfully.
Jul 2 08:51:12.214251 systemd[1]: Finished issuegen.service.
Jul 2 08:51:12.216316 systemd[1]: Starting systemd-user-sessions.service...
Jul 2 08:51:12.226644 systemd[1]: Finished systemd-user-sessions.service.
Jul 2 08:51:12.228733 systemd[1]: Started getty@tty1.service.
Jul 2 08:51:12.230485 systemd[1]: Started serial-getty@ttyS0.service.
Jul 2 08:51:12.231124 systemd[1]: Reached target getty.target.
Jul 2 08:51:12.516796 tar[1131]: linux-amd64/LICENSE
Jul 2 08:51:12.517226 tar[1131]: linux-amd64/README.md
Jul 2 08:51:12.522316 systemd[1]: Finished prepare-helm.service.
Jul 2 08:51:13.639935 sshd[1197]: Accepted publickey for core from 172.24.4.1 port 32816 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:51:13.686574 sshd[1197]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:51:13.926561 systemd-logind[1127]: New session 1 of user core.
Jul 2 08:51:13.930598 systemd[1]: Created slice user-500.slice.
Jul 2 08:51:13.934480 systemd[1]: Starting user-runtime-dir@500.service...
Jul 2 08:51:13.990776 systemd[1]: Finished user-runtime-dir@500.service.
Jul 2 08:51:13.996727 systemd[1]: Starting user@500.service...
Jul 2 08:51:14.098536 (systemd)[1207]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:51:14.644098 systemd[1207]: Queued start job for default target default.target.
Jul 2 08:51:14.645946 systemd[1207]: Reached target paths.target.
Jul 2 08:51:14.646013 systemd[1207]: Reached target sockets.target.
Jul 2 08:51:14.646052 systemd[1207]: Reached target timers.target.
Jul 2 08:51:14.646087 systemd[1207]: Reached target basic.target.
Jul 2 08:51:14.646205 systemd[1207]: Reached target default.target.
Jul 2 08:51:14.646274 systemd[1207]: Startup finished in 533ms.
Jul 2 08:51:14.646483 systemd[1]: Started user@500.service.
Jul 2 08:51:14.650295 systemd[1]: Started session-1.scope.
Jul 2 08:51:14.670194 systemd[1]: Started kubelet.service.
Jul 2 08:51:15.028125 systemd[1]: Started sshd@1-172.24.4.4:22-172.24.4.1:51398.service.
Jul 2 08:51:16.352927 kubelet[1216]: E0702 08:51:16.352602 1216 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 08:51:16.357754 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 08:51:16.358031 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 08:51:16.358530 systemd[1]: kubelet.service: Consumed 2.017s CPU time.
Jul 2 08:51:17.012293 sshd[1224]: Accepted publickey for core from 172.24.4.1 port 51398 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:51:17.484085 sshd[1224]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:51:17.495399 systemd-logind[1127]: New session 2 of user core.
Jul 2 08:51:17.495467 systemd[1]: Started session-2.scope.
Jul 2 08:51:18.044852 sshd[1224]: pam_unix(sshd:session): session closed for user core
Jul 2 08:51:18.050851 systemd[1]: Started sshd@2-172.24.4.4:22-172.24.4.1:51412.service.
Jul 2 08:51:18.057457 systemd[1]: sshd@1-172.24.4.4:22-172.24.4.1:51398.service: Deactivated successfully.
Jul 2 08:51:18.059245 systemd[1]: session-2.scope: Deactivated successfully.
Jul 2 08:51:18.062451 systemd-logind[1127]: Session 2 logged out. Waiting for processes to exit.
Jul 2 08:51:18.064962 systemd-logind[1127]: Removed session 2.
Jul 2 08:51:18.305552 coreos-metadata[1116]: Jul 02 08:51:18.304 WARN failed to locate config-drive, using the metadata service API instead
Jul 2 08:51:18.398958 coreos-metadata[1116]: Jul 02 08:51:18.398 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jul 2 08:51:18.868169 coreos-metadata[1116]: Jul 02 08:51:18.868 INFO Fetch successful
Jul 2 08:51:18.868169 coreos-metadata[1116]: Jul 02 08:51:18.868 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jul 2 08:51:18.884491 coreos-metadata[1116]: Jul 02 08:51:18.884 INFO Fetch successful
Jul 2 08:51:18.890006 unknown[1116]: wrote ssh authorized keys file for user: core
Jul 2 08:51:18.928836 update-ssh-keys[1237]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 08:51:18.930672 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Jul 2 08:51:18.931765 systemd[1]: Reached target multi-user.target.
Jul 2 08:51:18.935870 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Jul 2 08:51:18.968744 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jul 2 08:51:18.969408 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Jul 2 08:51:18.970099 systemd[1]: Startup finished in 977ms (kernel) + 8.356s (initrd) + 15.181s (userspace) = 24.515s.
Jul 2 08:51:19.381554 sshd[1232]: Accepted publickey for core from 172.24.4.1 port 51412 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:51:19.384109 sshd[1232]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:51:19.393074 systemd-logind[1127]: New session 3 of user core.
Jul 2 08:51:19.393425 systemd[1]: Started session-3.scope.
Jul 2 08:51:20.198305 sshd[1232]: pam_unix(sshd:session): session closed for user core
Jul 2 08:51:20.202859 systemd[1]: sshd@2-172.24.4.4:22-172.24.4.1:51412.service: Deactivated successfully.
Jul 2 08:51:20.203773 systemd[1]: session-3.scope: Deactivated successfully.
Jul 2 08:51:20.205452 systemd-logind[1127]: Session 3 logged out. Waiting for processes to exit.
Jul 2 08:51:20.206905 systemd-logind[1127]: Removed session 3.
Jul 2 08:51:26.475251 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 2 08:51:26.475819 systemd[1]: Stopped kubelet.service.
Jul 2 08:51:26.475902 systemd[1]: kubelet.service: Consumed 2.017s CPU time.
Jul 2 08:51:26.479546 systemd[1]: Starting kubelet.service...
Jul 2 08:51:26.774184 systemd[1]: Started kubelet.service.
Jul 2 08:51:26.946317 kubelet[1246]: E0702 08:51:26.946268 1246 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 08:51:26.955441 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 08:51:26.955599 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 08:51:30.209785 systemd[1]: Started sshd@3-172.24.4.4:22-172.24.4.1:36332.service.
Jul 2 08:51:31.514492 sshd[1254]: Accepted publickey for core from 172.24.4.1 port 36332 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:51:31.517405 sshd[1254]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:51:31.527699 systemd[1]: Started session-4.scope.
Jul 2 08:51:31.529775 systemd-logind[1127]: New session 4 of user core.
Jul 2 08:51:32.107893 sshd[1254]: pam_unix(sshd:session): session closed for user core
Jul 2 08:51:32.114414 systemd[1]: Started sshd@4-172.24.4.4:22-172.24.4.1:36336.service.
Jul 2 08:51:32.116761 systemd[1]: sshd@3-172.24.4.4:22-172.24.4.1:36332.service: Deactivated successfully.
Jul 2 08:51:32.118322 systemd[1]: session-4.scope: Deactivated successfully.
Jul 2 08:51:32.121928 systemd-logind[1127]: Session 4 logged out. Waiting for processes to exit.
Jul 2 08:51:32.124793 systemd-logind[1127]: Removed session 4.
Jul 2 08:51:33.368856 sshd[1259]: Accepted publickey for core from 172.24.4.1 port 36336 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:51:33.372034 sshd[1259]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:51:33.384117 systemd-logind[1127]: New session 5 of user core.
Jul 2 08:51:33.384877 systemd[1]: Started session-5.scope.
Jul 2 08:51:34.133479 sshd[1259]: pam_unix(sshd:session): session closed for user core
Jul 2 08:51:34.140245 systemd[1]: Started sshd@5-172.24.4.4:22-172.24.4.1:36344.service.
Jul 2 08:51:34.144353 systemd[1]: sshd@4-172.24.4.4:22-172.24.4.1:36336.service: Deactivated successfully.
Jul 2 08:51:34.145844 systemd[1]: session-5.scope: Deactivated successfully.
Jul 2 08:51:34.147587 systemd-logind[1127]: Session 5 logged out. Waiting for processes to exit.
Jul 2 08:51:34.150252 systemd-logind[1127]: Removed session 5.
Jul 2 08:51:35.598559 sshd[1265]: Accepted publickey for core from 172.24.4.1 port 36344 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:51:35.601767 sshd[1265]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:51:35.609316 systemd[1]: Started session-6.scope.
Jul 2 08:51:35.610067 systemd-logind[1127]: New session 6 of user core.
Jul 2 08:51:36.434221 systemd[1]: Started sshd@6-172.24.4.4:22-172.24.4.1:53502.service.
Jul 2 08:51:36.434386 sshd[1265]: pam_unix(sshd:session): session closed for user core
Jul 2 08:51:36.439278 systemd[1]: sshd@5-172.24.4.4:22-172.24.4.1:36344.service: Deactivated successfully.
Jul 2 08:51:36.440273 systemd[1]: session-6.scope: Deactivated successfully.
Jul 2 08:51:36.441698 systemd-logind[1127]: Session 6 logged out. Waiting for processes to exit.
Jul 2 08:51:36.443129 systemd-logind[1127]: Removed session 6.
Jul 2 08:51:36.974669 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 2 08:51:36.975062 systemd[1]: Stopped kubelet.service.
Jul 2 08:51:36.977662 systemd[1]: Starting kubelet.service...
Jul 2 08:51:37.282690 systemd[1]: Started kubelet.service.
Jul 2 08:51:37.591567 kubelet[1278]: E0702 08:51:37.589338 1278 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 08:51:37.596925 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 08:51:37.597231 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 08:51:37.941085 sshd[1271]: Accepted publickey for core from 172.24.4.1 port 53502 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:51:37.944145 sshd[1271]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:51:37.954989 systemd-logind[1127]: New session 7 of user core.
Jul 2 08:51:37.955855 systemd[1]: Started session-7.scope.
Jul 2 08:51:38.416157 sudo[1286]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 2 08:51:38.417472 sudo[1286]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 08:51:38.483195 systemd[1]: Starting docker.service...
Jul 2 08:51:38.563214 env[1296]: time="2024-07-02T08:51:38.563147229Z" level=info msg="Starting up"
Jul 2 08:51:38.565189 env[1296]: time="2024-07-02T08:51:38.565157188Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 2 08:51:38.565283 env[1296]: time="2024-07-02T08:51:38.565266764Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 2 08:51:38.565414 env[1296]: time="2024-07-02T08:51:38.565394263Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Jul 2 08:51:38.565477 env[1296]: time="2024-07-02T08:51:38.565463603Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 2 08:51:38.568098 env[1296]: time="2024-07-02T08:51:38.568054582Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 2 08:51:38.568098 env[1296]: time="2024-07-02T08:51:38.568078537Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 2 08:51:38.568098 env[1296]: time="2024-07-02T08:51:38.568095529Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Jul 2 08:51:38.568098 env[1296]: time="2024-07-02T08:51:38.568108163Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 2 08:51:38.576913 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3290525655-merged.mount: Deactivated successfully.
Jul 2 08:51:38.642392 env[1296]: time="2024-07-02T08:51:38.642326960Z" level=info msg="Loading containers: start."
Jul 2 08:51:38.864184 kernel: Initializing XFRM netlink socket
Jul 2 08:51:38.908941 env[1296]: time="2024-07-02T08:51:38.908699021Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jul 2 08:51:39.016481 systemd-networkd[972]: docker0: Link UP
Jul 2 08:51:39.033243 env[1296]: time="2024-07-02T08:51:39.033178843Z" level=info msg="Loading containers: done."
Jul 2 08:51:39.054676 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3504334477-merged.mount: Deactivated successfully.
Jul 2 08:51:39.063384 env[1296]: time="2024-07-02T08:51:39.063290187Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 2 08:51:39.063541 env[1296]: time="2024-07-02T08:51:39.063485062Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Jul 2 08:51:39.063609 env[1296]: time="2024-07-02T08:51:39.063599747Z" level=info msg="Daemon has completed initialization"
Jul 2 08:51:39.097185 systemd[1]: Started docker.service.
Jul 2 08:51:39.110223 env[1296]: time="2024-07-02T08:51:39.110162586Z" level=info msg="API listen on /run/docker.sock"
Jul 2 08:51:40.863979 env[1132]: time="2024-07-02T08:51:40.863905139Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\""
Jul 2 08:51:41.853167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount9484695.mount: Deactivated successfully.
Jul 2 08:51:45.273092 env[1132]: time="2024-07-02T08:51:45.273023915Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:51:45.277142 env[1132]: time="2024-07-02T08:51:45.277086957Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:51:45.284686 env[1132]: time="2024-07-02T08:51:45.284645244Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:51:45.286931 env[1132]: time="2024-07-02T08:51:45.286892294Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:51:45.290348 env[1132]: time="2024-07-02T08:51:45.290296677Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\" returns image reference \"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\"" Jul 2 08:51:45.304870 env[1132]: time="2024-07-02T08:51:45.304785222Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\"" Jul 2 08:51:47.725728 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 2 08:51:47.726226 systemd[1]: Stopped kubelet.service. Jul 2 08:51:47.729998 systemd[1]: Starting kubelet.service... Jul 2 08:51:47.850686 systemd[1]: Started kubelet.service. 
Jul 2 08:51:48.657363 kubelet[1435]: E0702 08:51:48.657255 1435 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:51:48.663012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:51:48.663372 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:51:49.180147 env[1132]: time="2024-07-02T08:51:49.180049469Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:51:49.191237 env[1132]: time="2024-07-02T08:51:49.191078335Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:51:49.198160 env[1132]: time="2024-07-02T08:51:49.198080747Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:51:49.203027 env[1132]: time="2024-07-02T08:51:49.202946338Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:51:49.205329 env[1132]: time="2024-07-02T08:51:49.205268530Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\" returns image reference \"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\"" Jul 2 08:51:49.230338 env[1132]: time="2024-07-02T08:51:49.230255350Z" 
level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\"" Jul 2 08:51:51.140271 env[1132]: time="2024-07-02T08:51:51.140178902Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:51:51.143026 env[1132]: time="2024-07-02T08:51:51.142964324Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:51:51.146180 env[1132]: time="2024-07-02T08:51:51.146138431Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:51:51.149766 env[1132]: time="2024-07-02T08:51:51.149742660Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:51:51.151489 env[1132]: time="2024-07-02T08:51:51.151457379Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\" returns image reference \"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\"" Jul 2 08:51:51.166746 env[1132]: time="2024-07-02T08:51:51.166698663Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\"" Jul 2 08:51:53.297498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3166533350.mount: Deactivated successfully. 
Jul 2 08:51:54.187467 env[1132]: time="2024-07-02T08:51:54.187330640Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:51:54.190821 env[1132]: time="2024-07-02T08:51:54.190744070Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:51:54.193438 env[1132]: time="2024-07-02T08:51:54.193362391Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:51:54.196313 env[1132]: time="2024-07-02T08:51:54.196238948Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:51:54.197107 env[1132]: time="2024-07-02T08:51:54.197060869Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\" returns image reference \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\"" Jul 2 08:51:54.217284 env[1132]: time="2024-07-02T08:51:54.217232017Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jul 2 08:51:54.957733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3526760690.mount: Deactivated successfully. Jul 2 08:51:56.472843 update_engine[1128]: I0702 08:51:56.472702 1128 update_attempter.cc:509] Updating boot flags... 
Jul 2 08:51:56.684700 env[1132]: time="2024-07-02T08:51:56.683804795Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:51:56.748998 env[1132]: time="2024-07-02T08:51:56.747239384Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:51:56.761049 env[1132]: time="2024-07-02T08:51:56.760976323Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:51:56.763387 env[1132]: time="2024-07-02T08:51:56.763334739Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:51:56.764479 env[1132]: time="2024-07-02T08:51:56.764442438Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jul 2 08:51:56.781455 env[1132]: time="2024-07-02T08:51:56.781422872Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 08:51:57.513521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2377047430.mount: Deactivated successfully. 
Jul 2 08:51:57.527071 env[1132]: time="2024-07-02T08:51:57.527008258Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:51:57.532647 env[1132]: time="2024-07-02T08:51:57.532562186Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:51:57.537659 env[1132]: time="2024-07-02T08:51:57.537561729Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:51:57.543135 env[1132]: time="2024-07-02T08:51:57.543063158Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:51:57.547332 env[1132]: time="2024-07-02T08:51:57.545576444Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jul 2 08:51:57.573318 env[1132]: time="2024-07-02T08:51:57.573233722Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jul 2 08:51:58.241422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3190155487.mount: Deactivated successfully. Jul 2 08:51:58.723121 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 2 08:51:58.723310 systemd[1]: Stopped kubelet.service. Jul 2 08:51:58.726532 systemd[1]: Starting kubelet.service... Jul 2 08:51:59.320814 systemd[1]: Started kubelet.service. 
Jul 2 08:51:59.724229 kubelet[1485]: E0702 08:51:59.724110 1485 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:51:59.729418 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:51:59.729816 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:52:02.744855 env[1132]: time="2024-07-02T08:52:02.744784254Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:52:02.751572 env[1132]: time="2024-07-02T08:52:02.751528368Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:52:02.756102 env[1132]: time="2024-07-02T08:52:02.756055939Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:52:02.760781 env[1132]: time="2024-07-02T08:52:02.760733314Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:52:02.763041 env[1132]: time="2024-07-02T08:52:02.762982056Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jul 2 08:52:07.114417 systemd[1]: Stopped kubelet.service. Jul 2 08:52:07.118017 systemd[1]: Starting kubelet.service... 
Jul 2 08:52:07.144766 systemd[1]: Reloading. Jul 2 08:52:07.264106 /usr/lib/systemd/system-generators/torcx-generator[1579]: time="2024-07-02T08:52:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 08:52:07.264433 /usr/lib/systemd/system-generators/torcx-generator[1579]: time="2024-07-02T08:52:07Z" level=info msg="torcx already run" Jul 2 08:52:07.350531 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 08:52:07.350561 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 08:52:07.375598 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:52:07.525990 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 2 08:52:07.526141 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 2 08:52:07.526601 systemd[1]: Stopped kubelet.service. Jul 2 08:52:07.529243 systemd[1]: Starting kubelet.service... Jul 2 08:52:08.350531 systemd[1]: Started kubelet.service. Jul 2 08:52:08.493102 kubelet[1630]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 08:52:08.493102 kubelet[1630]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jul 2 08:52:08.493102 kubelet[1630]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 08:52:08.493637 kubelet[1630]: I0702 08:52:08.493326 1630 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 08:52:08.825754 kubelet[1630]: I0702 08:52:08.825705 1630 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 08:52:08.826160 kubelet[1630]: I0702 08:52:08.826132 1630 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 08:52:08.826820 kubelet[1630]: I0702 08:52:08.826788 1630 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 08:52:08.872727 kubelet[1630]: E0702 08:52:08.872685 1630 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.4:6443: connect: connection refused Jul 2 08:52:08.878328 kubelet[1630]: I0702 08:52:08.878270 1630 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 08:52:08.909325 kubelet[1630]: I0702 08:52:08.909256 1630 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 08:52:08.910096 kubelet[1630]: I0702 08:52:08.910064 1630 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 08:52:08.910681 kubelet[1630]: I0702 08:52:08.910607 1630 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 08:52:08.910979 kubelet[1630]: I0702 08:52:08.910951 1630 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 08:52:08.911147 kubelet[1630]: I0702 08:52:08.911123 1630 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 08:52:08.911475 kubelet[1630]: I0702 
08:52:08.911446 1630 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:52:08.911993 kubelet[1630]: I0702 08:52:08.911938 1630 kubelet.go:396] "Attempting to sync node with API server" Jul 2 08:52:08.912890 kubelet[1630]: I0702 08:52:08.912862 1630 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 08:52:08.913113 kubelet[1630]: W0702 08:52:08.913021 1630 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.24.4.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-5-3-17f1331597.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.4:6443: connect: connection refused Jul 2 08:52:08.913220 kubelet[1630]: E0702 08:52:08.913125 1630 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-5-3-17f1331597.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.4:6443: connect: connection refused Jul 2 08:52:08.913338 kubelet[1630]: I0702 08:52:08.913087 1630 kubelet.go:312] "Adding apiserver pod source" Jul 2 08:52:08.913487 kubelet[1630]: I0702 08:52:08.913466 1630 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 08:52:08.924530 kubelet[1630]: W0702 08:52:08.924449 1630 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.24.4.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.4:6443: connect: connection refused Jul 2 08:52:08.924828 kubelet[1630]: E0702 08:52:08.924800 1630 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.4:6443: connect: connection refused Jul 2 08:52:08.925095 kubelet[1630]: I0702 08:52:08.925053 1630 
kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 08:52:08.931554 kubelet[1630]: I0702 08:52:08.931450 1630 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 08:52:08.931753 kubelet[1630]: W0702 08:52:08.931628 1630 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 08:52:08.932785 kubelet[1630]: I0702 08:52:08.932693 1630 server.go:1256] "Started kubelet" Jul 2 08:52:08.933080 kubelet[1630]: I0702 08:52:08.933038 1630 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 08:52:08.935109 kubelet[1630]: I0702 08:52:08.935075 1630 server.go:461] "Adding debug handlers to kubelet server" Jul 2 08:52:08.944375 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Jul 2 08:52:08.944561 kubelet[1630]: I0702 08:52:08.943260 1630 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 08:52:08.944561 kubelet[1630]: I0702 08:52:08.943523 1630 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 08:52:08.952808 kubelet[1630]: E0702 08:52:08.952742 1630 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.4:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.4:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510-3-5-3-17f1331597.novalocal.17de5954e98e6a81 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-5-3-17f1331597.novalocal,UID:ci-3510-3-5-3-17f1331597.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-5-3-17f1331597.novalocal,},FirstTimestamp:2024-07-02 
08:52:08.932657793 +0000 UTC m=+0.562990674,LastTimestamp:2024-07-02 08:52:08.932657793 +0000 UTC m=+0.562990674,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-5-3-17f1331597.novalocal,}" Jul 2 08:52:08.955125 kubelet[1630]: I0702 08:52:08.955081 1630 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 08:52:08.958702 kubelet[1630]: I0702 08:52:08.958645 1630 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 08:52:08.960032 kubelet[1630]: I0702 08:52:08.959999 1630 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 08:52:08.960319 kubelet[1630]: I0702 08:52:08.960293 1630 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 08:52:08.961253 kubelet[1630]: W0702 08:52:08.961179 1630 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.24.4.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.4:6443: connect: connection refused Jul 2 08:52:08.961463 kubelet[1630]: E0702 08:52:08.961438 1630 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.4:6443: connect: connection refused Jul 2 08:52:08.962060 kubelet[1630]: E0702 08:52:08.962025 1630 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 08:52:08.963254 kubelet[1630]: E0702 08:52:08.963220 1630 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-5-3-17f1331597.novalocal?timeout=10s\": dial tcp 172.24.4.4:6443: connect: connection refused" interval="200ms" Jul 2 08:52:08.966800 kubelet[1630]: I0702 08:52:08.966764 1630 factory.go:221] Registration of the containerd container factory successfully Jul 2 08:52:08.967003 kubelet[1630]: I0702 08:52:08.966981 1630 factory.go:221] Registration of the systemd container factory successfully Jul 2 08:52:08.967262 kubelet[1630]: I0702 08:52:08.967225 1630 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 08:52:09.001044 kubelet[1630]: I0702 08:52:09.001015 1630 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 08:52:09.002291 kubelet[1630]: I0702 08:52:09.002279 1630 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 08:52:09.002384 kubelet[1630]: I0702 08:52:09.002374 1630 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 08:52:09.002551 kubelet[1630]: I0702 08:52:09.002540 1630 kubelet.go:2329] "Starting kubelet main sync loop" Jul 2 08:52:09.002678 kubelet[1630]: E0702 08:52:09.002667 1630 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 08:52:09.003493 kubelet[1630]: W0702 08:52:09.003471 1630 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.24.4.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.4:6443: connect: connection refused Jul 2 08:52:09.003608 kubelet[1630]: E0702 08:52:09.003596 1630 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.4:6443: connect: connection refused Jul 2 08:52:09.015331 kubelet[1630]: I0702 08:52:09.015308 1630 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 08:52:09.015331 kubelet[1630]: I0702 08:52:09.015329 1630 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 08:52:09.015476 kubelet[1630]: I0702 08:52:09.015343 1630 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:52:09.019480 kubelet[1630]: I0702 08:52:09.019457 1630 policy_none.go:49] "None policy: Start" Jul 2 08:52:09.020325 kubelet[1630]: I0702 08:52:09.020298 1630 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 08:52:09.020325 kubelet[1630]: I0702 08:52:09.020323 1630 state_mem.go:35] "Initializing new in-memory state store" Jul 2 08:52:09.036847 systemd[1]: Created slice kubepods.slice. 
Jul 2 08:52:09.044416 systemd[1]: Created slice kubepods-burstable.slice. Jul 2 08:52:09.047937 systemd[1]: Created slice kubepods-besteffort.slice. Jul 2 08:52:09.053258 kubelet[1630]: I0702 08:52:09.053232 1630 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 08:52:09.053926 kubelet[1630]: I0702 08:52:09.053915 1630 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 08:52:09.056593 kubelet[1630]: E0702 08:52:09.056569 1630 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-5-3-17f1331597.novalocal\" not found" Jul 2 08:52:09.060023 kubelet[1630]: I0702 08:52:09.060008 1630 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:09.060533 kubelet[1630]: E0702 08:52:09.060521 1630 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.4:6443/api/v1/nodes\": dial tcp 172.24.4.4:6443: connect: connection refused" node="ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:09.105700 kubelet[1630]: I0702 08:52:09.102937 1630 topology_manager.go:215] "Topology Admit Handler" podUID="c8907a8dacad82cf9cfd970d7b127116" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:09.110074 kubelet[1630]: I0702 08:52:09.110028 1630 topology_manager.go:215] "Topology Admit Handler" podUID="701d24d87908b16c6b949f8249dfadc6" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:09.113439 kubelet[1630]: I0702 08:52:09.113381 1630 topology_manager.go:215] "Topology Admit Handler" podUID="19cda551d300608b778b79ca24bd485b" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:09.125005 systemd[1]: Created slice kubepods-burstable-podc8907a8dacad82cf9cfd970d7b127116.slice. 
Jul 2 08:52:09.144394 systemd[1]: Created slice kubepods-burstable-pod701d24d87908b16c6b949f8249dfadc6.slice. Jul 2 08:52:09.161921 systemd[1]: Created slice kubepods-burstable-pod19cda551d300608b778b79ca24bd485b.slice. Jul 2 08:52:09.162909 kubelet[1630]: I0702 08:52:09.162855 1630 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/701d24d87908b16c6b949f8249dfadc6-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-5-3-17f1331597.novalocal\" (UID: \"701d24d87908b16c6b949f8249dfadc6\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:09.163033 kubelet[1630]: I0702 08:52:09.162969 1630 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/701d24d87908b16c6b949f8249dfadc6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-5-3-17f1331597.novalocal\" (UID: \"701d24d87908b16c6b949f8249dfadc6\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:09.163125 kubelet[1630]: I0702 08:52:09.163040 1630 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c8907a8dacad82cf9cfd970d7b127116-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-5-3-17f1331597.novalocal\" (UID: \"c8907a8dacad82cf9cfd970d7b127116\") " pod="kube-system/kube-apiserver-ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:09.163125 kubelet[1630]: I0702 08:52:09.163100 1630 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c8907a8dacad82cf9cfd970d7b127116-k8s-certs\") pod \"kube-apiserver-ci-3510-3-5-3-17f1331597.novalocal\" (UID: \"c8907a8dacad82cf9cfd970d7b127116\") " 
pod="kube-system/kube-apiserver-ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:09.163278 kubelet[1630]: I0702 08:52:09.163168 1630 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/701d24d87908b16c6b949f8249dfadc6-ca-certs\") pod \"kube-controller-manager-ci-3510-3-5-3-17f1331597.novalocal\" (UID: \"701d24d87908b16c6b949f8249dfadc6\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:09.163278 kubelet[1630]: I0702 08:52:09.163226 1630 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/701d24d87908b16c6b949f8249dfadc6-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-5-3-17f1331597.novalocal\" (UID: \"701d24d87908b16c6b949f8249dfadc6\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:09.163416 kubelet[1630]: I0702 08:52:09.163283 1630 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/701d24d87908b16c6b949f8249dfadc6-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-5-3-17f1331597.novalocal\" (UID: \"701d24d87908b16c6b949f8249dfadc6\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:09.163416 kubelet[1630]: I0702 08:52:09.163341 1630 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/19cda551d300608b778b79ca24bd485b-kubeconfig\") pod \"kube-scheduler-ci-3510-3-5-3-17f1331597.novalocal\" (UID: \"19cda551d300608b778b79ca24bd485b\") " pod="kube-system/kube-scheduler-ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:09.163416 kubelet[1630]: I0702 08:52:09.163397 1630 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c8907a8dacad82cf9cfd970d7b127116-ca-certs\") pod \"kube-apiserver-ci-3510-3-5-3-17f1331597.novalocal\" (UID: \"c8907a8dacad82cf9cfd970d7b127116\") " pod="kube-system/kube-apiserver-ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:09.165597 kubelet[1630]: E0702 08:52:09.165518 1630 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-5-3-17f1331597.novalocal?timeout=10s\": dial tcp 172.24.4.4:6443: connect: connection refused" interval="400ms" Jul 2 08:52:09.264255 kubelet[1630]: I0702 08:52:09.264149 1630 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:09.265149 kubelet[1630]: E0702 08:52:09.265089 1630 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.4:6443/api/v1/nodes\": dial tcp 172.24.4.4:6443: connect: connection refused" node="ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:09.447568 env[1132]: time="2024-07-02T08:52:09.444679143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-5-3-17f1331597.novalocal,Uid:c8907a8dacad82cf9cfd970d7b127116,Namespace:kube-system,Attempt:0,}" Jul 2 08:52:09.454232 env[1132]: time="2024-07-02T08:52:09.454167649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-5-3-17f1331597.novalocal,Uid:701d24d87908b16c6b949f8249dfadc6,Namespace:kube-system,Attempt:0,}" Jul 2 08:52:09.470119 env[1132]: time="2024-07-02T08:52:09.470046006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-5-3-17f1331597.novalocal,Uid:19cda551d300608b778b79ca24bd485b,Namespace:kube-system,Attempt:0,}" Jul 2 08:52:09.566457 kubelet[1630]: E0702 08:52:09.566378 1630 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.24.4.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-5-3-17f1331597.novalocal?timeout=10s\": dial tcp 172.24.4.4:6443: connect: connection refused" interval="800ms" Jul 2 08:52:09.671196 kubelet[1630]: I0702 08:52:09.670374 1630 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:09.671196 kubelet[1630]: E0702 08:52:09.671156 1630 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.4:6443/api/v1/nodes\": dial tcp 172.24.4.4:6443: connect: connection refused" node="ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:09.769473 kubelet[1630]: W0702 08:52:09.769346 1630 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.24.4.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.4:6443: connect: connection refused Jul 2 08:52:09.769473 kubelet[1630]: E0702 08:52:09.769476 1630 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.4:6443: connect: connection refused Jul 2 08:52:10.074113 kubelet[1630]: W0702 08:52:10.073376 1630 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.24.4.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.4:6443: connect: connection refused Jul 2 08:52:10.074113 kubelet[1630]: E0702 08:52:10.073504 1630 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.4:6443: connect: connection refused Jul 2 08:52:10.095911 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1805265497.mount: Deactivated successfully. Jul 2 08:52:10.106517 env[1132]: time="2024-07-02T08:52:10.106409146Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:52:10.113545 env[1132]: time="2024-07-02T08:52:10.113455339Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:52:10.117059 env[1132]: time="2024-07-02T08:52:10.116977750Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:52:10.121321 env[1132]: time="2024-07-02T08:52:10.121272842Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:52:10.125878 env[1132]: time="2024-07-02T08:52:10.125806293Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:52:10.129265 env[1132]: time="2024-07-02T08:52:10.129174173Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:52:10.133208 env[1132]: time="2024-07-02T08:52:10.133076508Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:52:10.136764 env[1132]: time="2024-07-02T08:52:10.135461901Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:52:10.141779 env[1132]: time="2024-07-02T08:52:10.141674968Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:52:10.147869 env[1132]: time="2024-07-02T08:52:10.147773090Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:52:10.152905 env[1132]: time="2024-07-02T08:52:10.152815207Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:52:10.162044 env[1132]: time="2024-07-02T08:52:10.161952439Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:52:10.258010 env[1132]: time="2024-07-02T08:52:10.257864226Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:52:10.258010 env[1132]: time="2024-07-02T08:52:10.257949616Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:52:10.258350 env[1132]: time="2024-07-02T08:52:10.257975585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:52:10.258969 env[1132]: time="2024-07-02T08:52:10.258859948Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/901ea822401e1c007d4f28245f41a20b883a43f0041a4f91431696333ca60675 pid=1673 runtime=io.containerd.runc.v2 Jul 2 08:52:10.271753 env[1132]: time="2024-07-02T08:52:10.271515083Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:52:10.271971 env[1132]: time="2024-07-02T08:52:10.271869379Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:52:10.272491 env[1132]: time="2024-07-02T08:52:10.271998722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:52:10.273457 env[1132]: time="2024-07-02T08:52:10.273360341Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c96920347772cb6185b95c1907b42b0a7e826790898ec63e422801ee2b800a32 pid=1676 runtime=io.containerd.runc.v2 Jul 2 08:52:10.289962 env[1132]: time="2024-07-02T08:52:10.289882354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:52:10.290071 env[1132]: time="2024-07-02T08:52:10.289968156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:52:10.290071 env[1132]: time="2024-07-02T08:52:10.289997902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:52:10.290175 env[1132]: time="2024-07-02T08:52:10.290140128Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6cecf574412fcbac93e29c96af8d235b444c5e14d4cb4c60ef751ade1d3af466 pid=1712 runtime=io.containerd.runc.v2 Jul 2 08:52:10.298696 systemd[1]: Started cri-containerd-901ea822401e1c007d4f28245f41a20b883a43f0041a4f91431696333ca60675.scope. Jul 2 08:52:10.309969 systemd[1]: Started cri-containerd-c96920347772cb6185b95c1907b42b0a7e826790898ec63e422801ee2b800a32.scope. Jul 2 08:52:10.325202 systemd[1]: Started cri-containerd-6cecf574412fcbac93e29c96af8d235b444c5e14d4cb4c60ef751ade1d3af466.scope. Jul 2 08:52:10.367313 kubelet[1630]: E0702 08:52:10.367273 1630 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-5-3-17f1331597.novalocal?timeout=10s\": dial tcp 172.24.4.4:6443: connect: connection refused" interval="1.6s" Jul 2 08:52:10.394978 env[1132]: time="2024-07-02T08:52:10.394925614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-5-3-17f1331597.novalocal,Uid:c8907a8dacad82cf9cfd970d7b127116,Namespace:kube-system,Attempt:0,} returns sandbox id \"6cecf574412fcbac93e29c96af8d235b444c5e14d4cb4c60ef751ade1d3af466\"" Jul 2 08:52:10.399534 env[1132]: time="2024-07-02T08:52:10.399506062Z" level=info msg="CreateContainer within sandbox \"6cecf574412fcbac93e29c96af8d235b444c5e14d4cb4c60ef751ade1d3af466\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 08:52:10.407358 env[1132]: time="2024-07-02T08:52:10.407320209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-5-3-17f1331597.novalocal,Uid:19cda551d300608b778b79ca24bd485b,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"901ea822401e1c007d4f28245f41a20b883a43f0041a4f91431696333ca60675\"" Jul 2 08:52:10.411372 env[1132]: time="2024-07-02T08:52:10.408330928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-5-3-17f1331597.novalocal,Uid:701d24d87908b16c6b949f8249dfadc6,Namespace:kube-system,Attempt:0,} returns sandbox id \"c96920347772cb6185b95c1907b42b0a7e826790898ec63e422801ee2b800a32\"" Jul 2 08:52:10.414013 env[1132]: time="2024-07-02T08:52:10.413978153Z" level=info msg="CreateContainer within sandbox \"c96920347772cb6185b95c1907b42b0a7e826790898ec63e422801ee2b800a32\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 08:52:10.416112 env[1132]: time="2024-07-02T08:52:10.416076296Z" level=info msg="CreateContainer within sandbox \"901ea822401e1c007d4f28245f41a20b883a43f0041a4f91431696333ca60675\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 08:52:10.427197 kubelet[1630]: W0702 08:52:10.427132 1630 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.24.4.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.4:6443: connect: connection refused Jul 2 08:52:10.427317 kubelet[1630]: E0702 08:52:10.427208 1630 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.4:6443: connect: connection refused Jul 2 08:52:10.444284 env[1132]: time="2024-07-02T08:52:10.444245870Z" level=info msg="CreateContainer within sandbox \"6cecf574412fcbac93e29c96af8d235b444c5e14d4cb4c60ef751ade1d3af466\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0637de576237b16b4e258b3f585d0f27a48d6cfabeb57af9e5fe0b841fce46b4\"" Jul 2 08:52:10.445034 env[1132]: time="2024-07-02T08:52:10.445011810Z" 
level=info msg="StartContainer for \"0637de576237b16b4e258b3f585d0f27a48d6cfabeb57af9e5fe0b841fce46b4\"" Jul 2 08:52:10.455274 env[1132]: time="2024-07-02T08:52:10.455221950Z" level=info msg="CreateContainer within sandbox \"901ea822401e1c007d4f28245f41a20b883a43f0041a4f91431696333ca60675\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5d7449014449b413a3c75aa625aabd7f8c68157aa8101f874be9c7f42c963c4d\"" Jul 2 08:52:10.455887 env[1132]: time="2024-07-02T08:52:10.455853327Z" level=info msg="StartContainer for \"5d7449014449b413a3c75aa625aabd7f8c68157aa8101f874be9c7f42c963c4d\"" Jul 2 08:52:10.462230 env[1132]: time="2024-07-02T08:52:10.462167454Z" level=info msg="CreateContainer within sandbox \"c96920347772cb6185b95c1907b42b0a7e826790898ec63e422801ee2b800a32\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4fca6779cd1598a7a2a760dfdeb576989096bb45b6c2f6b56b9aa75607a3b0a1\"" Jul 2 08:52:10.462850 env[1132]: time="2024-07-02T08:52:10.462815102Z" level=info msg="StartContainer for \"4fca6779cd1598a7a2a760dfdeb576989096bb45b6c2f6b56b9aa75607a3b0a1\"" Jul 2 08:52:10.468889 systemd[1]: Started cri-containerd-0637de576237b16b4e258b3f585d0f27a48d6cfabeb57af9e5fe0b841fce46b4.scope. Jul 2 08:52:10.476586 kubelet[1630]: I0702 08:52:10.476557 1630 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:10.476928 kubelet[1630]: E0702 08:52:10.476908 1630 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.4:6443/api/v1/nodes\": dial tcp 172.24.4.4:6443: connect: connection refused" node="ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:10.486844 systemd[1]: Started cri-containerd-5d7449014449b413a3c75aa625aabd7f8c68157aa8101f874be9c7f42c963c4d.scope. 
Jul 2 08:52:10.489512 kubelet[1630]: W0702 08:52:10.489010 1630 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.24.4.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-5-3-17f1331597.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.4:6443: connect: connection refused Jul 2 08:52:10.489512 kubelet[1630]: E0702 08:52:10.489079 1630 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-5-3-17f1331597.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.4:6443: connect: connection refused Jul 2 08:52:10.529881 systemd[1]: Started cri-containerd-4fca6779cd1598a7a2a760dfdeb576989096bb45b6c2f6b56b9aa75607a3b0a1.scope. Jul 2 08:52:10.549230 env[1132]: time="2024-07-02T08:52:10.549179724Z" level=info msg="StartContainer for \"0637de576237b16b4e258b3f585d0f27a48d6cfabeb57af9e5fe0b841fce46b4\" returns successfully" Jul 2 08:52:10.585067 env[1132]: time="2024-07-02T08:52:10.584833836Z" level=info msg="StartContainer for \"5d7449014449b413a3c75aa625aabd7f8c68157aa8101f874be9c7f42c963c4d\" returns successfully" Jul 2 08:52:10.626970 env[1132]: time="2024-07-02T08:52:10.626914387Z" level=info msg="StartContainer for \"4fca6779cd1598a7a2a760dfdeb576989096bb45b6c2f6b56b9aa75607a3b0a1\" returns successfully" Jul 2 08:52:10.895709 kubelet[1630]: E0702 08:52:10.895572 1630 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.4:6443: connect: connection refused Jul 2 08:52:11.844584 kubelet[1630]: W0702 08:52:11.844546 1630 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get 
"https://172.24.4.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.4:6443: connect: connection refused Jul 2 08:52:11.844846 kubelet[1630]: E0702 08:52:11.844834 1630 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.4:6443: connect: connection refused Jul 2 08:52:12.079144 kubelet[1630]: I0702 08:52:12.079109 1630 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:13.800540 kubelet[1630]: E0702 08:52:13.800480 1630 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-5-3-17f1331597.novalocal\" not found" node="ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:13.899279 kubelet[1630]: I0702 08:52:13.899253 1630 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:13.920951 kubelet[1630]: I0702 08:52:13.920907 1630 apiserver.go:52] "Watching apiserver" Jul 2 08:52:13.961342 kubelet[1630]: I0702 08:52:13.961299 1630 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 08:52:14.904931 kubelet[1630]: W0702 08:52:14.904873 1630 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 08:52:17.106665 kubelet[1630]: W0702 08:52:17.106599 1630 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 08:52:17.197062 systemd[1]: Reloading. 
Jul 2 08:52:17.292165 /usr/lib/systemd/system-generators/torcx-generator[1916]: time="2024-07-02T08:52:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 08:52:17.292200 /usr/lib/systemd/system-generators/torcx-generator[1916]: time="2024-07-02T08:52:17Z" level=info msg="torcx already run" Jul 2 08:52:17.391865 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 08:52:17.391883 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 08:52:17.416338 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:52:17.534290 systemd[1]: Stopping kubelet.service... Jul 2 08:52:17.554078 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 08:52:17.554244 systemd[1]: Stopped kubelet.service. Jul 2 08:52:17.554300 systemd[1]: kubelet.service: Consumed 1.294s CPU time. Jul 2 08:52:17.556414 systemd[1]: Starting kubelet.service... Jul 2 08:52:21.465856 systemd[1]: Started kubelet.service. Jul 2 08:52:21.856881 kubelet[1967]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 08:52:21.856881 kubelet[1967]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jul 2 08:52:21.856881 kubelet[1967]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 08:52:21.857254 kubelet[1967]: I0702 08:52:21.856978 1967 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 08:52:21.861507 kubelet[1967]: I0702 08:52:21.861480 1967 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 08:52:21.861507 kubelet[1967]: I0702 08:52:21.861504 1967 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 08:52:21.861836 kubelet[1967]: I0702 08:52:21.861814 1967 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 08:52:21.864668 kubelet[1967]: I0702 08:52:21.864142 1967 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 08:52:21.869799 kubelet[1967]: I0702 08:52:21.869541 1967 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 08:52:21.880041 sudo[1980]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 2 08:52:21.880264 sudo[1980]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 2 08:52:21.882921 kubelet[1967]: I0702 08:52:21.882898 1967 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 08:52:21.883198 kubelet[1967]: I0702 08:52:21.883182 1967 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 08:52:21.883392 kubelet[1967]: I0702 08:52:21.883373 1967 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 08:52:21.883490 kubelet[1967]: I0702 08:52:21.883403 1967 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 08:52:21.883490 kubelet[1967]: I0702 08:52:21.883416 1967 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 08:52:21.883490 kubelet[1967]: I0702 
08:52:21.883475 1967 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:52:21.884070 kubelet[1967]: I0702 08:52:21.883901 1967 kubelet.go:396] "Attempting to sync node with API server" Jul 2 08:52:21.884070 kubelet[1967]: I0702 08:52:21.883923 1967 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 08:52:21.884525 kubelet[1967]: I0702 08:52:21.884497 1967 kubelet.go:312] "Adding apiserver pod source" Jul 2 08:52:21.886694 kubelet[1967]: I0702 08:52:21.886675 1967 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 08:52:21.895049 kubelet[1967]: I0702 08:52:21.895031 1967 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 08:52:21.895406 kubelet[1967]: I0702 08:52:21.895395 1967 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 08:52:21.896046 kubelet[1967]: I0702 08:52:21.896013 1967 server.go:1256] "Started kubelet" Jul 2 08:52:21.899484 kubelet[1967]: I0702 08:52:21.899470 1967 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 08:52:21.907558 kubelet[1967]: I0702 08:52:21.907469 1967 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 08:52:21.909008 kubelet[1967]: I0702 08:52:21.908995 1967 server.go:461] "Adding debug handlers to kubelet server" Jul 2 08:52:21.910251 kubelet[1967]: I0702 08:52:21.910239 1967 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 08:52:21.911141 kubelet[1967]: I0702 08:52:21.911129 1967 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 08:52:21.916791 kubelet[1967]: I0702 08:52:21.916768 1967 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 08:52:21.919841 kubelet[1967]: I0702 08:52:21.919786 1967 desired_state_of_world_populator.go:151] "Desired state populator 
starts to run" Jul 2 08:52:21.920308 kubelet[1967]: I0702 08:52:21.920295 1967 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 08:52:21.922266 kubelet[1967]: I0702 08:52:21.922237 1967 factory.go:221] Registration of the systemd container factory successfully Jul 2 08:52:21.922379 kubelet[1967]: I0702 08:52:21.922350 1967 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 08:52:21.923934 kubelet[1967]: I0702 08:52:21.923920 1967 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 08:52:21.925214 kubelet[1967]: I0702 08:52:21.925203 1967 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 08:52:21.925332 kubelet[1967]: I0702 08:52:21.925321 1967 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 08:52:21.925451 kubelet[1967]: I0702 08:52:21.925440 1967 kubelet.go:2329] "Starting kubelet main sync loop" Jul 2 08:52:21.925587 kubelet[1967]: E0702 08:52:21.925564 1967 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 08:52:21.927050 kubelet[1967]: E0702 08:52:21.927026 1967 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 08:52:21.932300 kubelet[1967]: I0702 08:52:21.925354 1967 factory.go:221] Registration of the containerd container factory successfully Jul 2 08:52:21.977218 kubelet[1967]: I0702 08:52:21.977170 1967 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 08:52:21.977341 kubelet[1967]: I0702 08:52:21.977251 1967 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 08:52:21.977341 kubelet[1967]: I0702 08:52:21.977270 1967 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:52:21.977458 kubelet[1967]: I0702 08:52:21.977441 1967 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 08:52:21.977506 kubelet[1967]: I0702 08:52:21.977469 1967 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 08:52:21.977506 kubelet[1967]: I0702 08:52:21.977477 1967 policy_none.go:49] "None policy: Start" Jul 2 08:52:21.978225 kubelet[1967]: I0702 08:52:21.978203 1967 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 08:52:21.978225 kubelet[1967]: I0702 08:52:21.978226 1967 state_mem.go:35] "Initializing new in-memory state store" Jul 2 08:52:21.978470 kubelet[1967]: I0702 08:52:21.978444 1967 state_mem.go:75] "Updated machine memory state" Jul 2 08:52:21.982783 kubelet[1967]: I0702 08:52:21.982763 1967 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 08:52:21.982974 kubelet[1967]: I0702 08:52:21.982958 1967 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 08:52:22.020052 kubelet[1967]: I0702 08:52:22.020018 1967 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:22.026445 kubelet[1967]: I0702 08:52:22.026416 1967 topology_manager.go:215] "Topology Admit Handler" podUID="c8907a8dacad82cf9cfd970d7b127116" podNamespace="kube-system" 
podName="kube-apiserver-ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:22.026566 kubelet[1967]: I0702 08:52:22.026487 1967 topology_manager.go:215] "Topology Admit Handler" podUID="701d24d87908b16c6b949f8249dfadc6" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:22.026566 kubelet[1967]: I0702 08:52:22.026529 1967 topology_manager.go:215] "Topology Admit Handler" podUID="19cda551d300608b778b79ca24bd485b" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:22.044478 kubelet[1967]: W0702 08:52:22.044448 1967 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 08:52:22.112478 kubelet[1967]: W0702 08:52:22.112286 1967 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 08:52:22.113224 kubelet[1967]: E0702 08:52:22.113170 1967 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510-3-5-3-17f1331597.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:22.122449 kubelet[1967]: I0702 08:52:22.122357 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c8907a8dacad82cf9cfd970d7b127116-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-5-3-17f1331597.novalocal\" (UID: \"c8907a8dacad82cf9cfd970d7b127116\") " pod="kube-system/kube-apiserver-ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:22.122657 kubelet[1967]: I0702 08:52:22.122493 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/701d24d87908b16c6b949f8249dfadc6-ca-certs\") pod 
\"kube-controller-manager-ci-3510-3-5-3-17f1331597.novalocal\" (UID: \"701d24d87908b16c6b949f8249dfadc6\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:22.122657 kubelet[1967]: I0702 08:52:22.122588 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/701d24d87908b16c6b949f8249dfadc6-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-5-3-17f1331597.novalocal\" (UID: \"701d24d87908b16c6b949f8249dfadc6\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:22.122864 kubelet[1967]: I0702 08:52:22.122682 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/701d24d87908b16c6b949f8249dfadc6-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-5-3-17f1331597.novalocal\" (UID: \"701d24d87908b16c6b949f8249dfadc6\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:22.122864 kubelet[1967]: I0702 08:52:22.122769 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/701d24d87908b16c6b949f8249dfadc6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-5-3-17f1331597.novalocal\" (UID: \"701d24d87908b16c6b949f8249dfadc6\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:22.122864 kubelet[1967]: I0702 08:52:22.122847 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/19cda551d300608b778b79ca24bd485b-kubeconfig\") pod \"kube-scheduler-ci-3510-3-5-3-17f1331597.novalocal\" (UID: \"19cda551d300608b778b79ca24bd485b\") " 
pod="kube-system/kube-scheduler-ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:22.123932 kubelet[1967]: I0702 08:52:22.122894 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c8907a8dacad82cf9cfd970d7b127116-ca-certs\") pod \"kube-apiserver-ci-3510-3-5-3-17f1331597.novalocal\" (UID: \"c8907a8dacad82cf9cfd970d7b127116\") " pod="kube-system/kube-apiserver-ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:22.123932 kubelet[1967]: I0702 08:52:22.122970 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c8907a8dacad82cf9cfd970d7b127116-k8s-certs\") pod \"kube-apiserver-ci-3510-3-5-3-17f1331597.novalocal\" (UID: \"c8907a8dacad82cf9cfd970d7b127116\") " pod="kube-system/kube-apiserver-ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:22.123932 kubelet[1967]: I0702 08:52:22.123046 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/701d24d87908b16c6b949f8249dfadc6-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-5-3-17f1331597.novalocal\" (UID: \"701d24d87908b16c6b949f8249dfadc6\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:22.133255 kubelet[1967]: W0702 08:52:22.132999 1967 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 08:52:22.133255 kubelet[1967]: E0702 08:52:22.133176 1967 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510-3-5-3-17f1331597.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:22.254202 kubelet[1967]: I0702 08:52:22.254087 1967 kubelet_node_status.go:112] "Node was previously 
registered" node="ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:22.254427 kubelet[1967]: I0702 08:52:22.254307 1967 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:22.897791 kubelet[1967]: I0702 08:52:22.897724 1967 apiserver.go:52] "Watching apiserver" Jul 2 08:52:22.920794 kubelet[1967]: I0702 08:52:22.920739 1967 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 08:52:22.946467 kubelet[1967]: W0702 08:52:22.946443 1967 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 08:52:22.946720 kubelet[1967]: E0702 08:52:22.946703 1967 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510-3-5-3-17f1331597.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-3510-3-5-3-17f1331597.novalocal" Jul 2 08:52:22.982317 kubelet[1967]: I0702 08:52:22.982277 1967 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-5-3-17f1331597.novalocal" podStartSLOduration=0.98223251 podStartE2EDuration="982.23251ms" podCreationTimestamp="2024-07-02 08:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:52:22.971050659 +0000 UTC m=+1.320167234" watchObservedRunningTime="2024-07-02 08:52:22.98223251 +0000 UTC m=+1.331349085" Jul 2 08:52:23.001342 kubelet[1967]: I0702 08:52:23.001313 1967 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-5-3-17f1331597.novalocal" podStartSLOduration=6.001250084 podStartE2EDuration="6.001250084s" podCreationTimestamp="2024-07-02 08:52:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2024-07-02 08:52:22.983488439 +0000 UTC m=+1.332605024" watchObservedRunningTime="2024-07-02 08:52:23.001250084 +0000 UTC m=+1.350366679" Jul 2 08:52:23.001675 kubelet[1967]: I0702 08:52:23.001661 1967 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-5-3-17f1331597.novalocal" podStartSLOduration=9.001606642 podStartE2EDuration="9.001606642s" podCreationTimestamp="2024-07-02 08:52:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:52:22.999103744 +0000 UTC m=+1.348220319" watchObservedRunningTime="2024-07-02 08:52:23.001606642 +0000 UTC m=+1.350723227" Jul 2 08:52:23.132668 sudo[1980]: pam_unix(sudo:session): session closed for user root Jul 2 08:52:25.592288 sudo[1286]: pam_unix(sudo:session): session closed for user root Jul 2 08:52:25.760280 sshd[1271]: pam_unix(sshd:session): session closed for user core Jul 2 08:52:25.765911 systemd[1]: sshd@6-172.24.4.4:22-172.24.4.1:53502.service: Deactivated successfully. Jul 2 08:52:25.768218 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 08:52:25.768571 systemd[1]: session-7.scope: Consumed 7.221s CPU time. Jul 2 08:52:25.769961 systemd-logind[1127]: Session 7 logged out. Waiting for processes to exit. Jul 2 08:52:25.772753 systemd-logind[1127]: Removed session 7. Jul 2 08:52:29.696519 kubelet[1967]: I0702 08:52:29.696489 1967 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 08:52:29.697186 env[1132]: time="2024-07-02T08:52:29.697157666Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 2 08:52:29.697696 kubelet[1967]: I0702 08:52:29.697675 1967 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 08:52:30.432111 kubelet[1967]: I0702 08:52:30.432060 1967 topology_manager.go:215] "Topology Admit Handler" podUID="ae75ce0f-c19e-4255-be82-e34db0c86c5a" podNamespace="kube-system" podName="kube-proxy-czhjk" Jul 2 08:52:30.450211 systemd[1]: Created slice kubepods-besteffort-podae75ce0f_c19e_4255_be82_e34db0c86c5a.slice. Jul 2 08:52:30.472118 kubelet[1967]: I0702 08:52:30.472043 1967 topology_manager.go:215] "Topology Admit Handler" podUID="604ed4ca-fecd-44fe-9055-97cbd95792a0" podNamespace="kube-system" podName="cilium-qwv8c" Jul 2 08:52:30.480393 systemd[1]: Created slice kubepods-burstable-pod604ed4ca_fecd_44fe_9055_97cbd95792a0.slice. Jul 2 08:52:30.484497 kubelet[1967]: I0702 08:52:30.484470 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/604ed4ca-fecd-44fe-9055-97cbd95792a0-cilium-cgroup\") pod \"cilium-qwv8c\" (UID: \"604ed4ca-fecd-44fe-9055-97cbd95792a0\") " pod="kube-system/cilium-qwv8c" Jul 2 08:52:30.484740 kubelet[1967]: I0702 08:52:30.484727 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/604ed4ca-fecd-44fe-9055-97cbd95792a0-cni-path\") pod \"cilium-qwv8c\" (UID: \"604ed4ca-fecd-44fe-9055-97cbd95792a0\") " pod="kube-system/cilium-qwv8c" Jul 2 08:52:30.484882 kubelet[1967]: I0702 08:52:30.484870 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/604ed4ca-fecd-44fe-9055-97cbd95792a0-bpf-maps\") pod \"cilium-qwv8c\" (UID: \"604ed4ca-fecd-44fe-9055-97cbd95792a0\") " pod="kube-system/cilium-qwv8c" Jul 2 08:52:30.485014 kubelet[1967]: I0702 08:52:30.485001 1967 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z2f5\" (UniqueName: \"kubernetes.io/projected/604ed4ca-fecd-44fe-9055-97cbd95792a0-kube-api-access-9z2f5\") pod \"cilium-qwv8c\" (UID: \"604ed4ca-fecd-44fe-9055-97cbd95792a0\") " pod="kube-system/cilium-qwv8c" Jul 2 08:52:30.485215 kubelet[1967]: I0702 08:52:30.485173 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ae75ce0f-c19e-4255-be82-e34db0c86c5a-kube-proxy\") pod \"kube-proxy-czhjk\" (UID: \"ae75ce0f-c19e-4255-be82-e34db0c86c5a\") " pod="kube-system/kube-proxy-czhjk" Jul 2 08:52:30.485355 kubelet[1967]: I0702 08:52:30.485343 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/604ed4ca-fecd-44fe-9055-97cbd95792a0-xtables-lock\") pod \"cilium-qwv8c\" (UID: \"604ed4ca-fecd-44fe-9055-97cbd95792a0\") " pod="kube-system/cilium-qwv8c" Jul 2 08:52:30.485486 kubelet[1967]: I0702 08:52:30.485475 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/604ed4ca-fecd-44fe-9055-97cbd95792a0-host-proc-sys-net\") pod \"cilium-qwv8c\" (UID: \"604ed4ca-fecd-44fe-9055-97cbd95792a0\") " pod="kube-system/cilium-qwv8c" Jul 2 08:52:30.485914 kubelet[1967]: I0702 08:52:30.485894 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/604ed4ca-fecd-44fe-9055-97cbd95792a0-host-proc-sys-kernel\") pod \"cilium-qwv8c\" (UID: \"604ed4ca-fecd-44fe-9055-97cbd95792a0\") " pod="kube-system/cilium-qwv8c" Jul 2 08:52:30.486163 kubelet[1967]: I0702 08:52:30.486148 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/ae75ce0f-c19e-4255-be82-e34db0c86c5a-lib-modules\") pod \"kube-proxy-czhjk\" (UID: \"ae75ce0f-c19e-4255-be82-e34db0c86c5a\") " pod="kube-system/kube-proxy-czhjk" Jul 2 08:52:30.486302 kubelet[1967]: I0702 08:52:30.486291 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8b44\" (UniqueName: \"kubernetes.io/projected/ae75ce0f-c19e-4255-be82-e34db0c86c5a-kube-api-access-n8b44\") pod \"kube-proxy-czhjk\" (UID: \"ae75ce0f-c19e-4255-be82-e34db0c86c5a\") " pod="kube-system/kube-proxy-czhjk" Jul 2 08:52:30.486630 kubelet[1967]: I0702 08:52:30.486598 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/604ed4ca-fecd-44fe-9055-97cbd95792a0-cilium-run\") pod \"cilium-qwv8c\" (UID: \"604ed4ca-fecd-44fe-9055-97cbd95792a0\") " pod="kube-system/cilium-qwv8c" Jul 2 08:52:30.486765 kubelet[1967]: I0702 08:52:30.486753 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/604ed4ca-fecd-44fe-9055-97cbd95792a0-lib-modules\") pod \"cilium-qwv8c\" (UID: \"604ed4ca-fecd-44fe-9055-97cbd95792a0\") " pod="kube-system/cilium-qwv8c" Jul 2 08:52:30.486907 kubelet[1967]: I0702 08:52:30.486896 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/604ed4ca-fecd-44fe-9055-97cbd95792a0-cilium-config-path\") pod \"cilium-qwv8c\" (UID: \"604ed4ca-fecd-44fe-9055-97cbd95792a0\") " pod="kube-system/cilium-qwv8c" Jul 2 08:52:30.487049 kubelet[1967]: I0702 08:52:30.487038 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/604ed4ca-fecd-44fe-9055-97cbd95792a0-hubble-tls\") pod \"cilium-qwv8c\" (UID: 
\"604ed4ca-fecd-44fe-9055-97cbd95792a0\") " pod="kube-system/cilium-qwv8c" Jul 2 08:52:30.487978 kubelet[1967]: I0702 08:52:30.487954 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/604ed4ca-fecd-44fe-9055-97cbd95792a0-hostproc\") pod \"cilium-qwv8c\" (UID: \"604ed4ca-fecd-44fe-9055-97cbd95792a0\") " pod="kube-system/cilium-qwv8c" Jul 2 08:52:30.488149 kubelet[1967]: I0702 08:52:30.488122 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/604ed4ca-fecd-44fe-9055-97cbd95792a0-clustermesh-secrets\") pod \"cilium-qwv8c\" (UID: \"604ed4ca-fecd-44fe-9055-97cbd95792a0\") " pod="kube-system/cilium-qwv8c" Jul 2 08:52:30.488325 kubelet[1967]: I0702 08:52:30.488284 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae75ce0f-c19e-4255-be82-e34db0c86c5a-xtables-lock\") pod \"kube-proxy-czhjk\" (UID: \"ae75ce0f-c19e-4255-be82-e34db0c86c5a\") " pod="kube-system/kube-proxy-czhjk" Jul 2 08:52:30.488456 kubelet[1967]: I0702 08:52:30.488445 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/604ed4ca-fecd-44fe-9055-97cbd95792a0-etc-cni-netd\") pod \"cilium-qwv8c\" (UID: \"604ed4ca-fecd-44fe-9055-97cbd95792a0\") " pod="kube-system/cilium-qwv8c" Jul 2 08:52:30.746472 kubelet[1967]: E0702 08:52:30.746413 1967 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 2 08:52:30.746472 kubelet[1967]: E0702 08:52:30.746475 1967 projected.go:200] Error preparing data for projected volume kube-api-access-9z2f5 for pod kube-system/cilium-qwv8c: configmap "kube-root-ca.crt" not found Jul 2 08:52:30.746909 kubelet[1967]: 
E0702 08:52:30.746561 1967 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/604ed4ca-fecd-44fe-9055-97cbd95792a0-kube-api-access-9z2f5 podName:604ed4ca-fecd-44fe-9055-97cbd95792a0 nodeName:}" failed. No retries permitted until 2024-07-02 08:52:31.246535843 +0000 UTC m=+9.595652418 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9z2f5" (UniqueName: "kubernetes.io/projected/604ed4ca-fecd-44fe-9055-97cbd95792a0-kube-api-access-9z2f5") pod "cilium-qwv8c" (UID: "604ed4ca-fecd-44fe-9055-97cbd95792a0") : configmap "kube-root-ca.crt" not found Jul 2 08:52:30.790916 kubelet[1967]: E0702 08:52:30.790880 1967 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 2 08:52:30.790916 kubelet[1967]: E0702 08:52:30.790910 1967 projected.go:200] Error preparing data for projected volume kube-api-access-n8b44 for pod kube-system/kube-proxy-czhjk: configmap "kube-root-ca.crt" not found Jul 2 08:52:30.791102 kubelet[1967]: E0702 08:52:30.790980 1967 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ae75ce0f-c19e-4255-be82-e34db0c86c5a-kube-api-access-n8b44 podName:ae75ce0f-c19e-4255-be82-e34db0c86c5a nodeName:}" failed. No retries permitted until 2024-07-02 08:52:31.29096025 +0000 UTC m=+9.640076825 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-n8b44" (UniqueName: "kubernetes.io/projected/ae75ce0f-c19e-4255-be82-e34db0c86c5a-kube-api-access-n8b44") pod "kube-proxy-czhjk" (UID: "ae75ce0f-c19e-4255-be82-e34db0c86c5a") : configmap "kube-root-ca.crt" not found Jul 2 08:52:30.805920 kubelet[1967]: I0702 08:52:30.805855 1967 topology_manager.go:215] "Topology Admit Handler" podUID="ac5dd9ab-d13a-42e8-a4b2-8d83f4231b0c" podNamespace="kube-system" podName="cilium-operator-5cc964979-l6pxn" Jul 2 08:52:30.811504 systemd[1]: Created slice kubepods-besteffort-podac5dd9ab_d13a_42e8_a4b2_8d83f4231b0c.slice. Jul 2 08:52:30.900820 kubelet[1967]: I0702 08:52:30.900776 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ac5dd9ab-d13a-42e8-a4b2-8d83f4231b0c-cilium-config-path\") pod \"cilium-operator-5cc964979-l6pxn\" (UID: \"ac5dd9ab-d13a-42e8-a4b2-8d83f4231b0c\") " pod="kube-system/cilium-operator-5cc964979-l6pxn" Jul 2 08:52:30.901047 kubelet[1967]: I0702 08:52:30.900859 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtzq6\" (UniqueName: \"kubernetes.io/projected/ac5dd9ab-d13a-42e8-a4b2-8d83f4231b0c-kube-api-access-xtzq6\") pod \"cilium-operator-5cc964979-l6pxn\" (UID: \"ac5dd9ab-d13a-42e8-a4b2-8d83f4231b0c\") " pod="kube-system/cilium-operator-5cc964979-l6pxn" Jul 2 08:52:31.116106 env[1132]: time="2024-07-02T08:52:31.115884444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-l6pxn,Uid:ac5dd9ab-d13a-42e8-a4b2-8d83f4231b0c,Namespace:kube-system,Attempt:0,}" Jul 2 08:52:31.164874 env[1132]: time="2024-07-02T08:52:31.164408972Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:52:31.164874 env[1132]: time="2024-07-02T08:52:31.164488701Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:52:31.164874 env[1132]: time="2024-07-02T08:52:31.164519269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:52:31.165487 env[1132]: time="2024-07-02T08:52:31.165394100Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/68f0a66f5886987ad272987ae319bf4c4503991b9e616d8d679a0531dfffc07b pid=2048 runtime=io.containerd.runc.v2 Jul 2 08:52:31.206804 systemd[1]: Started cri-containerd-68f0a66f5886987ad272987ae319bf4c4503991b9e616d8d679a0531dfffc07b.scope. Jul 2 08:52:31.275998 env[1132]: time="2024-07-02T08:52:31.275951556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-l6pxn,Uid:ac5dd9ab-d13a-42e8-a4b2-8d83f4231b0c,Namespace:kube-system,Attempt:0,} returns sandbox id \"68f0a66f5886987ad272987ae319bf4c4503991b9e616d8d679a0531dfffc07b\"" Jul 2 08:52:31.280078 env[1132]: time="2024-07-02T08:52:31.280030500Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 08:52:31.360198 env[1132]: time="2024-07-02T08:52:31.360131662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-czhjk,Uid:ae75ce0f-c19e-4255-be82-e34db0c86c5a,Namespace:kube-system,Attempt:0,}" Jul 2 08:52:31.385976 env[1132]: time="2024-07-02T08:52:31.385827501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qwv8c,Uid:604ed4ca-fecd-44fe-9055-97cbd95792a0,Namespace:kube-system,Attempt:0,}" Jul 2 08:52:31.708093 env[1132]: time="2024-07-02T08:52:31.698038184Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:52:31.708093 env[1132]: time="2024-07-02T08:52:31.698233822Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:52:31.708093 env[1132]: time="2024-07-02T08:52:31.698308572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:52:31.708093 env[1132]: time="2024-07-02T08:52:31.698867881Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6791c51b3813ea378bba0feda126b408a33ddf5e1b21313941d896d1d94e0ec1 pid=2092 runtime=io.containerd.runc.v2 Jul 2 08:52:31.737684 env[1132]: time="2024-07-02T08:52:31.734081752Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:52:31.737684 env[1132]: time="2024-07-02T08:52:31.734170398Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:52:31.737684 env[1132]: time="2024-07-02T08:52:31.734201917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:52:31.737684 env[1132]: time="2024-07-02T08:52:31.734528319Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8d5e26d734a74aaace2db4bb275a975f5069a3e469d2a3deaa8d1cb6a2784089 pid=2109 runtime=io.containerd.runc.v2 Jul 2 08:52:31.754211 systemd[1]: Started cri-containerd-6791c51b3813ea378bba0feda126b408a33ddf5e1b21313941d896d1d94e0ec1.scope. Jul 2 08:52:31.777393 systemd[1]: Started cri-containerd-8d5e26d734a74aaace2db4bb275a975f5069a3e469d2a3deaa8d1cb6a2784089.scope. 
Jul 2 08:52:31.811029 env[1132]: time="2024-07-02T08:52:31.810982218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qwv8c,Uid:604ed4ca-fecd-44fe-9055-97cbd95792a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"6791c51b3813ea378bba0feda126b408a33ddf5e1b21313941d896d1d94e0ec1\"" Jul 2 08:52:31.839726 env[1132]: time="2024-07-02T08:52:31.839671624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-czhjk,Uid:ae75ce0f-c19e-4255-be82-e34db0c86c5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d5e26d734a74aaace2db4bb275a975f5069a3e469d2a3deaa8d1cb6a2784089\"" Jul 2 08:52:31.844859 env[1132]: time="2024-07-02T08:52:31.844794698Z" level=info msg="CreateContainer within sandbox \"8d5e26d734a74aaace2db4bb275a975f5069a3e469d2a3deaa8d1cb6a2784089\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 08:52:31.871774 env[1132]: time="2024-07-02T08:52:31.871713111Z" level=info msg="CreateContainer within sandbox \"8d5e26d734a74aaace2db4bb275a975f5069a3e469d2a3deaa8d1cb6a2784089\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9be7a668d8e06c9a7869a947e4796c34d1bdbb47a95531c24f7625edc96ac678\"" Jul 2 08:52:31.873683 env[1132]: time="2024-07-02T08:52:31.872526327Z" level=info msg="StartContainer for \"9be7a668d8e06c9a7869a947e4796c34d1bdbb47a95531c24f7625edc96ac678\"" Jul 2 08:52:31.898546 systemd[1]: Started cri-containerd-9be7a668d8e06c9a7869a947e4796c34d1bdbb47a95531c24f7625edc96ac678.scope. Jul 2 08:52:31.959525 env[1132]: time="2024-07-02T08:52:31.959399437Z" level=info msg="StartContainer for \"9be7a668d8e06c9a7869a947e4796c34d1bdbb47a95531c24f7625edc96ac678\" returns successfully" Jul 2 08:52:33.288705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2557592839.mount: Deactivated successfully. 
Jul 2 08:52:35.299525 env[1132]: time="2024-07-02T08:52:35.299393690Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:52:35.303454 env[1132]: time="2024-07-02T08:52:35.303378868Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:52:35.307139 env[1132]: time="2024-07-02T08:52:35.307071777Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:52:35.308341 env[1132]: time="2024-07-02T08:52:35.308282749Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 2 08:52:35.313720 env[1132]: time="2024-07-02T08:52:35.312889663Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 08:52:35.318254 env[1132]: time="2024-07-02T08:52:35.318192152Z" level=info msg="CreateContainer within sandbox \"68f0a66f5886987ad272987ae319bf4c4503991b9e616d8d679a0531dfffc07b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 2 08:52:35.354143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3911380673.mount: Deactivated successfully. 
Jul 2 08:52:35.371874 env[1132]: time="2024-07-02T08:52:35.371786641Z" level=info msg="CreateContainer within sandbox \"68f0a66f5886987ad272987ae319bf4c4503991b9e616d8d679a0531dfffc07b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e58bed22bd4f1e63d8ea81028b6d9c297e36c07b529101606451997cc20080b5\"" Jul 2 08:52:35.375938 env[1132]: time="2024-07-02T08:52:35.375877998Z" level=info msg="StartContainer for \"e58bed22bd4f1e63d8ea81028b6d9c297e36c07b529101606451997cc20080b5\"" Jul 2 08:52:35.427277 systemd[1]: Started cri-containerd-e58bed22bd4f1e63d8ea81028b6d9c297e36c07b529101606451997cc20080b5.scope. Jul 2 08:52:35.474547 env[1132]: time="2024-07-02T08:52:35.474499212Z" level=info msg="StartContainer for \"e58bed22bd4f1e63d8ea81028b6d9c297e36c07b529101606451997cc20080b5\" returns successfully" Jul 2 08:52:36.081568 kubelet[1967]: I0702 08:52:36.081528 1967 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-czhjk" podStartSLOduration=6.081481656 podStartE2EDuration="6.081481656s" podCreationTimestamp="2024-07-02 08:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:52:32.987704431 +0000 UTC m=+11.336821106" watchObservedRunningTime="2024-07-02 08:52:36.081481656 +0000 UTC m=+14.430598231" Jul 2 08:52:36.342158 systemd[1]: run-containerd-runc-k8s.io-e58bed22bd4f1e63d8ea81028b6d9c297e36c07b529101606451997cc20080b5-runc.lCgMOx.mount: Deactivated successfully. Jul 2 08:52:43.438717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount443270885.mount: Deactivated successfully. 
Jul 2 08:52:47.782116 env[1132]: time="2024-07-02T08:52:47.782046360Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:52:47.787675 env[1132]: time="2024-07-02T08:52:47.787642626Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:52:47.790494 env[1132]: time="2024-07-02T08:52:47.790431571Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:52:47.791263 env[1132]: time="2024-07-02T08:52:47.791215961Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 2 08:52:47.794770 env[1132]: time="2024-07-02T08:52:47.794683699Z" level=info msg="CreateContainer within sandbox \"6791c51b3813ea378bba0feda126b408a33ddf5e1b21313941d896d1d94e0ec1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 08:52:47.814816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount432720310.mount: Deactivated successfully. Jul 2 08:52:47.821861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1568265715.mount: Deactivated successfully. 
Jul 2 08:52:47.830056 env[1132]: time="2024-07-02T08:52:47.830018814Z" level=info msg="CreateContainer within sandbox \"6791c51b3813ea378bba0feda126b408a33ddf5e1b21313941d896d1d94e0ec1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"28570b485063584ca20dc744311983cd560ff713715e8e0c35b23b0c8027652c\"" Jul 2 08:52:47.831300 env[1132]: time="2024-07-02T08:52:47.831277712Z" level=info msg="StartContainer for \"28570b485063584ca20dc744311983cd560ff713715e8e0c35b23b0c8027652c\"" Jul 2 08:52:47.857418 systemd[1]: Started cri-containerd-28570b485063584ca20dc744311983cd560ff713715e8e0c35b23b0c8027652c.scope. Jul 2 08:52:47.897743 env[1132]: time="2024-07-02T08:52:47.897696667Z" level=info msg="StartContainer for \"28570b485063584ca20dc744311983cd560ff713715e8e0c35b23b0c8027652c\" returns successfully" Jul 2 08:52:47.905801 systemd[1]: cri-containerd-28570b485063584ca20dc744311983cd560ff713715e8e0c35b23b0c8027652c.scope: Deactivated successfully. Jul 2 08:52:48.348120 kubelet[1967]: I0702 08:52:48.348040 1967 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-l6pxn" podStartSLOduration=14.316257001 podStartE2EDuration="18.34791419s" podCreationTimestamp="2024-07-02 08:52:30 +0000 UTC" firstStartedPulling="2024-07-02 08:52:31.277487688 +0000 UTC m=+9.626604263" lastFinishedPulling="2024-07-02 08:52:35.309144827 +0000 UTC m=+13.658261452" observedRunningTime="2024-07-02 08:52:36.093578082 +0000 UTC m=+14.442694657" watchObservedRunningTime="2024-07-02 08:52:48.34791419 +0000 UTC m=+26.697030866" Jul 2 08:52:48.377180 env[1132]: time="2024-07-02T08:52:48.377072953Z" level=info msg="shim disconnected" id=28570b485063584ca20dc744311983cd560ff713715e8e0c35b23b0c8027652c Jul 2 08:52:48.377180 env[1132]: time="2024-07-02T08:52:48.377176054Z" level=warning msg="cleaning up after shim disconnected" id=28570b485063584ca20dc744311983cd560ff713715e8e0c35b23b0c8027652c namespace=k8s.io Jul 2 
08:52:48.378993 env[1132]: time="2024-07-02T08:52:48.377201511Z" level=info msg="cleaning up dead shim" Jul 2 08:52:48.404159 env[1132]: time="2024-07-02T08:52:48.404090873Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:52:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2411 runtime=io.containerd.runc.v2\n" Jul 2 08:52:48.819261 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28570b485063584ca20dc744311983cd560ff713715e8e0c35b23b0c8027652c-rootfs.mount: Deactivated successfully. Jul 2 08:52:49.212680 env[1132]: time="2024-07-02T08:52:49.211942103Z" level=info msg="CreateContainer within sandbox \"6791c51b3813ea378bba0feda126b408a33ddf5e1b21313941d896d1d94e0ec1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 08:52:49.258702 env[1132]: time="2024-07-02T08:52:49.258506577Z" level=info msg="CreateContainer within sandbox \"6791c51b3813ea378bba0feda126b408a33ddf5e1b21313941d896d1d94e0ec1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b4596069a929d302d2bb403b6ae17e8887ace9569e29a5ca7b76d139ee8352e5\"" Jul 2 08:52:49.260229 env[1132]: time="2024-07-02T08:52:49.260159214Z" level=info msg="StartContainer for \"b4596069a929d302d2bb403b6ae17e8887ace9569e29a5ca7b76d139ee8352e5\"" Jul 2 08:52:49.304734 systemd[1]: Started cri-containerd-b4596069a929d302d2bb403b6ae17e8887ace9569e29a5ca7b76d139ee8352e5.scope. Jul 2 08:52:49.353020 env[1132]: time="2024-07-02T08:52:49.352980330Z" level=info msg="StartContainer for \"b4596069a929d302d2bb403b6ae17e8887ace9569e29a5ca7b76d139ee8352e5\" returns successfully" Jul 2 08:52:49.359871 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 08:52:49.360137 systemd[1]: Stopped systemd-sysctl.service. Jul 2 08:52:49.361031 systemd[1]: Stopping systemd-sysctl.service... Jul 2 08:52:49.362922 systemd[1]: Starting systemd-sysctl.service... 
Jul 2 08:52:49.368920 systemd[1]: cri-containerd-b4596069a929d302d2bb403b6ae17e8887ace9569e29a5ca7b76d139ee8352e5.scope: Deactivated successfully.
Jul 2 08:52:49.390016 systemd[1]: Finished systemd-sysctl.service.
Jul 2 08:52:49.401150 env[1132]: time="2024-07-02T08:52:49.401104258Z" level=info msg="shim disconnected" id=b4596069a929d302d2bb403b6ae17e8887ace9569e29a5ca7b76d139ee8352e5
Jul 2 08:52:49.401413 env[1132]: time="2024-07-02T08:52:49.401380521Z" level=warning msg="cleaning up after shim disconnected" id=b4596069a929d302d2bb403b6ae17e8887ace9569e29a5ca7b76d139ee8352e5 namespace=k8s.io
Jul 2 08:52:49.401496 env[1132]: time="2024-07-02T08:52:49.401480216Z" level=info msg="cleaning up dead shim"
Jul 2 08:52:49.408170 env[1132]: time="2024-07-02T08:52:49.408125400Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:52:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2473 runtime=io.containerd.runc.v2\n"
Jul 2 08:52:49.814261 systemd[1]: run-containerd-runc-k8s.io-b4596069a929d302d2bb403b6ae17e8887ace9569e29a5ca7b76d139ee8352e5-runc.WK0xqO.mount: Deactivated successfully.
Jul 2 08:52:49.814490 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4596069a929d302d2bb403b6ae17e8887ace9569e29a5ca7b76d139ee8352e5-rootfs.mount: Deactivated successfully.
Jul 2 08:52:50.223060 env[1132]: time="2024-07-02T08:52:50.222195234Z" level=info msg="CreateContainer within sandbox \"6791c51b3813ea378bba0feda126b408a33ddf5e1b21313941d896d1d94e0ec1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 2 08:52:50.277788 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount336710618.mount: Deactivated successfully.
Jul 2 08:52:50.297585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1869404062.mount: Deactivated successfully.
Jul 2 08:52:50.315348 env[1132]: time="2024-07-02T08:52:50.315285479Z" level=info msg="CreateContainer within sandbox \"6791c51b3813ea378bba0feda126b408a33ddf5e1b21313941d896d1d94e0ec1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a424e9716f7dafb0d5c4855b8ac16d2d6bb3a4c67b3c4fd470346236a33cf61c\""
Jul 2 08:52:50.315989 env[1132]: time="2024-07-02T08:52:50.315957407Z" level=info msg="StartContainer for \"a424e9716f7dafb0d5c4855b8ac16d2d6bb3a4c67b3c4fd470346236a33cf61c\""
Jul 2 08:52:50.337114 systemd[1]: Started cri-containerd-a424e9716f7dafb0d5c4855b8ac16d2d6bb3a4c67b3c4fd470346236a33cf61c.scope.
Jul 2 08:52:50.386077 env[1132]: time="2024-07-02T08:52:50.386041251Z" level=info msg="StartContainer for \"a424e9716f7dafb0d5c4855b8ac16d2d6bb3a4c67b3c4fd470346236a33cf61c\" returns successfully"
Jul 2 08:52:50.395239 systemd[1]: cri-containerd-a424e9716f7dafb0d5c4855b8ac16d2d6bb3a4c67b3c4fd470346236a33cf61c.scope: Deactivated successfully.
Jul 2 08:52:50.432481 env[1132]: time="2024-07-02T08:52:50.432410157Z" level=info msg="shim disconnected" id=a424e9716f7dafb0d5c4855b8ac16d2d6bb3a4c67b3c4fd470346236a33cf61c
Jul 2 08:52:50.432688 env[1132]: time="2024-07-02T08:52:50.432483683Z" level=warning msg="cleaning up after shim disconnected" id=a424e9716f7dafb0d5c4855b8ac16d2d6bb3a4c67b3c4fd470346236a33cf61c namespace=k8s.io
Jul 2 08:52:50.432688 env[1132]: time="2024-07-02T08:52:50.432502378Z" level=info msg="cleaning up dead shim"
Jul 2 08:52:50.440662 env[1132]: time="2024-07-02T08:52:50.440590865Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:52:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2530 runtime=io.containerd.runc.v2\n"
Jul 2 08:52:51.223988 env[1132]: time="2024-07-02T08:52:51.223885559Z" level=info msg="CreateContainer within sandbox \"6791c51b3813ea378bba0feda126b408a33ddf5e1b21313941d896d1d94e0ec1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 2 08:52:51.267767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3119172728.mount: Deactivated successfully.
Jul 2 08:52:51.280736 env[1132]: time="2024-07-02T08:52:51.280650035Z" level=info msg="CreateContainer within sandbox \"6791c51b3813ea378bba0feda126b408a33ddf5e1b21313941d896d1d94e0ec1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4a997f906dc5eb4dbbd0401da38a0dbfcbb013952f94e7783883a8ea8f502041\""
Jul 2 08:52:51.284608 env[1132]: time="2024-07-02T08:52:51.284550158Z" level=info msg="StartContainer for \"4a997f906dc5eb4dbbd0401da38a0dbfcbb013952f94e7783883a8ea8f502041\""
Jul 2 08:52:51.321153 systemd[1]: Started cri-containerd-4a997f906dc5eb4dbbd0401da38a0dbfcbb013952f94e7783883a8ea8f502041.scope.
Jul 2 08:52:51.355479 systemd[1]: cri-containerd-4a997f906dc5eb4dbbd0401da38a0dbfcbb013952f94e7783883a8ea8f502041.scope: Deactivated successfully.
Jul 2 08:52:51.362564 env[1132]: time="2024-07-02T08:52:51.362510205Z" level=info msg="StartContainer for \"4a997f906dc5eb4dbbd0401da38a0dbfcbb013952f94e7783883a8ea8f502041\" returns successfully"
Jul 2 08:52:51.393488 env[1132]: time="2024-07-02T08:52:51.393432590Z" level=info msg="shim disconnected" id=4a997f906dc5eb4dbbd0401da38a0dbfcbb013952f94e7783883a8ea8f502041
Jul 2 08:52:51.393787 env[1132]: time="2024-07-02T08:52:51.393768053Z" level=warning msg="cleaning up after shim disconnected" id=4a997f906dc5eb4dbbd0401da38a0dbfcbb013952f94e7783883a8ea8f502041 namespace=k8s.io
Jul 2 08:52:51.393858 env[1132]: time="2024-07-02T08:52:51.393844454Z" level=info msg="cleaning up dead shim"
Jul 2 08:52:51.401754 env[1132]: time="2024-07-02T08:52:51.401727103Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:52:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2588 runtime=io.containerd.runc.v2\n"
Jul 2 08:52:51.814403 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a997f906dc5eb4dbbd0401da38a0dbfcbb013952f94e7783883a8ea8f502041-rootfs.mount: Deactivated successfully.
Jul 2 08:52:52.240789 env[1132]: time="2024-07-02T08:52:52.240282206Z" level=info msg="CreateContainer within sandbox \"6791c51b3813ea378bba0feda126b408a33ddf5e1b21313941d896d1d94e0ec1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 08:52:52.288934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3706139163.mount: Deactivated successfully.
Jul 2 08:52:52.303507 env[1132]: time="2024-07-02T08:52:52.303466205Z" level=info msg="CreateContainer within sandbox \"6791c51b3813ea378bba0feda126b408a33ddf5e1b21313941d896d1d94e0ec1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"372c74967fd3837414fec876da45a9382b505046c98367d7c522b16f6cced48c\""
Jul 2 08:52:52.304422 env[1132]: time="2024-07-02T08:52:52.304381485Z" level=info msg="StartContainer for \"372c74967fd3837414fec876da45a9382b505046c98367d7c522b16f6cced48c\""
Jul 2 08:52:52.334508 systemd[1]: Started cri-containerd-372c74967fd3837414fec876da45a9382b505046c98367d7c522b16f6cced48c.scope.
Jul 2 08:52:52.393414 env[1132]: time="2024-07-02T08:52:52.393371941Z" level=info msg="StartContainer for \"372c74967fd3837414fec876da45a9382b505046c98367d7c522b16f6cced48c\" returns successfully"
Jul 2 08:52:52.615719 kubelet[1967]: I0702 08:52:52.614964 1967 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jul 2 08:52:52.664558 kubelet[1967]: I0702 08:52:52.664510 1967 topology_manager.go:215] "Topology Admit Handler" podUID="d6f73411-0924-4ac5-979e-dd80f15277c7" podNamespace="kube-system" podName="coredns-76f75df574-vbjdc"
Jul 2 08:52:52.675445 kubelet[1967]: I0702 08:52:52.675418 1967 topology_manager.go:215] "Topology Admit Handler" podUID="219f332b-9d5c-42b0-a302-a7890fcf4ed7" podNamespace="kube-system" podName="coredns-76f75df574-zg8rg"
Jul 2 08:52:52.677597 systemd[1]: Created slice kubepods-burstable-podd6f73411_0924_4ac5_979e_dd80f15277c7.slice.
Jul 2 08:52:52.690697 systemd[1]: Created slice kubepods-burstable-pod219f332b_9d5c_42b0_a302_a7890fcf4ed7.slice.
Jul 2 08:52:52.776006 kubelet[1967]: I0702 08:52:52.775976 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/219f332b-9d5c-42b0-a302-a7890fcf4ed7-config-volume\") pod \"coredns-76f75df574-zg8rg\" (UID: \"219f332b-9d5c-42b0-a302-a7890fcf4ed7\") " pod="kube-system/coredns-76f75df574-zg8rg"
Jul 2 08:52:52.776315 kubelet[1967]: I0702 08:52:52.776302 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6w2w\" (UniqueName: \"kubernetes.io/projected/d6f73411-0924-4ac5-979e-dd80f15277c7-kube-api-access-t6w2w\") pod \"coredns-76f75df574-vbjdc\" (UID: \"d6f73411-0924-4ac5-979e-dd80f15277c7\") " pod="kube-system/coredns-76f75df574-vbjdc"
Jul 2 08:52:52.776500 kubelet[1967]: I0702 08:52:52.776461 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5rw4\" (UniqueName: \"kubernetes.io/projected/219f332b-9d5c-42b0-a302-a7890fcf4ed7-kube-api-access-j5rw4\") pod \"coredns-76f75df574-zg8rg\" (UID: \"219f332b-9d5c-42b0-a302-a7890fcf4ed7\") " pod="kube-system/coredns-76f75df574-zg8rg"
Jul 2 08:52:52.776709 kubelet[1967]: I0702 08:52:52.776695 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6f73411-0924-4ac5-979e-dd80f15277c7-config-volume\") pod \"coredns-76f75df574-vbjdc\" (UID: \"d6f73411-0924-4ac5-979e-dd80f15277c7\") " pod="kube-system/coredns-76f75df574-vbjdc"
Jul 2 08:52:52.987358 env[1132]: time="2024-07-02T08:52:52.986173352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vbjdc,Uid:d6f73411-0924-4ac5-979e-dd80f15277c7,Namespace:kube-system,Attempt:0,}"
Jul 2 08:52:52.998269 env[1132]: time="2024-07-02T08:52:52.998194101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zg8rg,Uid:219f332b-9d5c-42b0-a302-a7890fcf4ed7,Namespace:kube-system,Attempt:0,}"
Jul 2 08:52:55.054527 systemd-networkd[972]: cilium_host: Link UP
Jul 2 08:52:55.060246 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Jul 2 08:52:55.060405 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Jul 2 08:52:55.060740 systemd-networkd[972]: cilium_net: Link UP
Jul 2 08:52:55.061593 systemd-networkd[972]: cilium_net: Gained carrier
Jul 2 08:52:55.062249 systemd-networkd[972]: cilium_host: Gained carrier
Jul 2 08:52:55.207833 systemd-networkd[972]: cilium_vxlan: Link UP
Jul 2 08:52:55.207842 systemd-networkd[972]: cilium_vxlan: Gained carrier
Jul 2 08:52:55.564468 systemd-networkd[972]: cilium_host: Gained IPv6LL
Jul 2 08:52:55.612086 systemd-networkd[972]: cilium_net: Gained IPv6LL
Jul 2 08:52:56.636701 systemd-networkd[972]: cilium_vxlan: Gained IPv6LL
Jul 2 08:52:56.724668 kernel: NET: Registered PF_ALG protocol family
Jul 2 08:52:57.694055 systemd-networkd[972]: lxc_health: Link UP
Jul 2 08:52:57.702318 systemd-networkd[972]: lxc_health: Gained carrier
Jul 2 08:52:57.702696 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Jul 2 08:52:58.094122 systemd-networkd[972]: lxc3df9ce116276: Link UP
Jul 2 08:52:58.108807 kernel: eth0: renamed from tmpbe6fd
Jul 2 08:52:58.117738 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc3df9ce116276: link becomes ready
Jul 2 08:52:58.117286 systemd-networkd[972]: lxc3df9ce116276: Gained carrier
Jul 2 08:52:58.136540 systemd-networkd[972]: lxc56020675cac2: Link UP
Jul 2 08:52:58.145807 kernel: eth0: renamed from tmp317d7
Jul 2 08:52:58.162693 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc56020675cac2: link becomes ready
Jul 2 08:52:58.160875 systemd-networkd[972]: lxc56020675cac2: Gained carrier
Jul 2 08:52:59.207410 systemd-networkd[972]: lxc_health: Gained IPv6LL
Jul 2 08:52:59.427581 kubelet[1967]: I0702 08:52:59.427533 1967 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-qwv8c" podStartSLOduration=13.448057141 podStartE2EDuration="29.427468088s" podCreationTimestamp="2024-07-02 08:52:30 +0000 UTC" firstStartedPulling="2024-07-02 08:52:31.81279534 +0000 UTC m=+10.161911915" lastFinishedPulling="2024-07-02 08:52:47.792206236 +0000 UTC m=+26.141322862" observedRunningTime="2024-07-02 08:52:53.259936453 +0000 UTC m=+31.609053049" watchObservedRunningTime="2024-07-02 08:52:59.427468088 +0000 UTC m=+37.776584673"
Jul 2 08:52:59.835903 systemd-networkd[972]: lxc56020675cac2: Gained IPv6LL
Jul 2 08:52:59.899999 systemd-networkd[972]: lxc3df9ce116276: Gained IPv6LL
Jul 2 08:53:02.970502 env[1132]: time="2024-07-02T08:53:02.970186458Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 08:53:02.970502 env[1132]: time="2024-07-02T08:53:02.970229629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 08:53:02.970502 env[1132]: time="2024-07-02T08:53:02.970242613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 08:53:02.971784 env[1132]: time="2024-07-02T08:53:02.970543423Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/317d7d8055c708b3683f9b7852125ac47772595ca2cc768c255e0317f303a0d0 pid=3140 runtime=io.containerd.runc.v2
Jul 2 08:53:03.002172 systemd[1]: Started cri-containerd-317d7d8055c708b3683f9b7852125ac47772595ca2cc768c255e0317f303a0d0.scope.
Jul 2 08:53:03.107019 env[1132]: time="2024-07-02T08:53:03.106955549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zg8rg,Uid:219f332b-9d5c-42b0-a302-a7890fcf4ed7,Namespace:kube-system,Attempt:0,} returns sandbox id \"317d7d8055c708b3683f9b7852125ac47772595ca2cc768c255e0317f303a0d0\""
Jul 2 08:53:03.112980 env[1132]: time="2024-07-02T08:53:03.112916329Z" level=info msg="CreateContainer within sandbox \"317d7d8055c708b3683f9b7852125ac47772595ca2cc768c255e0317f303a0d0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 08:53:03.150215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3102253363.mount: Deactivated successfully.
Jul 2 08:53:03.167965 env[1132]: time="2024-07-02T08:53:03.165789991Z" level=info msg="CreateContainer within sandbox \"317d7d8055c708b3683f9b7852125ac47772595ca2cc768c255e0317f303a0d0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"baf5feab2ccb802d406712d4b261f417c799f4baf8cfebdebbbb7e1509b34996\""
Jul 2 08:53:03.168869 env[1132]: time="2024-07-02T08:53:03.168826064Z" level=info msg="StartContainer for \"baf5feab2ccb802d406712d4b261f417c799f4baf8cfebdebbbb7e1509b34996\""
Jul 2 08:53:03.199999 systemd[1]: Started cri-containerd-baf5feab2ccb802d406712d4b261f417c799f4baf8cfebdebbbb7e1509b34996.scope.
Jul 2 08:53:03.237458 env[1132]: time="2024-07-02T08:53:03.237322238Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 08:53:03.237687 env[1132]: time="2024-07-02T08:53:03.237468490Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 08:53:03.237687 env[1132]: time="2024-07-02T08:53:03.237501692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 08:53:03.239679 env[1132]: time="2024-07-02T08:53:03.237845563Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/be6fd9c8e0e944b3f00912cb91a315ad4d35ef59beccc56c5328fac081deb280 pid=3207 runtime=io.containerd.runc.v2
Jul 2 08:53:03.284718 systemd[1]: Started cri-containerd-be6fd9c8e0e944b3f00912cb91a315ad4d35ef59beccc56c5328fac081deb280.scope.
Jul 2 08:53:03.301947 env[1132]: time="2024-07-02T08:53:03.301896037Z" level=info msg="StartContainer for \"baf5feab2ccb802d406712d4b261f417c799f4baf8cfebdebbbb7e1509b34996\" returns successfully"
Jul 2 08:53:03.352653 env[1132]: time="2024-07-02T08:53:03.352580341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vbjdc,Uid:d6f73411-0924-4ac5-979e-dd80f15277c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"be6fd9c8e0e944b3f00912cb91a315ad4d35ef59beccc56c5328fac081deb280\""
Jul 2 08:53:03.355871 env[1132]: time="2024-07-02T08:53:03.355831856Z" level=info msg="CreateContainer within sandbox \"be6fd9c8e0e944b3f00912cb91a315ad4d35ef59beccc56c5328fac081deb280\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 08:53:03.406431 env[1132]: time="2024-07-02T08:53:03.406366812Z" level=info msg="CreateContainer within sandbox \"be6fd9c8e0e944b3f00912cb91a315ad4d35ef59beccc56c5328fac081deb280\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b5acc9b34528c430d60d370df9f1e3ac8a359256febc81f9caf7032115983c71\""
Jul 2 08:53:03.407590 env[1132]: time="2024-07-02T08:53:03.407545086Z" level=info msg="StartContainer for \"b5acc9b34528c430d60d370df9f1e3ac8a359256febc81f9caf7032115983c71\""
Jul 2 08:53:03.431658 systemd[1]: Started cri-containerd-b5acc9b34528c430d60d370df9f1e3ac8a359256febc81f9caf7032115983c71.scope.
Jul 2 08:53:03.481928 env[1132]: time="2024-07-02T08:53:03.481852111Z" level=info msg="StartContainer for \"b5acc9b34528c430d60d370df9f1e3ac8a359256febc81f9caf7032115983c71\" returns successfully"
Jul 2 08:53:03.988317 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3386584404.mount: Deactivated successfully.
Jul 2 08:53:04.333273 kubelet[1967]: I0702 08:53:04.333101 1967 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-zg8rg" podStartSLOduration=34.332903664 podStartE2EDuration="34.332903664s" podCreationTimestamp="2024-07-02 08:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:53:04.327291201 +0000 UTC m=+42.676407846" watchObservedRunningTime="2024-07-02 08:53:04.332903664 +0000 UTC m=+42.682020309"
Jul 2 08:53:04.404370 kubelet[1967]: I0702 08:53:04.404322 1967 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-vbjdc" podStartSLOduration=34.40417482 podStartE2EDuration="34.40417482s" podCreationTimestamp="2024-07-02 08:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:53:04.400138893 +0000 UTC m=+42.749255518" watchObservedRunningTime="2024-07-02 08:53:04.40417482 +0000 UTC m=+42.753291445"
Jul 2 08:53:38.334447 systemd[1]: Started sshd@7-172.24.4.4:22-172.24.4.1:49310.service.
Jul 2 08:53:39.599708 sshd[3303]: Accepted publickey for core from 172.24.4.1 port 49310 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:53:39.603798 sshd[3303]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:53:39.616824 systemd-logind[1127]: New session 8 of user core.
Jul 2 08:53:39.617166 systemd[1]: Started session-8.scope.
Jul 2 08:53:40.477280 sshd[3303]: pam_unix(sshd:session): session closed for user core
Jul 2 08:53:40.482364 systemd-logind[1127]: Session 8 logged out. Waiting for processes to exit.
Jul 2 08:53:40.482698 systemd[1]: sshd@7-172.24.4.4:22-172.24.4.1:49310.service: Deactivated successfully.
Jul 2 08:53:40.483417 systemd[1]: session-8.scope: Deactivated successfully.
Jul 2 08:53:40.484835 systemd-logind[1127]: Removed session 8.
Jul 2 08:53:45.491525 systemd[1]: Started sshd@8-172.24.4.4:22-172.24.4.1:56748.service.
Jul 2 08:53:46.677115 sshd[3316]: Accepted publickey for core from 172.24.4.1 port 56748 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:53:46.682124 sshd[3316]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:53:46.696760 systemd[1]: Started session-9.scope.
Jul 2 08:53:46.697983 systemd-logind[1127]: New session 9 of user core.
Jul 2 08:53:47.597730 sshd[3316]: pam_unix(sshd:session): session closed for user core
Jul 2 08:53:47.600932 systemd[1]: sshd@8-172.24.4.4:22-172.24.4.1:56748.service: Deactivated successfully.
Jul 2 08:53:47.601892 systemd[1]: session-9.scope: Deactivated successfully.
Jul 2 08:53:47.602676 systemd-logind[1127]: Session 9 logged out. Waiting for processes to exit.
Jul 2 08:53:47.603733 systemd-logind[1127]: Removed session 9.
Jul 2 08:53:52.609783 systemd[1]: Started sshd@9-172.24.4.4:22-172.24.4.1:56756.service.
Jul 2 08:53:53.786080 sshd[3329]: Accepted publickey for core from 172.24.4.1 port 56756 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:53:53.790660 sshd[3329]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:53:53.805990 systemd-logind[1127]: New session 10 of user core.
Jul 2 08:53:53.807020 systemd[1]: Started session-10.scope.
Jul 2 08:53:54.599686 sshd[3329]: pam_unix(sshd:session): session closed for user core
Jul 2 08:53:54.625072 systemd[1]: sshd@9-172.24.4.4:22-172.24.4.1:56756.service: Deactivated successfully.
Jul 2 08:53:54.626701 systemd[1]: session-10.scope: Deactivated successfully.
Jul 2 08:53:54.628234 systemd-logind[1127]: Session 10 logged out. Waiting for processes to exit.
Jul 2 08:53:54.630693 systemd-logind[1127]: Removed session 10.
Jul 2 08:53:59.586127 systemd[1]: Started sshd@10-172.24.4.4:22-172.24.4.1:53880.service.
Jul 2 08:54:00.754849 sshd[3341]: Accepted publickey for core from 172.24.4.1 port 53880 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:54:00.757341 sshd[3341]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:54:00.770981 systemd-logind[1127]: New session 11 of user core.
Jul 2 08:54:00.772886 systemd[1]: Started session-11.scope.
Jul 2 08:54:01.645875 sshd[3341]: pam_unix(sshd:session): session closed for user core
Jul 2 08:54:01.654915 systemd[1]: Started sshd@11-172.24.4.4:22-172.24.4.1:53888.service.
Jul 2 08:54:01.656165 systemd[1]: sshd@10-172.24.4.4:22-172.24.4.1:53880.service: Deactivated successfully.
Jul 2 08:54:01.659377 systemd[1]: session-11.scope: Deactivated successfully.
Jul 2 08:54:01.661998 systemd-logind[1127]: Session 11 logged out. Waiting for processes to exit.
Jul 2 08:54:01.664935 systemd-logind[1127]: Removed session 11.
Jul 2 08:54:03.157050 sshd[3354]: Accepted publickey for core from 172.24.4.1 port 53888 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:54:03.160054 sshd[3354]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:54:03.172159 systemd-logind[1127]: New session 12 of user core.
Jul 2 08:54:03.173262 systemd[1]: Started session-12.scope.
Jul 2 08:54:04.124519 sshd[3354]: pam_unix(sshd:session): session closed for user core
Jul 2 08:54:04.129726 systemd[1]: Started sshd@12-172.24.4.4:22-172.24.4.1:53904.service.
Jul 2 08:54:04.136607 systemd[1]: sshd@11-172.24.4.4:22-172.24.4.1:53888.service: Deactivated successfully.
Jul 2 08:54:04.138878 systemd[1]: session-12.scope: Deactivated successfully.
Jul 2 08:54:04.142443 systemd-logind[1127]: Session 12 logged out. Waiting for processes to exit.
Jul 2 08:54:04.150073 systemd-logind[1127]: Removed session 12.
Jul 2 08:54:05.627525 sshd[3367]: Accepted publickey for core from 172.24.4.1 port 53904 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:54:05.632201 sshd[3367]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:54:05.645137 systemd-logind[1127]: New session 13 of user core.
Jul 2 08:54:05.647051 systemd[1]: Started session-13.scope.
Jul 2 08:54:06.514126 sshd[3367]: pam_unix(sshd:session): session closed for user core
Jul 2 08:54:06.522980 systemd[1]: sshd@12-172.24.4.4:22-172.24.4.1:53904.service: Deactivated successfully.
Jul 2 08:54:06.526143 systemd[1]: session-13.scope: Deactivated successfully.
Jul 2 08:54:06.529120 systemd-logind[1127]: Session 13 logged out. Waiting for processes to exit.
Jul 2 08:54:06.532527 systemd-logind[1127]: Removed session 13.
Jul 2 08:54:11.523570 systemd[1]: Started sshd@13-172.24.4.4:22-172.24.4.1:51736.service.
Jul 2 08:54:12.940963 sshd[3381]: Accepted publickey for core from 172.24.4.1 port 51736 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:54:12.942147 sshd[3381]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:54:12.956802 systemd-logind[1127]: New session 14 of user core.
Jul 2 08:54:12.956920 systemd[1]: Started session-14.scope.
Jul 2 08:54:13.931009 sshd[3381]: pam_unix(sshd:session): session closed for user core
Jul 2 08:54:13.942782 systemd[1]: Started sshd@14-172.24.4.4:22-172.24.4.1:51748.service.
Jul 2 08:54:13.964184 systemd[1]: sshd@13-172.24.4.4:22-172.24.4.1:51736.service: Deactivated successfully.
Jul 2 08:54:13.966076 systemd[1]: session-14.scope: Deactivated successfully.
Jul 2 08:54:13.967767 systemd-logind[1127]: Session 14 logged out. Waiting for processes to exit.
Jul 2 08:54:13.970441 systemd-logind[1127]: Removed session 14.
Jul 2 08:54:15.300233 sshd[3392]: Accepted publickey for core from 172.24.4.1 port 51748 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:54:15.305596 sshd[3392]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:54:15.318190 systemd-logind[1127]: New session 15 of user core.
Jul 2 08:54:15.323000 systemd[1]: Started session-15.scope.
Jul 2 08:54:17.088604 systemd[1]: Started sshd@15-172.24.4.4:22-172.24.4.1:54068.service.
Jul 2 08:54:17.095757 sshd[3392]: pam_unix(sshd:session): session closed for user core
Jul 2 08:54:17.102987 systemd[1]: sshd@14-172.24.4.4:22-172.24.4.1:51748.service: Deactivated successfully.
Jul 2 08:54:17.105933 systemd[1]: session-15.scope: Deactivated successfully.
Jul 2 08:54:17.110186 systemd-logind[1127]: Session 15 logged out. Waiting for processes to exit.
Jul 2 08:54:17.113490 systemd-logind[1127]: Removed session 15.
Jul 2 08:54:18.667577 sshd[3402]: Accepted publickey for core from 172.24.4.1 port 54068 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:54:18.671011 sshd[3402]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:54:18.687740 systemd-logind[1127]: New session 16 of user core.
Jul 2 08:54:18.688964 systemd[1]: Started session-16.scope.
Jul 2 08:54:21.365366 sshd[3402]: pam_unix(sshd:session): session closed for user core
Jul 2 08:54:21.373294 systemd[1]: Started sshd@16-172.24.4.4:22-172.24.4.1:54072.service.
Jul 2 08:54:21.378033 systemd[1]: sshd@15-172.24.4.4:22-172.24.4.1:54068.service: Deactivated successfully.
Jul 2 08:54:21.380513 systemd[1]: session-16.scope: Deactivated successfully.
Jul 2 08:54:21.390059 systemd-logind[1127]: Session 16 logged out. Waiting for processes to exit.
Jul 2 08:54:21.393002 systemd-logind[1127]: Removed session 16.
Jul 2 08:54:22.888154 sshd[3419]: Accepted publickey for core from 172.24.4.1 port 54072 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:54:22.891383 sshd[3419]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:54:22.906108 systemd[1]: Started session-17.scope.
Jul 2 08:54:22.907692 systemd-logind[1127]: New session 17 of user core.
Jul 2 08:54:24.119866 sshd[3419]: pam_unix(sshd:session): session closed for user core
Jul 2 08:54:24.130913 systemd[1]: Started sshd@17-172.24.4.4:22-172.24.4.1:54074.service.
Jul 2 08:54:24.133389 systemd[1]: sshd@16-172.24.4.4:22-172.24.4.1:54072.service: Deactivated successfully.
Jul 2 08:54:24.142527 systemd[1]: session-17.scope: Deactivated successfully.
Jul 2 08:54:24.145092 systemd-logind[1127]: Session 17 logged out. Waiting for processes to exit.
Jul 2 08:54:24.149790 systemd-logind[1127]: Removed session 17.
Jul 2 08:54:25.524676 sshd[3432]: Accepted publickey for core from 172.24.4.1 port 54074 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:54:25.527528 sshd[3432]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:54:25.538451 systemd-logind[1127]: New session 18 of user core.
Jul 2 08:54:25.539878 systemd[1]: Started session-18.scope.
Jul 2 08:54:26.338924 sshd[3432]: pam_unix(sshd:session): session closed for user core
Jul 2 08:54:26.343220 systemd[1]: sshd@17-172.24.4.4:22-172.24.4.1:54074.service: Deactivated successfully.
Jul 2 08:54:26.344237 systemd[1]: session-18.scope: Deactivated successfully.
Jul 2 08:54:26.345290 systemd-logind[1127]: Session 18 logged out. Waiting for processes to exit.
Jul 2 08:54:26.346994 systemd-logind[1127]: Removed session 18.
Jul 2 08:54:31.348773 systemd[1]: Started sshd@18-172.24.4.4:22-172.24.4.1:32990.service.
Jul 2 08:54:32.527006 sshd[3449]: Accepted publickey for core from 172.24.4.1 port 32990 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:54:32.530312 sshd[3449]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:54:32.544206 systemd-logind[1127]: New session 19 of user core.
Jul 2 08:54:32.545499 systemd[1]: Started session-19.scope.
Jul 2 08:54:33.276107 sshd[3449]: pam_unix(sshd:session): session closed for user core
Jul 2 08:54:33.280084 systemd[1]: sshd@18-172.24.4.4:22-172.24.4.1:32990.service: Deactivated successfully.
Jul 2 08:54:33.280865 systemd[1]: session-19.scope: Deactivated successfully.
Jul 2 08:54:33.282352 systemd-logind[1127]: Session 19 logged out. Waiting for processes to exit.
Jul 2 08:54:33.283945 systemd-logind[1127]: Removed session 19.
Jul 2 08:54:38.289006 systemd[1]: Started sshd@19-172.24.4.4:22-172.24.4.1:52268.service.
Jul 2 08:54:39.569019 sshd[3464]: Accepted publickey for core from 172.24.4.1 port 52268 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:54:39.571772 sshd[3464]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:54:39.583715 systemd[1]: Started session-20.scope.
Jul 2 08:54:39.584590 systemd-logind[1127]: New session 20 of user core.
Jul 2 08:54:40.363954 sshd[3464]: pam_unix(sshd:session): session closed for user core
Jul 2 08:54:40.369374 systemd[1]: sshd@19-172.24.4.4:22-172.24.4.1:52268.service: Deactivated successfully.
Jul 2 08:54:40.371457 systemd[1]: session-20.scope: Deactivated successfully.
Jul 2 08:54:40.373450 systemd-logind[1127]: Session 20 logged out. Waiting for processes to exit.
Jul 2 08:54:40.377199 systemd-logind[1127]: Removed session 20.
Jul 2 08:54:45.374767 systemd[1]: Started sshd@20-172.24.4.4:22-172.24.4.1:37254.service.
Jul 2 08:54:46.753816 sshd[3475]: Accepted publickey for core from 172.24.4.1 port 37254 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:54:46.756841 sshd[3475]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:54:46.767532 systemd-logind[1127]: New session 21 of user core.
Jul 2 08:54:46.768728 systemd[1]: Started session-21.scope.
Jul 2 08:54:47.422031 sshd[3475]: pam_unix(sshd:session): session closed for user core
Jul 2 08:54:47.425585 systemd[1]: Started sshd@21-172.24.4.4:22-172.24.4.1:37266.service.
Jul 2 08:54:47.432123 systemd[1]: sshd@20-172.24.4.4:22-172.24.4.1:37254.service: Deactivated successfully.
Jul 2 08:54:47.432917 systemd[1]: session-21.scope: Deactivated successfully.
Jul 2 08:54:47.435960 systemd-logind[1127]: Session 21 logged out. Waiting for processes to exit.
Jul 2 08:54:47.438444 systemd-logind[1127]: Removed session 21.
Jul 2 08:54:48.753820 sshd[3486]: Accepted publickey for core from 172.24.4.1 port 37266 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:54:48.758573 sshd[3486]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:54:48.771342 systemd-logind[1127]: New session 22 of user core.
Jul 2 08:54:48.772128 systemd[1]: Started session-22.scope.
Jul 2 08:54:51.291766 systemd[1]: run-containerd-runc-k8s.io-372c74967fd3837414fec876da45a9382b505046c98367d7c522b16f6cced48c-runc.SAr5dA.mount: Deactivated successfully.
Jul 2 08:54:51.332797 env[1132]: time="2024-07-02T08:54:51.332718045Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 08:54:51.366688 env[1132]: time="2024-07-02T08:54:51.366638245Z" level=info msg="StopContainer for \"372c74967fd3837414fec876da45a9382b505046c98367d7c522b16f6cced48c\" with timeout 2 (s)"
Jul 2 08:54:51.367022 env[1132]: time="2024-07-02T08:54:51.367002737Z" level=info msg="StopContainer for \"e58bed22bd4f1e63d8ea81028b6d9c297e36c07b529101606451997cc20080b5\" with timeout 30 (s)"
Jul 2 08:54:51.367489 env[1132]: time="2024-07-02T08:54:51.367468310Z" level=info msg="Stop container \"372c74967fd3837414fec876da45a9382b505046c98367d7c522b16f6cced48c\" with signal terminated"
Jul 2 08:54:51.367933 env[1132]: time="2024-07-02T08:54:51.367890130Z" level=info msg="Stop container \"e58bed22bd4f1e63d8ea81028b6d9c297e36c07b529101606451997cc20080b5\" with signal terminated"
Jul 2 08:54:51.382699 systemd[1]: cri-containerd-e58bed22bd4f1e63d8ea81028b6d9c297e36c07b529101606451997cc20080b5.scope: Deactivated successfully.
Jul 2 08:54:51.389200 systemd-networkd[972]: lxc_health: Link DOWN
Jul 2 08:54:51.389211 systemd-networkd[972]: lxc_health: Lost carrier
Jul 2 08:54:51.436968 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e58bed22bd4f1e63d8ea81028b6d9c297e36c07b529101606451997cc20080b5-rootfs.mount: Deactivated successfully.
Jul 2 08:54:51.439749 systemd[1]: cri-containerd-372c74967fd3837414fec876da45a9382b505046c98367d7c522b16f6cced48c.scope: Deactivated successfully.
Jul 2 08:54:51.439955 systemd[1]: cri-containerd-372c74967fd3837414fec876da45a9382b505046c98367d7c522b16f6cced48c.scope: Consumed 9.286s CPU time.
Jul 2 08:54:51.465353 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-372c74967fd3837414fec876da45a9382b505046c98367d7c522b16f6cced48c-rootfs.mount: Deactivated successfully.
Jul 2 08:54:51.468866 env[1132]: time="2024-07-02T08:54:51.468811446Z" level=info msg="shim disconnected" id=e58bed22bd4f1e63d8ea81028b6d9c297e36c07b529101606451997cc20080b5
Jul 2 08:54:51.468952 env[1132]: time="2024-07-02T08:54:51.468865788Z" level=warning msg="cleaning up after shim disconnected" id=e58bed22bd4f1e63d8ea81028b6d9c297e36c07b529101606451997cc20080b5 namespace=k8s.io
Jul 2 08:54:51.468952 env[1132]: time="2024-07-02T08:54:51.468877891Z" level=info msg="cleaning up dead shim"
Jul 2 08:54:51.469320 env[1132]: time="2024-07-02T08:54:51.469279483Z" level=info msg="shim disconnected" id=372c74967fd3837414fec876da45a9382b505046c98367d7c522b16f6cced48c
Jul 2 08:54:51.469441 env[1132]: time="2024-07-02T08:54:51.469421528Z" level=warning msg="cleaning up after shim disconnected" id=372c74967fd3837414fec876da45a9382b505046c98367d7c522b16f6cced48c namespace=k8s.io
Jul 2 08:54:51.469533 env[1132]: time="2024-07-02T08:54:51.469516627Z" level=info msg="cleaning up dead shim"
Jul 2 08:54:51.479141 env[1132]: time="2024-07-02T08:54:51.479084657Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:54:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3556 runtime=io.containerd.runc.v2\n"
Jul 2 08:54:51.479724 env[1132]: time="2024-07-02T08:54:51.479695161Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:54:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3555 runtime=io.containerd.runc.v2\n"
Jul 2 08:54:51.484342 env[1132]: time="2024-07-02T08:54:51.484300476Z" level=info msg="StopContainer for \"e58bed22bd4f1e63d8ea81028b6d9c297e36c07b529101606451997cc20080b5\" returns successfully"
Jul 2 08:54:51.484826 env[1132]: time="2024-07-02T08:54:51.484737735Z" level=info msg="StopContainer for \"372c74967fd3837414fec876da45a9382b505046c98367d7c522b16f6cced48c\" returns successfully"
Jul 2 08:54:51.484991 env[1132]: time="2024-07-02T08:54:51.484958107Z" level=info msg="StopPodSandbox for \"68f0a66f5886987ad272987ae319bf4c4503991b9e616d8d679a0531dfffc07b\""
Jul 2 08:54:51.485042 env[1132]: time="2024-07-02T08:54:51.485022388Z" level=info msg="Container to stop \"e58bed22bd4f1e63d8ea81028b6d9c297e36c07b529101606451997cc20080b5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 08:54:51.485547 env[1132]: time="2024-07-02T08:54:51.485515391Z" level=info msg="StopPodSandbox for \"6791c51b3813ea378bba0feda126b408a33ddf5e1b21313941d896d1d94e0ec1\""
Jul 2 08:54:51.485606 env[1132]: time="2024-07-02T08:54:51.485565315Z" level=info msg="Container to stop \"28570b485063584ca20dc744311983cd560ff713715e8e0c35b23b0c8027652c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 08:54:51.485606 env[1132]: time="2024-07-02T08:54:51.485582107Z" level=info msg="Container to stop \"4a997f906dc5eb4dbbd0401da38a0dbfcbb013952f94e7783883a8ea8f502041\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 08:54:51.485606 env[1132]: time="2024-07-02T08:54:51.485595672Z" level=info msg="Container to stop \"b4596069a929d302d2bb403b6ae17e8887ace9569e29a5ca7b76d139ee8352e5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 08:54:51.485739 env[1132]: time="2024-07-02T08:54:51.485609086Z" level=info msg="Container to stop \"a424e9716f7dafb0d5c4855b8ac16d2d6bb3a4c67b3c4fd470346236a33cf61c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 08:54:51.485739 env[1132]: time="2024-07-02T08:54:51.485646837Z" level=info msg="Container to stop \"372c74967fd3837414fec876da45a9382b505046c98367d7c522b16f6cced48c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 08:54:51.491708 systemd[1]: cri-containerd-6791c51b3813ea378bba0feda126b408a33ddf5e1b21313941d896d1d94e0ec1.scope: Deactivated successfully.
Jul 2 08:54:51.494092 systemd[1]: cri-containerd-68f0a66f5886987ad272987ae319bf4c4503991b9e616d8d679a0531dfffc07b.scope: Deactivated successfully.
Jul 2 08:54:51.526361 env[1132]: time="2024-07-02T08:54:51.526291992Z" level=info msg="shim disconnected" id=6791c51b3813ea378bba0feda126b408a33ddf5e1b21313941d896d1d94e0ec1
Jul 2 08:54:51.526361 env[1132]: time="2024-07-02T08:54:51.526345703Z" level=warning msg="cleaning up after shim disconnected" id=6791c51b3813ea378bba0feda126b408a33ddf5e1b21313941d896d1d94e0ec1 namespace=k8s.io
Jul 2 08:54:51.526361 env[1132]: time="2024-07-02T08:54:51.526357144Z" level=info msg="cleaning up dead shim"
Jul 2 08:54:51.534984 env[1132]: time="2024-07-02T08:54:51.534915614Z" level=info msg="shim disconnected" id=68f0a66f5886987ad272987ae319bf4c4503991b9e616d8d679a0531dfffc07b
Jul 2 08:54:51.535154 env[1132]: time="2024-07-02T08:54:51.535046578Z" level=warning msg="cleaning up after shim disconnected" id=68f0a66f5886987ad272987ae319bf4c4503991b9e616d8d679a0531dfffc07b namespace=k8s.io
Jul 2 08:54:51.535154 env[1132]: time="2024-07-02T08:54:51.535093396Z" level=info msg="cleaning up dead shim"
Jul 2 08:54:51.538428 env[1132]: time="2024-07-02T08:54:51.538386735Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:54:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3620 runtime=io.containerd.runc.v2\n"
Jul 2 08:54:51.538828 env[1132]: time="2024-07-02T08:54:51.538789650Z" level=info msg="TearDown network for sandbox \"6791c51b3813ea378bba0feda126b408a33ddf5e1b21313941d896d1d94e0ec1\" successfully"
Jul 2 08:54:51.538828 env[1132]: time="2024-07-02T08:54:51.538823212Z" level=info msg="StopPodSandbox for \"6791c51b3813ea378bba0feda126b408a33ddf5e1b21313941d896d1d94e0ec1\" returns successfully"
Jul 2 08:54:51.550165 env[1132]: time="2024-07-02T08:54:51.549075644Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:54:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3632 runtime=io.containerd.runc.v2\n"
Jul 2 08:54:51.550165 env[1132]: time="2024-07-02T08:54:51.549605687Z" level=info msg="TearDown network for sandbox \"68f0a66f5886987ad272987ae319bf4c4503991b9e616d8d679a0531dfffc07b\" successfully"
Jul 2 08:54:51.550165 env[1132]: time="2024-07-02T08:54:51.549666049Z" level=info msg="StopPodSandbox for \"68f0a66f5886987ad272987ae319bf4c4503991b9e616d8d679a0531dfffc07b\" returns successfully"
Jul 2 08:54:51.631461 kubelet[1967]: I0702 08:54:51.631433 1967 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/604ed4ca-fecd-44fe-9055-97cbd95792a0-cilium-cgroup\") pod \"604ed4ca-fecd-44fe-9055-97cbd95792a0\" (UID: \"604ed4ca-fecd-44fe-9055-97cbd95792a0\") "
Jul 2 08:54:51.631915 kubelet[1967]: I0702 08:54:51.631901 1967 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/604ed4ca-fecd-44fe-9055-97cbd95792a0-bpf-maps\") pod \"604ed4ca-fecd-44fe-9055-97cbd95792a0\" (UID: \"604ed4ca-fecd-44fe-9055-97cbd95792a0\") "
Jul 2 08:54:51.632012 kubelet[1967]: I0702 08:54:51.632001 1967 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/604ed4ca-fecd-44fe-9055-97cbd95792a0-hostproc\") pod \"604ed4ca-fecd-44fe-9055-97cbd95792a0\" (UID: \"604ed4ca-fecd-44fe-9055-97cbd95792a0\") "
Jul 2 08:54:51.632099 kubelet[1967]: I0702 08:54:51.632088 1967 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/604ed4ca-fecd-44fe-9055-97cbd95792a0-cni-path\") pod \"604ed4ca-fecd-44fe-9055-97cbd95792a0\" (UID: \"604ed4ca-fecd-44fe-9055-97cbd95792a0\") "
Jul 2 08:54:51.632182 kubelet[1967]: I0702 08:54:51.632172 1967 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/604ed4ca-fecd-44fe-9055-97cbd95792a0-cilium-run\") pod \"604ed4ca-fecd-44fe-9055-97cbd95792a0\" (UID: \"604ed4ca-fecd-44fe-9055-97cbd95792a0\") "
Jul 2 08:54:51.632274 kubelet[1967]: I0702 08:54:51.632264 1967 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/604ed4ca-fecd-44fe-9055-97cbd95792a0-hubble-tls\") pod \"604ed4ca-fecd-44fe-9055-97cbd95792a0\" (UID: \"604ed4ca-fecd-44fe-9055-97cbd95792a0\") "
Jul 2 08:54:51.632366 kubelet[1967]: I0702 08:54:51.632354 1967 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ac5dd9ab-d13a-42e8-a4b2-8d83f4231b0c-cilium-config-path\") pod \"ac5dd9ab-d13a-42e8-a4b2-8d83f4231b0c\" (UID: \"ac5dd9ab-d13a-42e8-a4b2-8d83f4231b0c\") "
Jul 2 08:54:51.632454 kubelet[1967]: I0702 08:54:51.632444 1967 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/604ed4ca-fecd-44fe-9055-97cbd95792a0-host-proc-sys-kernel\") pod \"604ed4ca-fecd-44fe-9055-97cbd95792a0\" (UID: \"604ed4ca-fecd-44fe-9055-97cbd95792a0\") "
Jul 2 08:54:51.632552 kubelet[1967]: I0702 08:54:51.632535 1967 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/604ed4ca-fecd-44fe-9055-97cbd95792a0-lib-modules\") pod \"604ed4ca-fecd-44fe-9055-97cbd95792a0\" (UID: \"604ed4ca-fecd-44fe-9055-97cbd95792a0\") "
Jul 2 08:54:51.632665 kubelet[1967]: I0702 08:54:51.632653 1967 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtzq6\" (UniqueName: \"kubernetes.io/projected/ac5dd9ab-d13a-42e8-a4b2-8d83f4231b0c-kube-api-access-xtzq6\") pod \"ac5dd9ab-d13a-42e8-a4b2-8d83f4231b0c\" (UID: \"ac5dd9ab-d13a-42e8-a4b2-8d83f4231b0c\") "
Jul 2 08:54:51.632756 kubelet[1967]: I0702 08:54:51.632746 1967 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/604ed4ca-fecd-44fe-9055-97cbd95792a0-xtables-lock\") pod \"604ed4ca-fecd-44fe-9055-97cbd95792a0\" (UID: \"604ed4ca-fecd-44fe-9055-97cbd95792a0\") "
Jul 2 08:54:51.632843 kubelet[1967]: I0702 08:54:51.632833 1967 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/604ed4ca-fecd-44fe-9055-97cbd95792a0-host-proc-sys-net\") pod \"604ed4ca-fecd-44fe-9055-97cbd95792a0\" (UID: \"604ed4ca-fecd-44fe-9055-97cbd95792a0\") "
Jul 2 08:54:51.632935 kubelet[1967]: I0702 08:54:51.632925 1967 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/604ed4ca-fecd-44fe-9055-97cbd95792a0-clustermesh-secrets\") pod \"604ed4ca-fecd-44fe-9055-97cbd95792a0\" (UID: \"604ed4ca-fecd-44fe-9055-97cbd95792a0\") "
Jul 2 08:54:51.633023 kubelet[1967]: I0702 08:54:51.633012 1967 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/604ed4ca-fecd-44fe-9055-97cbd95792a0-etc-cni-netd\") pod \"604ed4ca-fecd-44fe-9055-97cbd95792a0\" (UID: \"604ed4ca-fecd-44fe-9055-97cbd95792a0\") "
Jul 2 08:54:51.633110 kubelet[1967]: I0702 08:54:51.633100 1967 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z2f5\" (UniqueName: \"kubernetes.io/projected/604ed4ca-fecd-44fe-9055-97cbd95792a0-kube-api-access-9z2f5\") pod \"604ed4ca-fecd-44fe-9055-97cbd95792a0\" (UID: \"604ed4ca-fecd-44fe-9055-97cbd95792a0\") "
Jul 2 08:54:51.633197 kubelet[1967]: I0702 08:54:51.633188 1967 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/604ed4ca-fecd-44fe-9055-97cbd95792a0-cilium-config-path\") pod \"604ed4ca-fecd-44fe-9055-97cbd95792a0\" (UID: \"604ed4ca-fecd-44fe-9055-97cbd95792a0\") "
Jul 2 08:54:51.634552 kubelet[1967]: I0702 08:54:51.631646 1967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/604ed4ca-fecd-44fe-9055-97cbd95792a0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "604ed4ca-fecd-44fe-9055-97cbd95792a0" (UID: "604ed4ca-fecd-44fe-9055-97cbd95792a0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:54:51.634656 kubelet[1967]: I0702 08:54:51.634597 1967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/604ed4ca-fecd-44fe-9055-97cbd95792a0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "604ed4ca-fecd-44fe-9055-97cbd95792a0" (UID: "604ed4ca-fecd-44fe-9055-97cbd95792a0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:54:51.634698 kubelet[1967]: I0702 08:54:51.634660 1967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/604ed4ca-fecd-44fe-9055-97cbd95792a0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "604ed4ca-fecd-44fe-9055-97cbd95792a0" (UID: "604ed4ca-fecd-44fe-9055-97cbd95792a0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:54:51.634698 kubelet[1967]: I0702 08:54:51.634682 1967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/604ed4ca-fecd-44fe-9055-97cbd95792a0-hostproc" (OuterVolumeSpecName: "hostproc") pod "604ed4ca-fecd-44fe-9055-97cbd95792a0" (UID: "604ed4ca-fecd-44fe-9055-97cbd95792a0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:54:51.634756 kubelet[1967]: I0702 08:54:51.634701 1967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/604ed4ca-fecd-44fe-9055-97cbd95792a0-cni-path" (OuterVolumeSpecName: "cni-path") pod "604ed4ca-fecd-44fe-9055-97cbd95792a0" (UID: "604ed4ca-fecd-44fe-9055-97cbd95792a0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:54:51.634756 kubelet[1967]: I0702 08:54:51.634720 1967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/604ed4ca-fecd-44fe-9055-97cbd95792a0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "604ed4ca-fecd-44fe-9055-97cbd95792a0" (UID: "604ed4ca-fecd-44fe-9055-97cbd95792a0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:54:51.635756 kubelet[1967]: I0702 08:54:51.635733 1967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/604ed4ca-fecd-44fe-9055-97cbd95792a0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "604ed4ca-fecd-44fe-9055-97cbd95792a0" (UID: "604ed4ca-fecd-44fe-9055-97cbd95792a0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 08:54:51.635870 kubelet[1967]: I0702 08:54:51.635854 1967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/604ed4ca-fecd-44fe-9055-97cbd95792a0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "604ed4ca-fecd-44fe-9055-97cbd95792a0" (UID: "604ed4ca-fecd-44fe-9055-97cbd95792a0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:54:51.641437 kubelet[1967]: I0702 08:54:51.641397 1967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/604ed4ca-fecd-44fe-9055-97cbd95792a0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "604ed4ca-fecd-44fe-9055-97cbd95792a0" (UID: "604ed4ca-fecd-44fe-9055-97cbd95792a0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 08:54:51.643747 kubelet[1967]: I0702 08:54:51.643693 1967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac5dd9ab-d13a-42e8-a4b2-8d83f4231b0c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ac5dd9ab-d13a-42e8-a4b2-8d83f4231b0c" (UID: "ac5dd9ab-d13a-42e8-a4b2-8d83f4231b0c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 08:54:51.643988 kubelet[1967]: I0702 08:54:51.643962 1967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac5dd9ab-d13a-42e8-a4b2-8d83f4231b0c-kube-api-access-xtzq6" (OuterVolumeSpecName: "kube-api-access-xtzq6") pod "ac5dd9ab-d13a-42e8-a4b2-8d83f4231b0c" (UID: "ac5dd9ab-d13a-42e8-a4b2-8d83f4231b0c"). InnerVolumeSpecName "kube-api-access-xtzq6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 08:54:51.644091 kubelet[1967]: I0702 08:54:51.644075 1967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/604ed4ca-fecd-44fe-9055-97cbd95792a0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "604ed4ca-fecd-44fe-9055-97cbd95792a0" (UID: "604ed4ca-fecd-44fe-9055-97cbd95792a0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:54:51.646162 kubelet[1967]: I0702 08:54:51.646124 1967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/604ed4ca-fecd-44fe-9055-97cbd95792a0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "604ed4ca-fecd-44fe-9055-97cbd95792a0" (UID: "604ed4ca-fecd-44fe-9055-97cbd95792a0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 2 08:54:51.646290 kubelet[1967]: I0702 08:54:51.646181 1967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/604ed4ca-fecd-44fe-9055-97cbd95792a0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "604ed4ca-fecd-44fe-9055-97cbd95792a0" (UID: "604ed4ca-fecd-44fe-9055-97cbd95792a0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:54:51.646290 kubelet[1967]: I0702 08:54:51.646203 1967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/604ed4ca-fecd-44fe-9055-97cbd95792a0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "604ed4ca-fecd-44fe-9055-97cbd95792a0" (UID: "604ed4ca-fecd-44fe-9055-97cbd95792a0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:54:51.647207 kubelet[1967]: I0702 08:54:51.647179 1967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/604ed4ca-fecd-44fe-9055-97cbd95792a0-kube-api-access-9z2f5" (OuterVolumeSpecName: "kube-api-access-9z2f5") pod "604ed4ca-fecd-44fe-9055-97cbd95792a0" (UID: "604ed4ca-fecd-44fe-9055-97cbd95792a0"). InnerVolumeSpecName "kube-api-access-9z2f5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 08:54:51.711113 systemd[1]: Removed slice kubepods-burstable-pod604ed4ca_fecd_44fe_9055_97cbd95792a0.slice.
Jul 2 08:54:51.711214 systemd[1]: kubepods-burstable-pod604ed4ca_fecd_44fe_9055_97cbd95792a0.slice: Consumed 9.400s CPU time.
Jul 2 08:54:51.720679 kubelet[1967]: I0702 08:54:51.720641 1967 scope.go:117] "RemoveContainer" containerID="e58bed22bd4f1e63d8ea81028b6d9c297e36c07b529101606451997cc20080b5"
Jul 2 08:54:51.729562 systemd[1]: Removed slice kubepods-besteffort-podac5dd9ab_d13a_42e8_a4b2_8d83f4231b0c.slice.
Jul 2 08:54:51.737445 env[1132]: time="2024-07-02T08:54:51.737373219Z" level=info msg="RemoveContainer for \"e58bed22bd4f1e63d8ea81028b6d9c297e36c07b529101606451997cc20080b5\""
Jul 2 08:54:51.739320 kubelet[1967]: I0702 08:54:51.739294 1967 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/604ed4ca-fecd-44fe-9055-97cbd95792a0-cni-path\") on node \"ci-3510-3-5-3-17f1331597.novalocal\" DevicePath \"\""
Jul 2 08:54:51.740994 kubelet[1967]: I0702 08:54:51.739398 1967 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/604ed4ca-fecd-44fe-9055-97cbd95792a0-cilium-run\") on node \"ci-3510-3-5-3-17f1331597.novalocal\" DevicePath \"\""
Jul 2 08:54:51.740994 kubelet[1967]: I0702 08:54:51.739415 1967 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/604ed4ca-fecd-44fe-9055-97cbd95792a0-hubble-tls\") on node \"ci-3510-3-5-3-17f1331597.novalocal\" DevicePath \"\""
Jul 2 08:54:51.740994 kubelet[1967]: I0702 08:54:51.739429 1967 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ac5dd9ab-d13a-42e8-a4b2-8d83f4231b0c-cilium-config-path\") on node \"ci-3510-3-5-3-17f1331597.novalocal\" DevicePath \"\""
Jul 2 08:54:51.740994 kubelet[1967]: I0702 08:54:51.739443 1967 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/604ed4ca-fecd-44fe-9055-97cbd95792a0-host-proc-sys-kernel\") on node \"ci-3510-3-5-3-17f1331597.novalocal\" DevicePath \"\""
Jul 2 08:54:51.740994 kubelet[1967]: I0702 08:54:51.739457 1967 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/604ed4ca-fecd-44fe-9055-97cbd95792a0-lib-modules\") on node \"ci-3510-3-5-3-17f1331597.novalocal\" DevicePath \"\""
Jul 2 08:54:51.740994 kubelet[1967]: I0702 08:54:51.739492 1967 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xtzq6\" (UniqueName: \"kubernetes.io/projected/ac5dd9ab-d13a-42e8-a4b2-8d83f4231b0c-kube-api-access-xtzq6\") on node \"ci-3510-3-5-3-17f1331597.novalocal\" DevicePath \"\""
Jul 2 08:54:51.740994 kubelet[1967]: I0702 08:54:51.739505 1967 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/604ed4ca-fecd-44fe-9055-97cbd95792a0-xtables-lock\") on node \"ci-3510-3-5-3-17f1331597.novalocal\" DevicePath \"\""
Jul 2 08:54:51.741230 kubelet[1967]: I0702 08:54:51.739518 1967 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/604ed4ca-fecd-44fe-9055-97cbd95792a0-host-proc-sys-net\") on node \"ci-3510-3-5-3-17f1331597.novalocal\" DevicePath \"\""
Jul 2 08:54:51.741230 kubelet[1967]: I0702 08:54:51.739533 1967 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/604ed4ca-fecd-44fe-9055-97cbd95792a0-clustermesh-secrets\") on node \"ci-3510-3-5-3-17f1331597.novalocal\" DevicePath \"\""
Jul 2 08:54:51.741230 kubelet[1967]: I0702 08:54:51.739546 1967 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/604ed4ca-fecd-44fe-9055-97cbd95792a0-etc-cni-netd\") on node \"ci-3510-3-5-3-17f1331597.novalocal\" DevicePath \"\""
Jul 2 08:54:51.741230 kubelet[1967]: I0702 08:54:51.739559 1967 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9z2f5\" (UniqueName: \"kubernetes.io/projected/604ed4ca-fecd-44fe-9055-97cbd95792a0-kube-api-access-9z2f5\") on node \"ci-3510-3-5-3-17f1331597.novalocal\" DevicePath \"\""
Jul 2 08:54:51.741230 kubelet[1967]: I0702 08:54:51.739586 1967 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/604ed4ca-fecd-44fe-9055-97cbd95792a0-cilium-config-path\") on node \"ci-3510-3-5-3-17f1331597.novalocal\" DevicePath \"\""
Jul 2 08:54:51.741230 kubelet[1967]: I0702 08:54:51.739600 1967 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/604ed4ca-fecd-44fe-9055-97cbd95792a0-cilium-cgroup\") on node \"ci-3510-3-5-3-17f1331597.novalocal\" DevicePath \"\""
Jul 2 08:54:51.741230 kubelet[1967]: I0702 08:54:51.739687 1967 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/604ed4ca-fecd-44fe-9055-97cbd95792a0-bpf-maps\") on node \"ci-3510-3-5-3-17f1331597.novalocal\" DevicePath \"\""
Jul 2 08:54:51.741464 kubelet[1967]: I0702 08:54:51.739706 1967 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/604ed4ca-fecd-44fe-9055-97cbd95792a0-hostproc\") on node \"ci-3510-3-5-3-17f1331597.novalocal\" DevicePath \"\""
Jul 2 08:54:51.758083 env[1132]: time="2024-07-02T08:54:51.757979423Z" level=info msg="RemoveContainer for \"e58bed22bd4f1e63d8ea81028b6d9c297e36c07b529101606451997cc20080b5\" returns successfully"
Jul 2 08:54:51.758424 kubelet[1967]: I0702 08:54:51.758406 1967 scope.go:117] "RemoveContainer" containerID="e58bed22bd4f1e63d8ea81028b6d9c297e36c07b529101606451997cc20080b5"
Jul 2 08:54:51.758978 env[1132]: time="2024-07-02T08:54:51.758854181Z" level=error msg="ContainerStatus for \"e58bed22bd4f1e63d8ea81028b6d9c297e36c07b529101606451997cc20080b5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e58bed22bd4f1e63d8ea81028b6d9c297e36c07b529101606451997cc20080b5\": not found"
Jul 2 08:54:51.762129 kubelet[1967]: E0702 08:54:51.762112 1967 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e58bed22bd4f1e63d8ea81028b6d9c297e36c07b529101606451997cc20080b5\": not found" containerID="e58bed22bd4f1e63d8ea81028b6d9c297e36c07b529101606451997cc20080b5"
Jul 2 08:54:51.775910 kubelet[1967]: I0702 08:54:51.775884 1967 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e58bed22bd4f1e63d8ea81028b6d9c297e36c07b529101606451997cc20080b5"} err="failed to get container status \"e58bed22bd4f1e63d8ea81028b6d9c297e36c07b529101606451997cc20080b5\": rpc error: code = NotFound desc = an error occurred when try to find container \"e58bed22bd4f1e63d8ea81028b6d9c297e36c07b529101606451997cc20080b5\": not found"
Jul 2 08:54:51.776058 kubelet[1967]: I0702 08:54:51.776033 1967 scope.go:117] "RemoveContainer" containerID="372c74967fd3837414fec876da45a9382b505046c98367d7c522b16f6cced48c"
Jul 2 08:54:51.777806 env[1132]: time="2024-07-02T08:54:51.777509210Z" level=info msg="RemoveContainer for \"372c74967fd3837414fec876da45a9382b505046c98367d7c522b16f6cced48c\""
Jul 2 08:54:51.782580 env[1132]: time="2024-07-02T08:54:51.782482585Z" level=info msg="RemoveContainer for \"372c74967fd3837414fec876da45a9382b505046c98367d7c522b16f6cced48c\" returns successfully"
Jul 2 08:54:51.782897 kubelet[1967]: I0702 08:54:51.782881 1967 scope.go:117] "RemoveContainer" containerID="4a997f906dc5eb4dbbd0401da38a0dbfcbb013952f94e7783883a8ea8f502041"
Jul 2 08:54:51.784089 env[1132]: time="2024-07-02T08:54:51.784052305Z" level=info msg="RemoveContainer for \"4a997f906dc5eb4dbbd0401da38a0dbfcbb013952f94e7783883a8ea8f502041\""
Jul 2 08:54:51.787971 env[1132]: time="2024-07-02T08:54:51.787934857Z" level=info msg="RemoveContainer for \"4a997f906dc5eb4dbbd0401da38a0dbfcbb013952f94e7783883a8ea8f502041\" returns successfully"
Jul 2 08:54:51.788195 kubelet[1967]: I0702 08:54:51.788169 1967 scope.go:117] "RemoveContainer" containerID="a424e9716f7dafb0d5c4855b8ac16d2d6bb3a4c67b3c4fd470346236a33cf61c"
Jul 2 08:54:51.789309 env[1132]: time="2024-07-02T08:54:51.789284585Z" level=info msg="RemoveContainer for \"a424e9716f7dafb0d5c4855b8ac16d2d6bb3a4c67b3c4fd470346236a33cf61c\""
Jul 2 08:54:51.792671 env[1132]: time="2024-07-02T08:54:51.792608811Z" level=info msg="RemoveContainer for \"a424e9716f7dafb0d5c4855b8ac16d2d6bb3a4c67b3c4fd470346236a33cf61c\" returns successfully"
Jul 2 08:54:51.792860 kubelet[1967]: I0702 08:54:51.792832 1967 scope.go:117] "RemoveContainer" containerID="b4596069a929d302d2bb403b6ae17e8887ace9569e29a5ca7b76d139ee8352e5"
Jul 2 08:54:51.794272 env[1132]: time="2024-07-02T08:54:51.794003322Z" level=info msg="RemoveContainer for \"b4596069a929d302d2bb403b6ae17e8887ace9569e29a5ca7b76d139ee8352e5\""
Jul 2 08:54:51.797273 env[1132]: time="2024-07-02T08:54:51.797246887Z" level=info msg="RemoveContainer for \"b4596069a929d302d2bb403b6ae17e8887ace9569e29a5ca7b76d139ee8352e5\" returns successfully"
Jul 2 08:54:51.797527 kubelet[1967]: I0702 08:54:51.797514 1967 scope.go:117] "RemoveContainer" containerID="28570b485063584ca20dc744311983cd560ff713715e8e0c35b23b0c8027652c"
Jul 2 08:54:51.798852 env[1132]: time="2024-07-02T08:54:51.798825454Z" level=info msg="RemoveContainer for \"28570b485063584ca20dc744311983cd560ff713715e8e0c35b23b0c8027652c\""
Jul 2 08:54:51.804778 env[1132]: time="2024-07-02T08:54:51.802723194Z" level=info msg="RemoveContainer for \"28570b485063584ca20dc744311983cd560ff713715e8e0c35b23b0c8027652c\" returns successfully"
Jul 2 08:54:51.805002 kubelet[1967]: I0702 08:54:51.804988 1967 scope.go:117] "RemoveContainer" containerID="372c74967fd3837414fec876da45a9382b505046c98367d7c522b16f6cced48c"
Jul 2 08:54:51.805328 env[1132]: time="2024-07-02T08:54:51.805267229Z" level=error msg="ContainerStatus for \"372c74967fd3837414fec876da45a9382b505046c98367d7c522b16f6cced48c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"372c74967fd3837414fec876da45a9382b505046c98367d7c522b16f6cced48c\": not found"
Jul 2 08:54:51.805509 kubelet[1967]: E0702 08:54:51.805484 1967 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"372c74967fd3837414fec876da45a9382b505046c98367d7c522b16f6cced48c\": not found" containerID="372c74967fd3837414fec876da45a9382b505046c98367d7c522b16f6cced48c"
Jul 2 08:54:51.805638 kubelet[1967]: I0702 08:54:51.805601 1967 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"372c74967fd3837414fec876da45a9382b505046c98367d7c522b16f6cced48c"} err="failed to get container status \"372c74967fd3837414fec876da45a9382b505046c98367d7c522b16f6cced48c\": rpc error: code = NotFound desc = an error occurred when try to find container \"372c74967fd3837414fec876da45a9382b505046c98367d7c522b16f6cced48c\": not found"
Jul 2 08:54:51.805721 kubelet[1967]: I0702 08:54:51.805711 1967 scope.go:117] "RemoveContainer" containerID="4a997f906dc5eb4dbbd0401da38a0dbfcbb013952f94e7783883a8ea8f502041"
Jul 2 08:54:51.806064 env[1132]: time="2024-07-02T08:54:51.805970966Z" level=error msg="ContainerStatus for \"4a997f906dc5eb4dbbd0401da38a0dbfcbb013952f94e7783883a8ea8f502041\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4a997f906dc5eb4dbbd0401da38a0dbfcbb013952f94e7783883a8ea8f502041\": not found"
Jul 2 08:54:51.806209 kubelet[1967]: E0702 08:54:51.806183 1967 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4a997f906dc5eb4dbbd0401da38a0dbfcbb013952f94e7783883a8ea8f502041\": not found" containerID="4a997f906dc5eb4dbbd0401da38a0dbfcbb013952f94e7783883a8ea8f502041"
Jul 2 08:54:51.806311 kubelet[1967]: I0702 08:54:51.806301 1967 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4a997f906dc5eb4dbbd0401da38a0dbfcbb013952f94e7783883a8ea8f502041"} err="failed to get container status \"4a997f906dc5eb4dbbd0401da38a0dbfcbb013952f94e7783883a8ea8f502041\": rpc error: code = NotFound desc = an error occurred when try to find container \"4a997f906dc5eb4dbbd0401da38a0dbfcbb013952f94e7783883a8ea8f502041\": not found"
Jul 2 08:54:51.806399 kubelet[1967]: I0702 08:54:51.806388 1967 scope.go:117] "RemoveContainer" containerID="a424e9716f7dafb0d5c4855b8ac16d2d6bb3a4c67b3c4fd470346236a33cf61c"
Jul 2 08:54:51.806716 env[1132]: time="2024-07-02T08:54:51.806663483Z" level=error msg="ContainerStatus for \"a424e9716f7dafb0d5c4855b8ac16d2d6bb3a4c67b3c4fd470346236a33cf61c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a424e9716f7dafb0d5c4855b8ac16d2d6bb3a4c67b3c4fd470346236a33cf61c\": not found"
Jul 2 08:54:51.806994 kubelet[1967]: E0702 08:54:51.806981 1967 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a424e9716f7dafb0d5c4855b8ac16d2d6bb3a4c67b3c4fd470346236a33cf61c\": not found" containerID="a424e9716f7dafb0d5c4855b8ac16d2d6bb3a4c67b3c4fd470346236a33cf61c"
Jul 2 08:54:51.807108 kubelet[1967]: I0702 08:54:51.807097 1967 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a424e9716f7dafb0d5c4855b8ac16d2d6bb3a4c67b3c4fd470346236a33cf61c"} err="failed to get container status \"a424e9716f7dafb0d5c4855b8ac16d2d6bb3a4c67b3c4fd470346236a33cf61c\": rpc error: code = NotFound desc = an error occurred when try to find container \"a424e9716f7dafb0d5c4855b8ac16d2d6bb3a4c67b3c4fd470346236a33cf61c\": not found"
Jul 2 08:54:51.807205 kubelet[1967]: I0702 08:54:51.807194 1967 scope.go:117] "RemoveContainer" containerID="b4596069a929d302d2bb403b6ae17e8887ace9569e29a5ca7b76d139ee8352e5"
Jul 2 08:54:51.807528 env[1132]: time="2024-07-02T08:54:51.807439567Z" level=error msg="ContainerStatus for \"b4596069a929d302d2bb403b6ae17e8887ace9569e29a5ca7b76d139ee8352e5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b4596069a929d302d2bb403b6ae17e8887ace9569e29a5ca7b76d139ee8352e5\": not found"
Jul 2 08:54:51.807700 kubelet[1967]: E0702 08:54:51.807688 1967 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b4596069a929d302d2bb403b6ae17e8887ace9569e29a5ca7b76d139ee8352e5\": not found" containerID="b4596069a929d302d2bb403b6ae17e8887ace9569e29a5ca7b76d139ee8352e5"
Jul 2 08:54:51.807819 kubelet[1967]: I0702 08:54:51.807807 1967 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b4596069a929d302d2bb403b6ae17e8887ace9569e29a5ca7b76d139ee8352e5"} err="failed to get container status \"b4596069a929d302d2bb403b6ae17e8887ace9569e29a5ca7b76d139ee8352e5\": rpc error: code = NotFound desc = an error occurred when try to find container \"b4596069a929d302d2bb403b6ae17e8887ace9569e29a5ca7b76d139ee8352e5\": not found"
Jul 2 08:54:51.807915 kubelet[1967]: I0702 08:54:51.807904 1967 scope.go:117] "RemoveContainer" containerID="28570b485063584ca20dc744311983cd560ff713715e8e0c35b23b0c8027652c"
Jul 2 08:54:51.808198 env[1132]: time="2024-07-02T08:54:51.808130291Z" level=error msg="ContainerStatus for \"28570b485063584ca20dc744311983cd560ff713715e8e0c35b23b0c8027652c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"28570b485063584ca20dc744311983cd560ff713715e8e0c35b23b0c8027652c\": not found"
Jul 2 08:54:51.808326 kubelet[1967]: E0702 08:54:51.808315 1967 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"28570b485063584ca20dc744311983cd560ff713715e8e0c35b23b0c8027652c\": not found" containerID="28570b485063584ca20dc744311983cd560ff713715e8e0c35b23b0c8027652c"
Jul 2 08:54:51.808428 kubelet[1967]: I0702 08:54:51.808418 1967 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"28570b485063584ca20dc744311983cd560ff713715e8e0c35b23b0c8027652c"} err="failed to get container status \"28570b485063584ca20dc744311983cd560ff713715e8e0c35b23b0c8027652c\": rpc error: code = NotFound desc = an error occurred when try to find container \"28570b485063584ca20dc744311983cd560ff713715e8e0c35b23b0c8027652c\": not found"
Jul 2 08:54:51.942704 kubelet[1967]: I0702 08:54:51.942591 1967 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="604ed4ca-fecd-44fe-9055-97cbd95792a0" path="/var/lib/kubelet/pods/604ed4ca-fecd-44fe-9055-97cbd95792a0/volumes"
Jul 2 08:54:51.944693 kubelet[1967]: I0702 08:54:51.944660 1967 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ac5dd9ab-d13a-42e8-a4b2-8d83f4231b0c" path="/var/lib/kubelet/pods/ac5dd9ab-d13a-42e8-a4b2-8d83f4231b0c/volumes"
Jul 2 08:54:52.084985 kubelet[1967]: E0702 08:54:52.084812 1967 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 2 08:54:52.277405 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6791c51b3813ea378bba0feda126b408a33ddf5e1b21313941d896d1d94e0ec1-rootfs.mount: Deactivated successfully.
Jul 2 08:54:52.277667 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6791c51b3813ea378bba0feda126b408a33ddf5e1b21313941d896d1d94e0ec1-shm.mount: Deactivated successfully.
Jul 2 08:54:52.277824 systemd[1]: var-lib-kubelet-pods-604ed4ca\x2dfecd\x2d44fe\x2d9055\x2d97cbd95792a0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9z2f5.mount: Deactivated successfully. Jul 2 08:54:52.278019 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68f0a66f5886987ad272987ae319bf4c4503991b9e616d8d679a0531dfffc07b-rootfs.mount: Deactivated successfully. Jul 2 08:54:52.278161 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-68f0a66f5886987ad272987ae319bf4c4503991b9e616d8d679a0531dfffc07b-shm.mount: Deactivated successfully. Jul 2 08:54:52.278304 systemd[1]: var-lib-kubelet-pods-ac5dd9ab\x2dd13a\x2d42e8\x2da4b2\x2d8d83f4231b0c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxtzq6.mount: Deactivated successfully. Jul 2 08:54:52.278446 systemd[1]: var-lib-kubelet-pods-604ed4ca\x2dfecd\x2d44fe\x2d9055\x2d97cbd95792a0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 08:54:52.278607 systemd[1]: var-lib-kubelet-pods-604ed4ca\x2dfecd\x2d44fe\x2d9055\x2d97cbd95792a0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 08:54:53.344057 sshd[3486]: pam_unix(sshd:session): session closed for user core Jul 2 08:54:53.355867 systemd[1]: Started sshd@22-172.24.4.4:22-172.24.4.1:37274.service. Jul 2 08:54:53.357652 systemd[1]: sshd@21-172.24.4.4:22-172.24.4.1:37266.service: Deactivated successfully. Jul 2 08:54:53.359891 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 08:54:53.360318 systemd[1]: session-22.scope: Consumed 1.335s CPU time. Jul 2 08:54:53.364542 systemd-logind[1127]: Session 22 logged out. Waiting for processes to exit. Jul 2 08:54:53.368594 systemd-logind[1127]: Removed session 22. 
Jul 2 08:54:54.921517 sshd[3651]: Accepted publickey for core from 172.24.4.1 port 37274 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q Jul 2 08:54:54.924229 sshd[3651]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:54:54.939358 systemd-logind[1127]: New session 23 of user core. Jul 2 08:54:54.940401 systemd[1]: Started session-23.scope. Jul 2 08:54:55.647890 kubelet[1967]: I0702 08:54:55.647856 1967 setters.go:568] "Node became not ready" node="ci-3510-3-5-3-17f1331597.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T08:54:55Z","lastTransitionTime":"2024-07-02T08:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 2 08:54:56.612670 kubelet[1967]: I0702 08:54:56.612587 1967 topology_manager.go:215] "Topology Admit Handler" podUID="854d82cf-e776-4c6c-9089-c94bcf8f3f4f" podNamespace="kube-system" podName="cilium-tvml7" Jul 2 08:54:56.612998 kubelet[1967]: E0702 08:54:56.612979 1967 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ac5dd9ab-d13a-42e8-a4b2-8d83f4231b0c" containerName="cilium-operator" Jul 2 08:54:56.613163 kubelet[1967]: E0702 08:54:56.613144 1967 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="604ed4ca-fecd-44fe-9055-97cbd95792a0" containerName="mount-cgroup" Jul 2 08:54:56.613305 kubelet[1967]: E0702 08:54:56.613288 1967 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="604ed4ca-fecd-44fe-9055-97cbd95792a0" containerName="apply-sysctl-overwrites" Jul 2 08:54:56.613442 kubelet[1967]: E0702 08:54:56.613426 1967 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="604ed4ca-fecd-44fe-9055-97cbd95792a0" containerName="mount-bpf-fs" Jul 2 08:54:56.613580 kubelet[1967]: E0702 08:54:56.613564 1967 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="604ed4ca-fecd-44fe-9055-97cbd95792a0" containerName="clean-cilium-state" Jul 2 08:54:56.613720 kubelet[1967]: E0702 08:54:56.613703 1967 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="604ed4ca-fecd-44fe-9055-97cbd95792a0" containerName="cilium-agent" Jul 2 08:54:56.613946 kubelet[1967]: I0702 08:54:56.613930 1967 memory_manager.go:354] "RemoveStaleState removing state" podUID="604ed4ca-fecd-44fe-9055-97cbd95792a0" containerName="cilium-agent" Jul 2 08:54:56.614078 kubelet[1967]: I0702 08:54:56.614062 1967 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac5dd9ab-d13a-42e8-a4b2-8d83f4231b0c" containerName="cilium-operator" Jul 2 08:54:56.648840 systemd[1]: Created slice kubepods-burstable-pod854d82cf_e776_4c6c_9089_c94bcf8f3f4f.slice. Jul 2 08:54:56.762512 sshd[3651]: pam_unix(sshd:session): session closed for user core Jul 2 08:54:56.766980 systemd[1]: Started sshd@23-172.24.4.4:22-172.24.4.1:43282.service. Jul 2 08:54:56.770448 systemd[1]: sshd@22-172.24.4.4:22-172.24.4.1:37274.service: Deactivated successfully. Jul 2 08:54:56.771705 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 08:54:56.771916 systemd[1]: session-23.scope: Consumed 1.218s CPU time. Jul 2 08:54:56.774914 systemd-logind[1127]: Session 23 logged out. Waiting for processes to exit. Jul 2 08:54:56.777103 systemd-logind[1127]: Removed session 23. 
Jul 2 08:54:56.779879 kubelet[1967]: I0702 08:54:56.779831 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-hostproc\") pod \"cilium-tvml7\" (UID: \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\") " pod="kube-system/cilium-tvml7" Jul 2 08:54:56.780737 kubelet[1967]: I0702 08:54:56.779944 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-etc-cni-netd\") pod \"cilium-tvml7\" (UID: \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\") " pod="kube-system/cilium-tvml7" Jul 2 08:54:56.780737 kubelet[1967]: I0702 08:54:56.780011 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-lib-modules\") pod \"cilium-tvml7\" (UID: \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\") " pod="kube-system/cilium-tvml7" Jul 2 08:54:56.780737 kubelet[1967]: I0702 08:54:56.780049 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-cilium-config-path\") pod \"cilium-tvml7\" (UID: \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\") " pod="kube-system/cilium-tvml7" Jul 2 08:54:56.780737 kubelet[1967]: I0702 08:54:56.780116 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-cni-path\") pod \"cilium-tvml7\" (UID: \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\") " pod="kube-system/cilium-tvml7" Jul 2 08:54:56.780737 kubelet[1967]: I0702 08:54:56.780193 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-cilium-cgroup\") pod \"cilium-tvml7\" (UID: \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\") " pod="kube-system/cilium-tvml7" Jul 2 08:54:56.780737 kubelet[1967]: I0702 08:54:56.780224 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-bpf-maps\") pod \"cilium-tvml7\" (UID: \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\") " pod="kube-system/cilium-tvml7" Jul 2 08:54:56.781201 kubelet[1967]: I0702 08:54:56.780295 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-xtables-lock\") pod \"cilium-tvml7\" (UID: \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\") " pod="kube-system/cilium-tvml7" Jul 2 08:54:56.781201 kubelet[1967]: I0702 08:54:56.780327 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-cilium-ipsec-secrets\") pod \"cilium-tvml7\" (UID: \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\") " pod="kube-system/cilium-tvml7" Jul 2 08:54:56.781201 kubelet[1967]: I0702 08:54:56.780390 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-host-proc-sys-net\") pod \"cilium-tvml7\" (UID: \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\") " pod="kube-system/cilium-tvml7" Jul 2 08:54:56.781201 kubelet[1967]: I0702 08:54:56.780464 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljzbn\" (UniqueName: 
\"kubernetes.io/projected/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-kube-api-access-ljzbn\") pod \"cilium-tvml7\" (UID: \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\") " pod="kube-system/cilium-tvml7" Jul 2 08:54:56.781201 kubelet[1967]: I0702 08:54:56.780494 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-host-proc-sys-kernel\") pod \"cilium-tvml7\" (UID: \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\") " pod="kube-system/cilium-tvml7" Jul 2 08:54:56.781453 kubelet[1967]: I0702 08:54:56.780572 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-clustermesh-secrets\") pod \"cilium-tvml7\" (UID: \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\") " pod="kube-system/cilium-tvml7" Jul 2 08:54:56.781453 kubelet[1967]: I0702 08:54:56.780598 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-hubble-tls\") pod \"cilium-tvml7\" (UID: \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\") " pod="kube-system/cilium-tvml7" Jul 2 08:54:56.781453 kubelet[1967]: I0702 08:54:56.780677 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-cilium-run\") pod \"cilium-tvml7\" (UID: \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\") " pod="kube-system/cilium-tvml7" Jul 2 08:54:56.955418 env[1132]: time="2024-07-02T08:54:56.954036702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tvml7,Uid:854d82cf-e776-4c6c-9089-c94bcf8f3f4f,Namespace:kube-system,Attempt:0,}" Jul 2 08:54:56.972965 env[1132]: time="2024-07-02T08:54:56.972756524Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:54:56.972965 env[1132]: time="2024-07-02T08:54:56.972806107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:54:56.972965 env[1132]: time="2024-07-02T08:54:56.972821185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:54:56.973174 env[1132]: time="2024-07-02T08:54:56.972998617Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/171be2142750df79e12374be04a5c5634ba0f99c02be14db3d1b524b807f9822 pid=3673 runtime=io.containerd.runc.v2 Jul 2 08:54:56.985522 systemd[1]: Started cri-containerd-171be2142750df79e12374be04a5c5634ba0f99c02be14db3d1b524b807f9822.scope. Jul 2 08:54:57.018314 env[1132]: time="2024-07-02T08:54:57.018275720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tvml7,Uid:854d82cf-e776-4c6c-9089-c94bcf8f3f4f,Namespace:kube-system,Attempt:0,} returns sandbox id \"171be2142750df79e12374be04a5c5634ba0f99c02be14db3d1b524b807f9822\"" Jul 2 08:54:57.025942 env[1132]: time="2024-07-02T08:54:57.025437143Z" level=info msg="CreateContainer within sandbox \"171be2142750df79e12374be04a5c5634ba0f99c02be14db3d1b524b807f9822\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 08:54:57.039864 env[1132]: time="2024-07-02T08:54:57.039824771Z" level=info msg="CreateContainer within sandbox \"171be2142750df79e12374be04a5c5634ba0f99c02be14db3d1b524b807f9822\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7a8fc19b4c144c13762a5b9e94061d69c890318039ea4686d86f70ff32f083d9\"" Jul 2 08:54:57.041246 env[1132]: time="2024-07-02T08:54:57.040693939Z" level=info msg="StartContainer for 
\"7a8fc19b4c144c13762a5b9e94061d69c890318039ea4686d86f70ff32f083d9\"" Jul 2 08:54:57.062974 systemd[1]: Started cri-containerd-7a8fc19b4c144c13762a5b9e94061d69c890318039ea4686d86f70ff32f083d9.scope. Jul 2 08:54:57.078644 systemd[1]: cri-containerd-7a8fc19b4c144c13762a5b9e94061d69c890318039ea4686d86f70ff32f083d9.scope: Deactivated successfully. Jul 2 08:54:57.086750 kubelet[1967]: E0702 08:54:57.086695 1967 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 08:54:57.100941 env[1132]: time="2024-07-02T08:54:57.100892148Z" level=info msg="shim disconnected" id=7a8fc19b4c144c13762a5b9e94061d69c890318039ea4686d86f70ff32f083d9 Jul 2 08:54:57.101146 env[1132]: time="2024-07-02T08:54:57.101126588Z" level=warning msg="cleaning up after shim disconnected" id=7a8fc19b4c144c13762a5b9e94061d69c890318039ea4686d86f70ff32f083d9 namespace=k8s.io Jul 2 08:54:57.101234 env[1132]: time="2024-07-02T08:54:57.101219181Z" level=info msg="cleaning up dead shim" Jul 2 08:54:57.114660 env[1132]: time="2024-07-02T08:54:57.114540442Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:54:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3735 runtime=io.containerd.runc.v2\ntime=\"2024-07-02T08:54:57Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/7a8fc19b4c144c13762a5b9e94061d69c890318039ea4686d86f70ff32f083d9/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Jul 2 08:54:57.115146 env[1132]: time="2024-07-02T08:54:57.115043424Z" level=error msg="copy shim log" error="read /proc/self/fd/43: file already closed" Jul 2 08:54:57.115777 env[1132]: time="2024-07-02T08:54:57.115724599Z" level=error msg="Failed to pipe stdout of container \"7a8fc19b4c144c13762a5b9e94061d69c890318039ea4686d86f70ff32f083d9\"" error="reading from a closed fifo" Jul 2 
08:54:57.116657 env[1132]: time="2024-07-02T08:54:57.116582687Z" level=error msg="Failed to pipe stderr of container \"7a8fc19b4c144c13762a5b9e94061d69c890318039ea4686d86f70ff32f083d9\"" error="reading from a closed fifo" Jul 2 08:54:57.120491 env[1132]: time="2024-07-02T08:54:57.120437887Z" level=error msg="StartContainer for \"7a8fc19b4c144c13762a5b9e94061d69c890318039ea4686d86f70ff32f083d9\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Jul 2 08:54:57.120889 kubelet[1967]: E0702 08:54:57.120733 1967 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="7a8fc19b4c144c13762a5b9e94061d69c890318039ea4686d86f70ff32f083d9" Jul 2 08:54:57.125446 kubelet[1967]: E0702 08:54:57.125322 1967 kuberuntime_manager.go:1262] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Jul 2 08:54:57.125446 kubelet[1967]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Jul 2 08:54:57.125446 kubelet[1967]: rm /hostbin/cilium-mount Jul 2 08:54:57.125645 kubelet[1967]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ljzbn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-tvml7_kube-system(854d82cf-e776-4c6c-9089-c94bcf8f3f4f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Jul 2 08:54:57.125645 kubelet[1967]: E0702 08:54:57.125401 1967 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc 
create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-tvml7" podUID="854d82cf-e776-4c6c-9089-c94bcf8f3f4f" Jul 2 08:54:57.732570 env[1132]: time="2024-07-02T08:54:57.732487209Z" level=info msg="CreateContainer within sandbox \"171be2142750df79e12374be04a5c5634ba0f99c02be14db3d1b524b807f9822\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Jul 2 08:54:57.763507 env[1132]: time="2024-07-02T08:54:57.763426447Z" level=info msg="CreateContainer within sandbox \"171be2142750df79e12374be04a5c5634ba0f99c02be14db3d1b524b807f9822\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"657dfc19985d0146c35eede9dd0750e404eab195061285802ad2fe20ed19afad\"" Jul 2 08:54:57.764811 env[1132]: time="2024-07-02T08:54:57.764759324Z" level=info msg="StartContainer for \"657dfc19985d0146c35eede9dd0750e404eab195061285802ad2fe20ed19afad\"" Jul 2 08:54:57.792111 systemd[1]: Started cri-containerd-657dfc19985d0146c35eede9dd0750e404eab195061285802ad2fe20ed19afad.scope. Jul 2 08:54:57.806832 systemd[1]: cri-containerd-657dfc19985d0146c35eede9dd0750e404eab195061285802ad2fe20ed19afad.scope: Deactivated successfully. 
Jul 2 08:54:57.820425 env[1132]: time="2024-07-02T08:54:57.820381062Z" level=info msg="shim disconnected" id=657dfc19985d0146c35eede9dd0750e404eab195061285802ad2fe20ed19afad Jul 2 08:54:57.820695 env[1132]: time="2024-07-02T08:54:57.820663081Z" level=warning msg="cleaning up after shim disconnected" id=657dfc19985d0146c35eede9dd0750e404eab195061285802ad2fe20ed19afad namespace=k8s.io Jul 2 08:54:57.820782 env[1132]: time="2024-07-02T08:54:57.820766173Z" level=info msg="cleaning up dead shim" Jul 2 08:54:57.828889 env[1132]: time="2024-07-02T08:54:57.828843442Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:54:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3772 runtime=io.containerd.runc.v2\ntime=\"2024-07-02T08:54:57Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/657dfc19985d0146c35eede9dd0750e404eab195061285802ad2fe20ed19afad/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Jul 2 08:54:57.829304 env[1132]: time="2024-07-02T08:54:57.829249342Z" level=error msg="copy shim log" error="read /proc/self/fd/43: file already closed" Jul 2 08:54:57.829602 env[1132]: time="2024-07-02T08:54:57.829538664Z" level=error msg="Failed to pipe stdout of container \"657dfc19985d0146c35eede9dd0750e404eab195061285802ad2fe20ed19afad\"" error="reading from a closed fifo" Jul 2 08:54:57.830653 env[1132]: time="2024-07-02T08:54:57.830574515Z" level=error msg="Failed to pipe stderr of container \"657dfc19985d0146c35eede9dd0750e404eab195061285802ad2fe20ed19afad\"" error="reading from a closed fifo" Jul 2 08:54:57.832455 env[1132]: time="2024-07-02T08:54:57.832410734Z" level=error msg="StartContainer for \"657dfc19985d0146c35eede9dd0750e404eab195061285802ad2fe20ed19afad\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Jul 2 08:54:57.833456 kubelet[1967]: E0702 08:54:57.832671 1967 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="657dfc19985d0146c35eede9dd0750e404eab195061285802ad2fe20ed19afad" Jul 2 08:54:57.833456 kubelet[1967]: E0702 08:54:57.832809 1967 kuberuntime_manager.go:1262] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Jul 2 08:54:57.833456 kubelet[1967]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Jul 2 08:54:57.833456 kubelet[1967]: rm /hostbin/cilium-mount Jul 2 08:54:57.833456 kubelet[1967]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ljzbn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-tvml7_kube-system(854d82cf-e776-4c6c-9089-c94bcf8f3f4f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Jul 2 08:54:57.833456 kubelet[1967]: E0702 08:54:57.832859 1967 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-tvml7" podUID="854d82cf-e776-4c6c-9089-c94bcf8f3f4f" Jul 2 08:54:58.380666 sshd[3661]: Accepted publickey for core from 172.24.4.1 port 43282 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q Jul 2 08:54:58.385493 sshd[3661]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:54:58.394320 systemd[1]: Started session-24.scope. Jul 2 08:54:58.394764 systemd-logind[1127]: New session 24 of user core. 
Jul 2 08:54:58.726986 kubelet[1967]: I0702 08:54:58.726949 1967 scope.go:117] "RemoveContainer" containerID="7a8fc19b4c144c13762a5b9e94061d69c890318039ea4686d86f70ff32f083d9" Jul 2 08:54:58.727842 kubelet[1967]: I0702 08:54:58.727799 1967 scope.go:117] "RemoveContainer" containerID="7a8fc19b4c144c13762a5b9e94061d69c890318039ea4686d86f70ff32f083d9" Jul 2 08:54:58.729605 env[1132]: time="2024-07-02T08:54:58.729557103Z" level=info msg="RemoveContainer for \"7a8fc19b4c144c13762a5b9e94061d69c890318039ea4686d86f70ff32f083d9\"" Jul 2 08:54:58.730722 env[1132]: time="2024-07-02T08:54:58.730634140Z" level=info msg="RemoveContainer for \"7a8fc19b4c144c13762a5b9e94061d69c890318039ea4686d86f70ff32f083d9\"" Jul 2 08:54:58.730853 env[1132]: time="2024-07-02T08:54:58.730750928Z" level=error msg="RemoveContainer for \"7a8fc19b4c144c13762a5b9e94061d69c890318039ea4686d86f70ff32f083d9\" failed" error="failed to set removing state for container \"7a8fc19b4c144c13762a5b9e94061d69c890318039ea4686d86f70ff32f083d9\": container is already in removing state" Jul 2 08:54:58.731146 kubelet[1967]: E0702 08:54:58.731117 1967 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"7a8fc19b4c144c13762a5b9e94061d69c890318039ea4686d86f70ff32f083d9\": container is already in removing state" containerID="7a8fc19b4c144c13762a5b9e94061d69c890318039ea4686d86f70ff32f083d9" Jul 2 08:54:58.748100 kubelet[1967]: E0702 08:54:58.748045 1967 kuberuntime_container.go:858] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "7a8fc19b4c144c13762a5b9e94061d69c890318039ea4686d86f70ff32f083d9": container is already in removing state; Skipping pod "cilium-tvml7_kube-system(854d82cf-e776-4c6c-9089-c94bcf8f3f4f)" Jul 2 08:54:58.748924 kubelet[1967]: E0702 08:54:58.748897 1967 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-tvml7_kube-system(854d82cf-e776-4c6c-9089-c94bcf8f3f4f)\"" pod="kube-system/cilium-tvml7" podUID="854d82cf-e776-4c6c-9089-c94bcf8f3f4f" Jul 2 08:54:58.760474 env[1132]: time="2024-07-02T08:54:58.760406322Z" level=info msg="RemoveContainer for \"7a8fc19b4c144c13762a5b9e94061d69c890318039ea4686d86f70ff32f083d9\" returns successfully" Jul 2 08:54:59.330859 sshd[3661]: pam_unix(sshd:session): session closed for user core Jul 2 08:54:59.334356 systemd[1]: Started sshd@24-172.24.4.4:22-172.24.4.1:43292.service. Jul 2 08:54:59.334959 systemd[1]: sshd@23-172.24.4.4:22-172.24.4.1:43282.service: Deactivated successfully. Jul 2 08:54:59.337784 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 08:54:59.338946 systemd-logind[1127]: Session 24 logged out. Waiting for processes to exit. Jul 2 08:54:59.341185 systemd-logind[1127]: Removed session 24. Jul 2 08:54:59.736414 env[1132]: time="2024-07-02T08:54:59.736353311Z" level=info msg="StopPodSandbox for \"171be2142750df79e12374be04a5c5634ba0f99c02be14db3d1b524b807f9822\"" Jul 2 08:54:59.737414 env[1132]: time="2024-07-02T08:54:59.737339830Z" level=info msg="Container to stop \"657dfc19985d0146c35eede9dd0750e404eab195061285802ad2fe20ed19afad\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 08:54:59.742543 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-171be2142750df79e12374be04a5c5634ba0f99c02be14db3d1b524b807f9822-shm.mount: Deactivated successfully. Jul 2 08:54:59.757409 systemd[1]: cri-containerd-171be2142750df79e12374be04a5c5634ba0f99c02be14db3d1b524b807f9822.scope: Deactivated successfully. Jul 2 08:54:59.816109 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-171be2142750df79e12374be04a5c5634ba0f99c02be14db3d1b524b807f9822-rootfs.mount: Deactivated successfully. 
Jul 2 08:55:00.159735 env[1132]: time="2024-07-02T08:55:00.159472913Z" level=info msg="shim disconnected" id=171be2142750df79e12374be04a5c5634ba0f99c02be14db3d1b524b807f9822 Jul 2 08:55:00.159735 env[1132]: time="2024-07-02T08:55:00.159556560Z" level=warning msg="cleaning up after shim disconnected" id=171be2142750df79e12374be04a5c5634ba0f99c02be14db3d1b524b807f9822 namespace=k8s.io Jul 2 08:55:00.159735 env[1132]: time="2024-07-02T08:55:00.159570606Z" level=info msg="cleaning up dead shim" Jul 2 08:55:00.177034 env[1132]: time="2024-07-02T08:55:00.176937744Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:55:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3813 runtime=io.containerd.runc.v2\n" Jul 2 08:55:00.177697 env[1132]: time="2024-07-02T08:55:00.177644769Z" level=info msg="TearDown network for sandbox \"171be2142750df79e12374be04a5c5634ba0f99c02be14db3d1b524b807f9822\" successfully" Jul 2 08:55:00.177854 env[1132]: time="2024-07-02T08:55:00.177693340Z" level=info msg="StopPodSandbox for \"171be2142750df79e12374be04a5c5634ba0f99c02be14db3d1b524b807f9822\" returns successfully" Jul 2 08:55:00.223389 kubelet[1967]: W0702 08:55:00.223330 1967 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod854d82cf_e776_4c6c_9089_c94bcf8f3f4f.slice/cri-containerd-7a8fc19b4c144c13762a5b9e94061d69c890318039ea4686d86f70ff32f083d9.scope WatchSource:0}: container "7a8fc19b4c144c13762a5b9e94061d69c890318039ea4686d86f70ff32f083d9" in namespace "k8s.io": not found Jul 2 08:55:00.306378 kubelet[1967]: I0702 08:55:00.306320 1967 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-cilium-run\") pod \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\" (UID: \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\") " Jul 2 08:55:00.306517 kubelet[1967]: I0702 08:55:00.306396 1967 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-xtables-lock\") pod \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\" (UID: \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\") " Jul 2 08:55:00.306517 kubelet[1967]: I0702 08:55:00.306454 1967 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljzbn\" (UniqueName: \"kubernetes.io/projected/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-kube-api-access-ljzbn\") pod \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\" (UID: \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\") " Jul 2 08:55:00.306517 kubelet[1967]: I0702 08:55:00.306505 1967 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-etc-cni-netd\") pod \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\" (UID: \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\") " Jul 2 08:55:00.306765 kubelet[1967]: I0702 08:55:00.306554 1967 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-cilium-config-path\") pod \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\" (UID: \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\") " Jul 2 08:55:00.306765 kubelet[1967]: I0702 08:55:00.306592 1967 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-bpf-maps\") pod \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\" (UID: \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\") " Jul 2 08:55:00.306765 kubelet[1967]: I0702 08:55:00.306664 1967 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-host-proc-sys-net\") pod \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\" (UID: 
\"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\") " Jul 2 08:55:00.306765 kubelet[1967]: I0702 08:55:00.306711 1967 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-cni-path\") pod \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\" (UID: \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\") " Jul 2 08:55:00.306943 kubelet[1967]: I0702 08:55:00.306776 1967 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-clustermesh-secrets\") pod \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\" (UID: \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\") " Jul 2 08:55:00.306943 kubelet[1967]: I0702 08:55:00.306820 1967 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-host-proc-sys-kernel\") pod \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\" (UID: \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\") " Jul 2 08:55:00.306943 kubelet[1967]: I0702 08:55:00.306866 1967 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-hostproc\") pod \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\" (UID: \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\") " Jul 2 08:55:00.306943 kubelet[1967]: I0702 08:55:00.306907 1967 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-lib-modules\") pod \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\" (UID: \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\") " Jul 2 08:55:00.307156 kubelet[1967]: I0702 08:55:00.306946 1967 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-cilium-cgroup\") pod \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\" (UID: \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\") " Jul 2 08:55:00.307156 kubelet[1967]: I0702 08:55:00.306991 1967 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-cilium-ipsec-secrets\") pod \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\" (UID: \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\") " Jul 2 08:55:00.307156 kubelet[1967]: I0702 08:55:00.307033 1967 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-hubble-tls\") pod \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\" (UID: \"854d82cf-e776-4c6c-9089-c94bcf8f3f4f\") " Jul 2 08:55:00.309630 kubelet[1967]: I0702 08:55:00.307646 1967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "854d82cf-e776-4c6c-9089-c94bcf8f3f4f" (UID: "854d82cf-e776-4c6c-9089-c94bcf8f3f4f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:55:00.309630 kubelet[1967]: I0702 08:55:00.307733 1967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "854d82cf-e776-4c6c-9089-c94bcf8f3f4f" (UID: "854d82cf-e776-4c6c-9089-c94bcf8f3f4f"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:55:00.309630 kubelet[1967]: I0702 08:55:00.307789 1967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "854d82cf-e776-4c6c-9089-c94bcf8f3f4f" (UID: "854d82cf-e776-4c6c-9089-c94bcf8f3f4f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:55:00.309998 kubelet[1967]: I0702 08:55:00.309968 1967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "854d82cf-e776-4c6c-9089-c94bcf8f3f4f" (UID: "854d82cf-e776-4c6c-9089-c94bcf8f3f4f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:55:00.314157 kubelet[1967]: I0702 08:55:00.314120 1967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-cni-path" (OuterVolumeSpecName: "cni-path") pod "854d82cf-e776-4c6c-9089-c94bcf8f3f4f" (UID: "854d82cf-e776-4c6c-9089-c94bcf8f3f4f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:55:00.314910 systemd[1]: var-lib-kubelet-pods-854d82cf\x2de776\x2d4c6c\x2d9089\x2dc94bcf8f3f4f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dljzbn.mount: Deactivated successfully. Jul 2 08:55:00.316487 kubelet[1967]: I0702 08:55:00.316453 1967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "854d82cf-e776-4c6c-9089-c94bcf8f3f4f" (UID: "854d82cf-e776-4c6c-9089-c94bcf8f3f4f"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 08:55:00.316667 kubelet[1967]: I0702 08:55:00.316648 1967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "854d82cf-e776-4c6c-9089-c94bcf8f3f4f" (UID: "854d82cf-e776-4c6c-9089-c94bcf8f3f4f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:55:00.316794 kubelet[1967]: I0702 08:55:00.316775 1967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "854d82cf-e776-4c6c-9089-c94bcf8f3f4f" (UID: "854d82cf-e776-4c6c-9089-c94bcf8f3f4f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:55:00.316918 kubelet[1967]: I0702 08:55:00.316900 1967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "854d82cf-e776-4c6c-9089-c94bcf8f3f4f" (UID: "854d82cf-e776-4c6c-9089-c94bcf8f3f4f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:55:00.317037 kubelet[1967]: I0702 08:55:00.317020 1967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-hostproc" (OuterVolumeSpecName: "hostproc") pod "854d82cf-e776-4c6c-9089-c94bcf8f3f4f" (UID: "854d82cf-e776-4c6c-9089-c94bcf8f3f4f"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:55:00.317164 kubelet[1967]: I0702 08:55:00.317142 1967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "854d82cf-e776-4c6c-9089-c94bcf8f3f4f" (UID: "854d82cf-e776-4c6c-9089-c94bcf8f3f4f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:55:00.325241 kubelet[1967]: I0702 08:55:00.325171 1967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "854d82cf-e776-4c6c-9089-c94bcf8f3f4f" (UID: "854d82cf-e776-4c6c-9089-c94bcf8f3f4f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 08:55:00.325449 systemd[1]: var-lib-kubelet-pods-854d82cf\x2de776\x2d4c6c\x2d9089\x2dc94bcf8f3f4f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 08:55:00.332608 systemd[1]: var-lib-kubelet-pods-854d82cf\x2de776\x2d4c6c\x2d9089\x2dc94bcf8f3f4f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 08:55:00.332733 systemd[1]: var-lib-kubelet-pods-854d82cf\x2de776\x2d4c6c\x2d9089\x2dc94bcf8f3f4f-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 2 08:55:00.335011 kubelet[1967]: I0702 08:55:00.334941 1967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "854d82cf-e776-4c6c-9089-c94bcf8f3f4f" (UID: "854d82cf-e776-4c6c-9089-c94bcf8f3f4f"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 08:55:00.335132 kubelet[1967]: I0702 08:55:00.335099 1967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "854d82cf-e776-4c6c-9089-c94bcf8f3f4f" (UID: "854d82cf-e776-4c6c-9089-c94bcf8f3f4f"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 08:55:00.335236 kubelet[1967]: I0702 08:55:00.335162 1967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-kube-api-access-ljzbn" (OuterVolumeSpecName: "kube-api-access-ljzbn") pod "854d82cf-e776-4c6c-9089-c94bcf8f3f4f" (UID: "854d82cf-e776-4c6c-9089-c94bcf8f3f4f"). InnerVolumeSpecName "kube-api-access-ljzbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 08:55:00.407838 kubelet[1967]: I0702 08:55:00.407777 1967 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-cilium-config-path\") on node \"ci-3510-3-5-3-17f1331597.novalocal\" DevicePath \"\"" Jul 2 08:55:00.408032 kubelet[1967]: I0702 08:55:00.408020 1967 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-bpf-maps\") on node \"ci-3510-3-5-3-17f1331597.novalocal\" DevicePath \"\"" Jul 2 08:55:00.408116 kubelet[1967]: I0702 08:55:00.408106 1967 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-host-proc-sys-net\") on node \"ci-3510-3-5-3-17f1331597.novalocal\" DevicePath \"\"" Jul 2 08:55:00.408196 kubelet[1967]: I0702 08:55:00.408186 1967 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" 
(UniqueName: \"kubernetes.io/secret/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-clustermesh-secrets\") on node \"ci-3510-3-5-3-17f1331597.novalocal\" DevicePath \"\"" Jul 2 08:55:00.408283 kubelet[1967]: I0702 08:55:00.408262 1967 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-cni-path\") on node \"ci-3510-3-5-3-17f1331597.novalocal\" DevicePath \"\"" Jul 2 08:55:00.408357 kubelet[1967]: I0702 08:55:00.408347 1967 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-host-proc-sys-kernel\") on node \"ci-3510-3-5-3-17f1331597.novalocal\" DevicePath \"\"" Jul 2 08:55:00.408429 kubelet[1967]: I0702 08:55:00.408419 1967 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-cilium-cgroup\") on node \"ci-3510-3-5-3-17f1331597.novalocal\" DevicePath \"\"" Jul 2 08:55:00.408505 kubelet[1967]: I0702 08:55:00.408495 1967 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-cilium-ipsec-secrets\") on node \"ci-3510-3-5-3-17f1331597.novalocal\" DevicePath \"\"" Jul 2 08:55:00.408577 kubelet[1967]: I0702 08:55:00.408567 1967 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-hubble-tls\") on node \"ci-3510-3-5-3-17f1331597.novalocal\" DevicePath \"\"" Jul 2 08:55:00.408694 kubelet[1967]: I0702 08:55:00.408682 1967 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-hostproc\") on node \"ci-3510-3-5-3-17f1331597.novalocal\" DevicePath \"\"" Jul 2 08:55:00.408773 kubelet[1967]: I0702 08:55:00.408763 1967 
reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-lib-modules\") on node \"ci-3510-3-5-3-17f1331597.novalocal\" DevicePath \"\"" Jul 2 08:55:00.408849 kubelet[1967]: I0702 08:55:00.408837 1967 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-xtables-lock\") on node \"ci-3510-3-5-3-17f1331597.novalocal\" DevicePath \"\"" Jul 2 08:55:00.408926 kubelet[1967]: I0702 08:55:00.408916 1967 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-cilium-run\") on node \"ci-3510-3-5-3-17f1331597.novalocal\" DevicePath \"\"" Jul 2 08:55:00.408998 kubelet[1967]: I0702 08:55:00.408989 1967 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-etc-cni-netd\") on node \"ci-3510-3-5-3-17f1331597.novalocal\" DevicePath \"\"" Jul 2 08:55:00.409074 kubelet[1967]: I0702 08:55:00.409062 1967 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ljzbn\" (UniqueName: \"kubernetes.io/projected/854d82cf-e776-4c6c-9089-c94bcf8f3f4f-kube-api-access-ljzbn\") on node \"ci-3510-3-5-3-17f1331597.novalocal\" DevicePath \"\"" Jul 2 08:55:00.742206 kubelet[1967]: I0702 08:55:00.742162 1967 scope.go:117] "RemoveContainer" containerID="657dfc19985d0146c35eede9dd0750e404eab195061285802ad2fe20ed19afad" Jul 2 08:55:00.743536 env[1132]: time="2024-07-02T08:55:00.743476693Z" level=info msg="RemoveContainer for \"657dfc19985d0146c35eede9dd0750e404eab195061285802ad2fe20ed19afad\"" Jul 2 08:55:00.749709 env[1132]: time="2024-07-02T08:55:00.747279346Z" level=info msg="RemoveContainer for \"657dfc19985d0146c35eede9dd0750e404eab195061285802ad2fe20ed19afad\" returns successfully" Jul 2 08:55:00.761721 systemd[1]: Removed 
slice kubepods-burstable-pod854d82cf_e776_4c6c_9089_c94bcf8f3f4f.slice. Jul 2 08:55:00.830956 sshd[3793]: Accepted publickey for core from 172.24.4.1 port 43292 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q Jul 2 08:55:00.832300 sshd[3793]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:55:00.838071 systemd-logind[1127]: New session 25 of user core. Jul 2 08:55:00.838461 systemd[1]: Started session-25.scope. Jul 2 08:55:00.878767 kubelet[1967]: I0702 08:55:00.878735 1967 topology_manager.go:215] "Topology Admit Handler" podUID="f7df974d-7799-47bb-9c7a-215b8e01d8fd" podNamespace="kube-system" podName="cilium-rv96c" Jul 2 08:55:00.879009 kubelet[1967]: E0702 08:55:00.878995 1967 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="854d82cf-e776-4c6c-9089-c94bcf8f3f4f" containerName="mount-cgroup" Jul 2 08:55:00.879133 kubelet[1967]: I0702 08:55:00.879122 1967 memory_manager.go:354] "RemoveStaleState removing state" podUID="854d82cf-e776-4c6c-9089-c94bcf8f3f4f" containerName="mount-cgroup" Jul 2 08:55:00.879229 kubelet[1967]: I0702 08:55:00.879218 1967 memory_manager.go:354] "RemoveStaleState removing state" podUID="854d82cf-e776-4c6c-9089-c94bcf8f3f4f" containerName="mount-cgroup" Jul 2 08:55:00.879339 kubelet[1967]: E0702 08:55:00.879328 1967 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="854d82cf-e776-4c6c-9089-c94bcf8f3f4f" containerName="mount-cgroup" Jul 2 08:55:00.884940 systemd[1]: Created slice kubepods-burstable-podf7df974d_7799_47bb_9c7a_215b8e01d8fd.slice. 
Jul 2 08:55:01.012982 kubelet[1967]: I0702 08:55:01.012834 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f7df974d-7799-47bb-9c7a-215b8e01d8fd-host-proc-sys-net\") pod \"cilium-rv96c\" (UID: \"f7df974d-7799-47bb-9c7a-215b8e01d8fd\") " pod="kube-system/cilium-rv96c" Jul 2 08:55:01.012982 kubelet[1967]: I0702 08:55:01.012896 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsb9l\" (UniqueName: \"kubernetes.io/projected/f7df974d-7799-47bb-9c7a-215b8e01d8fd-kube-api-access-lsb9l\") pod \"cilium-rv96c\" (UID: \"f7df974d-7799-47bb-9c7a-215b8e01d8fd\") " pod="kube-system/cilium-rv96c" Jul 2 08:55:01.012982 kubelet[1967]: I0702 08:55:01.012925 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f7df974d-7799-47bb-9c7a-215b8e01d8fd-bpf-maps\") pod \"cilium-rv96c\" (UID: \"f7df974d-7799-47bb-9c7a-215b8e01d8fd\") " pod="kube-system/cilium-rv96c" Jul 2 08:55:01.012982 kubelet[1967]: I0702 08:55:01.012950 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f7df974d-7799-47bb-9c7a-215b8e01d8fd-clustermesh-secrets\") pod \"cilium-rv96c\" (UID: \"f7df974d-7799-47bb-9c7a-215b8e01d8fd\") " pod="kube-system/cilium-rv96c" Jul 2 08:55:01.012982 kubelet[1967]: I0702 08:55:01.012976 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f7df974d-7799-47bb-9c7a-215b8e01d8fd-hubble-tls\") pod \"cilium-rv96c\" (UID: \"f7df974d-7799-47bb-9c7a-215b8e01d8fd\") " pod="kube-system/cilium-rv96c" Jul 2 08:55:01.013455 kubelet[1967]: I0702 08:55:01.013000 1967 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7df974d-7799-47bb-9c7a-215b8e01d8fd-xtables-lock\") pod \"cilium-rv96c\" (UID: \"f7df974d-7799-47bb-9c7a-215b8e01d8fd\") " pod="kube-system/cilium-rv96c" Jul 2 08:55:01.013455 kubelet[1967]: I0702 08:55:01.013023 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7df974d-7799-47bb-9c7a-215b8e01d8fd-lib-modules\") pod \"cilium-rv96c\" (UID: \"f7df974d-7799-47bb-9c7a-215b8e01d8fd\") " pod="kube-system/cilium-rv96c" Jul 2 08:55:01.013455 kubelet[1967]: I0702 08:55:01.013046 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f7df974d-7799-47bb-9c7a-215b8e01d8fd-host-proc-sys-kernel\") pod \"cilium-rv96c\" (UID: \"f7df974d-7799-47bb-9c7a-215b8e01d8fd\") " pod="kube-system/cilium-rv96c" Jul 2 08:55:01.013455 kubelet[1967]: I0702 08:55:01.013069 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f7df974d-7799-47bb-9c7a-215b8e01d8fd-cilium-run\") pod \"cilium-rv96c\" (UID: \"f7df974d-7799-47bb-9c7a-215b8e01d8fd\") " pod="kube-system/cilium-rv96c" Jul 2 08:55:01.013455 kubelet[1967]: I0702 08:55:01.013091 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f7df974d-7799-47bb-9c7a-215b8e01d8fd-hostproc\") pod \"cilium-rv96c\" (UID: \"f7df974d-7799-47bb-9c7a-215b8e01d8fd\") " pod="kube-system/cilium-rv96c" Jul 2 08:55:01.013455 kubelet[1967]: I0702 08:55:01.013113 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/f7df974d-7799-47bb-9c7a-215b8e01d8fd-cilium-cgroup\") pod \"cilium-rv96c\" (UID: \"f7df974d-7799-47bb-9c7a-215b8e01d8fd\") " pod="kube-system/cilium-rv96c" Jul 2 08:55:01.013455 kubelet[1967]: I0702 08:55:01.013137 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f7df974d-7799-47bb-9c7a-215b8e01d8fd-etc-cni-netd\") pod \"cilium-rv96c\" (UID: \"f7df974d-7799-47bb-9c7a-215b8e01d8fd\") " pod="kube-system/cilium-rv96c" Jul 2 08:55:01.013455 kubelet[1967]: I0702 08:55:01.013160 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f7df974d-7799-47bb-9c7a-215b8e01d8fd-cni-path\") pod \"cilium-rv96c\" (UID: \"f7df974d-7799-47bb-9c7a-215b8e01d8fd\") " pod="kube-system/cilium-rv96c" Jul 2 08:55:01.013455 kubelet[1967]: I0702 08:55:01.013188 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f7df974d-7799-47bb-9c7a-215b8e01d8fd-cilium-config-path\") pod \"cilium-rv96c\" (UID: \"f7df974d-7799-47bb-9c7a-215b8e01d8fd\") " pod="kube-system/cilium-rv96c" Jul 2 08:55:01.013455 kubelet[1967]: I0702 08:55:01.013214 1967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f7df974d-7799-47bb-9c7a-215b8e01d8fd-cilium-ipsec-secrets\") pod \"cilium-rv96c\" (UID: \"f7df974d-7799-47bb-9c7a-215b8e01d8fd\") " pod="kube-system/cilium-rv96c" Jul 2 08:55:01.188311 env[1132]: time="2024-07-02T08:55:01.188247421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rv96c,Uid:f7df974d-7799-47bb-9c7a-215b8e01d8fd,Namespace:kube-system,Attempt:0,}" Jul 2 08:55:01.215298 env[1132]: time="2024-07-02T08:55:01.215171310Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:55:01.215521 env[1132]: time="2024-07-02T08:55:01.215298247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:55:01.215521 env[1132]: time="2024-07-02T08:55:01.215331019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:55:01.216751 env[1132]: time="2024-07-02T08:55:01.215807912Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c3ce82dde722aa2ba28de7744f09dd5bee3c34f5b7cdaa6210a765fa40ecef7 pid=3846 runtime=io.containerd.runc.v2 Jul 2 08:55:01.238398 systemd[1]: Started cri-containerd-9c3ce82dde722aa2ba28de7744f09dd5bee3c34f5b7cdaa6210a765fa40ecef7.scope. Jul 2 08:55:01.290687 env[1132]: time="2024-07-02T08:55:01.290553201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rv96c,Uid:f7df974d-7799-47bb-9c7a-215b8e01d8fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c3ce82dde722aa2ba28de7744f09dd5bee3c34f5b7cdaa6210a765fa40ecef7\"" Jul 2 08:55:01.293974 env[1132]: time="2024-07-02T08:55:01.293916791Z" level=info msg="CreateContainer within sandbox \"9c3ce82dde722aa2ba28de7744f09dd5bee3c34f5b7cdaa6210a765fa40ecef7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 08:55:01.331405 env[1132]: time="2024-07-02T08:55:01.331318224Z" level=info msg="CreateContainer within sandbox \"9c3ce82dde722aa2ba28de7744f09dd5bee3c34f5b7cdaa6210a765fa40ecef7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e8877c8c76aa8887d6b08c4a9caa2db98f9029a345c9e11bfa7f0b17956e52e6\"" Jul 2 08:55:01.332321 env[1132]: time="2024-07-02T08:55:01.332257723Z" level=info msg="StartContainer for \"e8877c8c76aa8887d6b08c4a9caa2db98f9029a345c9e11bfa7f0b17956e52e6\"" Jul 2 
08:55:01.351077 systemd[1]: Started cri-containerd-e8877c8c76aa8887d6b08c4a9caa2db98f9029a345c9e11bfa7f0b17956e52e6.scope.
Jul 2 08:55:01.397000 env[1132]: time="2024-07-02T08:55:01.396946857Z" level=info msg="StartContainer for \"e8877c8c76aa8887d6b08c4a9caa2db98f9029a345c9e11bfa7f0b17956e52e6\" returns successfully"
Jul 2 08:55:01.410749 systemd[1]: cri-containerd-e8877c8c76aa8887d6b08c4a9caa2db98f9029a345c9e11bfa7f0b17956e52e6.scope: Deactivated successfully.
Jul 2 08:55:01.447341 env[1132]: time="2024-07-02T08:55:01.447292145Z" level=info msg="shim disconnected" id=e8877c8c76aa8887d6b08c4a9caa2db98f9029a345c9e11bfa7f0b17956e52e6
Jul 2 08:55:01.447656 env[1132]: time="2024-07-02T08:55:01.447633584Z" level=warning msg="cleaning up after shim disconnected" id=e8877c8c76aa8887d6b08c4a9caa2db98f9029a345c9e11bfa7f0b17956e52e6 namespace=k8s.io
Jul 2 08:55:01.447749 env[1132]: time="2024-07-02T08:55:01.447731968Z" level=info msg="cleaning up dead shim"
Jul 2 08:55:01.456809 env[1132]: time="2024-07-02T08:55:01.456765748Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:55:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3930 runtime=io.containerd.runc.v2\n"
Jul 2 08:55:01.758291 env[1132]: time="2024-07-02T08:55:01.758178117Z" level=info msg="CreateContainer within sandbox \"9c3ce82dde722aa2ba28de7744f09dd5bee3c34f5b7cdaa6210a765fa40ecef7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 2 08:55:01.784279 env[1132]: time="2024-07-02T08:55:01.784207440Z" level=info msg="CreateContainer within sandbox \"9c3ce82dde722aa2ba28de7744f09dd5bee3c34f5b7cdaa6210a765fa40ecef7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8d5cdc79e1a7b1b8ceaf9c79e1884a7e85d325af4d2f097572b13efe4ff8ad4b\""
Jul 2 08:55:01.784959 env[1132]: time="2024-07-02T08:55:01.784918371Z" level=info msg="StartContainer for \"8d5cdc79e1a7b1b8ceaf9c79e1884a7e85d325af4d2f097572b13efe4ff8ad4b\""
Jul 2 08:55:01.821174 systemd[1]: Started cri-containerd-8d5cdc79e1a7b1b8ceaf9c79e1884a7e85d325af4d2f097572b13efe4ff8ad4b.scope.
Jul 2 08:55:01.855758 env[1132]: time="2024-07-02T08:55:01.855696380Z" level=info msg="StartContainer for \"8d5cdc79e1a7b1b8ceaf9c79e1884a7e85d325af4d2f097572b13efe4ff8ad4b\" returns successfully"
Jul 2 08:55:01.859919 systemd[1]: cri-containerd-8d5cdc79e1a7b1b8ceaf9c79e1884a7e85d325af4d2f097572b13efe4ff8ad4b.scope: Deactivated successfully.
Jul 2 08:55:01.892165 env[1132]: time="2024-07-02T08:55:01.892115443Z" level=info msg="shim disconnected" id=8d5cdc79e1a7b1b8ceaf9c79e1884a7e85d325af4d2f097572b13efe4ff8ad4b
Jul 2 08:55:01.892428 env[1132]: time="2024-07-02T08:55:01.892406648Z" level=warning msg="cleaning up after shim disconnected" id=8d5cdc79e1a7b1b8ceaf9c79e1884a7e85d325af4d2f097572b13efe4ff8ad4b namespace=k8s.io
Jul 2 08:55:01.892506 env[1132]: time="2024-07-02T08:55:01.892490295Z" level=info msg="cleaning up dead shim"
Jul 2 08:55:01.902059 env[1132]: time="2024-07-02T08:55:01.902012279Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:55:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3993 runtime=io.containerd.runc.v2\n"
Jul 2 08:55:01.929001 kubelet[1967]: I0702 08:55:01.928687 1967 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="854d82cf-e776-4c6c-9089-c94bcf8f3f4f" path="/var/lib/kubelet/pods/854d82cf-e776-4c6c-9089-c94bcf8f3f4f/volumes"
Jul 2 08:55:02.089369 kubelet[1967]: E0702 08:55:02.088932 1967 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 2 08:55:02.767537 env[1132]: time="2024-07-02T08:55:02.767164134Z" level=info msg="CreateContainer within sandbox \"9c3ce82dde722aa2ba28de7744f09dd5bee3c34f5b7cdaa6210a765fa40ecef7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 2 08:55:02.815955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2558199025.mount: Deactivated successfully.
Jul 2 08:55:02.840921 env[1132]: time="2024-07-02T08:55:02.840848757Z" level=info msg="CreateContainer within sandbox \"9c3ce82dde722aa2ba28de7744f09dd5bee3c34f5b7cdaa6210a765fa40ecef7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2899c19c5b01f6e6443e3a7706303bc722c22f60f0cbdd0e392210afcc46f60e\""
Jul 2 08:55:02.841743 env[1132]: time="2024-07-02T08:55:02.841696285Z" level=info msg="StartContainer for \"2899c19c5b01f6e6443e3a7706303bc722c22f60f0cbdd0e392210afcc46f60e\""
Jul 2 08:55:02.872197 systemd[1]: Started cri-containerd-2899c19c5b01f6e6443e3a7706303bc722c22f60f0cbdd0e392210afcc46f60e.scope.
Jul 2 08:55:02.927710 env[1132]: time="2024-07-02T08:55:02.927455123Z" level=info msg="StartContainer for \"2899c19c5b01f6e6443e3a7706303bc722c22f60f0cbdd0e392210afcc46f60e\" returns successfully"
Jul 2 08:55:02.937893 systemd[1]: cri-containerd-2899c19c5b01f6e6443e3a7706303bc722c22f60f0cbdd0e392210afcc46f60e.scope: Deactivated successfully.
Jul 2 08:55:02.964645 env[1132]: time="2024-07-02T08:55:02.964578174Z" level=info msg="shim disconnected" id=2899c19c5b01f6e6443e3a7706303bc722c22f60f0cbdd0e392210afcc46f60e
Jul 2 08:55:02.964833 env[1132]: time="2024-07-02T08:55:02.964684424Z" level=warning msg="cleaning up after shim disconnected" id=2899c19c5b01f6e6443e3a7706303bc722c22f60f0cbdd0e392210afcc46f60e namespace=k8s.io
Jul 2 08:55:02.964833 env[1132]: time="2024-07-02T08:55:02.964703529Z" level=info msg="cleaning up dead shim"
Jul 2 08:55:02.972724 env[1132]: time="2024-07-02T08:55:02.972606291Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:55:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4053 runtime=io.containerd.runc.v2\n"
Jul 2 08:55:03.127796 systemd[1]: run-containerd-runc-k8s.io-2899c19c5b01f6e6443e3a7706303bc722c22f60f0cbdd0e392210afcc46f60e-runc.gC3EBO.mount: Deactivated successfully.
Jul 2 08:55:03.128018 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2899c19c5b01f6e6443e3a7706303bc722c22f60f0cbdd0e392210afcc46f60e-rootfs.mount: Deactivated successfully.
Jul 2 08:55:03.341419 kubelet[1967]: W0702 08:55:03.341369 1967 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod854d82cf_e776_4c6c_9089_c94bcf8f3f4f.slice/cri-containerd-657dfc19985d0146c35eede9dd0750e404eab195061285802ad2fe20ed19afad.scope WatchSource:0}: container "657dfc19985d0146c35eede9dd0750e404eab195061285802ad2fe20ed19afad" in namespace "k8s.io": not found
Jul 2 08:55:03.768186 env[1132]: time="2024-07-02T08:55:03.768136550Z" level=info msg="CreateContainer within sandbox \"9c3ce82dde722aa2ba28de7744f09dd5bee3c34f5b7cdaa6210a765fa40ecef7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 2 08:55:03.802126 env[1132]: time="2024-07-02T08:55:03.802041905Z" level=info msg="CreateContainer within sandbox \"9c3ce82dde722aa2ba28de7744f09dd5bee3c34f5b7cdaa6210a765fa40ecef7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ab81bb2b2ba0ff4e237f53f98b1b51698aa827d61158990bfc3df042c313ea27\""
Jul 2 08:55:03.803278 env[1132]: time="2024-07-02T08:55:03.803226504Z" level=info msg="StartContainer for \"ab81bb2b2ba0ff4e237f53f98b1b51698aa827d61158990bfc3df042c313ea27\""
Jul 2 08:55:03.854673 systemd[1]: Started cri-containerd-ab81bb2b2ba0ff4e237f53f98b1b51698aa827d61158990bfc3df042c313ea27.scope.
Jul 2 08:55:03.887887 systemd[1]: cri-containerd-ab81bb2b2ba0ff4e237f53f98b1b51698aa827d61158990bfc3df042c313ea27.scope: Deactivated successfully.
Jul 2 08:55:03.894571 env[1132]: time="2024-07-02T08:55:03.894526782Z" level=info msg="StartContainer for \"ab81bb2b2ba0ff4e237f53f98b1b51698aa827d61158990bfc3df042c313ea27\" returns successfully"
Jul 2 08:55:03.924940 env[1132]: time="2024-07-02T08:55:03.924880125Z" level=info msg="shim disconnected" id=ab81bb2b2ba0ff4e237f53f98b1b51698aa827d61158990bfc3df042c313ea27
Jul 2 08:55:03.925219 env[1132]: time="2024-07-02T08:55:03.925001953Z" level=warning msg="cleaning up after shim disconnected" id=ab81bb2b2ba0ff4e237f53f98b1b51698aa827d61158990bfc3df042c313ea27 namespace=k8s.io
Jul 2 08:55:03.925219 env[1132]: time="2024-07-02T08:55:03.925017412Z" level=info msg="cleaning up dead shim"
Jul 2 08:55:03.934089 env[1132]: time="2024-07-02T08:55:03.934035653Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:55:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4107 runtime=io.containerd.runc.v2\n"
Jul 2 08:55:04.127599 systemd[1]: run-containerd-runc-k8s.io-ab81bb2b2ba0ff4e237f53f98b1b51698aa827d61158990bfc3df042c313ea27-runc.H5RxHP.mount: Deactivated successfully.
Jul 2 08:55:04.127840 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab81bb2b2ba0ff4e237f53f98b1b51698aa827d61158990bfc3df042c313ea27-rootfs.mount: Deactivated successfully.
Jul 2 08:55:04.781666 env[1132]: time="2024-07-02T08:55:04.781529918Z" level=info msg="CreateContainer within sandbox \"9c3ce82dde722aa2ba28de7744f09dd5bee3c34f5b7cdaa6210a765fa40ecef7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 08:55:04.822986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3468838786.mount: Deactivated successfully.
Jul 2 08:55:04.850272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3956245774.mount: Deactivated successfully.
Jul 2 08:55:04.851816 env[1132]: time="2024-07-02T08:55:04.851774279Z" level=info msg="CreateContainer within sandbox \"9c3ce82dde722aa2ba28de7744f09dd5bee3c34f5b7cdaa6210a765fa40ecef7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c2f4f4487ca59e48b12174962d2a1280724a9f3d8dde6ee00ea1791c0a9e3e8f\""
Jul 2 08:55:04.859881 env[1132]: time="2024-07-02T08:55:04.859839395Z" level=info msg="StartContainer for \"c2f4f4487ca59e48b12174962d2a1280724a9f3d8dde6ee00ea1791c0a9e3e8f\""
Jul 2 08:55:04.884662 systemd[1]: Started cri-containerd-c2f4f4487ca59e48b12174962d2a1280724a9f3d8dde6ee00ea1791c0a9e3e8f.scope.
Jul 2 08:55:04.938376 env[1132]: time="2024-07-02T08:55:04.938323779Z" level=info msg="StartContainer for \"c2f4f4487ca59e48b12174962d2a1280724a9f3d8dde6ee00ea1791c0a9e3e8f\" returns successfully"
Jul 2 08:55:06.025657 kernel: cryptd: max_cpu_qlen set to 1000
Jul 2 08:55:06.069651 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
Jul 2 08:55:06.468819 kubelet[1967]: W0702 08:55:06.468692 1967 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7df974d_7799_47bb_9c7a_215b8e01d8fd.slice/cri-containerd-e8877c8c76aa8887d6b08c4a9caa2db98f9029a345c9e11bfa7f0b17956e52e6.scope WatchSource:0}: task e8877c8c76aa8887d6b08c4a9caa2db98f9029a345c9e11bfa7f0b17956e52e6 not found: not found
Jul 2 08:55:07.691550 systemd[1]: run-containerd-runc-k8s.io-c2f4f4487ca59e48b12174962d2a1280724a9f3d8dde6ee00ea1791c0a9e3e8f-runc.Q91bVN.mount: Deactivated successfully.
Jul 2 08:55:09.156135 systemd-networkd[972]: lxc_health: Link UP
Jul 2 08:55:09.201253 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Jul 2 08:55:09.200636 systemd-networkd[972]: lxc_health: Gained carrier
Jul 2 08:55:09.252343 kubelet[1967]: I0702 08:55:09.252280 1967 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-rv96c" podStartSLOduration=9.248977372 podStartE2EDuration="9.248977372s" podCreationTimestamp="2024-07-02 08:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:55:05.849893006 +0000 UTC m=+164.199009611" watchObservedRunningTime="2024-07-02 08:55:09.248977372 +0000 UTC m=+167.598093947"
Jul 2 08:55:09.576442 kubelet[1967]: W0702 08:55:09.576405 1967 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7df974d_7799_47bb_9c7a_215b8e01d8fd.slice/cri-containerd-8d5cdc79e1a7b1b8ceaf9c79e1884a7e85d325af4d2f097572b13efe4ff8ad4b.scope WatchSource:0}: task 8d5cdc79e1a7b1b8ceaf9c79e1884a7e85d325af4d2f097572b13efe4ff8ad4b not found: not found
Jul 2 08:55:10.780074 systemd-networkd[972]: lxc_health: Gained IPv6LL
Jul 2 08:55:12.182378 systemd[1]: run-containerd-runc-k8s.io-c2f4f4487ca59e48b12174962d2a1280724a9f3d8dde6ee00ea1791c0a9e3e8f-runc.rMDBoI.mount: Deactivated successfully.
Jul 2 08:55:12.687235 kubelet[1967]: W0702 08:55:12.687191 1967 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7df974d_7799_47bb_9c7a_215b8e01d8fd.slice/cri-containerd-2899c19c5b01f6e6443e3a7706303bc722c22f60f0cbdd0e392210afcc46f60e.scope WatchSource:0}: task 2899c19c5b01f6e6443e3a7706303bc722c22f60f0cbdd0e392210afcc46f60e not found: not found
Jul 2 08:55:14.677325 sshd[3793]: pam_unix(sshd:session): session closed for user core
Jul 2 08:55:14.683084 systemd[1]: sshd@24-172.24.4.4:22-172.24.4.1:43292.service: Deactivated successfully.
Jul 2 08:55:14.684826 systemd[1]: session-25.scope: Deactivated successfully.
Jul 2 08:55:14.686274 systemd-logind[1127]: Session 25 logged out. Waiting for processes to exit.
Jul 2 08:55:14.689246 systemd-logind[1127]: Removed session 25.
Jul 2 08:55:15.798101 kubelet[1967]: W0702 08:55:15.798006 1967 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7df974d_7799_47bb_9c7a_215b8e01d8fd.slice/cri-containerd-ab81bb2b2ba0ff4e237f53f98b1b51698aa827d61158990bfc3df042c313ea27.scope WatchSource:0}: task ab81bb2b2ba0ff4e237f53f98b1b51698aa827d61158990bfc3df042c313ea27 not found: not found