Jul 2 08:45:11.002280 kernel: Linux version 5.15.161-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Jul 1 23:45:21 -00 2024
Jul 2 08:45:11.002319 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82
Jul 2 08:45:11.002376 kernel: BIOS-provided physical RAM map:
Jul 2 08:45:11.002390 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 2 08:45:11.002402 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 2 08:45:11.002414 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 2 08:45:11.002429 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Jul 2 08:45:11.002442 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Jul 2 08:45:11.002457 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 2 08:45:11.002470 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 2 08:45:11.002482 kernel: NX (Execute Disable) protection: active
Jul 2 08:45:11.002494 kernel: SMBIOS 2.8 present.
Jul 2 08:45:11.002506 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jul 2 08:45:11.002519 kernel: Hypervisor detected: KVM
Jul 2 08:45:11.002534 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 2 08:45:11.002550 kernel: kvm-clock: cpu 0, msr 29192001, primary cpu clock
Jul 2 08:45:11.002563 kernel: kvm-clock: using sched offset of 7054657174 cycles
Jul 2 08:45:11.002577 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 2 08:45:11.002591 kernel: tsc: Detected 1996.249 MHz processor
Jul 2 08:45:11.002605 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 2 08:45:11.002620 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 2 08:45:11.002633 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Jul 2 08:45:11.002647 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 2 08:45:11.002664 kernel: ACPI: Early table checksum verification disabled
Jul 2 08:45:11.002677 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS )
Jul 2 08:45:11.002691 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:45:11.002705 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:45:11.002718 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:45:11.002732 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jul 2 08:45:11.002746 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:45:11.002759 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:45:11.002773 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f]
Jul 2 08:45:11.002790 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b]
Jul 2 08:45:11.002803 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jul 2 08:45:11.002817 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f]
Jul 2 08:45:11.002831 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847]
Jul 2 08:45:11.002844 kernel: No NUMA configuration found
Jul 2 08:45:11.002858 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff]
Jul 2 08:45:11.002871 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff]
Jul 2 08:45:11.002885 kernel: Zone ranges:
Jul 2 08:45:11.002907 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 2 08:45:11.002922 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff]
Jul 2 08:45:11.002936 kernel: Normal empty
Jul 2 08:45:11.002950 kernel: Movable zone start for each node
Jul 2 08:45:11.002964 kernel: Early memory node ranges
Jul 2 08:45:11.002978 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 2 08:45:11.002995 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Jul 2 08:45:11.003009 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff]
Jul 2 08:45:11.003023 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 2 08:45:11.003037 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 2 08:45:11.003052 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges
Jul 2 08:45:11.003066 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 2 08:45:11.003080 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 2 08:45:11.003094 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 2 08:45:11.003108 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 2 08:45:11.003126 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 2 08:45:11.003140 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 2 08:45:11.003154 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 2 08:45:11.003168 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 2 08:45:11.003182 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 2 08:45:11.003196 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jul 2 08:45:11.003211 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jul 2 08:45:11.003225 kernel: Booting paravirtualized kernel on KVM
Jul 2 08:45:11.003239 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 2 08:45:11.003254 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Jul 2 08:45:11.003271 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Jul 2 08:45:11.003285 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Jul 2 08:45:11.003299 kernel: pcpu-alloc: [0] 0 1
Jul 2 08:45:11.003313 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
Jul 2 08:45:11.007351 kernel: kvm-guest: PV spinlocks disabled, no host support
Jul 2 08:45:11.007366 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805
Jul 2 08:45:11.007374 kernel: Policy zone: DMA32
Jul 2 08:45:11.007383 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82
Jul 2 08:45:11.007395 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 08:45:11.007403 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 08:45:11.007411 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 2 08:45:11.007418 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 08:45:11.007427 kernel: Memory: 1973284K/2096620K available (12294K kernel code, 2276K rwdata, 13712K rodata, 47444K init, 4144K bss, 123076K reserved, 0K cma-reserved)
Jul 2 08:45:11.007435 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 2 08:45:11.007443 kernel: ftrace: allocating 34514 entries in 135 pages
Jul 2 08:45:11.007450 kernel: ftrace: allocated 135 pages with 4 groups
Jul 2 08:45:11.007460 kernel: rcu: Hierarchical RCU implementation.
Jul 2 08:45:11.007468 kernel: rcu: RCU event tracing is enabled.
Jul 2 08:45:11.007476 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 2 08:45:11.007484 kernel: Rude variant of Tasks RCU enabled.
Jul 2 08:45:11.007492 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 08:45:11.007499 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 08:45:11.007518 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 2 08:45:11.007526 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 2 08:45:11.007534 kernel: Console: colour VGA+ 80x25
Jul 2 08:45:11.007543 kernel: printk: console [tty0] enabled
Jul 2 08:45:11.007551 kernel: printk: console [ttyS0] enabled
Jul 2 08:45:11.007558 kernel: ACPI: Core revision 20210730
Jul 2 08:45:11.007566 kernel: APIC: Switch to symmetric I/O mode setup
Jul 2 08:45:11.007574 kernel: x2apic enabled
Jul 2 08:45:11.007581 kernel: Switched APIC routing to physical x2apic.
Jul 2 08:45:11.007589 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 2 08:45:11.007597 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 2 08:45:11.007604 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Jul 2 08:45:11.007614 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jul 2 08:45:11.007624 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jul 2 08:45:11.007632 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 2 08:45:11.007640 kernel: Spectre V2 : Mitigation: Retpolines
Jul 2 08:45:11.007649 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jul 2 08:45:11.007657 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jul 2 08:45:11.007665 kernel: Speculative Store Bypass: Vulnerable
Jul 2 08:45:11.007673 kernel: x86/fpu: x87 FPU will use FXSAVE
Jul 2 08:45:11.007681 kernel: Freeing SMP alternatives memory: 32K
Jul 2 08:45:11.007690 kernel: pid_max: default: 32768 minimum: 301
Jul 2 08:45:11.007700 kernel: LSM: Security Framework initializing
Jul 2 08:45:11.007708 kernel: SELinux: Initializing.
Jul 2 08:45:11.007716 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 2 08:45:11.007725 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 2 08:45:11.007733 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Jul 2 08:45:11.007742 kernel: Performance Events: AMD PMU driver.
Jul 2 08:45:11.007750 kernel: ... version: 0
Jul 2 08:45:11.007758 kernel: ... bit width: 48
Jul 2 08:45:11.007766 kernel: ... generic registers: 4
Jul 2 08:45:11.007782 kernel: ... value mask: 0000ffffffffffff
Jul 2 08:45:11.007791 kernel: ... max period: 00007fffffffffff
Jul 2 08:45:11.007801 kernel: ... fixed-purpose events: 0
Jul 2 08:45:11.007809 kernel: ... event mask: 000000000000000f
Jul 2 08:45:11.007818 kernel: signal: max sigframe size: 1440
Jul 2 08:45:11.007826 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 08:45:11.007835 kernel: smp: Bringing up secondary CPUs ...
Jul 2 08:45:11.007844 kernel: x86: Booting SMP configuration:
Jul 2 08:45:11.007854 kernel: .... node #0, CPUs: #1
Jul 2 08:45:11.007862 kernel: kvm-clock: cpu 1, msr 29192041, secondary cpu clock
Jul 2 08:45:11.007871 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
Jul 2 08:45:11.007879 kernel: smp: Brought up 1 node, 2 CPUs
Jul 2 08:45:11.007888 kernel: smpboot: Max logical packages: 2
Jul 2 08:45:11.007896 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Jul 2 08:45:11.007905 kernel: devtmpfs: initialized
Jul 2 08:45:11.007913 kernel: x86/mm: Memory block size: 128MB
Jul 2 08:45:11.007922 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 08:45:11.007933 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 2 08:45:11.007941 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 08:45:11.007950 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 08:45:11.007958 kernel: audit: initializing netlink subsys (disabled)
Jul 2 08:45:11.007967 kernel: audit: type=2000 audit(1719909910.019:1): state=initialized audit_enabled=0 res=1
Jul 2 08:45:11.007975 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 08:45:11.007984 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 2 08:45:11.007992 kernel: cpuidle: using governor menu
Jul 2 08:45:11.008001 kernel: ACPI: bus type PCI registered
Jul 2 08:45:11.008011 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 08:45:11.008020 kernel: dca service started, version 1.12.1
Jul 2 08:45:11.008028 kernel: PCI: Using configuration type 1 for base access
Jul 2 08:45:11.008037 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 2 08:45:11.008046 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 08:45:11.008054 kernel: ACPI: Added _OSI(Module Device)
Jul 2 08:45:11.008063 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 08:45:11.008071 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 08:45:11.008080 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 08:45:11.008090 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 2 08:45:11.008099 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 2 08:45:11.008107 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 2 08:45:11.008116 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 08:45:11.008124 kernel: ACPI: Interpreter enabled
Jul 2 08:45:11.008133 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 2 08:45:11.008142 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 2 08:45:11.008150 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 2 08:45:11.008159 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 2 08:45:11.008171 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 2 08:45:11.008314 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 08:45:11.008431 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Jul 2 08:45:11.008445 kernel: acpiphp: Slot [3] registered
Jul 2 08:45:11.008454 kernel: acpiphp: Slot [4] registered
Jul 2 08:45:11.008462 kernel: acpiphp: Slot [5] registered
Jul 2 08:45:11.008471 kernel: acpiphp: Slot [6] registered
Jul 2 08:45:11.008483 kernel: acpiphp: Slot [7] registered
Jul 2 08:45:11.008491 kernel: acpiphp: Slot [8] registered
Jul 2 08:45:11.008500 kernel: acpiphp: Slot [9] registered
Jul 2 08:45:11.008508 kernel: acpiphp: Slot [10] registered
Jul 2 08:45:11.008517 kernel: acpiphp: Slot [11] registered
Jul 2 08:45:11.008525 kernel: acpiphp: Slot [12] registered
Jul 2 08:45:11.008534 kernel: acpiphp: Slot [13] registered
Jul 2 08:45:11.008542 kernel: acpiphp: Slot [14] registered
Jul 2 08:45:11.008550 kernel: acpiphp: Slot [15] registered
Jul 2 08:45:11.008559 kernel: acpiphp: Slot [16] registered
Jul 2 08:45:11.008569 kernel: acpiphp: Slot [17] registered
Jul 2 08:45:11.008578 kernel: acpiphp: Slot [18] registered
Jul 2 08:45:11.008586 kernel: acpiphp: Slot [19] registered
Jul 2 08:45:11.008594 kernel: acpiphp: Slot [20] registered
Jul 2 08:45:11.008603 kernel: acpiphp: Slot [21] registered
Jul 2 08:45:11.008611 kernel: acpiphp: Slot [22] registered
Jul 2 08:45:11.008619 kernel: acpiphp: Slot [23] registered
Jul 2 08:45:11.008628 kernel: acpiphp: Slot [24] registered
Jul 2 08:45:11.008636 kernel: acpiphp: Slot [25] registered
Jul 2 08:45:11.008646 kernel: acpiphp: Slot [26] registered
Jul 2 08:45:11.008655 kernel: acpiphp: Slot [27] registered
Jul 2 08:45:11.008663 kernel: acpiphp: Slot [28] registered
Jul 2 08:45:11.008672 kernel: acpiphp: Slot [29] registered
Jul 2 08:45:11.008680 kernel: acpiphp: Slot [30] registered
Jul 2 08:45:11.008688 kernel: acpiphp: Slot [31] registered
Jul 2 08:45:11.008697 kernel: PCI host bridge to bus 0000:00
Jul 2 08:45:11.008798 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 2 08:45:11.008880 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 2 08:45:11.008964 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 2 08:45:11.009041 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jul 2 08:45:11.009127 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jul 2 08:45:11.009212 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 2 08:45:11.009315 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jul 2 08:45:11.009441 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jul 2 08:45:11.009543 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jul 2 08:45:11.009634 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Jul 2 08:45:11.009724 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jul 2 08:45:11.009811 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jul 2 08:45:11.009898 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jul 2 08:45:11.009986 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jul 2 08:45:11.010080 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jul 2 08:45:11.010175 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jul 2 08:45:11.010264 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jul 2 08:45:11.010375 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jul 2 08:45:11.010461 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jul 2 08:45:11.010543 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jul 2 08:45:11.010626 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Jul 2 08:45:11.010713 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Jul 2 08:45:11.010796 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 2 08:45:11.010889 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jul 2 08:45:11.010972 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Jul 2 08:45:11.011055 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Jul 2 08:45:11.011145 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jul 2 08:45:11.011227 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Jul 2 08:45:11.011320 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jul 2 08:45:11.015522 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jul 2 08:45:11.015613 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Jul 2 08:45:11.015696 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jul 2 08:45:11.015799 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Jul 2 08:45:11.015884 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Jul 2 08:45:11.015967 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jul 2 08:45:11.016071 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Jul 2 08:45:11.016159 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Jul 2 08:45:11.016247 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jul 2 08:45:11.016259 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 2 08:45:11.016268 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 2 08:45:11.016276 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 2 08:45:11.016284 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 2 08:45:11.016292 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 2 08:45:11.016303 kernel: iommu: Default domain type: Translated
Jul 2 08:45:11.016311 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 2 08:45:11.016414 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jul 2 08:45:11.016498 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 2 08:45:11.016580 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jul 2 08:45:11.016592 kernel: vgaarb: loaded
Jul 2 08:45:11.016600 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 2 08:45:11.016609 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 2 08:45:11.016617 kernel: PTP clock support registered
Jul 2 08:45:11.016628 kernel: PCI: Using ACPI for IRQ routing
Jul 2 08:45:11.016636 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 2 08:45:11.016645 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 2 08:45:11.016653 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Jul 2 08:45:11.016660 kernel: clocksource: Switched to clocksource kvm-clock
Jul 2 08:45:11.016668 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 08:45:11.016676 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 08:45:11.016684 kernel: pnp: PnP ACPI init
Jul 2 08:45:11.016770 kernel: pnp 00:03: [dma 2]
Jul 2 08:45:11.016793 kernel: pnp: PnP ACPI: found 5 devices
Jul 2 08:45:11.016801 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 2 08:45:11.016809 kernel: NET: Registered PF_INET protocol family
Jul 2 08:45:11.016818 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 08:45:11.016826 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jul 2 08:45:11.016834 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 08:45:11.016842 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 2 08:45:11.016851 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Jul 2 08:45:11.016861 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jul 2 08:45:11.016869 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 2 08:45:11.016877 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 2 08:45:11.016885 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 08:45:11.016893 kernel: NET: Registered PF_XDP protocol family
Jul 2 08:45:11.016970 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 2 08:45:11.017048 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 2 08:45:11.017121 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 2 08:45:11.017194 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jul 2 08:45:11.017270 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jul 2 08:45:11.017372 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jul 2 08:45:11.017459 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 2 08:45:11.017541 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Jul 2 08:45:11.017553 kernel: PCI: CLS 0 bytes, default 64
Jul 2 08:45:11.017561 kernel: Initialise system trusted keyrings
Jul 2 08:45:11.017569 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jul 2 08:45:11.017580 kernel: Key type asymmetric registered
Jul 2 08:45:11.017588 kernel: Asymmetric key parser 'x509' registered
Jul 2 08:45:11.017596 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 2 08:45:11.017604 kernel: io scheduler mq-deadline registered
Jul 2 08:45:11.017612 kernel: io scheduler kyber registered
Jul 2 08:45:11.017621 kernel: io scheduler bfq registered
Jul 2 08:45:11.017629 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 2 08:45:11.017637 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jul 2 08:45:11.017645 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jul 2 08:45:11.017654 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jul 2 08:45:11.017663 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jul 2 08:45:11.017671 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 08:45:11.017679 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 2 08:45:11.017688 kernel: random: crng init done
Jul 2 08:45:11.017696 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 2 08:45:11.017704 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 2 08:45:11.017713 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 2 08:45:11.017721 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 2 08:45:11.017801 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 2 08:45:11.017881 kernel: rtc_cmos 00:04: registered as rtc0
Jul 2 08:45:11.017956 kernel: rtc_cmos 00:04: setting system clock to 2024-07-02T08:45:10 UTC (1719909910)
Jul 2 08:45:11.018028 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jul 2 08:45:11.018040 kernel: NET: Registered PF_INET6 protocol family
Jul 2 08:45:11.018048 kernel: Segment Routing with IPv6
Jul 2 08:45:11.018056 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 08:45:11.018064 kernel: NET: Registered PF_PACKET protocol family
Jul 2 08:45:11.018072 kernel: Key type dns_resolver registered
Jul 2 08:45:11.018083 kernel: IPI shorthand broadcast: enabled
Jul 2 08:45:11.018091 kernel: sched_clock: Marking stable (700073212, 117337745)->(848002779, -30591822)
Jul 2 08:45:11.018099 kernel: registered taskstats version 1
Jul 2 08:45:11.018107 kernel: Loading compiled-in X.509 certificates
Jul 2 08:45:11.018115 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.161-flatcar: a1ce693884775675566f1ed116e36d15950b9a42'
Jul 2 08:45:11.018123 kernel: Key type .fscrypt registered
Jul 2 08:45:11.018131 kernel: Key type fscrypt-provisioning registered
Jul 2 08:45:11.018139 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 08:45:11.018150 kernel: ima: Allocated hash algorithm: sha1
Jul 2 08:45:11.018158 kernel: ima: No architecture policies found
Jul 2 08:45:11.018166 kernel: clk: Disabling unused clocks
Jul 2 08:45:11.018174 kernel: Freeing unused kernel image (initmem) memory: 47444K
Jul 2 08:45:11.018182 kernel: Write protecting the kernel read-only data: 28672k
Jul 2 08:45:11.018190 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Jul 2 08:45:11.018198 kernel: Freeing unused kernel image (rodata/data gap) memory: 624K
Jul 2 08:45:11.018206 kernel: Run /init as init process
Jul 2 08:45:11.018214 kernel: with arguments:
Jul 2 08:45:11.018223 kernel: /init
Jul 2 08:45:11.018231 kernel: with environment:
Jul 2 08:45:11.018239 kernel: HOME=/
Jul 2 08:45:11.018246 kernel: TERM=linux
Jul 2 08:45:11.018254 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 08:45:11.018265 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 2 08:45:11.018275 systemd[1]: Detected virtualization kvm.
Jul 2 08:45:11.018284 systemd[1]: Detected architecture x86-64.
Jul 2 08:45:11.018295 systemd[1]: Running in initrd.
Jul 2 08:45:11.018303 systemd[1]: No hostname configured, using default hostname.
Jul 2 08:45:11.018312 systemd[1]: Hostname set to .
Jul 2 08:45:11.018321 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 08:45:11.018435 systemd[1]: Queued start job for default target initrd.target.
Jul 2 08:45:11.018444 systemd[1]: Started systemd-ask-password-console.path.
Jul 2 08:45:11.018453 systemd[1]: Reached target cryptsetup.target.
Jul 2 08:45:11.018461 systemd[1]: Reached target paths.target.
Jul 2 08:45:11.018473 systemd[1]: Reached target slices.target.
Jul 2 08:45:11.018481 systemd[1]: Reached target swap.target.
Jul 2 08:45:11.018490 systemd[1]: Reached target timers.target.
Jul 2 08:45:11.018499 systemd[1]: Listening on iscsid.socket.
Jul 2 08:45:11.018507 systemd[1]: Listening on iscsiuio.socket.
Jul 2 08:45:11.018516 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 2 08:45:11.018524 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 2 08:45:11.018535 systemd[1]: Listening on systemd-journald.socket.
Jul 2 08:45:11.018543 systemd[1]: Listening on systemd-networkd.socket.
Jul 2 08:45:11.018552 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 2 08:45:11.018561 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 2 08:45:11.018569 systemd[1]: Reached target sockets.target.
Jul 2 08:45:11.018586 systemd[1]: Starting kmod-static-nodes.service...
Jul 2 08:45:11.018596 systemd[1]: Finished network-cleanup.service.
Jul 2 08:45:11.018607 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 08:45:11.018615 systemd[1]: Starting systemd-journald.service...
Jul 2 08:45:11.018624 systemd[1]: Starting systemd-modules-load.service...
Jul 2 08:45:11.018633 systemd[1]: Starting systemd-resolved.service...
Jul 2 08:45:11.018642 systemd[1]: Starting systemd-vconsole-setup.service...
Jul 2 08:45:11.018651 systemd[1]: Finished kmod-static-nodes.service.
Jul 2 08:45:11.018660 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 08:45:11.018672 systemd-journald[185]: Journal started
Jul 2 08:45:11.018720 systemd-journald[185]: Runtime Journal (/run/log/journal/940ed3d08871495a80eaeb6c0430add0) is 4.9M, max 39.5M, 34.5M free.
Jul 2 08:45:10.979638 systemd-modules-load[186]: Inserted module 'overlay'
Jul 2 08:45:11.038577 systemd[1]: Started systemd-journald.service.
Jul 2 08:45:11.038605 kernel: audit: type=1130 audit(1719909911.032:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:11.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:11.028979 systemd-resolved[187]: Positive Trust Anchors:
Jul 2 08:45:11.045105 kernel: audit: type=1130 audit(1719909911.038:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:11.045124 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 08:45:11.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:11.028988 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 08:45:11.050388 kernel: audit: type=1130 audit(1719909911.045:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:11.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:11.029026 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 2 08:45:11.058126 kernel: Bridge firewalling registered
Jul 2 08:45:11.058147 kernel: audit: type=1130 audit(1719909911.050:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:11.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:11.031617 systemd-resolved[187]: Defaulting to hostname 'linux'.
Jul 2 08:45:11.039125 systemd[1]: Started systemd-resolved.service.
Jul 2 08:45:11.045732 systemd[1]: Finished systemd-vconsole-setup.service.
Jul 2 08:45:11.051008 systemd[1]: Reached target nss-lookup.target.
Jul 2 08:45:11.052289 systemd-modules-load[186]: Inserted module 'br_netfilter'
Jul 2 08:45:11.059528 systemd[1]: Starting dracut-cmdline-ask.service...
Jul 2 08:45:11.075922 kernel: audit: type=1130 audit(1719909911.071:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:11.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:11.060649 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 2 08:45:11.067165 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 2 08:45:11.085784 kernel: SCSI subsystem initialized
Jul 2 08:45:11.086411 systemd[1]: Finished dracut-cmdline-ask.service.
Jul 2 08:45:11.090741 kernel: audit: type=1130 audit(1719909911.086:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:11.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:11.091489 systemd[1]: Starting dracut-cmdline.service...
Jul 2 08:45:11.109257 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 08:45:11.109340 kernel: device-mapper: uevent: version 1.0.3
Jul 2 08:45:11.109355 dracut-cmdline[201]: dracut-dracut-053
Jul 2 08:45:11.109355 dracut-cmdline[201]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82
Jul 2 08:45:11.117164 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Jul 2 08:45:11.117464 systemd-modules-load[186]: Inserted module 'dm_multipath'
Jul 2 08:45:11.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:11.118791 systemd[1]: Finished systemd-modules-load.service.
Jul 2 08:45:11.123748 kernel: audit: type=1130 audit(1719909911.118:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:11.123179 systemd[1]: Starting systemd-sysctl.service...
Jul 2 08:45:11.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:11.131758 systemd[1]: Finished systemd-sysctl.service.
Jul 2 08:45:11.136364 kernel: audit: type=1130 audit(1719909911.131:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:11.165344 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 08:45:11.185783 kernel: iscsi: registered transport (tcp)
Jul 2 08:45:11.211353 kernel: iscsi: registered transport (qla4xxx)
Jul 2 08:45:11.211413 kernel: QLogic iSCSI HBA Driver
Jul 2 08:45:11.264474 systemd[1]: Finished dracut-cmdline.service.
Jul 2 08:45:11.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:11.272412 kernel: audit: type=1130 audit(1719909911.265:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:11.267603 systemd[1]: Starting dracut-pre-udev.service...
Jul 2 08:45:11.360470 kernel: raid6: sse2x4 gen() 8939 MB/s Jul 2 08:45:11.377439 kernel: raid6: sse2x4 xor() 6785 MB/s Jul 2 08:45:11.394459 kernel: raid6: sse2x2 gen() 14384 MB/s Jul 2 08:45:11.411461 kernel: raid6: sse2x2 xor() 8581 MB/s Jul 2 08:45:11.428428 kernel: raid6: sse2x1 gen() 11182 MB/s Jul 2 08:45:11.446239 kernel: raid6: sse2x1 xor() 6973 MB/s Jul 2 08:45:11.446373 kernel: raid6: using algorithm sse2x2 gen() 14384 MB/s Jul 2 08:45:11.446404 kernel: raid6: .... xor() 8581 MB/s, rmw enabled Jul 2 08:45:11.447007 kernel: raid6: using ssse3x2 recovery algorithm Jul 2 08:45:11.462921 kernel: xor: measuring software checksum speed Jul 2 08:45:11.463012 kernel: prefetch64-sse : 18492 MB/sec Jul 2 08:45:11.465296 kernel: generic_sse : 16842 MB/sec Jul 2 08:45:11.465391 kernel: xor: using function: prefetch64-sse (18492 MB/sec) Jul 2 08:45:11.578409 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 2 08:45:11.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:11.595000 audit: BPF prog-id=7 op=LOAD Jul 2 08:45:11.595000 audit: BPF prog-id=8 op=LOAD Jul 2 08:45:11.593418 systemd[1]: Finished dracut-pre-udev.service. Jul 2 08:45:11.597262 systemd[1]: Starting systemd-udevd.service... Jul 2 08:45:11.633097 systemd-udevd[385]: Using default interface naming scheme 'v252'. Jul 2 08:45:11.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:11.644874 systemd[1]: Started systemd-udevd.service. Jul 2 08:45:11.646073 systemd[1]: Starting dracut-pre-trigger.service... Jul 2 08:45:11.663316 dracut-pre-trigger[388]: rd.md=0: removing MD RAID activation Jul 2 08:45:11.697838 systemd[1]: Finished dracut-pre-trigger.service. 
Jul 2 08:45:11.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:11.700580 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 08:45:11.737449 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 08:45:11.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:11.820362 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB) Jul 2 08:45:11.824776 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 08:45:11.824802 kernel: GPT:17805311 != 41943039 Jul 2 08:45:11.824814 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 08:45:11.826019 kernel: GPT:17805311 != 41943039 Jul 2 08:45:11.826595 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 08:45:11.828555 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 08:45:11.852362 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (434) Jul 2 08:45:11.865766 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 2 08:45:11.913573 kernel: libata version 3.00 loaded. Jul 2 08:45:11.913598 kernel: ata_piix 0000:00:01.1: version 2.13 Jul 2 08:45:11.913752 kernel: scsi host0: ata_piix Jul 2 08:45:11.913896 kernel: scsi host1: ata_piix Jul 2 08:45:11.914006 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Jul 2 08:45:11.914019 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Jul 2 08:45:11.916012 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 2 08:45:11.916611 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. 
Jul 2 08:45:11.921640 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 2 08:45:11.925901 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 08:45:11.928406 systemd[1]: Starting disk-uuid.service... Jul 2 08:45:11.939550 disk-uuid[460]: Primary Header is updated. Jul 2 08:45:11.939550 disk-uuid[460]: Secondary Entries is updated. Jul 2 08:45:11.939550 disk-uuid[460]: Secondary Header is updated. Jul 2 08:45:11.947355 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 08:45:11.955363 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 08:45:11.962363 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 08:45:12.967377 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 08:45:12.967980 disk-uuid[461]: The operation has completed successfully. Jul 2 08:45:13.034097 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 08:45:13.036056 systemd[1]: Finished disk-uuid.service. Jul 2 08:45:13.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:13.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:13.052541 systemd[1]: Starting verity-setup.service... Jul 2 08:45:13.079350 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Jul 2 08:45:13.192269 systemd[1]: Found device dev-mapper-usr.device. Jul 2 08:45:13.196469 systemd[1]: Finished verity-setup.service. Jul 2 08:45:13.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:13.200231 systemd[1]: Mounting sysusr-usr.mount... 
Jul 2 08:45:13.325394 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 2 08:45:13.326542 systemd[1]: Mounted sysusr-usr.mount. Jul 2 08:45:13.327928 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 2 08:45:13.329554 systemd[1]: Starting ignition-setup.service... Jul 2 08:45:13.332001 systemd[1]: Starting parse-ip-for-networkd.service... Jul 2 08:45:13.350601 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 08:45:13.350670 kernel: BTRFS info (device vda6): using free space tree Jul 2 08:45:13.350683 kernel: BTRFS info (device vda6): has skinny extents Jul 2 08:45:13.373678 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 2 08:45:13.391233 systemd[1]: Finished ignition-setup.service. Jul 2 08:45:13.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:13.394699 systemd[1]: Starting ignition-fetch-offline.service... Jul 2 08:45:13.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:13.466000 audit: BPF prog-id=9 op=LOAD Jul 2 08:45:13.465235 systemd[1]: Finished parse-ip-for-networkd.service. Jul 2 08:45:13.467248 systemd[1]: Starting systemd-networkd.service... Jul 2 08:45:13.501897 systemd-networkd[631]: lo: Link UP Jul 2 08:45:13.502663 systemd-networkd[631]: lo: Gained carrier Jul 2 08:45:13.503939 systemd-networkd[631]: Enumeration completed Jul 2 08:45:13.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:45:13.504043 systemd[1]: Started systemd-networkd.service. Jul 2 08:45:13.504609 systemd[1]: Reached target network.target. Jul 2 08:45:13.505055 systemd-networkd[631]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 08:45:13.506696 systemd[1]: Starting iscsiuio.service... Jul 2 08:45:13.511426 systemd-networkd[631]: eth0: Link UP Jul 2 08:45:13.512055 systemd-networkd[631]: eth0: Gained carrier Jul 2 08:45:13.514913 systemd[1]: Started iscsiuio.service. Jul 2 08:45:13.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:13.516174 systemd[1]: Starting iscsid.service... Jul 2 08:45:13.519228 iscsid[636]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 2 08:45:13.519228 iscsid[636]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jul 2 08:45:13.519228 iscsid[636]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 2 08:45:13.519228 iscsid[636]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 2 08:45:13.519228 iscsid[636]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 2 08:45:13.519228 iscsid[636]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 2 08:45:13.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:45:13.522551 systemd[1]: Started iscsid.service. Jul 2 08:45:13.524646 systemd[1]: Starting dracut-initqueue.service... Jul 2 08:45:13.531833 systemd-networkd[631]: eth0: DHCPv4 address 172.24.4.53/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jul 2 08:45:13.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:13.536158 systemd[1]: Finished dracut-initqueue.service. Jul 2 08:45:13.536688 systemd[1]: Reached target remote-fs-pre.target. Jul 2 08:45:13.537101 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 08:45:13.537562 systemd[1]: Reached target remote-fs.target. Jul 2 08:45:13.538839 systemd[1]: Starting dracut-pre-mount.service... Jul 2 08:45:13.548443 systemd[1]: Finished dracut-pre-mount.service. Jul 2 08:45:13.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:13.726730 ignition[575]: Ignition 2.14.0 Jul 2 08:45:13.726739 ignition[575]: Stage: fetch-offline Jul 2 08:45:13.726824 ignition[575]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 08:45:13.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:13.729225 systemd[1]: Finished ignition-fetch-offline.service. Jul 2 08:45:13.726847 ignition[575]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 2 08:45:13.730954 systemd[1]: Starting ignition-fetch.service... 
Jul 2 08:45:13.727853 ignition[575]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 08:45:13.727942 ignition[575]: parsed url from cmdline: "" Jul 2 08:45:13.727946 ignition[575]: no config URL provided Jul 2 08:45:13.727951 ignition[575]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 08:45:13.727958 ignition[575]: no config at "/usr/lib/ignition/user.ign" Jul 2 08:45:13.727963 ignition[575]: failed to fetch config: resource requires networking Jul 2 08:45:13.728456 ignition[575]: Ignition finished successfully Jul 2 08:45:13.736405 systemd-resolved[187]: Detected conflict on linux IN A 172.24.4.53 Jul 2 08:45:13.736414 systemd-resolved[187]: Hostname conflict, changing published hostname from 'linux' to 'linux5'. Jul 2 08:45:13.750959 ignition[654]: Ignition 2.14.0 Jul 2 08:45:13.750984 ignition[654]: Stage: fetch Jul 2 08:45:13.751377 ignition[654]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 08:45:13.751457 ignition[654]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 2 08:45:13.754202 ignition[654]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 08:45:13.754504 ignition[654]: parsed url from cmdline: "" Jul 2 08:45:13.754514 ignition[654]: no config URL provided Jul 2 08:45:13.754526 ignition[654]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 08:45:13.754543 ignition[654]: no config at "/usr/lib/ignition/user.ign" Jul 2 08:45:13.762087 ignition[654]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jul 2 08:45:13.762157 ignition[654]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jul 2 08:45:13.764065 ignition[654]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Jul 2 08:45:14.114726 ignition[654]: GET result: OK Jul 2 08:45:14.115061 ignition[654]: parsing config with SHA512: b7cfdad6aa9399208d8b1c73e3cf7f3366ad2d8ab5f785b00994ddfd59f591feb85f01fa0044dc9bf2e71b77a97979355988711c94204c430fa2638f1cd0c9e2 Jul 2 08:45:14.137927 unknown[654]: fetched base config from "system" Jul 2 08:45:14.137965 unknown[654]: fetched base config from "system" Jul 2 08:45:14.137979 unknown[654]: fetched user config from "openstack" Jul 2 08:45:14.139855 ignition[654]: fetch: fetch complete Jul 2 08:45:14.142826 systemd[1]: Finished ignition-fetch.service. Jul 2 08:45:14.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:14.139870 ignition[654]: fetch: fetch passed Jul 2 08:45:14.139975 ignition[654]: Ignition finished successfully Jul 2 08:45:14.146755 systemd[1]: Starting ignition-kargs.service... Jul 2 08:45:14.172701 ignition[660]: Ignition 2.14.0 Jul 2 08:45:14.172729 ignition[660]: Stage: kargs Jul 2 08:45:14.172966 ignition[660]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 08:45:14.173011 ignition[660]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 2 08:45:14.175212 ignition[660]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 08:45:14.178490 ignition[660]: kargs: kargs passed Jul 2 08:45:14.178630 ignition[660]: Ignition finished successfully Jul 2 08:45:14.180509 systemd[1]: Finished ignition-kargs.service. Jul 2 08:45:14.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:14.185066 systemd[1]: Starting ignition-disks.service... 
Jul 2 08:45:14.204750 ignition[665]: Ignition 2.14.0 Jul 2 08:45:14.204776 ignition[665]: Stage: disks Jul 2 08:45:14.205008 ignition[665]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 08:45:14.205048 ignition[665]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 2 08:45:14.207135 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 08:45:14.209868 ignition[665]: disks: disks passed Jul 2 08:45:14.209968 ignition[665]: Ignition finished successfully Jul 2 08:45:14.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:14.212015 systemd[1]: Finished ignition-disks.service. Jul 2 08:45:14.214001 systemd[1]: Reached target initrd-root-device.target. Jul 2 08:45:14.216084 systemd[1]: Reached target local-fs-pre.target. Jul 2 08:45:14.218242 systemd[1]: Reached target local-fs.target. Jul 2 08:45:14.220565 systemd[1]: Reached target sysinit.target. Jul 2 08:45:14.222754 systemd[1]: Reached target basic.target. Jul 2 08:45:14.226708 systemd[1]: Starting systemd-fsck-root.service... Jul 2 08:45:14.259582 systemd-fsck[673]: ROOT: clean, 614/1628000 files, 124057/1617920 blocks Jul 2 08:45:14.270982 systemd[1]: Finished systemd-fsck-root.service. Jul 2 08:45:14.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:14.274247 systemd[1]: Mounting sysroot.mount... Jul 2 08:45:14.298386 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 2 08:45:14.297396 systemd[1]: Mounted sysroot.mount. 
Jul 2 08:45:14.299758 systemd[1]: Reached target initrd-root-fs.target. Jul 2 08:45:14.303313 systemd[1]: Mounting sysroot-usr.mount... Jul 2 08:45:14.305160 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 2 08:45:14.306675 systemd[1]: Starting flatcar-openstack-hostname.service... Jul 2 08:45:14.307930 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 08:45:14.308003 systemd[1]: Reached target ignition-diskful.target. Jul 2 08:45:14.315057 systemd[1]: Mounted sysroot-usr.mount. Jul 2 08:45:14.324438 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 2 08:45:14.327793 systemd[1]: Starting initrd-setup-root.service... Jul 2 08:45:14.351178 initrd-setup-root[685]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 08:45:14.354945 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (680) Jul 2 08:45:14.356739 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 08:45:14.357894 kernel: BTRFS info (device vda6): using free space tree Jul 2 08:45:14.357948 kernel: BTRFS info (device vda6): has skinny extents Jul 2 08:45:14.366473 initrd-setup-root[707]: cut: /sysroot/etc/group: No such file or directory Jul 2 08:45:14.377211 initrd-setup-root[719]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 08:45:14.381477 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 2 08:45:14.385052 initrd-setup-root[727]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 08:45:14.481237 systemd[1]: Finished initrd-setup-root.service. Jul 2 08:45:14.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:14.487494 systemd[1]: Starting ignition-mount.service... 
Jul 2 08:45:14.489875 systemd[1]: Starting sysroot-boot.service... Jul 2 08:45:14.507983 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Jul 2 08:45:14.508291 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Jul 2 08:45:14.528898 ignition[747]: INFO : Ignition 2.14.0 Jul 2 08:45:14.528898 ignition[747]: INFO : Stage: mount Jul 2 08:45:14.530911 ignition[747]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 08:45:14.530911 ignition[747]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 2 08:45:14.530911 ignition[747]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 08:45:14.534570 ignition[747]: INFO : mount: mount passed Jul 2 08:45:14.534570 ignition[747]: INFO : Ignition finished successfully Jul 2 08:45:14.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:14.534647 systemd[1]: Finished ignition-mount.service. Jul 2 08:45:14.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:14.556641 systemd[1]: Finished sysroot-boot.service. Jul 2 08:45:14.577766 coreos-metadata[679]: Jul 02 08:45:14.577 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jul 2 08:45:14.596178 coreos-metadata[679]: Jul 02 08:45:14.596 INFO Fetch successful Jul 2 08:45:14.596903 coreos-metadata[679]: Jul 02 08:45:14.596 INFO wrote hostname ci-3510-3-5-4-c82a94ccd3.novalocal to /sysroot/etc/hostname Jul 2 08:45:14.602257 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. 
Jul 2 08:45:14.602537 systemd[1]: Finished flatcar-openstack-hostname.service. Jul 2 08:45:14.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:14.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:14.606214 systemd[1]: Starting ignition-files.service... Jul 2 08:45:14.618653 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 2 08:45:14.634402 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (756) Jul 2 08:45:14.641155 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 08:45:14.641225 kernel: BTRFS info (device vda6): using free space tree Jul 2 08:45:14.641253 kernel: BTRFS info (device vda6): has skinny extents Jul 2 08:45:14.655956 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Jul 2 08:45:14.678274 ignition[775]: INFO : Ignition 2.14.0 Jul 2 08:45:14.678274 ignition[775]: INFO : Stage: files Jul 2 08:45:14.681110 ignition[775]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 08:45:14.681110 ignition[775]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 2 08:45:14.681110 ignition[775]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 08:45:14.688721 ignition[775]: DEBUG : files: compiled without relabeling support, skipping Jul 2 08:45:14.688721 ignition[775]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 08:45:14.688721 ignition[775]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 08:45:14.695449 ignition[775]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 08:45:14.695449 ignition[775]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 08:45:14.699669 ignition[775]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 08:45:14.699669 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 2 08:45:14.699669 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 2 08:45:14.699669 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 08:45:14.699669 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 2 08:45:14.697718 unknown[775]: wrote ssh authorized keys file for user: core Jul 2 08:45:14.760065 ignition[775]: INFO : files: createFilesystemsFiles: 
createFiles: op(4): GET result: OK
Jul 2 08:45:15.070707 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 08:45:15.090689 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 08:45:15.090689 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 2 08:45:15.508910 systemd-networkd[631]: eth0: Gained IPv6LL
Jul 2 08:45:15.623366 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Jul 2 08:45:16.067490 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 08:45:16.067490 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 08:45:16.071748 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 08:45:16.071748 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 08:45:16.071748 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 08:45:16.071748 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 08:45:16.071748 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 08:45:16.071748 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 08:45:16.071748 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 08:45:16.071748 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 08:45:16.071748 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 08:45:16.071748 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 08:45:16.071748 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 08:45:16.071748 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 08:45:16.071748 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1
Jul 2 08:45:16.563399 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Jul 2 08:45:18.214616 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 08:45:18.214616 ignition[775]: INFO : files: op(d): [started] processing unit "coreos-metadata-sshkeys@.service"
Jul 2 08:45:18.214616 ignition[775]: INFO : files: op(d): [finished] processing unit "coreos-metadata-sshkeys@.service"
Jul 2 08:45:18.214616 ignition[775]: INFO : files: op(e): [started] processing unit "containerd.service"
Jul 2 08:45:18.232519 ignition[775]: INFO : files: op(e): op(f): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 2 08:45:18.235395 ignition[775]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 2 08:45:18.235395 ignition[775]: INFO : files: op(e): [finished] processing unit "containerd.service"
Jul 2 08:45:18.235395 ignition[775]: INFO : files: op(10): [started] processing unit "prepare-helm.service"
Jul 2 08:45:18.235395 ignition[775]: INFO : files: op(10): op(11): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 08:45:18.235395 ignition[775]: INFO : files: op(10): op(11): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 08:45:18.235395 ignition[775]: INFO : files: op(10): [finished] processing unit "prepare-helm.service"
Jul 2 08:45:18.235395 ignition[775]: INFO : files: op(12): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Jul 2 08:45:18.235395 ignition[775]: INFO : files: op(12): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Jul 2 08:45:18.235395 ignition[775]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 08:45:18.235395 ignition[775]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 08:45:18.257642 kernel: kauditd_printk_skb: 27 callbacks suppressed
Jul 2 08:45:18.257663 kernel: audit: type=1130 audit(1719909918.250:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.257715 ignition[775]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 08:45:18.257715 ignition[775]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 08:45:18.257715 ignition[775]: INFO : files: files passed
Jul 2 08:45:18.257715 ignition[775]: INFO : Ignition finished successfully
Jul 2 08:45:18.246419 systemd[1]: Finished ignition-files.service.
Jul 2 08:45:18.255460 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Jul 2 08:45:18.257027 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Jul 2 08:45:18.258593 systemd[1]: Starting ignition-quench.service...
Jul 2 08:45:18.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.278832 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 08:45:18.292218 kernel: audit: type=1130 audit(1719909918.279:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.292278 kernel: audit: type=1131 audit(1719909918.279:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.278928 systemd[1]: Finished ignition-quench.service.
Jul 2 08:45:18.302116 kernel: audit: type=1130 audit(1719909918.292:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.302243 initrd-setup-root-after-ignition[800]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 08:45:18.287772 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Jul 2 08:45:18.292712 systemd[1]: Reached target ignition-complete.target.
Jul 2 08:45:18.303213 systemd[1]: Starting initrd-parse-etc.service...
Jul 2 08:45:18.322702 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 08:45:18.322797 systemd[1]: Finished initrd-parse-etc.service.
Jul 2 08:45:18.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.325638 systemd[1]: Reached target initrd-fs.target.
Jul 2 08:45:18.332146 kernel: audit: type=1130 audit(1719909918.324:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.332167 kernel: audit: type=1131 audit(1719909918.325:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.331636 systemd[1]: Reached target initrd.target.
Jul 2 08:45:18.332625 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Jul 2 08:45:18.333392 systemd[1]: Starting dracut-pre-pivot.service...
Jul 2 08:45:18.344265 systemd[1]: Finished dracut-pre-pivot.service.
Jul 2 08:45:18.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.348586 systemd[1]: Starting initrd-cleanup.service...
Jul 2 08:45:18.349480 kernel: audit: type=1130 audit(1719909918.344:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.358625 systemd[1]: Stopped target nss-lookup.target.
Jul 2 08:45:18.359816 systemd[1]: Stopped target remote-cryptsetup.target.
Jul 2 08:45:18.360958 systemd[1]: Stopped target timers.target.
Jul 2 08:45:18.362015 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 08:45:18.362732 systemd[1]: Stopped dracut-pre-pivot.service.
Jul 2 08:45:18.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.367277 systemd[1]: Stopped target initrd.target.
Jul 2 08:45:18.367906 kernel: audit: type=1131 audit(1719909918.363:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.368490 systemd[1]: Stopped target basic.target.
Jul 2 08:45:18.369514 systemd[1]: Stopped target ignition-complete.target.
Jul 2 08:45:18.370547 systemd[1]: Stopped target ignition-diskful.target.
Jul 2 08:45:18.371620 systemd[1]: Stopped target initrd-root-device.target.
Jul 2 08:45:18.372679 systemd[1]: Stopped target remote-fs.target.
Jul 2 08:45:18.373667 systemd[1]: Stopped target remote-fs-pre.target.
Jul 2 08:45:18.374681 systemd[1]: Stopped target sysinit.target.
Jul 2 08:45:18.375689 systemd[1]: Stopped target local-fs.target.
Jul 2 08:45:18.376753 systemd[1]: Stopped target local-fs-pre.target.
Jul 2 08:45:18.377789 systemd[1]: Stopped target swap.target.
Jul 2 08:45:18.378717 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 08:45:18.379376 systemd[1]: Stopped dracut-pre-mount.service.
Jul 2 08:45:18.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.383617 systemd[1]: Stopped target cryptsetup.target.
Jul 2 08:45:18.384477 kernel: audit: type=1131 audit(1719909918.380:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.385010 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 08:45:18.385712 systemd[1]: Stopped dracut-initqueue.service.
Jul 2 08:45:18.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.386869 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 08:45:18.390884 kernel: audit: type=1131 audit(1719909918.386:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.387051 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Jul 2 08:45:18.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.391473 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 08:45:18.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.391649 systemd[1]: Stopped ignition-files.service.
Jul 2 08:45:18.393213 systemd[1]: Stopping ignition-mount.service...
Jul 2 08:45:18.398096 systemd[1]: Stopping iscsid.service...
Jul 2 08:45:18.401137 iscsid[636]: iscsid shutting down.
Jul 2 08:45:18.398610 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 2 08:45:18.398742 systemd[1]: Stopped kmod-static-nodes.service.
Jul 2 08:45:18.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.407970 systemd[1]: Stopping sysroot-boot.service...
Jul 2 08:45:18.415843 ignition[813]: INFO : Ignition 2.14.0
Jul 2 08:45:18.415843 ignition[813]: INFO : Stage: umount
Jul 2 08:45:18.415843 ignition[813]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 08:45:18.415843 ignition[813]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Jul 2 08:45:18.415843 ignition[813]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 2 08:45:18.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.408521 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 08:45:18.424644 ignition[813]: INFO : umount: umount passed
Jul 2 08:45:18.424644 ignition[813]: INFO : Ignition finished successfully
Jul 2 08:45:18.408705 systemd[1]: Stopped systemd-udev-trigger.service.
Jul 2 08:45:18.409351 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 08:45:18.409518 systemd[1]: Stopped dracut-pre-trigger.service.
Jul 2 08:45:18.412381 systemd[1]: iscsid.service: Deactivated successfully.
Jul 2 08:45:18.412483 systemd[1]: Stopped iscsid.service.
Jul 2 08:45:18.414844 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 08:45:18.415256 systemd[1]: Finished initrd-cleanup.service.
Jul 2 08:45:18.416949 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 08:45:18.417406 systemd[1]: Stopped ignition-mount.service.
Jul 2 08:45:18.432000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.419656 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 2 08:45:18.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.419700 systemd[1]: Stopped ignition-disks.service.
Jul 2 08:45:18.420174 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 2 08:45:18.420216 systemd[1]: Stopped ignition-kargs.service.
Jul 2 08:45:18.420771 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 2 08:45:18.420809 systemd[1]: Stopped ignition-fetch.service.
Jul 2 08:45:18.421265 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 2 08:45:18.421302 systemd[1]: Stopped ignition-fetch-offline.service.
Jul 2 08:45:18.421841 systemd[1]: Stopped target paths.target.
Jul 2 08:45:18.426582 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 2 08:45:18.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.430441 systemd[1]: Stopped systemd-ask-password-console.path.
Jul 2 08:45:18.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.430914 systemd[1]: Stopped target slices.target.
Jul 2 08:45:18.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.431288 systemd[1]: Stopped target sockets.target.
Jul 2 08:45:18.431752 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 2 08:45:18.431786 systemd[1]: Closed iscsid.socket.
Jul 2 08:45:18.432194 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 2 08:45:18.432244 systemd[1]: Stopped ignition-setup.service.
Jul 2 08:45:18.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.432883 systemd[1]: Stopping iscsiuio.service...
Jul 2 08:45:18.436024 systemd[1]: iscsiuio.service: Deactivated successfully.
Jul 2 08:45:18.460000 audit: BPF prog-id=6 op=UNLOAD
Jul 2 08:45:18.436108 systemd[1]: Stopped iscsiuio.service.
Jul 2 08:45:18.436841 systemd[1]: Stopped target network.target.
Jul 2 08:45:18.437241 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 2 08:45:18.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.437269 systemd[1]: Closed iscsiuio.socket.
Jul 2 08:45:18.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.437819 systemd[1]: Stopping systemd-networkd.service...
Jul 2 08:45:18.438394 systemd[1]: Stopping systemd-resolved.service...
Jul 2 08:45:18.442359 systemd-networkd[631]: eth0: DHCPv6 lease lost
Jul 2 08:45:18.466000 audit: BPF prog-id=9 op=UNLOAD
Jul 2 08:45:18.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.443712 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 08:45:18.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.443804 systemd[1]: Stopped systemd-networkd.service.
Jul 2 08:45:18.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.446168 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 2 08:45:18.446581 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 2 08:45:18.446615 systemd[1]: Closed systemd-networkd.socket.
Jul 2 08:45:18.447733 systemd[1]: Stopping network-cleanup.service...
Jul 2 08:45:18.448174 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 2 08:45:18.448223 systemd[1]: Stopped parse-ip-for-networkd.service.
Jul 2 08:45:18.448710 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 08:45:18.448750 systemd[1]: Stopped systemd-sysctl.service.
Jul 2 08:45:18.449828 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 2 08:45:18.449865 systemd[1]: Stopped systemd-modules-load.service.
Jul 2 08:45:18.451044 systemd[1]: Stopping systemd-udevd.service...
Jul 2 08:45:18.456467 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 2 08:45:18.456957 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 2 08:45:18.457052 systemd[1]: Stopped systemd-resolved.service.
Jul 2 08:45:18.462418 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 2 08:45:18.462515 systemd[1]: Stopped network-cleanup.service.
Jul 2 08:45:18.463517 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 2 08:45:18.463644 systemd[1]: Stopped systemd-udevd.service.
Jul 2 08:45:18.464674 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 2 08:45:18.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.464707 systemd[1]: Closed systemd-udevd-control.socket.
Jul 2 08:45:18.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.465232 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 2 08:45:18.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.465259 systemd[1]: Closed systemd-udevd-kernel.socket.
Jul 2 08:45:18.466049 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 2 08:45:18.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:18.466085 systemd[1]: Stopped dracut-pre-udev.service.
Jul 2 08:45:18.466955 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 2 08:45:18.466992 systemd[1]: Stopped dracut-cmdline.service.
Jul 2 08:45:18.467833 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 08:45:18.467867 systemd[1]: Stopped dracut-cmdline-ask.service.
Jul 2 08:45:18.469403 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Jul 2 08:45:18.470135 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 08:45:18.470176 systemd[1]: Stopped systemd-vconsole-setup.service.
Jul 2 08:45:18.480563 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 2 08:45:18.480648 systemd[1]: Stopped sysroot-boot.service.
Jul 2 08:45:18.481916 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 2 08:45:18.481988 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Jul 2 08:45:18.482770 systemd[1]: Reached target initrd-switch-root.target.
Jul 2 08:45:18.483661 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 2 08:45:18.483713 systemd[1]: Stopped initrd-setup-root.service.
Jul 2 08:45:18.485258 systemd[1]: Starting initrd-switch-root.service...
Jul 2 08:45:18.508614 systemd[1]: Switching root.
Jul 2 08:45:18.510000 audit: BPF prog-id=5 op=UNLOAD
Jul 2 08:45:18.510000 audit: BPF prog-id=4 op=UNLOAD
Jul 2 08:45:18.510000 audit: BPF prog-id=3 op=UNLOAD
Jul 2 08:45:18.513000 audit: BPF prog-id=8 op=UNLOAD
Jul 2 08:45:18.513000 audit: BPF prog-id=7 op=UNLOAD
Jul 2 08:45:18.531068 systemd-journald[185]: Journal stopped
Jul 2 08:45:23.815746 systemd-journald[185]: Received SIGTERM from PID 1 (n/a).
Jul 2 08:45:23.815799 kernel: SELinux: Class mctp_socket not defined in policy.
Jul 2 08:45:23.815817 kernel: SELinux: Class anon_inode not defined in policy.
Jul 2 08:45:23.815831 kernel: SELinux: the above unknown classes and permissions will be allowed
Jul 2 08:45:23.815843 kernel: SELinux: policy capability network_peer_controls=1
Jul 2 08:45:23.815853 kernel: SELinux: policy capability open_perms=1
Jul 2 08:45:23.815867 kernel: SELinux: policy capability extended_socket_class=1
Jul 2 08:45:23.815877 kernel: SELinux: policy capability always_check_network=0
Jul 2 08:45:23.815888 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 2 08:45:23.815898 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 2 08:45:23.815909 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 2 08:45:23.815920 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 2 08:45:23.815931 systemd[1]: Successfully loaded SELinux policy in 90.643ms.
Jul 2 08:45:23.815945 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.186ms.
Jul 2 08:45:23.815960 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 2 08:45:23.815972 systemd[1]: Detected virtualization kvm.
Jul 2 08:45:23.815984 systemd[1]: Detected architecture x86-64.
Jul 2 08:45:23.815995 systemd[1]: Detected first boot.
Jul 2 08:45:23.816007 systemd[1]: Hostname set to .
Jul 2 08:45:23.816019 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 08:45:23.816034 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Jul 2 08:45:23.816050 systemd[1]: Populated /etc with preset unit settings.
Jul 2 08:45:23.816062 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 2 08:45:23.816075 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 2 08:45:23.816087 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 08:45:23.816100 systemd[1]: Queued start job for default target multi-user.target.
Jul 2 08:45:23.816112 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Jul 2 08:45:23.816124 systemd[1]: Created slice system-addon\x2dconfig.slice.
Jul 2 08:45:23.816137 systemd[1]: Created slice system-addon\x2drun.slice.
Jul 2 08:45:23.816150 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Jul 2 08:45:23.816161 systemd[1]: Created slice system-getty.slice.
Jul 2 08:45:23.816173 systemd[1]: Created slice system-modprobe.slice.
Jul 2 08:45:23.816184 systemd[1]: Created slice system-serial\x2dgetty.slice.
Jul 2 08:45:23.816196 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Jul 2 08:45:23.816207 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Jul 2 08:45:23.816218 systemd[1]: Created slice user.slice.
Jul 2 08:45:23.816229 systemd[1]: Started systemd-ask-password-console.path.
Jul 2 08:45:23.816243 systemd[1]: Started systemd-ask-password-wall.path.
Jul 2 08:45:23.816254 systemd[1]: Set up automount boot.automount.
Jul 2 08:45:23.816266 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Jul 2 08:45:23.816278 systemd[1]: Reached target integritysetup.target.
Jul 2 08:45:23.816289 systemd[1]: Reached target remote-cryptsetup.target.
Jul 2 08:45:23.816301 systemd[1]: Reached target remote-fs.target.
Jul 2 08:45:23.816314 systemd[1]: Reached target slices.target.
Jul 2 08:45:23.816351 systemd[1]: Reached target swap.target.
Jul 2 08:45:23.816364 systemd[1]: Reached target torcx.target.
Jul 2 08:45:23.816376 systemd[1]: Reached target veritysetup.target.
Jul 2 08:45:23.816387 systemd[1]: Listening on systemd-coredump.socket.
Jul 2 08:45:23.816398 systemd[1]: Listening on systemd-initctl.socket.
Jul 2 08:45:23.816409 kernel: kauditd_printk_skb: 48 callbacks suppressed
Jul 2 08:45:23.816421 kernel: audit: type=1400 audit(1719909923.636:89): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Jul 2 08:45:23.816432 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 2 08:45:23.816445 kernel: audit: type=1335 audit(1719909923.636:90): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Jul 2 08:45:23.816457 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 2 08:45:23.816468 systemd[1]: Listening on systemd-journald.socket.
Jul 2 08:45:23.816480 systemd[1]: Listening on systemd-networkd.socket.
Jul 2 08:45:23.816491 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 2 08:45:23.816503 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 2 08:45:23.816515 systemd[1]: Listening on systemd-userdbd.socket.
Jul 2 08:45:23.816526 systemd[1]: Mounting dev-hugepages.mount...
Jul 2 08:45:23.816538 systemd[1]: Mounting dev-mqueue.mount...
Jul 2 08:45:23.816555 systemd[1]: Mounting media.mount...
Jul 2 08:45:23.816566 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 08:45:23.816578 systemd[1]: Mounting sys-kernel-debug.mount...
Jul 2 08:45:23.816590 systemd[1]: Mounting sys-kernel-tracing.mount...
Jul 2 08:45:23.816601 systemd[1]: Mounting tmp.mount...
Jul 2 08:45:23.816612 systemd[1]: Starting flatcar-tmpfiles.service...
Jul 2 08:45:23.816624 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 2 08:45:23.816636 systemd[1]: Starting kmod-static-nodes.service...
Jul 2 08:45:23.816648 systemd[1]: Starting modprobe@configfs.service...
Jul 2 08:45:23.816661 systemd[1]: Starting modprobe@dm_mod.service...
Jul 2 08:45:23.816672 systemd[1]: Starting modprobe@drm.service...
Jul 2 08:45:23.816684 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 2 08:45:23.816695 systemd[1]: Starting modprobe@fuse.service...
Jul 2 08:45:23.816706 systemd[1]: Starting modprobe@loop.service...
Jul 2 08:45:23.816718 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 2 08:45:23.816729 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jul 2 08:45:23.816741 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Jul 2 08:45:23.816753 systemd[1]: Starting systemd-journald.service...
Jul 2 08:45:23.816766 systemd[1]: Starting systemd-modules-load.service...
Jul 2 08:45:23.816778 kernel: loop: module loaded
Jul 2 08:45:23.816789 systemd[1]: Starting systemd-network-generator.service...
Jul 2 08:45:23.816800 systemd[1]: Starting systemd-remount-fs.service...
Jul 2 08:45:23.816812 systemd[1]: Starting systemd-udev-trigger.service...
Jul 2 08:45:23.816823 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 08:45:23.816835 systemd[1]: Mounted dev-hugepages.mount.
Jul 2 08:45:23.816846 systemd[1]: Mounted dev-mqueue.mount.
Jul 2 08:45:23.816857 systemd[1]: Mounted media.mount.
Jul 2 08:45:23.816874 systemd[1]: Mounted sys-kernel-debug.mount.
Jul 2 08:45:23.816886 systemd[1]: Mounted sys-kernel-tracing.mount.
Jul 2 08:45:23.816897 systemd[1]: Mounted tmp.mount.
Jul 2 08:45:23.816909 systemd[1]: Finished kmod-static-nodes.service.
Jul 2 08:45:23.816920 kernel: audit: type=1130 audit(1719909923.806:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:45:23.816931 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 2 08:45:23.816943 systemd[1]: Finished modprobe@configfs.service.
Jul 2 08:45:23.816955 kernel: audit: type=1305 audit(1719909923.814:92): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 08:45:23.816966 kernel: audit: type=1300 audit(1719909923.814:92): arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffdc8041f10 a2=4000 a3=7ffdc8041fac items=0 ppid=1 pid=964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 08:45:23.816979 kernel: audit: type=1327 audit(1719909923.814:92): proctitle="/usr/lib/systemd/systemd-journald" Jul 2 08:45:23.816993 systemd-journald[964]: Journal started Jul 2 08:45:23.817033 systemd-journald[964]: Runtime Journal (/run/log/journal/940ed3d08871495a80eaeb6c0430add0) is 4.9M, max 39.5M, 34.5M free. Jul 2 08:45:23.636000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jul 2 08:45:23.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:45:23.814000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 08:45:23.814000 audit[964]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffdc8041f10 a2=4000 a3=7ffdc8041fac items=0 ppid=1 pid=964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 08:45:23.814000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 2 08:45:23.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:23.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:23.834167 kernel: audit: type=1130 audit(1719909923.826:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:23.834203 kernel: audit: type=1131 audit(1719909923.830:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:23.834574 systemd[1]: Started systemd-journald.service. Jul 2 08:45:23.838684 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 08:45:23.838883 systemd[1]: Finished modprobe@dm_mod.service. 
Jul 2 08:45:23.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:23.843099 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 08:45:23.844372 kernel: audit: type=1130 audit(1719909923.837:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:23.843255 systemd[1]: Finished modprobe@drm.service. Jul 2 08:45:23.843976 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 08:45:23.844117 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 08:45:23.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:23.851465 kernel: audit: type=1130 audit(1719909923.842:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:23.851580 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:45:23.851760 systemd[1]: Finished modprobe@loop.service. Jul 2 08:45:23.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:23.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:45:23.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:23.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:23.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:23.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:23.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:23.853944 systemd[1]: Finished systemd-modules-load.service. Jul 2 08:45:23.854792 systemd[1]: Finished systemd-network-generator.service. Jul 2 08:45:23.855552 systemd[1]: Finished systemd-remount-fs.service. Jul 2 08:45:23.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:23.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:45:23.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:23.859692 systemd[1]: Finished flatcar-tmpfiles.service. Jul 2 08:45:23.860379 kernel: fuse: init (API version 7.34) Jul 2 08:45:23.861023 systemd[1]: Reached target network-pre.target. Jul 2 08:45:23.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:23.866250 systemd[1]: Mounting sys-kernel-config.mount... Jul 2 08:45:23.866799 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 08:45:23.874504 systemd[1]: Starting systemd-hwdb-update.service... Jul 2 08:45:23.876076 systemd[1]: Starting systemd-journal-flush.service... Jul 2 08:45:23.876558 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 08:45:23.877866 systemd[1]: Starting systemd-random-seed.service... Jul 2 08:45:23.878488 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 08:45:23.879646 systemd[1]: Starting systemd-sysctl.service... Jul 2 08:45:23.882989 systemd[1]: Starting systemd-sysusers.service... Jul 2 08:45:23.885826 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 08:45:23.886091 systemd[1]: Finished modprobe@fuse.service. Jul 2 08:45:23.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:45:23.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:23.886999 systemd[1]: Mounted sys-kernel-config.mount. Jul 2 08:45:23.889235 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 2 08:45:23.901507 systemd-journald[964]: Time spent on flushing to /var/log/journal/940ed3d08871495a80eaeb6c0430add0 is 30.492ms for 1051 entries. Jul 2 08:45:23.901507 systemd-journald[964]: System Journal (/var/log/journal/940ed3d08871495a80eaeb6c0430add0) is 8.0M, max 584.8M, 576.8M free. Jul 2 08:45:23.953070 systemd-journald[964]: Received client request to flush runtime journal. Jul 2 08:45:23.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:23.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:23.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:23.897676 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 2 08:45:23.907894 systemd[1]: Finished systemd-random-seed.service. Jul 2 08:45:23.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:23.908442 systemd[1]: Reached target first-boot-complete.target. 
Jul 2 08:45:23.954757 udevadm[1003]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 2 08:45:23.936784 systemd[1]: Finished systemd-sysctl.service. Jul 2 08:45:23.938272 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 08:45:23.939918 systemd[1]: Starting systemd-udev-settle.service... Jul 2 08:45:23.953980 systemd[1]: Finished systemd-journal-flush.service. Jul 2 08:45:23.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:23.983548 systemd[1]: Finished systemd-sysusers.service. Jul 2 08:45:23.985181 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 2 08:45:24.062617 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 2 08:45:24.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:24.921033 systemd[1]: Finished systemd-hwdb-update.service. Jul 2 08:45:24.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:24.925081 systemd[1]: Starting systemd-udevd.service... Jul 2 08:45:24.971660 systemd-udevd[1011]: Using default interface naming scheme 'v252'. Jul 2 08:45:25.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:25.034529 systemd[1]: Started systemd-udevd.service. 
Jul 2 08:45:25.044600 systemd[1]: Starting systemd-networkd.service... Jul 2 08:45:25.069097 systemd[1]: Starting systemd-userdbd.service... Jul 2 08:45:25.112772 systemd[1]: Found device dev-ttyS0.device. Jul 2 08:45:25.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:25.176813 systemd[1]: Started systemd-userdbd.service. Jul 2 08:45:25.181365 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 2 08:45:25.209213 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 08:45:25.241424 kernel: ACPI: button: Power Button [PWRF] Jul 2 08:45:25.228000 audit[1017]: AVC avc: denied { confidentiality } for pid=1017 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 2 08:45:25.228000 audit[1017]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=556a4a67ee60 a1=3207c a2=7f60cd954bc5 a3=5 items=108 ppid=1011 pid=1017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 08:45:25.228000 audit: CWD cwd="/" Jul 2 08:45:25.228000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=1 name=(null) inode=14213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=2 name=(null) inode=14213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=3 name=(null) inode=14214 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=4 name=(null) inode=14213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=5 name=(null) inode=14215 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=6 name=(null) inode=14213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=7 name=(null) inode=14216 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=8 name=(null) inode=14216 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=9 name=(null) inode=14217 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=10 name=(null) inode=14216 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=11 name=(null) inode=14218 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 
audit: PATH item=12 name=(null) inode=14216 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=13 name=(null) inode=14219 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=14 name=(null) inode=14216 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=15 name=(null) inode=14220 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=16 name=(null) inode=14216 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=17 name=(null) inode=14221 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=18 name=(null) inode=14213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=19 name=(null) inode=14222 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=20 name=(null) inode=14222 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=21 name=(null) inode=14223 
dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=22 name=(null) inode=14222 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=23 name=(null) inode=14224 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=24 name=(null) inode=14222 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=25 name=(null) inode=14225 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=26 name=(null) inode=14222 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=27 name=(null) inode=14226 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=28 name=(null) inode=14222 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=29 name=(null) inode=14227 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=30 name=(null) inode=14213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=31 name=(null) inode=14228 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=32 name=(null) inode=14228 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=33 name=(null) inode=14229 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=34 name=(null) inode=14228 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=35 name=(null) inode=14230 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=36 name=(null) inode=14228 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=37 name=(null) inode=14231 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=38 name=(null) inode=14228 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=39 name=(null) inode=14232 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=40 name=(null) inode=14228 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=41 name=(null) inode=14233 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=42 name=(null) inode=14213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=43 name=(null) inode=14234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=44 name=(null) inode=14234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=45 name=(null) inode=14235 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=46 name=(null) inode=14234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=47 name=(null) inode=14236 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=48 name=(null) inode=14234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Jul 2 08:45:25.228000 audit: PATH item=49 name=(null) inode=14237 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=50 name=(null) inode=14234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=51 name=(null) inode=14238 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=52 name=(null) inode=14234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=53 name=(null) inode=14239 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=55 name=(null) inode=14240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=56 name=(null) inode=14240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=57 name=(null) inode=14241 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=58 name=(null) 
inode=14240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=59 name=(null) inode=14242 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=60 name=(null) inode=14240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=61 name=(null) inode=14243 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=62 name=(null) inode=14243 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=63 name=(null) inode=14244 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=64 name=(null) inode=14243 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=65 name=(null) inode=14245 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=66 name=(null) inode=14243 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=67 name=(null) inode=14246 dev=00:0b mode=0100640 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=68 name=(null) inode=14243 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=69 name=(null) inode=14247 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=70 name=(null) inode=14243 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=71 name=(null) inode=14248 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=72 name=(null) inode=14240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=73 name=(null) inode=14249 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=74 name=(null) inode=14249 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=75 name=(null) inode=14250 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=76 name=(null) inode=14249 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=77 name=(null) inode=14251 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=78 name=(null) inode=14249 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=79 name=(null) inode=14252 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=80 name=(null) inode=14249 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=81 name=(null) inode=14253 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=82 name=(null) inode=14249 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=83 name=(null) inode=14254 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=84 name=(null) inode=14240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=85 name=(null) inode=14255 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=86 name=(null) inode=14255 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=87 name=(null) inode=14256 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=88 name=(null) inode=14255 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=89 name=(null) inode=14257 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=90 name=(null) inode=14255 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=91 name=(null) inode=14258 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=92 name=(null) inode=14255 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=93 name=(null) inode=14259 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=94 name=(null) inode=14255 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 
audit: PATH item=95 name=(null) inode=14260 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=96 name=(null) inode=14240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=97 name=(null) inode=14261 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=98 name=(null) inode=14261 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=99 name=(null) inode=14262 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=100 name=(null) inode=14261 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=101 name=(null) inode=14263 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=102 name=(null) inode=14261 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=103 name=(null) inode=14264 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=104 name=(null) inode=14261 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=105 name=(null) inode=14265 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=106 name=(null) inode=14261 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PATH item=107 name=(null) inode=14266 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:45:25.228000 audit: PROCTITLE proctitle="(udev-worker)" Jul 2 08:45:25.260886 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jul 2 08:45:25.284040 systemd-networkd[1020]: lo: Link UP Jul 2 08:45:25.284053 systemd-networkd[1020]: lo: Gained carrier Jul 2 08:45:25.284500 systemd-networkd[1020]: Enumeration completed Jul 2 08:45:25.284619 systemd-networkd[1020]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 08:45:25.284820 systemd[1]: Started systemd-networkd.service. Jul 2 08:45:25.290414 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 2 08:45:25.286895 systemd-networkd[1020]: eth0: Link UP Jul 2 08:45:25.286907 systemd-networkd[1020]: eth0: Gained carrier Jul 2 08:45:25.298352 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 08:45:25.298479 systemd-networkd[1020]: eth0: DHCPv4 address 172.24.4.53/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jul 2 08:45:25.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jul 2 08:45:25.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:25.340814 systemd[1]: Finished systemd-udev-settle.service. Jul 2 08:45:25.342483 systemd[1]: Starting lvm2-activation-early.service... Jul 2 08:45:25.370858 lvm[1041]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 08:45:25.398257 systemd[1]: Finished lvm2-activation-early.service. Jul 2 08:45:25.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:25.398861 systemd[1]: Reached target cryptsetup.target. Jul 2 08:45:25.400402 systemd[1]: Starting lvm2-activation.service... Jul 2 08:45:25.404772 lvm[1043]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 08:45:25.430251 systemd[1]: Finished lvm2-activation.service. Jul 2 08:45:25.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:25.430833 systemd[1]: Reached target local-fs-pre.target. Jul 2 08:45:25.431269 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 08:45:25.431293 systemd[1]: Reached target local-fs.target. Jul 2 08:45:25.431732 systemd[1]: Reached target machines.target. Jul 2 08:45:25.433200 systemd[1]: Starting ldconfig.service... Jul 2 08:45:25.434856 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Jul 2 08:45:25.434910 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 08:45:25.436132 systemd[1]: Starting systemd-boot-update.service... Jul 2 08:45:25.437643 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 2 08:45:25.439459 systemd[1]: Starting systemd-machine-id-commit.service... Jul 2 08:45:25.443638 systemd[1]: Starting systemd-sysext.service... Jul 2 08:45:25.450491 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1046 (bootctl) Jul 2 08:45:25.451663 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 2 08:45:25.512272 systemd[1]: Unmounting usr-share-oem.mount... Jul 2 08:45:25.524473 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 2 08:45:25.525543 systemd[1]: Unmounted usr-share-oem.mount. Jul 2 08:45:25.615406 kernel: loop0: detected capacity change from 0 to 209816 Jul 2 08:45:25.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:25.660892 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 2 08:45:26.142711 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 08:45:26.144552 systemd[1]: Finished systemd-machine-id-commit.service. Jul 2 08:45:26.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:45:26.192420 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 08:45:26.226930 kernel: loop1: detected capacity change from 0 to 209816 Jul 2 08:45:26.272855 (sd-sysext)[1064]: Using extensions 'kubernetes'. Jul 2 08:45:26.277431 (sd-sysext)[1064]: Merged extensions into '/usr'. Jul 2 08:45:26.319864 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:45:26.321378 systemd-fsck[1060]: fsck.fat 4.2 (2021-01-31) Jul 2 08:45:26.321378 systemd-fsck[1060]: /dev/vda1: 789 files, 119238/258078 clusters Jul 2 08:45:26.323718 systemd[1]: Mounting usr-share-oem.mount... Jul 2 08:45:26.324964 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 08:45:26.326579 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 08:45:26.328368 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 08:45:26.333579 systemd[1]: Starting modprobe@loop.service... Jul 2 08:45:26.334119 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 08:45:26.334229 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 08:45:26.334770 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:45:26.339037 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 2 08:45:26.341157 systemd[1]: Mounted usr-share-oem.mount. Jul 2 08:45:26.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:26.342775 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jul 2 08:45:26.343491 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 08:45:26.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:26.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:26.345292 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 08:45:26.346372 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 08:45:26.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:26.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:26.349219 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:45:26.349560 systemd[1]: Finished modprobe@loop.service. Jul 2 08:45:26.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:26.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:26.353180 systemd[1]: Finished systemd-sysext.service. 
Jul 2 08:45:26.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:26.358752 systemd[1]: Mounting boot.mount... Jul 2 08:45:26.360466 systemd[1]: Starting ensure-sysext.service... Jul 2 08:45:26.361022 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 08:45:26.361157 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 08:45:26.362682 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 2 08:45:26.370122 systemd[1]: Reloading. Jul 2 08:45:26.374073 systemd-tmpfiles[1081]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 2 08:45:26.376092 systemd-tmpfiles[1081]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 08:45:26.378255 systemd-tmpfiles[1081]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 08:45:26.465613 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-07-02T08:45:26Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 08:45:26.466763 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-07-02T08:45:26Z" level=info msg="torcx already run" Jul 2 08:45:26.571963 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 08:45:26.572125 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Jul 2 08:45:26.599358 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:45:26.670380 systemd[1]: Mounted boot.mount. Jul 2 08:45:26.686843 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:45:26.687082 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 08:45:26.688285 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 08:45:26.689762 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 08:45:26.691252 systemd[1]: Starting modprobe@loop.service... Jul 2 08:45:26.691806 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 08:45:26.691942 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 08:45:26.692082 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:45:26.692994 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 08:45:26.693139 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 08:45:26.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:26.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:45:26.695157 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:45:26.695405 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 08:45:26.697908 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 08:45:26.698566 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 08:45:26.698714 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 08:45:26.698855 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:45:26.699738 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 08:45:26.699905 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 08:45:26.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:26.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:26.701641 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:45:26.701821 systemd[1]: Finished modprobe@loop.service. Jul 2 08:45:26.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:45:26.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:26.705879 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 08:45:26.709497 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 08:45:26.710031 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 08:45:26.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:26.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:26.711300 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:45:26.711737 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 08:45:26.713255 systemd[1]: Starting modprobe@drm.service... Jul 2 08:45:26.715106 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 08:45:26.717034 systemd[1]: Starting modprobe@loop.service... Jul 2 08:45:26.717700 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 08:45:26.718020 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 08:45:26.721762 systemd[1]: Starting systemd-networkd-wait-online.service... 
Jul 2 08:45:26.722605 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:45:26.724066 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 08:45:26.726285 systemd[1]: Finished modprobe@drm.service. Jul 2 08:45:26.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:26.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:26.727505 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 08:45:26.727738 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 08:45:26.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:26.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:26.728894 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:45:26.729132 systemd[1]: Finished modprobe@loop.service. Jul 2 08:45:26.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:26.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 2 08:45:26.731247 systemd[1]: Finished ensure-sysext.service. Jul 2 08:45:26.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:26.733508 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 08:45:26.733629 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 08:45:26.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:26.842883 systemd[1]: Finished systemd-boot-update.service. Jul 2 08:45:26.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:26.959994 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 2 08:45:26.965085 systemd[1]: Starting audit-rules.service... Jul 2 08:45:26.970209 systemd[1]: Starting clean-ca-certificates.service... Jul 2 08:45:26.975763 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 2 08:45:26.980569 systemd[1]: Starting systemd-resolved.service... Jul 2 08:45:26.990745 systemd[1]: Starting systemd-timesyncd.service... Jul 2 08:45:26.998533 systemd[1]: Starting systemd-update-utmp.service... Jul 2 08:45:26.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:26.999612 systemd[1]: Finished clean-ca-certificates.service. 
Jul 2 08:45:27.000580 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 08:45:27.015000 audit[1181]: SYSTEM_BOOT pid=1181 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 2 08:45:27.017401 systemd[1]: Finished systemd-update-utmp.service. Jul 2 08:45:27.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:27.054888 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 2 08:45:27.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:45:27.074991 ldconfig[1045]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 08:45:27.087000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 2 08:45:27.087000 audit[1199]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd9d10e370 a2=420 a3=0 items=0 ppid=1176 pid=1199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 08:45:27.087000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 2 08:45:27.088460 augenrules[1199]: No rules Jul 2 08:45:27.089161 systemd[1]: Finished audit-rules.service. Jul 2 08:45:27.089879 systemd[1]: Finished ldconfig.service. 
Jul 2 08:45:27.091558 systemd[1]: Starting systemd-update-done.service... Jul 2 08:45:27.092558 systemd-networkd[1020]: eth0: Gained IPv6LL Jul 2 08:45:27.104395 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 2 08:45:27.110994 systemd[1]: Finished systemd-update-done.service. Jul 2 08:45:27.135123 systemd-resolved[1179]: Positive Trust Anchors: Jul 2 08:45:27.135137 systemd-resolved[1179]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 08:45:27.135175 systemd-resolved[1179]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 08:45:27.141837 systemd[1]: Started systemd-timesyncd.service. Jul 2 08:45:27.142417 systemd[1]: Reached target time-set.target. Jul 2 08:45:27.146096 systemd-resolved[1179]: Using system hostname 'ci-3510-3-5-4-c82a94ccd3.novalocal'. Jul 2 08:45:27.147564 systemd[1]: Started systemd-resolved.service. Jul 2 08:45:27.148073 systemd[1]: Reached target network.target. Jul 2 08:45:27.148503 systemd[1]: Reached target network-online.target. Jul 2 08:45:27.148958 systemd[1]: Reached target nss-lookup.target. Jul 2 08:45:27.149410 systemd[1]: Reached target sysinit.target. Jul 2 08:45:27.149952 systemd[1]: Started motdgen.path. Jul 2 08:45:27.150420 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 2 08:45:27.151016 systemd[1]: Started logrotate.timer. Jul 2 08:45:27.151579 systemd[1]: Started mdadm.timer. Jul 2 08:45:27.151968 systemd[1]: Started systemd-tmpfiles-clean.timer. 
Jul 2 08:45:27.152649 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 08:45:27.152678 systemd[1]: Reached target paths.target. Jul 2 08:45:27.153093 systemd[1]: Reached target timers.target. Jul 2 08:45:27.153817 systemd[1]: Listening on dbus.socket. Jul 2 08:45:27.155512 systemd[1]: Starting docker.socket... Jul 2 08:45:27.157853 systemd[1]: Listening on sshd.socket. Jul 2 08:45:27.158450 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 08:45:27.158737 systemd[1]: Listening on docker.socket. Jul 2 08:45:27.159378 systemd[1]: Reached target sockets.target. Jul 2 08:45:27.159909 systemd[1]: Reached target basic.target. Jul 2 08:45:27.160564 systemd[1]: System is tainted: cgroupsv1 Jul 2 08:45:27.160623 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 08:45:27.160648 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 08:45:27.161763 systemd[1]: Starting containerd.service... Jul 2 08:45:27.163953 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Jul 2 08:45:27.165604 systemd[1]: Starting dbus.service... Jul 2 08:45:27.167214 systemd[1]: Starting enable-oem-cloudinit.service... Jul 2 08:45:27.169366 systemd[1]: Starting extend-filesystems.service... Jul 2 08:45:27.176175 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 2 08:45:27.178880 systemd[1]: Starting kubelet.service... Jul 2 08:45:27.181362 jq[1216]: false Jul 2 08:45:27.182740 systemd[1]: Starting motdgen.service... Jul 2 08:45:27.188394 systemd[1]: Starting prepare-helm.service... 
Jul 2 08:45:27.190067 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 2 08:45:27.194977 extend-filesystems[1217]: Found loop1 Jul 2 08:45:27.194977 extend-filesystems[1217]: Found vda Jul 2 08:45:27.194977 extend-filesystems[1217]: Found vda1 Jul 2 08:45:27.194977 extend-filesystems[1217]: Found vda2 Jul 2 08:45:27.194977 extend-filesystems[1217]: Found vda3 Jul 2 08:45:27.194977 extend-filesystems[1217]: Found usr Jul 2 08:45:27.194977 extend-filesystems[1217]: Found vda4 Jul 2 08:45:27.194977 extend-filesystems[1217]: Found vda6 Jul 2 08:45:27.194977 extend-filesystems[1217]: Found vda7 Jul 2 08:45:27.194977 extend-filesystems[1217]: Found vda9 Jul 2 08:45:27.194977 extend-filesystems[1217]: Checking size of /dev/vda9 Jul 2 08:45:27.197729 systemd[1]: Starting sshd-keygen.service... Jul 2 08:45:27.203612 systemd[1]: Starting systemd-logind.service... Jul 2 08:45:27.209052 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 08:45:27.209127 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 08:45:27.210253 systemd[1]: Starting update-engine.service... Jul 2 08:45:27.211737 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 2 08:45:27.213957 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 08:45:27.269551 tar[1236]: linux-amd64/helm Jul 2 08:45:27.214213 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 2 08:45:27.224276 systemd[1]: Created slice system-sshd.slice. Jul 2 08:45:27.270634 dbus-daemon[1215]: [system] SELinux support is enabled Jul 2 08:45:27.279918 jq[1233]: true Jul 2 08:45:27.241700 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 08:45:27.241918 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Jul 2 08:45:27.258790 systemd-timesyncd[1180]: Contacted time server 54.38.114.34:123 (0.flatcar.pool.ntp.org). Jul 2 08:45:27.290729 jq[1240]: true Jul 2 08:45:27.261944 systemd-timesyncd[1180]: Initial clock synchronization to Tue 2024-07-02 08:45:27.404720 UTC. Jul 2 08:45:27.270785 systemd[1]: Started dbus.service. Jul 2 08:45:27.273355 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 08:45:27.273377 systemd[1]: Reached target system-config.target. Jul 2 08:45:27.273831 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 08:45:27.273846 systemd[1]: Reached target user-config.target. Jul 2 08:45:27.300909 extend-filesystems[1217]: Resized partition /dev/vda9 Jul 2 08:45:27.311308 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 08:45:27.311604 systemd[1]: Finished motdgen.service. Jul 2 08:45:27.322554 extend-filesystems[1270]: resize2fs 1.46.5 (30-Dec-2021) Jul 2 08:45:27.367357 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks Jul 2 08:45:27.393317 env[1243]: time="2024-07-02T08:45:27.393215121Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 2 08:45:27.407723 update_engine[1232]: I0702 08:45:27.406540 1232 main.cc:92] Flatcar Update Engine starting Jul 2 08:45:27.467612 update_engine[1232]: I0702 08:45:27.416474 1232 update_check_scheduler.cc:74] Next update check in 2m15s Jul 2 08:45:27.467734 env[1243]: time="2024-07-02T08:45:27.445469855Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 08:45:27.416431 systemd[1]: Started update-engine.service. Jul 2 08:45:27.418978 systemd[1]: Started locksmithd.service. 
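The systemd-timesyncd contact above resolves its server from the Flatcar NTP pool. For orientation, time servers for timesyncd are set via `/etc/systemd/timesyncd.conf` with a fragment of roughly this shape (illustrative values; Flatcar's actual default ships as a vendor drop-in, not this exact file):

```ini
[Time]
NTP=0.flatcar.pool.ntp.org 1.flatcar.pool.ntp.org
```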
Jul 2 08:45:27.463168 systemd-logind[1229]: Watching system buttons on /dev/input/event1 (Power Button) Jul 2 08:45:27.463190 systemd-logind[1229]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 2 08:45:27.463460 systemd-logind[1229]: New seat seat0. Jul 2 08:45:27.465369 systemd[1]: Started systemd-logind.service. Jul 2 08:45:27.471551 env[1243]: time="2024-07-02T08:45:27.471186076Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:45:27.475451 env[1243]: time="2024-07-02T08:45:27.473970297Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.161-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:45:27.475451 env[1243]: time="2024-07-02T08:45:27.474044937Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:45:27.475451 env[1243]: time="2024-07-02T08:45:27.474552950Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:45:27.475451 env[1243]: time="2024-07-02T08:45:27.474599087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 08:45:27.475451 env[1243]: time="2024-07-02T08:45:27.474622000Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 08:45:27.475451 env[1243]: time="2024-07-02T08:45:27.474634814Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Jul 2 08:45:27.475451 env[1243]: time="2024-07-02T08:45:27.474791037Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:45:27.475451 env[1243]: time="2024-07-02T08:45:27.475142165Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:45:27.475451 env[1243]: time="2024-07-02T08:45:27.475373830Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:45:27.475451 env[1243]: time="2024-07-02T08:45:27.475395060Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 08:45:27.476087 env[1243]: time="2024-07-02T08:45:27.475833281Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 08:45:27.476087 env[1243]: time="2024-07-02T08:45:27.475855704Z" level=info msg="metadata content store policy set" policy=shared Jul 2 08:45:27.495353 kernel: EXT4-fs (vda9): resized filesystem to 4635643 Jul 2 08:45:27.629580 bash[1279]: Updated "/home/core/.ssh/authorized_keys" Jul 2 08:45:27.632167 extend-filesystems[1270]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 2 08:45:27.632167 extend-filesystems[1270]: old_desc_blocks = 1, new_desc_blocks = 3 Jul 2 08:45:27.632167 extend-filesystems[1270]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long. Jul 2 08:45:27.666664 env[1243]: time="2024-07-02T08:45:27.651453036Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 08:45:27.666664 env[1243]: time="2024-07-02T08:45:27.651682026Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Jul 2 08:45:27.666664 env[1243]: time="2024-07-02T08:45:27.651765262Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 08:45:27.666664 env[1243]: time="2024-07-02T08:45:27.651888743Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 08:45:27.666664 env[1243]: time="2024-07-02T08:45:27.652054564Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 08:45:27.666664 env[1243]: time="2024-07-02T08:45:27.652098827Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 08:45:27.666664 env[1243]: time="2024-07-02T08:45:27.652172816Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 08:45:27.666664 env[1243]: time="2024-07-02T08:45:27.652246685Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 08:45:27.666664 env[1243]: time="2024-07-02T08:45:27.652285558Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 2 08:45:27.666664 env[1243]: time="2024-07-02T08:45:27.652364506Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 08:45:27.666664 env[1243]: time="2024-07-02T08:45:27.652406945Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 08:45:27.666664 env[1243]: time="2024-07-02T08:45:27.652480223Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 08:45:27.666664 env[1243]: time="2024-07-02T08:45:27.652886775Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jul 2 08:45:27.666664 env[1243]: time="2024-07-02T08:45:27.653227134Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 08:45:27.632550 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 2 08:45:27.667652 extend-filesystems[1217]: Resized filesystem in /dev/vda9 Jul 2 08:45:27.679190 env[1243]: time="2024-07-02T08:45:27.654819500Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 08:45:27.679190 env[1243]: time="2024-07-02T08:45:27.655010709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 08:45:27.679190 env[1243]: time="2024-07-02T08:45:27.655135633Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 08:45:27.679190 env[1243]: time="2024-07-02T08:45:27.655701554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 08:45:27.679190 env[1243]: time="2024-07-02T08:45:27.662891279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 08:45:27.679190 env[1243]: time="2024-07-02T08:45:27.663425340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 08:45:27.679190 env[1243]: time="2024-07-02T08:45:27.663444206Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 08:45:27.679190 env[1243]: time="2024-07-02T08:45:27.663460697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 08:45:27.679190 env[1243]: time="2024-07-02T08:45:27.663513536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
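The extend-filesystems sequence above grows /dev/vda9 online from 1617920 to 4635643 blocks at the 4 KiB ext4 block size. The arithmetic behind those counts, as a small sketch (block counts taken from the resize2fs output above; the helper name is just for illustration):

```python
def blocks_to_gib(blocks: int, block_size: int = 4096) -> float:
    """Convert an ext4 block count to GiB."""
    return blocks * block_size / 2**30

# Counts from the resize2fs log lines above.
old, new = 1_617_920, 4_635_643
print(f"before: {blocks_to_gib(old):.2f} GiB")  # ~6.17 GiB
print(f"after:  {blocks_to_gib(new):.2f} GiB")  # ~17.68 GiB
```

This matches the kernel's "resizing filesystem from 1617920 to 4635643 blocks" message: the root filesystem roughly tripled to fill the grown partition.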
type=io.containerd.grpc.v1 Jul 2 08:45:27.679190 env[1243]: time="2024-07-02T08:45:27.663529365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 08:45:27.679190 env[1243]: time="2024-07-02T08:45:27.663544975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 08:45:27.679190 env[1243]: time="2024-07-02T08:45:27.663585551Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 08:45:27.679190 env[1243]: time="2024-07-02T08:45:27.663808729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 08:45:27.679190 env[1243]: time="2024-07-02T08:45:27.663853523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 08:45:27.679190 env[1243]: time="2024-07-02T08:45:27.663872439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 08:45:27.643543 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 08:45:27.680371 env[1243]: time="2024-07-02T08:45:27.663886746Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 08:45:27.680371 env[1243]: time="2024-07-02T08:45:27.663906613Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 2 08:45:27.680371 env[1243]: time="2024-07-02T08:45:27.663939905Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Jul 2 08:45:27.680371 env[1243]: time="2024-07-02T08:45:27.663973007Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 2 08:45:27.680371 env[1243]: time="2024-07-02T08:45:27.664043470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 2 08:45:27.643788 systemd[1]: Finished extend-filesystems.service. Jul 2 08:45:27.667575 systemd[1]: Started containerd.service. Jul 2 08:45:27.680875 env[1243]: time="2024-07-02T08:45:27.664321681Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 08:45:27.680875 env[1243]: time="2024-07-02T08:45:27.664441256Z" level=info msg="Connect containerd service" Jul 2 08:45:27.680875 env[1243]: time="2024-07-02T08:45:27.664513581Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 08:45:27.680875 env[1243]: time="2024-07-02T08:45:27.666155310Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 08:45:27.680875 env[1243]: time="2024-07-02T08:45:27.667304766Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 08:45:27.680875 env[1243]: time="2024-07-02T08:45:27.667383604Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jul 2 08:45:27.680875 env[1243]: time="2024-07-02T08:45:27.667801247Z" level=info msg="containerd successfully booted in 0.291645s" Jul 2 08:45:27.680875 env[1243]: time="2024-07-02T08:45:27.674444136Z" level=info msg="Start subscribing containerd event" Jul 2 08:45:27.680875 env[1243]: time="2024-07-02T08:45:27.674669058Z" level=info msg="Start recovering state" Jul 2 08:45:27.680875 env[1243]: time="2024-07-02T08:45:27.674757424Z" level=info msg="Start event monitor" Jul 2 08:45:27.680875 env[1243]: time="2024-07-02T08:45:27.674776489Z" level=info msg="Start snapshots syncer" Jul 2 08:45:27.680875 env[1243]: time="2024-07-02T08:45:27.674788762Z" level=info msg="Start cni network conf syncer for default" Jul 2 08:45:27.680875 env[1243]: time="2024-07-02T08:45:27.674797659Z" level=info msg="Start streaming server" Jul 2 08:45:28.103534 locksmithd[1282]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 08:45:28.488637 tar[1236]: linux-amd64/LICENSE Jul 2 08:45:28.488637 tar[1236]: linux-amd64/README.md Jul 2 08:45:28.498166 systemd[1]: Finished prepare-helm.service. Jul 2 08:45:28.597954 sshd_keygen[1258]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 08:45:28.642432 systemd[1]: Finished sshd-keygen.service. Jul 2 08:45:28.645956 systemd[1]: Starting issuegen.service... Jul 2 08:45:28.648055 systemd[1]: Started sshd@0-172.24.4.53:22-172.24.4.1:41702.service. Jul 2 08:45:28.658039 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 08:45:28.658289 systemd[1]: Finished issuegen.service. Jul 2 08:45:28.660269 systemd[1]: Starting systemd-user-sessions.service... Jul 2 08:45:28.671012 systemd[1]: Finished systemd-user-sessions.service. Jul 2 08:45:28.672954 systemd[1]: Started getty@tty1.service. Jul 2 08:45:28.674544 systemd[1]: Started serial-getty@ttyS0.service. Jul 2 08:45:28.675169 systemd[1]: Reached target getty.target. 
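The CRI plugin configuration dumped above (overlayfs snapshotter, runc via `io.containerd.runc.v2`, `SystemdCgroup:false`, `registry.k8s.io/pause:3.6` sandbox image) corresponds roughly to this containerd 1.6 `config.toml` fragment — a sketch of the relevant keys, not the exact file the image ships:

```toml
version = 2

[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.6"

[plugins."io.containerd.grpc.v1.cri".containerd]
  snapshotter = "overlayfs"
  default_runtime_name = "runc"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false
```

Note `SystemdCgroup = false` together with the earlier "System is tainted: cgroupsv1" line: this boot is running the legacy cgroup v1 hierarchy with cgroupfs-managed runc cgroups.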
Jul 2 08:45:28.793307 systemd[1]: Started kubelet.service. Jul 2 08:45:29.946956 sshd[1307]: Accepted publickey for core from 172.24.4.1 port 41702 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q Jul 2 08:45:29.952971 sshd[1307]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:45:29.989286 systemd[1]: Created slice user-500.slice. Jul 2 08:45:29.991759 systemd[1]: Starting user-runtime-dir@500.service... Jul 2 08:45:29.992314 systemd-logind[1229]: New session 1 of user core. Jul 2 08:45:30.007135 systemd[1]: Finished user-runtime-dir@500.service. Jul 2 08:45:30.008935 systemd[1]: Starting user@500.service... Jul 2 08:45:30.020115 (systemd)[1329]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:45:30.116731 systemd[1329]: Queued start job for default target default.target. Jul 2 08:45:30.117321 systemd[1329]: Reached target paths.target. Jul 2 08:45:30.117413 systemd[1329]: Reached target sockets.target. Jul 2 08:45:30.117451 systemd[1329]: Reached target timers.target. Jul 2 08:45:30.117484 systemd[1329]: Reached target basic.target. Jul 2 08:45:30.117743 systemd[1]: Started user@500.service. Jul 2 08:45:30.121411 systemd[1]: Started session-1.scope. Jul 2 08:45:30.122508 systemd[1329]: Reached target default.target. Jul 2 08:45:30.122597 systemd[1329]: Startup finished in 95ms. Jul 2 08:45:30.637818 systemd[1]: Started sshd@1-172.24.4.53:22-172.24.4.1:41716.service. 
Jul 2 08:45:31.204441 kubelet[1321]: E0702 08:45:31.204282 1321 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:45:31.208751 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:45:31.208927 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:45:31.991403 sshd[1340]: Accepted publickey for core from 172.24.4.1 port 41716 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q Jul 2 08:45:31.994315 sshd[1340]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:45:32.006450 systemd-logind[1229]: New session 2 of user core. Jul 2 08:45:32.007266 systemd[1]: Started session-2.scope. Jul 2 08:45:32.799510 sshd[1340]: pam_unix(sshd:session): session closed for user core Jul 2 08:45:32.800134 systemd[1]: Started sshd@2-172.24.4.53:22-172.24.4.1:41718.service. Jul 2 08:45:32.807216 systemd[1]: sshd@1-172.24.4.53:22-172.24.4.1:41716.service: Deactivated successfully. Jul 2 08:45:32.810003 systemd-logind[1229]: Session 2 logged out. Waiting for processes to exit. Jul 2 08:45:32.811768 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 08:45:32.817664 systemd-logind[1229]: Removed session 2. Jul 2 08:45:34.337855 sshd[1347]: Accepted publickey for core from 172.24.4.1 port 41718 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q Jul 2 08:45:34.340743 sshd[1347]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:45:34.351950 systemd-logind[1229]: New session 3 of user core. Jul 2 08:45:34.352778 systemd[1]: Started session-3.scope. 
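The kubelet failure above is not a crash in kubelet itself: `/var/lib/kubelet/config.yaml` simply does not exist yet, because that file is normally written by `kubeadm init`/`kubeadm join`, which has not run on this node. For orientation only, a minimal `KubeletConfiguration` of the shape kubelet expects at that path (illustrative values):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
```

Until some provisioning step writes this file, every kubelet start will exit with status 1 exactly as logged here.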
Jul 2 08:45:34.464986 coreos-metadata[1214]: Jul 02 08:45:34.464 WARN failed to locate config-drive, using the metadata service API instead Jul 2 08:45:34.570084 coreos-metadata[1214]: Jul 02 08:45:34.570 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jul 2 08:45:34.894777 coreos-metadata[1214]: Jul 02 08:45:34.894 INFO Fetch successful Jul 2 08:45:34.894777 coreos-metadata[1214]: Jul 02 08:45:34.894 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 2 08:45:34.911928 coreos-metadata[1214]: Jul 02 08:45:34.911 INFO Fetch successful Jul 2 08:45:34.917542 unknown[1214]: wrote ssh authorized keys file for user: core Jul 2 08:45:34.948514 update-ssh-keys[1357]: Updated "/home/core/.ssh/authorized_keys" Jul 2 08:45:34.949274 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Jul 2 08:45:34.950068 systemd[1]: Reached target multi-user.target. Jul 2 08:45:34.953087 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 2 08:45:34.974216 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 2 08:45:34.974834 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 2 08:45:34.975714 systemd[1]: Startup finished in 9.107s (kernel) + 16.060s (userspace) = 25.168s. Jul 2 08:45:34.989616 sshd[1347]: pam_unix(sshd:session): session closed for user core Jul 2 08:45:34.994639 systemd[1]: sshd@2-172.24.4.53:22-172.24.4.1:41718.service: Deactivated successfully. Jul 2 08:45:34.996718 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 08:45:34.996865 systemd-logind[1229]: Session 3 logged out. Waiting for processes to exit. Jul 2 08:45:34.999322 systemd-logind[1229]: Removed session 3. Jul 2 08:45:41.210953 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 08:45:41.211450 systemd[1]: Stopped kubelet.service. Jul 2 08:45:41.214417 systemd[1]: Starting kubelet.service... 
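The coreos-metadata run above falls back from the (absent) config-drive to the EC2-compatible metadata service and fetches the instance's SSH key in two steps: list `public-keys`, then fetch `public-keys/0/openssh-key`. A minimal sketch of the same fetch with urllib (endpoint paths taken from the log; no retries or error handling, and it only works from inside an instance):

```python
from urllib.request import urlopen

# Link-local metadata endpoint seen in the log above.
METADATA_BASE = "http://169.254.169.254/latest/meta-data"

def metadata_url(path: str) -> str:
    """Build the URL for one metadata path."""
    return f"{METADATA_BASE}/{path}"

def fetch(path: str) -> str:
    """Fetch one metadata path; reachable only from inside the instance."""
    with urlopen(metadata_url(path), timeout=5) as resp:
        return resp.read().decode()

# On an OpenStack/EC2-style instance:
#   index = fetch("public-keys")                  # enumerates key slots
#   key   = fetch("public-keys/0/openssh-key")    # the authorized key itself
```

The fetched key is then written to `/home/core/.ssh/authorized_keys`, which is the "wrote ssh authorized keys file for user: core" line that follows.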
Jul 2 08:45:41.397390 systemd[1]: Started kubelet.service. Jul 2 08:45:41.719005 kubelet[1373]: E0702 08:45:41.718926 1373 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:45:41.726879 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:45:41.727211 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:45:45.038942 systemd[1]: Started sshd@3-172.24.4.53:22-172.24.4.1:42456.service. Jul 2 08:45:46.666689 sshd[1381]: Accepted publickey for core from 172.24.4.1 port 42456 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q Jul 2 08:45:46.669232 sshd[1381]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:45:46.680658 systemd-logind[1229]: New session 4 of user core. Jul 2 08:45:46.681790 systemd[1]: Started session-4.scope. Jul 2 08:45:47.381953 sshd[1381]: pam_unix(sshd:session): session closed for user core Jul 2 08:45:47.388513 systemd[1]: Started sshd@4-172.24.4.53:22-172.24.4.1:42468.service. Jul 2 08:45:47.396044 systemd[1]: sshd@3-172.24.4.53:22-172.24.4.1:42456.service: Deactivated successfully. Jul 2 08:45:47.401547 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 08:45:47.403748 systemd-logind[1229]: Session 4 logged out. Waiting for processes to exit. Jul 2 08:45:47.406979 systemd-logind[1229]: Removed session 4. Jul 2 08:45:48.747811 sshd[1386]: Accepted publickey for core from 172.24.4.1 port 42468 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q Jul 2 08:45:48.750392 sshd[1386]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:45:48.760706 systemd-logind[1229]: New session 5 of user core. 
Jul 2 08:45:48.761283 systemd[1]: Started session-5.scope. Jul 2 08:45:49.359930 sshd[1386]: pam_unix(sshd:session): session closed for user core Jul 2 08:45:49.364891 systemd[1]: Started sshd@5-172.24.4.53:22-172.24.4.1:42470.service. Jul 2 08:45:49.373112 systemd[1]: sshd@4-172.24.4.53:22-172.24.4.1:42468.service: Deactivated successfully. Jul 2 08:45:49.376794 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 08:45:49.376800 systemd-logind[1229]: Session 5 logged out. Waiting for processes to exit. Jul 2 08:45:49.383677 systemd-logind[1229]: Removed session 5. Jul 2 08:45:50.834079 sshd[1393]: Accepted publickey for core from 172.24.4.1 port 42470 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q Jul 2 08:45:50.839644 sshd[1393]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:45:50.853770 systemd-logind[1229]: New session 6 of user core. Jul 2 08:45:50.855112 systemd[1]: Started session-6.scope. Jul 2 08:45:51.363909 sshd[1393]: pam_unix(sshd:session): session closed for user core Jul 2 08:45:51.366209 systemd[1]: Started sshd@6-172.24.4.53:22-172.24.4.1:42476.service. Jul 2 08:45:51.376154 systemd[1]: sshd@5-172.24.4.53:22-172.24.4.1:42470.service: Deactivated successfully. Jul 2 08:45:51.378763 systemd-logind[1229]: Session 6 logged out. Waiting for processes to exit. Jul 2 08:45:51.378780 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 08:45:51.384929 systemd-logind[1229]: Removed session 6. Jul 2 08:45:51.960820 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 08:45:51.961513 systemd[1]: Stopped kubelet.service. Jul 2 08:45:51.965319 systemd[1]: Starting kubelet.service... Jul 2 08:45:52.179475 systemd[1]: Started kubelet.service. 
Jul 2 08:45:52.591448 kubelet[1411]: E0702 08:45:52.591260 1411 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:45:52.595984 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:45:52.596608 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:45:52.835380 sshd[1400]: Accepted publickey for core from 172.24.4.1 port 42476 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q Jul 2 08:45:52.840372 sshd[1400]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:45:52.854760 systemd-logind[1229]: New session 7 of user core. Jul 2 08:45:52.855220 systemd[1]: Started session-7.scope. Jul 2 08:45:53.301132 sudo[1421]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 08:45:53.301688 sudo[1421]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 08:45:53.328897 systemd[1]: Starting docker.service... 
Jul 2 08:45:53.372944 env[1431]: time="2024-07-02T08:45:53.372874833Z" level=info msg="Starting up" Jul 2 08:45:53.374257 env[1431]: time="2024-07-02T08:45:53.374238596Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 08:45:53.374366 env[1431]: time="2024-07-02T08:45:53.374342017Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 08:45:53.374498 env[1431]: time="2024-07-02T08:45:53.374479545Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 2 08:45:53.374572 env[1431]: time="2024-07-02T08:45:53.374557140Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 08:45:53.379985 env[1431]: time="2024-07-02T08:45:53.379903267Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 08:45:53.379985 env[1431]: time="2024-07-02T08:45:53.379975991Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 08:45:53.380130 env[1431]: time="2024-07-02T08:45:53.379999370Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 2 08:45:53.380130 env[1431]: time="2024-07-02T08:45:53.380013846Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 08:45:53.389591 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1527879325-merged.mount: Deactivated successfully. Jul 2 08:45:53.741500 env[1431]: time="2024-07-02T08:45:53.741441246Z" level=warning msg="Your kernel does not support cgroup blkio weight" Jul 2 08:45:53.741833 env[1431]: time="2024-07-02T08:45:53.741797816Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Jul 2 08:45:53.742412 env[1431]: time="2024-07-02T08:45:53.742322872Z" level=info msg="Loading containers: start." 
Jul 2 08:45:53.932604 kernel: Initializing XFRM netlink socket Jul 2 08:45:53.978509 env[1431]: time="2024-07-02T08:45:53.978444915Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Jul 2 08:45:54.088980 systemd-networkd[1020]: docker0: Link UP Jul 2 08:45:54.106584 env[1431]: time="2024-07-02T08:45:54.106544653Z" level=info msg="Loading containers: done." Jul 2 08:45:54.124838 env[1431]: time="2024-07-02T08:45:54.124805044Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 08:45:54.125134 env[1431]: time="2024-07-02T08:45:54.125115803Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 2 08:45:54.125320 env[1431]: time="2024-07-02T08:45:54.125300264Z" level=info msg="Daemon has completed initialization" Jul 2 08:45:54.155502 systemd[1]: Started docker.service. Jul 2 08:45:54.166437 env[1431]: time="2024-07-02T08:45:54.166388477Z" level=info msg="API listen on /run/docker.sock" Jul 2 08:45:56.040325 env[1243]: time="2024-07-02T08:45:56.040213909Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jul 2 08:45:56.865968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2712405394.mount: Deactivated successfully. 
Jul 2 08:45:59.771847 env[1243]: time="2024-07-02T08:45:59.771663333Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:45:59.774879 env[1243]: time="2024-07-02T08:45:59.774731919Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:45:59.779274 env[1243]: time="2024-07-02T08:45:59.779216694Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:45:59.784431 env[1243]: time="2024-07-02T08:45:59.784407088Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:45:59.787167 env[1243]: time="2024-07-02T08:45:59.787141741Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\"" Jul 2 08:45:59.805196 env[1243]: time="2024-07-02T08:45:59.805164275Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jul 2 08:46:02.710779 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 2 08:46:02.711039 systemd[1]: Stopped kubelet.service. Jul 2 08:46:02.712891 systemd[1]: Starting kubelet.service... Jul 2 08:46:02.796772 systemd[1]: Started kubelet.service. 
Jul 2 08:46:02.878689 kubelet[1573]: E0702 08:46:02.878633 1573 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 08:46:02.880458 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 08:46:02.880614 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 08:46:03.194391 env[1243]: time="2024-07-02T08:46:03.194145017Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:46:03.200638 env[1243]: time="2024-07-02T08:46:03.200547890Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:46:03.207707 env[1243]: time="2024-07-02T08:46:03.207564696Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:46:03.213189 env[1243]: time="2024-07-02T08:46:03.213106348Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:46:03.215820 env[1243]: time="2024-07-02T08:46:03.215716495Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\""
Jul 2 08:46:03.242796 env[1243]: time="2024-07-02T08:46:03.242682376Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\""
Jul 2 08:46:05.491426 env[1243]: time="2024-07-02T08:46:05.491367486Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:46:05.493851 env[1243]: time="2024-07-02T08:46:05.493801349Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:46:05.497840 env[1243]: time="2024-07-02T08:46:05.497762319Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:46:05.502223 env[1243]: time="2024-07-02T08:46:05.502159961Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:46:05.504172 env[1243]: time="2024-07-02T08:46:05.504082846Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\""
Jul 2 08:46:05.515433 env[1243]: time="2024-07-02T08:46:05.515400396Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\""
Jul 2 08:46:07.581876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1652301996.mount: Deactivated successfully.
Jul 2 08:46:08.428911 env[1243]: time="2024-07-02T08:46:08.428842701Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:46:08.431013 env[1243]: time="2024-07-02T08:46:08.430964827Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:46:08.432920 env[1243]: time="2024-07-02T08:46:08.432872607Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:46:08.435384 env[1243]: time="2024-07-02T08:46:08.435297911Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:46:08.436427 env[1243]: time="2024-07-02T08:46:08.436375098Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\""
Jul 2 08:46:08.447667 env[1243]: time="2024-07-02T08:46:08.447594822Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jul 2 08:46:09.072934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1511674107.mount: Deactivated successfully.
Jul 2 08:46:09.094469 env[1243]: time="2024-07-02T08:46:09.094403483Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:46:09.098145 env[1243]: time="2024-07-02T08:46:09.098058073Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:46:09.103636 env[1243]: time="2024-07-02T08:46:09.103566908Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:46:09.105589 env[1243]: time="2024-07-02T08:46:09.105538030Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:46:09.106960 env[1243]: time="2024-07-02T08:46:09.106904975Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jul 2 08:46:09.128806 env[1243]: time="2024-07-02T08:46:09.128743364Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jul 2 08:46:09.805651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount413958723.mount: Deactivated successfully.
Jul 2 08:46:12.814805 update_engine[1232]: I0702 08:46:12.814739 1232 update_attempter.cc:509] Updating boot flags...
Jul 2 08:46:12.883701 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jul 2 08:46:12.884214 systemd[1]: Stopped kubelet.service.
Jul 2 08:46:12.893938 systemd[1]: Starting kubelet.service...
Jul 2 08:46:13.349208 systemd[1]: Started kubelet.service.
Jul 2 08:46:13.461859 kubelet[1626]: E0702 08:46:13.461806 1626 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 08:46:13.463194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 08:46:13.463364 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 08:46:15.578843 env[1243]: time="2024-07-02T08:46:15.578761127Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:46:15.585016 env[1243]: time="2024-07-02T08:46:15.584960562Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:46:15.590548 env[1243]: time="2024-07-02T08:46:15.590499233Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:46:15.594959 env[1243]: time="2024-07-02T08:46:15.594879157Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:46:15.597503 env[1243]: time="2024-07-02T08:46:15.597445494Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Jul 2 08:46:15.620799 env[1243]: time="2024-07-02T08:46:15.620738068Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Jul 2 08:46:16.346670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2098482243.mount: Deactivated successfully.
Jul 2 08:46:17.534677 env[1243]: time="2024-07-02T08:46:17.534543796Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:46:17.537059 env[1243]: time="2024-07-02T08:46:17.536985299Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:46:17.540256 env[1243]: time="2024-07-02T08:46:17.540225011Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:46:17.544071 env[1243]: time="2024-07-02T08:46:17.544048558Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:46:17.545671 env[1243]: time="2024-07-02T08:46:17.545603146Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\""
Jul 2 08:46:21.729622 systemd[1]: Stopped kubelet.service.
Jul 2 08:46:21.736515 systemd[1]: Starting kubelet.service...
Jul 2 08:46:21.776928 systemd[1]: Reloading.
Jul 2 08:46:21.903593 /usr/lib/systemd/system-generators/torcx-generator[1728]: time="2024-07-02T08:46:21Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]"
Jul 2 08:46:21.903627 /usr/lib/systemd/system-generators/torcx-generator[1728]: time="2024-07-02T08:46:21Z" level=info msg="torcx already run"
Jul 2 08:46:22.004630 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 2 08:46:22.004650 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 2 08:46:22.029712 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 08:46:22.385157 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 2 08:46:22.385413 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 2 08:46:22.386008 systemd[1]: Stopped kubelet.service.
Jul 2 08:46:22.389962 systemd[1]: Starting kubelet.service...
Jul 2 08:46:22.508476 systemd[1]: Started kubelet.service.
Jul 2 08:46:22.991902 kubelet[1790]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 08:46:22.991902 kubelet[1790]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 08:46:22.991902 kubelet[1790]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 08:46:22.992279 kubelet[1790]: I0702 08:46:22.992030 1790 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 08:46:23.715453 kubelet[1790]: I0702 08:46:23.715415 1790 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Jul 2 08:46:23.715583 kubelet[1790]: I0702 08:46:23.715466 1790 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 08:46:23.715908 kubelet[1790]: I0702 08:46:23.715884 1790 server.go:895] "Client rotation is on, will bootstrap in background"
Jul 2 08:46:23.756479 kubelet[1790]: E0702 08:46:23.756458 1790 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.53:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.53:6443: connect: connection refused
Jul 2 08:46:23.756728 kubelet[1790]: I0702 08:46:23.756713 1790 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 08:46:23.782087 kubelet[1790]: I0702 08:46:23.782069 1790 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 08:46:23.782582 kubelet[1790]: I0702 08:46:23.782568 1790 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 08:46:23.782827 kubelet[1790]: I0702 08:46:23.782811 1790 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 08:46:23.783649 kubelet[1790]: I0702 08:46:23.783635 1790 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 08:46:23.783729 kubelet[1790]: I0702 08:46:23.783718 1790 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 08:46:23.784797 kubelet[1790]: I0702 08:46:23.784784 1790 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 08:46:23.786582 kubelet[1790]: I0702 08:46:23.786570 1790 kubelet.go:393] "Attempting to sync node with API server"
Jul 2 08:46:23.787025 kubelet[1790]: I0702 08:46:23.787012 1790 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 08:46:23.787116 kubelet[1790]: I0702 08:46:23.787106 1790 kubelet.go:309] "Adding apiserver pod source"
Jul 2 08:46:23.787211 kubelet[1790]: I0702 08:46:23.787200 1790 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 08:46:23.787283 kubelet[1790]: W0702 08:46:23.787206 1790 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.24.4.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-5-4-c82a94ccd3.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.53:6443: connect: connection refused
Jul 2 08:46:23.787375 kubelet[1790]: E0702 08:46:23.787318 1790 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-5-4-c82a94ccd3.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.53:6443: connect: connection refused
Jul 2 08:46:23.791345 kubelet[1790]: I0702 08:46:23.791311 1790 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Jul 2 08:46:23.793254 kubelet[1790]: W0702 08:46:23.793240 1790 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 2 08:46:23.793813 kubelet[1790]: I0702 08:46:23.793799 1790 server.go:1232] "Started kubelet"
Jul 2 08:46:23.793999 kubelet[1790]: W0702 08:46:23.793962 1790 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.24.4.53:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.53:6443: connect: connection refused
Jul 2 08:46:23.794094 kubelet[1790]: E0702 08:46:23.794084 1790 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.53:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.53:6443: connect: connection refused
Jul 2 08:46:23.802101 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Jul 2 08:46:23.802272 kubelet[1790]: I0702 08:46:23.802257 1790 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 08:46:23.803755 kubelet[1790]: E0702 08:46:23.803497 1790 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-5-4-c82a94ccd3.novalocal.17de59048dad0e80", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-5-4-c82a94ccd3.novalocal", UID:"ci-3510-3-5-4-c82a94ccd3.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-5-4-c82a94ccd3.novalocal"}, FirstTimestamp:time.Date(2024, time.July, 2, 8, 46, 23, 793778304, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 8, 46, 23, 793778304, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510-3-5-4-c82a94ccd3.novalocal"}': 'Post "https://172.24.4.53:6443/api/v1/namespaces/default/events": dial tcp 172.24.4.53:6443: connect: connection refused'(may retry after sleeping)
Jul 2 08:46:23.807142 kubelet[1790]: I0702 08:46:23.807125 1790 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 08:46:23.808097 kubelet[1790]: I0702 08:46:23.808082 1790 server.go:462] "Adding debug handlers to kubelet server"
Jul 2 08:46:23.809139 kubelet[1790]: I0702 08:46:23.809126 1790 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Jul 2 08:46:23.809407 kubelet[1790]: I0702 08:46:23.809394 1790 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 08:46:23.811171 kubelet[1790]: E0702 08:46:23.811137 1790 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Jul 2 08:46:23.811241 kubelet[1790]: E0702 08:46:23.811196 1790 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 08:46:23.812010 kubelet[1790]: I0702 08:46:23.811792 1790 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 08:46:23.812132 kubelet[1790]: I0702 08:46:23.812107 1790 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jul 2 08:46:23.812235 kubelet[1790]: I0702 08:46:23.812214 1790 reconciler_new.go:29] "Reconciler: start to sync state"
Jul 2 08:46:23.812900 kubelet[1790]: W0702 08:46:23.812825 1790 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.24.4.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.53:6443: connect: connection refused
Jul 2 08:46:23.812960 kubelet[1790]: E0702 08:46:23.812918 1790 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.53:6443: connect: connection refused
Jul 2 08:46:23.814116 kubelet[1790]: E0702 08:46:23.814100 1790 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-5-4-c82a94ccd3.novalocal?timeout=10s\": dial tcp 172.24.4.53:6443: connect: connection refused" interval="200ms"
Jul 2 08:46:23.854689 kubelet[1790]: I0702 08:46:23.854646 1790 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 08:46:23.856417 kubelet[1790]: I0702 08:46:23.856387 1790 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 08:46:23.856476 kubelet[1790]: I0702 08:46:23.856430 1790 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 08:46:23.856506 kubelet[1790]: I0702 08:46:23.856478 1790 kubelet.go:2303] "Starting kubelet main sync loop"
Jul 2 08:46:23.856571 kubelet[1790]: E0702 08:46:23.856551 1790 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 08:46:23.864705 kubelet[1790]: W0702 08:46:23.864456 1790 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.24.4.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.53:6443: connect: connection refused
Jul 2 08:46:23.864705 kubelet[1790]: E0702 08:46:23.864533 1790 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.53:6443: connect: connection refused
Jul 2 08:46:23.874684 kubelet[1790]: I0702 08:46:23.874571 1790 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 08:46:23.874684 kubelet[1790]: I0702 08:46:23.874592 1790 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 08:46:23.874684 kubelet[1790]: I0702 08:46:23.874632 1790 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 08:46:23.913658 kubelet[1790]: I0702 08:46:23.913642 1790 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-5-4-c82a94ccd3.novalocal"
Jul 2 08:46:23.926288 kubelet[1790]: E0702 08:46:23.914034 1790 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.53:6443/api/v1/nodes\": dial tcp 172.24.4.53:6443: connect: connection refused" node="ci-3510-3-5-4-c82a94ccd3.novalocal"
Jul 2 08:46:23.957429 kubelet[1790]: E0702 08:46:23.957394 1790 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 2 08:46:24.020508 kubelet[1790]: E0702 08:46:24.015664 1790 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-5-4-c82a94ccd3.novalocal?timeout=10s\": dial tcp 172.24.4.53:6443: connect: connection refused" interval="400ms"
Jul 2 08:46:24.021657 kubelet[1790]: I0702 08:46:24.021620 1790 policy_none.go:49] "None policy: Start"
Jul 2 08:46:24.023816 kubelet[1790]: I0702 08:46:24.023741 1790 memory_manager.go:169] "Starting memorymanager" policy="None"
Jul 2 08:46:24.023984 kubelet[1790]: I0702 08:46:24.023878 1790 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 08:46:24.117630 kubelet[1790]: I0702 08:46:24.117564 1790 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-5-4-c82a94ccd3.novalocal"
Jul 2 08:46:24.119400 kubelet[1790]: E0702 08:46:24.119322 1790 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.53:6443/api/v1/nodes\": dial tcp 172.24.4.53:6443: connect: connection refused" node="ci-3510-3-5-4-c82a94ccd3.novalocal"
Jul 2 08:46:24.157823 kubelet[1790]: E0702 08:46:24.157761 1790 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 2 08:46:24.241747 kubelet[1790]: I0702 08:46:24.241705 1790 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 08:46:24.243210 kubelet[1790]: I0702 08:46:24.243186 1790 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 08:46:24.243875 kubelet[1790]: E0702 08:46:24.243847 1790 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-5-4-c82a94ccd3.novalocal\" not found"
Jul 2 08:46:24.418302 kubelet[1790]: E0702 08:46:24.417200 1790 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-5-4-c82a94ccd3.novalocal?timeout=10s\": dial tcp 172.24.4.53:6443: connect: connection refused" interval="800ms"
Jul 2 08:46:24.522701 kubelet[1790]: I0702 08:46:24.522665 1790 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-5-4-c82a94ccd3.novalocal"
Jul 2 08:46:24.523645 kubelet[1790]: E0702 08:46:24.523618 1790 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.53:6443/api/v1/nodes\": dial tcp 172.24.4.53:6443: connect: connection refused" node="ci-3510-3-5-4-c82a94ccd3.novalocal"
Jul 2 08:46:24.558898 kubelet[1790]: I0702 08:46:24.558865 1790 topology_manager.go:215] "Topology Admit Handler" podUID="21facd521ac6c96354f2a276e1dfcd31" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-5-4-c82a94ccd3.novalocal"
Jul 2 08:46:24.561463 kubelet[1790]: I0702 08:46:24.561427 1790 topology_manager.go:215] "Topology Admit Handler" podUID="8bb6671bb90a115b72a33fa31a4ed7dc" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-5-4-c82a94ccd3.novalocal"
Jul 2 08:46:24.566953 kubelet[1790]: I0702 08:46:24.566919 1790 topology_manager.go:215] "Topology Admit Handler" podUID="c1b1d0b0ac7bd611df413ab334a6e972" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-5-4-c82a94ccd3.novalocal"
Jul 2 08:46:24.618646 kubelet[1790]: I0702 08:46:24.618596 1790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8bb6671bb90a115b72a33fa31a4ed7dc-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-5-4-c82a94ccd3.novalocal\" (UID: \"8bb6671bb90a115b72a33fa31a4ed7dc\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-4-c82a94ccd3.novalocal"
Jul 2 08:46:24.619104 kubelet[1790]: I0702 08:46:24.619067 1790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c1b1d0b0ac7bd611df413ab334a6e972-kubeconfig\") pod \"kube-scheduler-ci-3510-3-5-4-c82a94ccd3.novalocal\" (UID: \"c1b1d0b0ac7bd611df413ab334a6e972\") " pod="kube-system/kube-scheduler-ci-3510-3-5-4-c82a94ccd3.novalocal"
Jul 2 08:46:24.619453 kubelet[1790]: I0702 08:46:24.619426 1790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/21facd521ac6c96354f2a276e1dfcd31-ca-certs\") pod \"kube-apiserver-ci-3510-3-5-4-c82a94ccd3.novalocal\" (UID: \"21facd521ac6c96354f2a276e1dfcd31\") " pod="kube-system/kube-apiserver-ci-3510-3-5-4-c82a94ccd3.novalocal"
Jul 2 08:46:24.619782 kubelet[1790]: I0702 08:46:24.619688 1790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/21facd521ac6c96354f2a276e1dfcd31-k8s-certs\") pod \"kube-apiserver-ci-3510-3-5-4-c82a94ccd3.novalocal\" (UID: \"21facd521ac6c96354f2a276e1dfcd31\") " pod="kube-system/kube-apiserver-ci-3510-3-5-4-c82a94ccd3.novalocal"
Jul 2 08:46:24.620113 kubelet[1790]: I0702 08:46:24.620087 1790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/21facd521ac6c96354f2a276e1dfcd31-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-5-4-c82a94ccd3.novalocal\" (UID: \"21facd521ac6c96354f2a276e1dfcd31\") " pod="kube-system/kube-apiserver-ci-3510-3-5-4-c82a94ccd3.novalocal"
Jul 2 08:46:24.620445 kubelet[1790]: I0702 08:46:24.620419 1790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8bb6671bb90a115b72a33fa31a4ed7dc-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-5-4-c82a94ccd3.novalocal\" (UID: \"8bb6671bb90a115b72a33fa31a4ed7dc\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-4-c82a94ccd3.novalocal"
Jul 2 08:46:24.620771 kubelet[1790]: I0702 08:46:24.620676 1790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8bb6671bb90a115b72a33fa31a4ed7dc-ca-certs\") pod \"kube-controller-manager-ci-3510-3-5-4-c82a94ccd3.novalocal\" (UID: \"8bb6671bb90a115b72a33fa31a4ed7dc\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-4-c82a94ccd3.novalocal"
Jul 2 08:46:24.621100 kubelet[1790]: I0702 08:46:24.621003 1790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8bb6671bb90a115b72a33fa31a4ed7dc-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-5-4-c82a94ccd3.novalocal\" (UID: \"8bb6671bb90a115b72a33fa31a4ed7dc\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-4-c82a94ccd3.novalocal"
Jul 2 08:46:24.621449 kubelet[1790]: I0702 08:46:24.621315 1790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8bb6671bb90a115b72a33fa31a4ed7dc-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-5-4-c82a94ccd3.novalocal\" (UID: \"8bb6671bb90a115b72a33fa31a4ed7dc\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-4-c82a94ccd3.novalocal"
Jul 2 08:46:24.767181 kubelet[1790]: W0702 08:46:24.767089 1790 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.24.4.53:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.53:6443: connect: connection refused
Jul 2 08:46:24.767181 kubelet[1790]: E0702 08:46:24.767189 1790 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.53:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.53:6443: connect: connection refused
Jul 2 08:46:24.850749 kubelet[1790]: W0702 08:46:24.850700 1790 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.24.4.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.53:6443: connect: connection refused
Jul 2 08:46:24.851022 kubelet[1790]: E0702 08:46:24.850998 1790 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.53:6443: connect: connection refused
Jul 2 08:46:24.870710 env[1243]: time="2024-07-02T08:46:24.870555443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-5-4-c82a94ccd3.novalocal,Uid:21facd521ac6c96354f2a276e1dfcd31,Namespace:kube-system,Attempt:0,}"
Jul 2 08:46:24.880175 env[1243]: time="2024-07-02T08:46:24.879664613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-5-4-c82a94ccd3.novalocal,Uid:8bb6671bb90a115b72a33fa31a4ed7dc,Namespace:kube-system,Attempt:0,}"
Jul 2 08:46:24.882254 env[1243]: time="2024-07-02T08:46:24.881529153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-5-4-c82a94ccd3.novalocal,Uid:c1b1d0b0ac7bd611df413ab334a6e972,Namespace:kube-system,Attempt:0,}"
Jul 2 08:46:24.962681 kubelet[1790]: W0702 08:46:24.962595 1790 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.24.4.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.53:6443: connect: connection refused
Jul
2 08:46:24.962824 kubelet[1790]: E0702 08:46:24.962705 1790 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.53:6443: connect: connection refused Jul 2 08:46:25.175823 kubelet[1790]: W0702 08:46:25.175620 1790 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.24.4.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-5-4-c82a94ccd3.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.53:6443: connect: connection refused Jul 2 08:46:25.175823 kubelet[1790]: E0702 08:46:25.175737 1790 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-5-4-c82a94ccd3.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.53:6443: connect: connection refused Jul 2 08:46:25.218987 kubelet[1790]: E0702 08:46:25.218935 1790 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-5-4-c82a94ccd3.novalocal?timeout=10s\": dial tcp 172.24.4.53:6443: connect: connection refused" interval="1.6s" Jul 2 08:46:25.327350 kubelet[1790]: I0702 08:46:25.327261 1790 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-5-4-c82a94ccd3.novalocal" Jul 2 08:46:25.327937 kubelet[1790]: E0702 08:46:25.327891 1790 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.53:6443/api/v1/nodes\": dial tcp 172.24.4.53:6443: connect: connection refused" node="ci-3510-3-5-4-c82a94ccd3.novalocal" Jul 2 08:46:25.730943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount97774561.mount: Deactivated successfully. 
Jul 2 08:46:25.740681 env[1243]: time="2024-07-02T08:46:25.740591722Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:46:25.747247 env[1243]: time="2024-07-02T08:46:25.747178067Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:46:25.749321 env[1243]: time="2024-07-02T08:46:25.749265456Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:46:25.753273 env[1243]: time="2024-07-02T08:46:25.753210013Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:46:25.757066 env[1243]: time="2024-07-02T08:46:25.757000039Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:46:25.759738 env[1243]: time="2024-07-02T08:46:25.759682386Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:46:25.761680 env[1243]: time="2024-07-02T08:46:25.761616505Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:46:25.763758 env[1243]: time="2024-07-02T08:46:25.763705828Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 
08:46:25.771760 env[1243]: time="2024-07-02T08:46:25.771693825Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:46:25.797946 env[1243]: time="2024-07-02T08:46:25.797859988Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:46:25.799710 env[1243]: time="2024-07-02T08:46:25.799653924Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:46:25.801579 env[1243]: time="2024-07-02T08:46:25.801431749Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:46:25.839616 env[1243]: time="2024-07-02T08:46:25.839465260Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:46:25.839932 env[1243]: time="2024-07-02T08:46:25.839568772Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:46:25.839932 env[1243]: time="2024-07-02T08:46:25.839621053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:46:25.840135 env[1243]: time="2024-07-02T08:46:25.840004511Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/775bf8763d6a67eb2a7f8c93ffd5bad5ff72a9ff5515299edbd5d5f2c0e2560c pid=1828 runtime=io.containerd.runc.v2 Jul 2 08:46:25.908532 kubelet[1790]: E0702 08:46:25.908418 1790 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.53:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.53:6443: connect: connection refused Jul 2 08:46:25.909503 env[1243]: time="2024-07-02T08:46:25.909463154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-5-4-c82a94ccd3.novalocal,Uid:21facd521ac6c96354f2a276e1dfcd31,Namespace:kube-system,Attempt:0,} returns sandbox id \"775bf8763d6a67eb2a7f8c93ffd5bad5ff72a9ff5515299edbd5d5f2c0e2560c\"" Jul 2 08:46:25.915447 env[1243]: time="2024-07-02T08:46:25.915412218Z" level=info msg="CreateContainer within sandbox \"775bf8763d6a67eb2a7f8c93ffd5bad5ff72a9ff5515299edbd5d5f2c0e2560c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 08:46:25.925252 env[1243]: time="2024-07-02T08:46:25.925182449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:46:25.925458 env[1243]: time="2024-07-02T08:46:25.925245662Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:46:25.925458 env[1243]: time="2024-07-02T08:46:25.925260351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:46:25.925638 env[1243]: time="2024-07-02T08:46:25.925602498Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8e3f369f58ac6a5dc002d0b9a2583532e5d9f72f35d20858d60194b12c055385 pid=1871 runtime=io.containerd.runc.v2 Jul 2 08:46:25.980760 env[1243]: time="2024-07-02T08:46:25.980679773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:46:25.981056 env[1243]: time="2024-07-02T08:46:25.980718259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:46:25.981056 env[1243]: time="2024-07-02T08:46:25.980750232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:46:25.981844 env[1243]: time="2024-07-02T08:46:25.981809976Z" level=info msg="CreateContainer within sandbox \"775bf8763d6a67eb2a7f8c93ffd5bad5ff72a9ff5515299edbd5d5f2c0e2560c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"53e7434ee958b41f0159134ac4f3f1ff9eca28eda3b06659a4ef2768ec244d92\"" Jul 2 08:46:25.982128 env[1243]: time="2024-07-02T08:46:25.981717807Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c3784ab9f3a5c62b855d88d6aa7e237ab6fd98b58dbad55859d51681af75c171 pid=1908 runtime=io.containerd.runc.v2 Jul 2 08:46:25.982863 env[1243]: time="2024-07-02T08:46:25.982840765Z" level=info msg="StartContainer for \"53e7434ee958b41f0159134ac4f3f1ff9eca28eda3b06659a4ef2768ec244d92\"" Jul 2 08:46:25.993253 env[1243]: time="2024-07-02T08:46:25.993218009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-5-4-c82a94ccd3.novalocal,Uid:c1b1d0b0ac7bd611df413ab334a6e972,Namespace:kube-system,Attempt:0,} 
returns sandbox id \"8e3f369f58ac6a5dc002d0b9a2583532e5d9f72f35d20858d60194b12c055385\"" Jul 2 08:46:25.995644 env[1243]: time="2024-07-02T08:46:25.995607936Z" level=info msg="CreateContainer within sandbox \"8e3f369f58ac6a5dc002d0b9a2583532e5d9f72f35d20858d60194b12c055385\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 08:46:26.050666 env[1243]: time="2024-07-02T08:46:26.050613241Z" level=info msg="CreateContainer within sandbox \"8e3f369f58ac6a5dc002d0b9a2583532e5d9f72f35d20858d60194b12c055385\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9df7202cec196f923df4fb899b877d16124470c8c4f8d67d9dfc39388234221a\"" Jul 2 08:46:26.051558 env[1243]: time="2024-07-02T08:46:26.051519956Z" level=info msg="StartContainer for \"9df7202cec196f923df4fb899b877d16124470c8c4f8d67d9dfc39388234221a\"" Jul 2 08:46:26.062852 env[1243]: time="2024-07-02T08:46:26.062805587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-5-4-c82a94ccd3.novalocal,Uid:8bb6671bb90a115b72a33fa31a4ed7dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3784ab9f3a5c62b855d88d6aa7e237ab6fd98b58dbad55859d51681af75c171\"" Jul 2 08:46:26.070769 env[1243]: time="2024-07-02T08:46:26.070724397Z" level=info msg="CreateContainer within sandbox \"c3784ab9f3a5c62b855d88d6aa7e237ab6fd98b58dbad55859d51681af75c171\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 08:46:26.103428 env[1243]: time="2024-07-02T08:46:26.103317324Z" level=info msg="StartContainer for \"53e7434ee958b41f0159134ac4f3f1ff9eca28eda3b06659a4ef2768ec244d92\" returns successfully" Jul 2 08:46:26.103929 env[1243]: time="2024-07-02T08:46:26.103882413Z" level=info msg="CreateContainer within sandbox \"c3784ab9f3a5c62b855d88d6aa7e237ab6fd98b58dbad55859d51681af75c171\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"84b159a08777c8655c2c51d7bc66faae6bd324c0e5fad898cd35646c3782cec3\"" Jul 2 08:46:26.105915 env[1243]: time="2024-07-02T08:46:26.105874380Z" level=info msg="StartContainer for \"84b159a08777c8655c2c51d7bc66faae6bd324c0e5fad898cd35646c3782cec3\"" Jul 2 08:46:26.176485 env[1243]: time="2024-07-02T08:46:26.176429853Z" level=info msg="StartContainer for \"9df7202cec196f923df4fb899b877d16124470c8c4f8d67d9dfc39388234221a\" returns successfully" Jul 2 08:46:26.200134 env[1243]: time="2024-07-02T08:46:26.200061670Z" level=info msg="StartContainer for \"84b159a08777c8655c2c51d7bc66faae6bd324c0e5fad898cd35646c3782cec3\" returns successfully" Jul 2 08:46:26.706452 systemd[1]: run-containerd-runc-k8s.io-775bf8763d6a67eb2a7f8c93ffd5bad5ff72a9ff5515299edbd5d5f2c0e2560c-runc.P0NQwi.mount: Deactivated successfully. Jul 2 08:46:26.819523 kubelet[1790]: E0702 08:46:26.819495 1790 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-5-4-c82a94ccd3.novalocal?timeout=10s\": dial tcp 172.24.4.53:6443: connect: connection refused" interval="3.2s" Jul 2 08:46:26.929553 kubelet[1790]: I0702 08:46:26.929533 1790 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-5-4-c82a94ccd3.novalocal" Jul 2 08:46:26.930004 kubelet[1790]: E0702 08:46:26.929994 1790 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.53:6443/api/v1/nodes\": dial tcp 172.24.4.53:6443: connect: connection refused" node="ci-3510-3-5-4-c82a94ccd3.novalocal" Jul 2 08:46:26.987770 kubelet[1790]: W0702 08:46:26.987710 1790 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.24.4.53:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.53:6443: connect: connection refused Jul 2 08:46:26.987909 kubelet[1790]: E0702 08:46:26.987899 1790 reflector.go:147] 
vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.53:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.53:6443: connect: connection refused Jul 2 08:46:29.792074 kubelet[1790]: I0702 08:46:29.791995 1790 apiserver.go:52] "Watching apiserver" Jul 2 08:46:29.813115 kubelet[1790]: I0702 08:46:29.813069 1790 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 08:46:29.914192 kubelet[1790]: E0702 08:46:29.914116 1790 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-3510-3-5-4-c82a94ccd3.novalocal" not found Jul 2 08:46:30.028912 kubelet[1790]: E0702 08:46:30.028834 1790 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-5-4-c82a94ccd3.novalocal\" not found" node="ci-3510-3-5-4-c82a94ccd3.novalocal" Jul 2 08:46:30.135000 kubelet[1790]: I0702 08:46:30.134794 1790 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-5-4-c82a94ccd3.novalocal" Jul 2 08:46:30.150059 kubelet[1790]: I0702 08:46:30.149992 1790 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510-3-5-4-c82a94ccd3.novalocal" Jul 2 08:46:31.056880 kubelet[1790]: W0702 08:46:31.056820 1790 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 08:46:32.607057 systemd[1]: Reloading. 
Jul 2 08:46:32.691487 /usr/lib/systemd/system-generators/torcx-generator[2084]: time="2024-07-02T08:46:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 08:46:32.691521 /usr/lib/systemd/system-generators/torcx-generator[2084]: time="2024-07-02T08:46:32Z" level=info msg="torcx already run" Jul 2 08:46:32.816050 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 08:46:32.816215 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 08:46:32.840482 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:46:32.967411 kubelet[1790]: I0702 08:46:32.967249 1790 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 08:46:32.968117 systemd[1]: Stopping kubelet.service... Jul 2 08:46:32.988049 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 08:46:32.988521 systemd[1]: Stopped kubelet.service. Jul 2 08:46:32.991200 systemd[1]: Starting kubelet.service... Jul 2 08:46:35.722300 systemd[1]: Started kubelet.service. 
Jul 2 08:46:35.861914 sudo[2158]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 2 08:46:35.862142 sudo[2158]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 2 08:46:35.880491 kubelet[2146]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 08:46:35.880491 kubelet[2146]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 08:46:35.880491 kubelet[2146]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 08:46:35.880491 kubelet[2146]: I0702 08:46:35.878892 2146 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 08:46:35.884631 kubelet[2146]: I0702 08:46:35.884610 2146 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 08:46:35.884737 kubelet[2146]: I0702 08:46:35.884727 2146 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 08:46:35.885038 kubelet[2146]: I0702 08:46:35.885025 2146 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 08:46:35.888429 kubelet[2146]: I0702 08:46:35.888408 2146 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jul 2 08:46:35.891270 kubelet[2146]: I0702 08:46:35.891240 2146 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 08:46:35.905543 kubelet[2146]: I0702 08:46:35.905503 2146 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 08:46:35.906186 kubelet[2146]: I0702 08:46:35.906175 2146 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 08:46:35.906734 kubelet[2146]: I0702 08:46:35.906717 2146 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":nul
l} Jul 2 08:46:35.906923 kubelet[2146]: I0702 08:46:35.906863 2146 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 08:46:35.906992 kubelet[2146]: I0702 08:46:35.906983 2146 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 08:46:35.907147 kubelet[2146]: I0702 08:46:35.907136 2146 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:46:35.907388 kubelet[2146]: I0702 08:46:35.907378 2146 kubelet.go:393] "Attempting to sync node with API server" Jul 2 08:46:35.907466 kubelet[2146]: I0702 08:46:35.907456 2146 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 08:46:35.907556 kubelet[2146]: I0702 08:46:35.907546 2146 kubelet.go:309] "Adding apiserver pod source" Jul 2 08:46:35.907636 kubelet[2146]: I0702 08:46:35.907626 2146 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 08:46:35.922231 kubelet[2146]: I0702 08:46:35.922203 2146 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 08:46:35.922727 kubelet[2146]: I0702 08:46:35.922707 2146 server.go:1232] "Started kubelet" Jul 2 08:46:35.932863 kubelet[2146]: I0702 08:46:35.932604 2146 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 08:46:35.934237 kubelet[2146]: E0702 08:46:35.934217 2146 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 08:46:35.934430 kubelet[2146]: E0702 08:46:35.934419 2146 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 08:46:35.939234 kubelet[2146]: I0702 08:46:35.939210 2146 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 08:46:35.940938 kubelet[2146]: I0702 08:46:35.940922 2146 server.go:462] "Adding debug handlers to kubelet server" Jul 2 08:46:35.943525 kubelet[2146]: I0702 08:46:35.943497 2146 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 08:46:35.945622 kubelet[2146]: I0702 08:46:35.945596 2146 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 08:46:35.946695 kubelet[2146]: I0702 08:46:35.946683 2146 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 08:46:35.949790 kubelet[2146]: I0702 08:46:35.949771 2146 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 08:46:35.950714 kubelet[2146]: I0702 08:46:35.950233 2146 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 08:46:35.963309 kubelet[2146]: I0702 08:46:35.962425 2146 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 08:46:35.965401 kubelet[2146]: I0702 08:46:35.965383 2146 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 08:46:35.965629 kubelet[2146]: I0702 08:46:35.965618 2146 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 08:46:35.965709 kubelet[2146]: I0702 08:46:35.965699 2146 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 08:46:35.965831 kubelet[2146]: E0702 08:46:35.965820 2146 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 08:46:36.051135 kubelet[2146]: I0702 08:46:36.047613 2146 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-5-4-c82a94ccd3.novalocal" Jul 2 08:46:36.055201 kubelet[2146]: I0702 08:46:36.055033 2146 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510-3-5-4-c82a94ccd3.novalocal" Jul 2 08:46:36.055201 kubelet[2146]: I0702 08:46:36.055126 2146 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510-3-5-4-c82a94ccd3.novalocal" Jul 2 08:46:36.070942 kubelet[2146]: E0702 08:46:36.070912 2146 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 08:46:36.086796 kubelet[2146]: I0702 08:46:36.086771 2146 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 08:46:36.086796 kubelet[2146]: I0702 08:46:36.086793 2146 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 08:46:36.086898 kubelet[2146]: I0702 08:46:36.086807 2146 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:46:36.086955 kubelet[2146]: I0702 08:46:36.086937 2146 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 08:46:36.086993 kubelet[2146]: I0702 08:46:36.086964 2146 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 08:46:36.086993 kubelet[2146]: I0702 08:46:36.086972 2146 policy_none.go:49] "None policy: Start" Jul 2 08:46:36.087621 kubelet[2146]: I0702 08:46:36.087597 2146 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 
2 08:46:36.087621 kubelet[2146]: I0702 08:46:36.087620 2146 state_mem.go:35] "Initializing new in-memory state store" Jul 2 08:46:36.087829 kubelet[2146]: I0702 08:46:36.087806 2146 state_mem.go:75] "Updated machine memory state" Jul 2 08:46:36.088955 kubelet[2146]: I0702 08:46:36.088938 2146 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 08:46:36.091216 kubelet[2146]: I0702 08:46:36.090232 2146 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 08:46:36.271788 kubelet[2146]: I0702 08:46:36.271760 2146 topology_manager.go:215] "Topology Admit Handler" podUID="21facd521ac6c96354f2a276e1dfcd31" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-5-4-c82a94ccd3.novalocal" Jul 2 08:46:36.272031 kubelet[2146]: I0702 08:46:36.272018 2146 topology_manager.go:215] "Topology Admit Handler" podUID="8bb6671bb90a115b72a33fa31a4ed7dc" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-5-4-c82a94ccd3.novalocal" Jul 2 08:46:36.272155 kubelet[2146]: I0702 08:46:36.272144 2146 topology_manager.go:215] "Topology Admit Handler" podUID="c1b1d0b0ac7bd611df413ab334a6e972" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-5-4-c82a94ccd3.novalocal" Jul 2 08:46:36.279226 kubelet[2146]: W0702 08:46:36.279201 2146 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 08:46:36.292475 kubelet[2146]: W0702 08:46:36.292450 2146 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 08:46:36.293628 kubelet[2146]: W0702 08:46:36.293616 2146 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 08:46:36.293780 kubelet[2146]: E0702 
08:46:36.293767 2146 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510-3-5-4-c82a94ccd3.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-3510-3-5-4-c82a94ccd3.novalocal" Jul 2 08:46:36.349211 kubelet[2146]: I0702 08:46:36.349125 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8bb6671bb90a115b72a33fa31a4ed7dc-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-5-4-c82a94ccd3.novalocal\" (UID: \"8bb6671bb90a115b72a33fa31a4ed7dc\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-4-c82a94ccd3.novalocal" Jul 2 08:46:36.349419 kubelet[2146]: I0702 08:46:36.349407 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8bb6671bb90a115b72a33fa31a4ed7dc-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-5-4-c82a94ccd3.novalocal\" (UID: \"8bb6671bb90a115b72a33fa31a4ed7dc\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-4-c82a94ccd3.novalocal" Jul 2 08:46:36.349545 kubelet[2146]: I0702 08:46:36.349534 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8bb6671bb90a115b72a33fa31a4ed7dc-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-5-4-c82a94ccd3.novalocal\" (UID: \"8bb6671bb90a115b72a33fa31a4ed7dc\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-4-c82a94ccd3.novalocal" Jul 2 08:46:36.349655 kubelet[2146]: I0702 08:46:36.349644 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c1b1d0b0ac7bd611df413ab334a6e972-kubeconfig\") pod \"kube-scheduler-ci-3510-3-5-4-c82a94ccd3.novalocal\" (UID: \"c1b1d0b0ac7bd611df413ab334a6e972\") " 
pod="kube-system/kube-scheduler-ci-3510-3-5-4-c82a94ccd3.novalocal" Jul 2 08:46:36.349764 kubelet[2146]: I0702 08:46:36.349754 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/21facd521ac6c96354f2a276e1dfcd31-ca-certs\") pod \"kube-apiserver-ci-3510-3-5-4-c82a94ccd3.novalocal\" (UID: \"21facd521ac6c96354f2a276e1dfcd31\") " pod="kube-system/kube-apiserver-ci-3510-3-5-4-c82a94ccd3.novalocal" Jul 2 08:46:36.349872 kubelet[2146]: I0702 08:46:36.349862 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/21facd521ac6c96354f2a276e1dfcd31-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-5-4-c82a94ccd3.novalocal\" (UID: \"21facd521ac6c96354f2a276e1dfcd31\") " pod="kube-system/kube-apiserver-ci-3510-3-5-4-c82a94ccd3.novalocal" Jul 2 08:46:36.349982 kubelet[2146]: I0702 08:46:36.349972 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8bb6671bb90a115b72a33fa31a4ed7dc-ca-certs\") pod \"kube-controller-manager-ci-3510-3-5-4-c82a94ccd3.novalocal\" (UID: \"8bb6671bb90a115b72a33fa31a4ed7dc\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-4-c82a94ccd3.novalocal" Jul 2 08:46:36.350103 kubelet[2146]: I0702 08:46:36.350093 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8bb6671bb90a115b72a33fa31a4ed7dc-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-5-4-c82a94ccd3.novalocal\" (UID: \"8bb6671bb90a115b72a33fa31a4ed7dc\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-4-c82a94ccd3.novalocal" Jul 2 08:46:36.350220 kubelet[2146]: I0702 08:46:36.350204 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/21facd521ac6c96354f2a276e1dfcd31-k8s-certs\") pod \"kube-apiserver-ci-3510-3-5-4-c82a94ccd3.novalocal\" (UID: \"21facd521ac6c96354f2a276e1dfcd31\") " pod="kube-system/kube-apiserver-ci-3510-3-5-4-c82a94ccd3.novalocal" Jul 2 08:46:36.908224 kubelet[2146]: I0702 08:46:36.908175 2146 apiserver.go:52] "Watching apiserver" Jul 2 08:46:36.939692 sudo[2158]: pam_unix(sudo:session): session closed for user root Jul 2 08:46:36.946831 kubelet[2146]: I0702 08:46:36.946766 2146 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 08:46:37.099260 kubelet[2146]: W0702 08:46:37.099158 2146 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 08:46:37.100070 kubelet[2146]: E0702 08:46:37.100038 2146 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510-3-5-4-c82a94ccd3.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-3510-3-5-4-c82a94ccd3.novalocal" Jul 2 08:46:37.123133 kubelet[2146]: I0702 08:46:37.123025 2146 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-5-4-c82a94ccd3.novalocal" podStartSLOduration=6.122900643 podCreationTimestamp="2024-07-02 08:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:46:37.09972508 +0000 UTC m=+1.326339090" watchObservedRunningTime="2024-07-02 08:46:37.122900643 +0000 UTC m=+1.349514643" Jul 2 08:46:37.143002 kubelet[2146]: I0702 08:46:37.142960 2146 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-5-4-c82a94ccd3.novalocal" podStartSLOduration=1.142919429 podCreationTimestamp="2024-07-02 08:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:46:37.126215662 +0000 UTC m=+1.352829672" watchObservedRunningTime="2024-07-02 08:46:37.142919429 +0000 UTC m=+1.369533389" Jul 2 08:46:37.143432 kubelet[2146]: I0702 08:46:37.143061 2146 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-5-4-c82a94ccd3.novalocal" podStartSLOduration=1.143041173 podCreationTimestamp="2024-07-02 08:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:46:37.141564432 +0000 UTC m=+1.368178402" watchObservedRunningTime="2024-07-02 08:46:37.143041173 +0000 UTC m=+1.369655133" Jul 2 08:46:40.332826 sudo[1421]: pam_unix(sudo:session): session closed for user root Jul 2 08:46:40.780681 sshd[1400]: pam_unix(sshd:session): session closed for user core Jul 2 08:46:40.786441 systemd[1]: sshd@6-172.24.4.53:22-172.24.4.1:42476.service: Deactivated successfully. Jul 2 08:46:40.788398 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 08:46:40.799552 systemd-logind[1229]: Session 7 logged out. Waiting for processes to exit. Jul 2 08:46:40.806277 systemd-logind[1229]: Removed session 7. Jul 2 08:46:44.782471 kubelet[2146]: I0702 08:46:44.782401 2146 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 08:46:44.783021 env[1243]: time="2024-07-02T08:46:44.782832085Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 2 08:46:44.783483 kubelet[2146]: I0702 08:46:44.783468 2146 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 08:46:45.465412 kubelet[2146]: I0702 08:46:45.465319 2146 topology_manager.go:215] "Topology Admit Handler" podUID="f26ceff3-02bf-4f1f-8815-568ce0852ff0" podNamespace="kube-system" podName="kube-proxy-6wntb" Jul 2 08:46:45.472899 kubelet[2146]: I0702 08:46:45.472823 2146 topology_manager.go:215] "Topology Admit Handler" podUID="a035bb52-9628-4b0a-bb63-8208622a86a6" podNamespace="kube-system" podName="cilium-z98n4" Jul 2 08:46:45.509534 kubelet[2146]: I0702 08:46:45.509510 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a035bb52-9628-4b0a-bb63-8208622a86a6-cilium-cgroup\") pod \"cilium-z98n4\" (UID: \"a035bb52-9628-4b0a-bb63-8208622a86a6\") " pod="kube-system/cilium-z98n4" Jul 2 08:46:45.509717 kubelet[2146]: I0702 08:46:45.509706 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a035bb52-9628-4b0a-bb63-8208622a86a6-host-proc-sys-net\") pod \"cilium-z98n4\" (UID: \"a035bb52-9628-4b0a-bb63-8208622a86a6\") " pod="kube-system/cilium-z98n4" Jul 2 08:46:45.509808 kubelet[2146]: I0702 08:46:45.509798 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f26ceff3-02bf-4f1f-8815-568ce0852ff0-xtables-lock\") pod \"kube-proxy-6wntb\" (UID: \"f26ceff3-02bf-4f1f-8815-568ce0852ff0\") " pod="kube-system/kube-proxy-6wntb" Jul 2 08:46:45.509899 kubelet[2146]: I0702 08:46:45.509889 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a035bb52-9628-4b0a-bb63-8208622a86a6-xtables-lock\") pod \"cilium-z98n4\" 
(UID: \"a035bb52-9628-4b0a-bb63-8208622a86a6\") " pod="kube-system/cilium-z98n4" Jul 2 08:46:45.510003 kubelet[2146]: I0702 08:46:45.509992 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f26ceff3-02bf-4f1f-8815-568ce0852ff0-kube-proxy\") pod \"kube-proxy-6wntb\" (UID: \"f26ceff3-02bf-4f1f-8815-568ce0852ff0\") " pod="kube-system/kube-proxy-6wntb" Jul 2 08:46:45.510099 kubelet[2146]: I0702 08:46:45.510090 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a035bb52-9628-4b0a-bb63-8208622a86a6-cilium-run\") pod \"cilium-z98n4\" (UID: \"a035bb52-9628-4b0a-bb63-8208622a86a6\") " pod="kube-system/cilium-z98n4" Jul 2 08:46:45.510182 kubelet[2146]: I0702 08:46:45.510173 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a035bb52-9628-4b0a-bb63-8208622a86a6-cni-path\") pod \"cilium-z98n4\" (UID: \"a035bb52-9628-4b0a-bb63-8208622a86a6\") " pod="kube-system/cilium-z98n4" Jul 2 08:46:45.510282 kubelet[2146]: I0702 08:46:45.510255 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a035bb52-9628-4b0a-bb63-8208622a86a6-cilium-config-path\") pod \"cilium-z98n4\" (UID: \"a035bb52-9628-4b0a-bb63-8208622a86a6\") " pod="kube-system/cilium-z98n4" Jul 2 08:46:45.510396 kubelet[2146]: I0702 08:46:45.510386 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krf7c\" (UniqueName: \"kubernetes.io/projected/f26ceff3-02bf-4f1f-8815-568ce0852ff0-kube-api-access-krf7c\") pod \"kube-proxy-6wntb\" (UID: \"f26ceff3-02bf-4f1f-8815-568ce0852ff0\") " pod="kube-system/kube-proxy-6wntb" Jul 2 08:46:45.510492 kubelet[2146]: 
I0702 08:46:45.510482 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a035bb52-9628-4b0a-bb63-8208622a86a6-lib-modules\") pod \"cilium-z98n4\" (UID: \"a035bb52-9628-4b0a-bb63-8208622a86a6\") " pod="kube-system/cilium-z98n4" Jul 2 08:46:45.510597 kubelet[2146]: I0702 08:46:45.510587 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a035bb52-9628-4b0a-bb63-8208622a86a6-bpf-maps\") pod \"cilium-z98n4\" (UID: \"a035bb52-9628-4b0a-bb63-8208622a86a6\") " pod="kube-system/cilium-z98n4" Jul 2 08:46:45.510687 kubelet[2146]: I0702 08:46:45.510678 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a035bb52-9628-4b0a-bb63-8208622a86a6-host-proc-sys-kernel\") pod \"cilium-z98n4\" (UID: \"a035bb52-9628-4b0a-bb63-8208622a86a6\") " pod="kube-system/cilium-z98n4" Jul 2 08:46:45.510778 kubelet[2146]: I0702 08:46:45.510766 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f26ceff3-02bf-4f1f-8815-568ce0852ff0-lib-modules\") pod \"kube-proxy-6wntb\" (UID: \"f26ceff3-02bf-4f1f-8815-568ce0852ff0\") " pod="kube-system/kube-proxy-6wntb" Jul 2 08:46:45.510868 kubelet[2146]: I0702 08:46:45.510859 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a035bb52-9628-4b0a-bb63-8208622a86a6-hubble-tls\") pod \"cilium-z98n4\" (UID: \"a035bb52-9628-4b0a-bb63-8208622a86a6\") " pod="kube-system/cilium-z98n4" Jul 2 08:46:45.510984 kubelet[2146]: I0702 08:46:45.510973 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-kxr94\" (UniqueName: \"kubernetes.io/projected/a035bb52-9628-4b0a-bb63-8208622a86a6-kube-api-access-kxr94\") pod \"cilium-z98n4\" (UID: \"a035bb52-9628-4b0a-bb63-8208622a86a6\") " pod="kube-system/cilium-z98n4" Jul 2 08:46:45.511132 kubelet[2146]: I0702 08:46:45.511108 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a035bb52-9628-4b0a-bb63-8208622a86a6-hostproc\") pod \"cilium-z98n4\" (UID: \"a035bb52-9628-4b0a-bb63-8208622a86a6\") " pod="kube-system/cilium-z98n4" Jul 2 08:46:45.511245 kubelet[2146]: I0702 08:46:45.511234 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a035bb52-9628-4b0a-bb63-8208622a86a6-clustermesh-secrets\") pod \"cilium-z98n4\" (UID: \"a035bb52-9628-4b0a-bb63-8208622a86a6\") " pod="kube-system/cilium-z98n4" Jul 2 08:46:45.511370 kubelet[2146]: I0702 08:46:45.511359 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a035bb52-9628-4b0a-bb63-8208622a86a6-etc-cni-netd\") pod \"cilium-z98n4\" (UID: \"a035bb52-9628-4b0a-bb63-8208622a86a6\") " pod="kube-system/cilium-z98n4" Jul 2 08:46:45.684184 kubelet[2146]: I0702 08:46:45.681835 2146 topology_manager.go:215] "Topology Admit Handler" podUID="7d2a14a1-64c0-449a-8938-7fa96f316c29" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-s874b" Jul 2 08:46:45.713173 kubelet[2146]: I0702 08:46:45.713145 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7d2a14a1-64c0-449a-8938-7fa96f316c29-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-s874b\" (UID: \"7d2a14a1-64c0-449a-8938-7fa96f316c29\") " pod="kube-system/cilium-operator-6bc8ccdb58-s874b" Jul 2 
08:46:45.713408 kubelet[2146]: I0702 08:46:45.713391 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcrlg\" (UniqueName: \"kubernetes.io/projected/7d2a14a1-64c0-449a-8938-7fa96f316c29-kube-api-access-dcrlg\") pod \"cilium-operator-6bc8ccdb58-s874b\" (UID: \"7d2a14a1-64c0-449a-8938-7fa96f316c29\") " pod="kube-system/cilium-operator-6bc8ccdb58-s874b" Jul 2 08:46:45.789113 env[1243]: time="2024-07-02T08:46:45.789026634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z98n4,Uid:a035bb52-9628-4b0a-bb63-8208622a86a6,Namespace:kube-system,Attempt:0,}" Jul 2 08:46:45.791098 env[1243]: time="2024-07-02T08:46:45.791064128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6wntb,Uid:f26ceff3-02bf-4f1f-8815-568ce0852ff0,Namespace:kube-system,Attempt:0,}" Jul 2 08:46:45.878548 env[1243]: time="2024-07-02T08:46:45.869206021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:46:45.878548 env[1243]: time="2024-07-02T08:46:45.869247601Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:46:45.878548 env[1243]: time="2024-07-02T08:46:45.869273441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:46:45.878548 env[1243]: time="2024-07-02T08:46:45.869453004Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dfa862762b03705a4426db111f49252df61ef24e181a59d2f8841f61017b2ee5 pid=2231 runtime=io.containerd.runc.v2 Jul 2 08:46:45.890849 env[1243]: time="2024-07-02T08:46:45.890754061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:46:45.890970 env[1243]: time="2024-07-02T08:46:45.890877017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:46:45.890970 env[1243]: time="2024-07-02T08:46:45.890906974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:46:45.891194 env[1243]: time="2024-07-02T08:46:45.891156031Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b245d8e44fad538d1c9e7cb6f15fe3a59ab9d72d665d17ffde48affcae902b61 pid=2245 runtime=io.containerd.runc.v2 Jul 2 08:46:45.940355 env[1243]: time="2024-07-02T08:46:45.940046825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z98n4,Uid:a035bb52-9628-4b0a-bb63-8208622a86a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"b245d8e44fad538d1c9e7cb6f15fe3a59ab9d72d665d17ffde48affcae902b61\"" Jul 2 08:46:45.940355 env[1243]: time="2024-07-02T08:46:45.940209305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6wntb,Uid:f26ceff3-02bf-4f1f-8815-568ce0852ff0,Namespace:kube-system,Attempt:0,} returns sandbox id \"dfa862762b03705a4426db111f49252df61ef24e181a59d2f8841f61017b2ee5\"" Jul 2 08:46:45.949467 env[1243]: time="2024-07-02T08:46:45.948990833Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 08:46:45.952095 env[1243]: time="2024-07-02T08:46:45.949235291Z" level=info msg="CreateContainer within sandbox \"dfa862762b03705a4426db111f49252df61ef24e181a59d2f8841f61017b2ee5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 08:46:45.983574 env[1243]: time="2024-07-02T08:46:45.983450513Z" level=info msg="CreateContainer within sandbox 
\"dfa862762b03705a4426db111f49252df61ef24e181a59d2f8841f61017b2ee5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8b532e5b6f3198715ddfd2b7d0694c74c7383aa8012dbf9284ba35c1df9a56c1\"" Jul 2 08:46:45.992349 env[1243]: time="2024-07-02T08:46:45.984662385Z" level=info msg="StartContainer for \"8b532e5b6f3198715ddfd2b7d0694c74c7383aa8012dbf9284ba35c1df9a56c1\"" Jul 2 08:46:45.992598 env[1243]: time="2024-07-02T08:46:45.992560369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-s874b,Uid:7d2a14a1-64c0-449a-8938-7fa96f316c29,Namespace:kube-system,Attempt:0,}" Jul 2 08:46:46.026774 env[1243]: time="2024-07-02T08:46:46.022541311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:46:46.026774 env[1243]: time="2024-07-02T08:46:46.022614080Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:46:46.026774 env[1243]: time="2024-07-02T08:46:46.022643346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:46:46.026774 env[1243]: time="2024-07-02T08:46:46.022775219Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f4098fbca0eb4581e1a02d9c93be2b0726c8c94e71d79fcb3e0caf21cff6d094 pid=2333 runtime=io.containerd.runc.v2 Jul 2 08:46:46.056442 env[1243]: time="2024-07-02T08:46:46.054375025Z" level=info msg="StartContainer for \"8b532e5b6f3198715ddfd2b7d0694c74c7383aa8012dbf9284ba35c1df9a56c1\" returns successfully" Jul 2 08:46:46.117870 env[1243]: time="2024-07-02T08:46:46.116690213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-s874b,Uid:7d2a14a1-64c0-449a-8938-7fa96f316c29,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4098fbca0eb4581e1a02d9c93be2b0726c8c94e71d79fcb3e0caf21cff6d094\"" Jul 2 08:46:52.601351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2092739417.mount: Deactivated successfully. Jul 2 08:46:58.050718 env[1243]: time="2024-07-02T08:46:58.049976978Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:46:58.056024 env[1243]: time="2024-07-02T08:46:58.055948108Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:46:58.058965 env[1243]: time="2024-07-02T08:46:58.058908389Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:46:58.062521 env[1243]: time="2024-07-02T08:46:58.060726942Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 2 08:46:58.064812 env[1243]: time="2024-07-02T08:46:58.064751184Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 08:46:58.068117 env[1243]: time="2024-07-02T08:46:58.068016127Z" level=info msg="CreateContainer within sandbox \"b245d8e44fad538d1c9e7cb6f15fe3a59ab9d72d665d17ffde48affcae902b61\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 08:46:58.126927 env[1243]: time="2024-07-02T08:46:58.126767762Z" level=info msg="CreateContainer within sandbox \"b245d8e44fad538d1c9e7cb6f15fe3a59ab9d72d665d17ffde48affcae902b61\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ac0ac5dfcb3a54ace6df40cadab9608db92782fc23d6550e2866eebfb5cbca15\"" Jul 2 08:46:58.127523 env[1243]: time="2024-07-02T08:46:58.127377005Z" level=info msg="StartContainer for \"ac0ac5dfcb3a54ace6df40cadab9608db92782fc23d6550e2866eebfb5cbca15\"" Jul 2 08:46:58.128550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1834048119.mount: Deactivated successfully. Jul 2 08:46:58.170413 systemd[1]: run-containerd-runc-k8s.io-ac0ac5dfcb3a54ace6df40cadab9608db92782fc23d6550e2866eebfb5cbca15-runc.vYfumu.mount: Deactivated successfully. 
Jul 2 08:46:58.201854 env[1243]: time="2024-07-02T08:46:58.201464268Z" level=info msg="StartContainer for \"ac0ac5dfcb3a54ace6df40cadab9608db92782fc23d6550e2866eebfb5cbca15\" returns successfully" Jul 2 08:46:58.576990 env[1243]: time="2024-07-02T08:46:58.576897650Z" level=info msg="shim disconnected" id=ac0ac5dfcb3a54ace6df40cadab9608db92782fc23d6550e2866eebfb5cbca15 Jul 2 08:46:58.577445 env[1243]: time="2024-07-02T08:46:58.576993845Z" level=warning msg="cleaning up after shim disconnected" id=ac0ac5dfcb3a54ace6df40cadab9608db92782fc23d6550e2866eebfb5cbca15 namespace=k8s.io Jul 2 08:46:58.577445 env[1243]: time="2024-07-02T08:46:58.577018691Z" level=info msg="cleaning up dead shim" Jul 2 08:46:58.595642 env[1243]: time="2024-07-02T08:46:58.595563626Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:46:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2550 runtime=io.containerd.runc.v2\n" Jul 2 08:46:59.105658 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac0ac5dfcb3a54ace6df40cadab9608db92782fc23d6550e2866eebfb5cbca15-rootfs.mount: Deactivated successfully. 
Jul 2 08:46:59.139121 env[1243]: time="2024-07-02T08:46:59.139020661Z" level=info msg="CreateContainer within sandbox \"b245d8e44fad538d1c9e7cb6f15fe3a59ab9d72d665d17ffde48affcae902b61\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 08:46:59.158381 kubelet[2146]: I0702 08:46:59.158295 2146 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-6wntb" podStartSLOduration=14.15811056 podCreationTimestamp="2024-07-02 08:46:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:46:47.07147925 +0000 UTC m=+11.298093260" watchObservedRunningTime="2024-07-02 08:46:59.15811056 +0000 UTC m=+23.384724590" Jul 2 08:46:59.183429 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2563315370.mount: Deactivated successfully. Jul 2 08:46:59.196487 env[1243]: time="2024-07-02T08:46:59.196447919Z" level=info msg="CreateContainer within sandbox \"b245d8e44fad538d1c9e7cb6f15fe3a59ab9d72d665d17ffde48affcae902b61\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c08bbe75454d26a650f399e74202da4eaa0f4aaca8e78d49985e6a7316ae5d1b\"" Jul 2 08:46:59.198504 env[1243]: time="2024-07-02T08:46:59.198481551Z" level=info msg="StartContainer for \"c08bbe75454d26a650f399e74202da4eaa0f4aaca8e78d49985e6a7316ae5d1b\"" Jul 2 08:46:59.255977 env[1243]: time="2024-07-02T08:46:59.255939886Z" level=info msg="StartContainer for \"c08bbe75454d26a650f399e74202da4eaa0f4aaca8e78d49985e6a7316ae5d1b\" returns successfully" Jul 2 08:46:59.266759 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 08:46:59.267592 systemd[1]: Stopped systemd-sysctl.service. Jul 2 08:46:59.267734 systemd[1]: Stopping systemd-sysctl.service... Jul 2 08:46:59.269617 systemd[1]: Starting systemd-sysctl.service... Jul 2 08:46:59.284119 systemd[1]: Finished systemd-sysctl.service. 
Jul 2 08:46:59.304300 env[1243]: time="2024-07-02T08:46:59.304242657Z" level=info msg="shim disconnected" id=c08bbe75454d26a650f399e74202da4eaa0f4aaca8e78d49985e6a7316ae5d1b Jul 2 08:46:59.304300 env[1243]: time="2024-07-02T08:46:59.304298373Z" level=warning msg="cleaning up after shim disconnected" id=c08bbe75454d26a650f399e74202da4eaa0f4aaca8e78d49985e6a7316ae5d1b namespace=k8s.io Jul 2 08:46:59.304546 env[1243]: time="2024-07-02T08:46:59.304309955Z" level=info msg="cleaning up dead shim" Jul 2 08:46:59.313712 env[1243]: time="2024-07-02T08:46:59.313678538Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:46:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2616 runtime=io.containerd.runc.v2\n" Jul 2 08:47:00.103177 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c08bbe75454d26a650f399e74202da4eaa0f4aaca8e78d49985e6a7316ae5d1b-rootfs.mount: Deactivated successfully. Jul 2 08:47:00.155787 env[1243]: time="2024-07-02T08:47:00.155706882Z" level=info msg="CreateContainer within sandbox \"b245d8e44fad538d1c9e7cb6f15fe3a59ab9d72d665d17ffde48affcae902b61\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 08:47:00.221575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2784718122.mount: Deactivated successfully. 
Jul 2 08:47:00.241171 env[1243]: time="2024-07-02T08:47:00.241074925Z" level=info msg="CreateContainer within sandbox \"b245d8e44fad538d1c9e7cb6f15fe3a59ab9d72d665d17ffde48affcae902b61\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5e6dfbcdc2b737d785683671c30c8c9e6db7ecc1690b9710db73682c0bd24ba8\"" Jul 2 08:47:00.248149 env[1243]: time="2024-07-02T08:47:00.248068492Z" level=info msg="StartContainer for \"5e6dfbcdc2b737d785683671c30c8c9e6db7ecc1690b9710db73682c0bd24ba8\"" Jul 2 08:47:00.342537 env[1243]: time="2024-07-02T08:47:00.342489824Z" level=info msg="StartContainer for \"5e6dfbcdc2b737d785683671c30c8c9e6db7ecc1690b9710db73682c0bd24ba8\" returns successfully" Jul 2 08:47:00.458836 env[1243]: time="2024-07-02T08:47:00.458522180Z" level=info msg="shim disconnected" id=5e6dfbcdc2b737d785683671c30c8c9e6db7ecc1690b9710db73682c0bd24ba8 Jul 2 08:47:00.459107 env[1243]: time="2024-07-02T08:47:00.459086116Z" level=warning msg="cleaning up after shim disconnected" id=5e6dfbcdc2b737d785683671c30c8c9e6db7ecc1690b9710db73682c0bd24ba8 namespace=k8s.io Jul 2 08:47:00.459249 env[1243]: time="2024-07-02T08:47:00.459227095Z" level=info msg="cleaning up dead shim" Jul 2 08:47:00.476240 env[1243]: time="2024-07-02T08:47:00.476176322Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:47:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2672 runtime=io.containerd.runc.v2\n" Jul 2 08:47:01.087223 env[1243]: time="2024-07-02T08:47:01.087186370Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:47:01.090964 env[1243]: time="2024-07-02T08:47:01.090942959Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Jul 2 08:47:01.093763 env[1243]: time="2024-07-02T08:47:01.093741087Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:47:01.094539 env[1243]: time="2024-07-02T08:47:01.094512119Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 2 08:47:01.099732 env[1243]: time="2024-07-02T08:47:01.099671094Z" level=info msg="CreateContainer within sandbox \"f4098fbca0eb4581e1a02d9c93be2b0726c8c94e71d79fcb3e0caf21cff6d094\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 2 08:47:01.102658 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e6dfbcdc2b737d785683671c30c8c9e6db7ecc1690b9710db73682c0bd24ba8-rootfs.mount: Deactivated successfully. Jul 2 08:47:01.116842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3394683927.mount: Deactivated successfully. Jul 2 08:47:01.125735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount170485385.mount: Deactivated successfully. 
Jul 2 08:47:01.143489 env[1243]: time="2024-07-02T08:47:01.143448368Z" level=info msg="CreateContainer within sandbox \"b245d8e44fad538d1c9e7cb6f15fe3a59ab9d72d665d17ffde48affcae902b61\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 08:47:01.154715 env[1243]: time="2024-07-02T08:47:01.154632591Z" level=info msg="CreateContainer within sandbox \"f4098fbca0eb4581e1a02d9c93be2b0726c8c94e71d79fcb3e0caf21cff6d094\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0a3da913a233cc458ee7df8f0f75ba850ad2fa12b8cda2f0a933570825875b17\"" Jul 2 08:47:01.156555 env[1243]: time="2024-07-02T08:47:01.155692723Z" level=info msg="StartContainer for \"0a3da913a233cc458ee7df8f0f75ba850ad2fa12b8cda2f0a933570825875b17\"" Jul 2 08:47:01.180420 env[1243]: time="2024-07-02T08:47:01.180366625Z" level=info msg="CreateContainer within sandbox \"b245d8e44fad538d1c9e7cb6f15fe3a59ab9d72d665d17ffde48affcae902b61\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b17b5bd77f2786a6532c8101026d1f00dab43ad03df345708f16a1c26ca082b0\"" Jul 2 08:47:01.181196 env[1243]: time="2024-07-02T08:47:01.181165721Z" level=info msg="StartContainer for \"b17b5bd77f2786a6532c8101026d1f00dab43ad03df345708f16a1c26ca082b0\"" Jul 2 08:47:01.483629 env[1243]: time="2024-07-02T08:47:01.483545599Z" level=info msg="StartContainer for \"0a3da913a233cc458ee7df8f0f75ba850ad2fa12b8cda2f0a933570825875b17\" returns successfully" Jul 2 08:47:01.487493 env[1243]: time="2024-07-02T08:47:01.487418700Z" level=info msg="StartContainer for \"b17b5bd77f2786a6532c8101026d1f00dab43ad03df345708f16a1c26ca082b0\" returns successfully" Jul 2 08:47:01.827879 env[1243]: time="2024-07-02T08:47:01.827582156Z" level=info msg="shim disconnected" id=b17b5bd77f2786a6532c8101026d1f00dab43ad03df345708f16a1c26ca082b0 Jul 2 08:47:01.827879 env[1243]: time="2024-07-02T08:47:01.827712755Z" level=warning msg="cleaning up after shim disconnected" 
id=b17b5bd77f2786a6532c8101026d1f00dab43ad03df345708f16a1c26ca082b0 namespace=k8s.io
Jul 2 08:47:01.827879 env[1243]: time="2024-07-02T08:47:01.827752021Z" level=info msg="cleaning up dead shim"
Jul 2 08:47:01.873954 env[1243]: time="2024-07-02T08:47:01.873906602Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:47:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2770 runtime=io.containerd.runc.v2\n"
Jul 2 08:47:02.137731 env[1243]: time="2024-07-02T08:47:02.137640228Z" level=info msg="CreateContainer within sandbox \"b245d8e44fad538d1c9e7cb6f15fe3a59ab9d72d665d17ffde48affcae902b61\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 08:47:02.177007 env[1243]: time="2024-07-02T08:47:02.176930260Z" level=info msg="CreateContainer within sandbox \"b245d8e44fad538d1c9e7cb6f15fe3a59ab9d72d665d17ffde48affcae902b61\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2cee991df7b71e3426e77733bf3e21a3fcf88026958f60ffdc1d0d41fb83580b\""
Jul 2 08:47:02.181043 env[1243]: time="2024-07-02T08:47:02.181012459Z" level=info msg="StartContainer for \"2cee991df7b71e3426e77733bf3e21a3fcf88026958f60ffdc1d0d41fb83580b\""
Jul 2 08:47:02.284130 env[1243]: time="2024-07-02T08:47:02.284068030Z" level=info msg="StartContainer for \"2cee991df7b71e3426e77733bf3e21a3fcf88026958f60ffdc1d0d41fb83580b\" returns successfully"
Jul 2 08:47:02.559500 kubelet[2146]: I0702 08:47:02.558264 2146 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Jul 2 08:47:02.649234 kubelet[2146]: I0702 08:47:02.649187 2146 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-s874b" podStartSLOduration=2.671535032 podCreationTimestamp="2024-07-02 08:46:45 +0000 UTC" firstStartedPulling="2024-07-02 08:46:46.117965886 +0000 UTC m=+10.344579846" lastFinishedPulling="2024-07-02 08:47:01.095562033 +0000 UTC m=+25.322175993" observedRunningTime="2024-07-02 08:47:02.230469549 +0000 UTC m=+26.457083509" watchObservedRunningTime="2024-07-02 08:47:02.649131179 +0000 UTC m=+26.875745149"
Jul 2 08:47:02.649801 kubelet[2146]: I0702 08:47:02.649785 2146 topology_manager.go:215] "Topology Admit Handler" podUID="65ccbbf5-6e0d-421e-9a36-7e2eb354fd19" podNamespace="kube-system" podName="coredns-5dd5756b68-vgl67"
Jul 2 08:47:02.658448 kubelet[2146]: I0702 08:47:02.658423 2146 topology_manager.go:215] "Topology Admit Handler" podUID="ec144d21-5555-4360-a15b-4ed1fb9ff432" podNamespace="kube-system" podName="coredns-5dd5756b68-vcn2t"
Jul 2 08:47:02.805823 kubelet[2146]: I0702 08:47:02.805784 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec144d21-5555-4360-a15b-4ed1fb9ff432-config-volume\") pod \"coredns-5dd5756b68-vcn2t\" (UID: \"ec144d21-5555-4360-a15b-4ed1fb9ff432\") " pod="kube-system/coredns-5dd5756b68-vcn2t"
Jul 2 08:47:02.805823 kubelet[2146]: I0702 08:47:02.805841 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvpch\" (UniqueName: \"kubernetes.io/projected/65ccbbf5-6e0d-421e-9a36-7e2eb354fd19-kube-api-access-rvpch\") pod \"coredns-5dd5756b68-vgl67\" (UID: \"65ccbbf5-6e0d-421e-9a36-7e2eb354fd19\") " pod="kube-system/coredns-5dd5756b68-vgl67"
Jul 2 08:47:02.806037 kubelet[2146]: I0702 08:47:02.805872 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65ccbbf5-6e0d-421e-9a36-7e2eb354fd19-config-volume\") pod \"coredns-5dd5756b68-vgl67\" (UID: \"65ccbbf5-6e0d-421e-9a36-7e2eb354fd19\") " pod="kube-system/coredns-5dd5756b68-vgl67"
Jul 2 08:47:02.806037 kubelet[2146]: I0702 08:47:02.805899 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg4cc\" (UniqueName: \"kubernetes.io/projected/ec144d21-5555-4360-a15b-4ed1fb9ff432-kube-api-access-gg4cc\") pod \"coredns-5dd5756b68-vcn2t\" (UID: \"ec144d21-5555-4360-a15b-4ed1fb9ff432\") " pod="kube-system/coredns-5dd5756b68-vcn2t"
Jul 2 08:47:02.957620 env[1243]: time="2024-07-02T08:47:02.957254511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-vgl67,Uid:65ccbbf5-6e0d-421e-9a36-7e2eb354fd19,Namespace:kube-system,Attempt:0,}"
Jul 2 08:47:02.965271 env[1243]: time="2024-07-02T08:47:02.965167910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-vcn2t,Uid:ec144d21-5555-4360-a15b-4ed1fb9ff432,Namespace:kube-system,Attempt:0,}"
Jul 2 08:47:03.162935 kubelet[2146]: I0702 08:47:03.162892 2146 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-z98n4" podStartSLOduration=6.045709771 podCreationTimestamp="2024-07-02 08:46:45 +0000 UTC" firstStartedPulling="2024-07-02 08:46:45.946127435 +0000 UTC m=+10.172741395" lastFinishedPulling="2024-07-02 08:46:58.063267662 +0000 UTC m=+22.289881672" observedRunningTime="2024-07-02 08:47:03.161223636 +0000 UTC m=+27.387837596" watchObservedRunningTime="2024-07-02 08:47:03.162850048 +0000 UTC m=+27.389464018"
Jul 2 08:47:06.231598 systemd-networkd[1020]: cilium_host: Link UP
Jul 2 08:47:06.244485 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Jul 2 08:47:06.233063 systemd-networkd[1020]: cilium_net: Link UP
Jul 2 08:47:06.233068 systemd-networkd[1020]: cilium_net: Gained carrier
Jul 2 08:47:06.234477 systemd-networkd[1020]: cilium_host: Gained carrier
Jul 2 08:47:06.244688 systemd-networkd[1020]: cilium_net: Gained IPv6LL
Jul 2 08:47:06.350776 systemd-networkd[1020]: cilium_vxlan: Link UP
Jul 2 08:47:06.350783 systemd-networkd[1020]: cilium_vxlan: Gained carrier
Jul 2 08:47:07.060667 systemd-networkd[1020]: cilium_host: Gained IPv6LL
Jul 2 08:47:07.317396 kernel: NET: Registered PF_ALG protocol family
Jul 2 08:47:07.444607 systemd-networkd[1020]: cilium_vxlan: Gained IPv6LL
Jul 2 08:47:08.290538 systemd-networkd[1020]: lxc_health: Link UP
Jul 2 08:47:08.297629 systemd-networkd[1020]: lxc_health: Gained carrier
Jul 2 08:47:08.298348 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Jul 2 08:47:08.656274 systemd-networkd[1020]: lxcdfdb6ca1c8e7: Link UP
Jul 2 08:47:08.661400 kernel: eth0: renamed from tmpfa481
Jul 2 08:47:08.667228 systemd-networkd[1020]: lxcc4e749f8ada2: Link UP
Jul 2 08:47:08.677452 kernel: eth0: renamed from tmp7449c
Jul 2 08:47:08.682357 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcdfdb6ca1c8e7: link becomes ready
Jul 2 08:47:08.682093 systemd-networkd[1020]: lxcdfdb6ca1c8e7: Gained carrier
Jul 2 08:47:08.685821 systemd-networkd[1020]: lxcc4e749f8ada2: Gained carrier
Jul 2 08:47:08.686586 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc4e749f8ada2: link becomes ready
Jul 2 08:47:09.556592 systemd-networkd[1020]: lxc_health: Gained IPv6LL
Jul 2 08:47:10.468548 systemd-networkd[1020]: lxcdfdb6ca1c8e7: Gained IPv6LL
Jul 2 08:47:10.516547 systemd-networkd[1020]: lxcc4e749f8ada2: Gained IPv6LL
Jul 2 08:47:13.355396 env[1243]: time="2024-07-02T08:47:13.354798703Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 08:47:13.355396 env[1243]: time="2024-07-02T08:47:13.354841522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 08:47:13.355396 env[1243]: time="2024-07-02T08:47:13.354855218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 08:47:13.355396 env[1243]: time="2024-07-02T08:47:13.354977386Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fa48104e0f8e5081f5ae02b4fcbaa247ab4d48a0e8f8a96c773137e6aa167b7b pid=3306 runtime=io.containerd.runc.v2
Jul 2 08:47:13.416683 env[1243]: time="2024-07-02T08:47:13.416557186Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 08:47:13.417616 env[1243]: time="2024-07-02T08:47:13.416712186Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 08:47:13.417616 env[1243]: time="2024-07-02T08:47:13.416745507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 08:47:13.417616 env[1243]: time="2024-07-02T08:47:13.417020009Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7449cd21957672a77a869c191cbcc82d555d7bf95bc996a763d0871a08885b96 pid=3338 runtime=io.containerd.runc.v2
Jul 2 08:47:13.507317 env[1243]: time="2024-07-02T08:47:13.506623504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-vcn2t,Uid:ec144d21-5555-4360-a15b-4ed1fb9ff432,Namespace:kube-system,Attempt:0,} returns sandbox id \"7449cd21957672a77a869c191cbcc82d555d7bf95bc996a763d0871a08885b96\""
Jul 2 08:47:13.516682 env[1243]: time="2024-07-02T08:47:13.516629785Z" level=info msg="CreateContainer within sandbox \"7449cd21957672a77a869c191cbcc82d555d7bf95bc996a763d0871a08885b96\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 08:47:13.524432 env[1243]: time="2024-07-02T08:47:13.523856946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-vgl67,Uid:65ccbbf5-6e0d-421e-9a36-7e2eb354fd19,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa48104e0f8e5081f5ae02b4fcbaa247ab4d48a0e8f8a96c773137e6aa167b7b\""
Jul 2 08:47:13.532271 env[1243]: time="2024-07-02T08:47:13.532199786Z" level=info msg="CreateContainer within sandbox \"fa48104e0f8e5081f5ae02b4fcbaa247ab4d48a0e8f8a96c773137e6aa167b7b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 08:47:13.568155 env[1243]: time="2024-07-02T08:47:13.567959651Z" level=info msg="CreateContainer within sandbox \"7449cd21957672a77a869c191cbcc82d555d7bf95bc996a763d0871a08885b96\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"829f85a9322daa3055b5b3736328eee7b65edf60e24a3d18224628853ca23aaa\""
Jul 2 08:47:13.572282 env[1243]: time="2024-07-02T08:47:13.572228427Z" level=info msg="StartContainer for \"829f85a9322daa3055b5b3736328eee7b65edf60e24a3d18224628853ca23aaa\""
Jul 2 08:47:13.591843 env[1243]: time="2024-07-02T08:47:13.591802953Z" level=info msg="CreateContainer within sandbox \"fa48104e0f8e5081f5ae02b4fcbaa247ab4d48a0e8f8a96c773137e6aa167b7b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5f9c6654a22d5303aafbecaae62085ddcc1fd3cec49bfd8041f02e02c62eb37f\""
Jul 2 08:47:13.594496 env[1243]: time="2024-07-02T08:47:13.594458763Z" level=info msg="StartContainer for \"5f9c6654a22d5303aafbecaae62085ddcc1fd3cec49bfd8041f02e02c62eb37f\""
Jul 2 08:47:13.695468 env[1243]: time="2024-07-02T08:47:13.694951104Z" level=info msg="StartContainer for \"829f85a9322daa3055b5b3736328eee7b65edf60e24a3d18224628853ca23aaa\" returns successfully"
Jul 2 08:47:13.719532 env[1243]: time="2024-07-02T08:47:13.719480986Z" level=info msg="StartContainer for \"5f9c6654a22d5303aafbecaae62085ddcc1fd3cec49bfd8041f02e02c62eb37f\" returns successfully"
Jul 2 08:47:14.335311 kubelet[2146]: I0702 08:47:14.335215 2146 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-vgl67" podStartSLOduration=29.335122718 podCreationTimestamp="2024-07-02 08:46:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:47:14.333129581 +0000 UTC m=+38.559743591" watchObservedRunningTime="2024-07-02 08:47:14.335122718 +0000 UTC m=+38.561736748"
Jul 2 08:47:14.336452 kubelet[2146]: I0702 08:47:14.336416 2146 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-vcn2t" podStartSLOduration=29.336323787 podCreationTimestamp="2024-07-02 08:46:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:47:14.244401092 +0000 UTC m=+38.471015102" watchObservedRunningTime="2024-07-02 08:47:14.336323787 +0000 UTC m=+38.562937858"
Jul 2 08:47:14.367224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1821098536.mount: Deactivated successfully.
Jul 2 08:47:42.877847 update_engine[1232]: I0702 08:47:42.877746 1232 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jul 2 08:47:42.877847 update_engine[1232]: I0702 08:47:42.877844 1232 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jul 2 08:47:42.884589 update_engine[1232]: I0702 08:47:42.884527 1232 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jul 2 08:47:42.886106 update_engine[1232]: I0702 08:47:42.886047 1232 omaha_request_params.cc:62] Current group set to lts
Jul 2 08:47:42.897094 update_engine[1232]: I0702 08:47:42.896975 1232 update_attempter.cc:499] Already updated boot flags. Skipping.
Jul 2 08:47:42.897094 update_engine[1232]: I0702 08:47:42.897022 1232 update_attempter.cc:643] Scheduling an action processor start.
Jul 2 08:47:42.897094 update_engine[1232]: I0702 08:47:42.897063 1232 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 2 08:47:42.897431 update_engine[1232]: I0702 08:47:42.897127 1232 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jul 2 08:47:42.897431 update_engine[1232]: I0702 08:47:42.897264 1232 omaha_request_action.cc:270] Posting an Omaha request to disabled
Jul 2 08:47:42.897431 update_engine[1232]: I0702 08:47:42.897279 1232 omaha_request_action.cc:271] Request:
Jul 2 08:47:42.897431 update_engine[1232]:
Jul 2 08:47:42.897431 update_engine[1232]:
Jul 2 08:47:42.897431 update_engine[1232]:
Jul 2 08:47:42.897431 update_engine[1232]:
Jul 2 08:47:42.897431 update_engine[1232]:
Jul 2 08:47:42.897431 update_engine[1232]:
Jul 2 08:47:42.897431 update_engine[1232]:
Jul 2 08:47:42.897431 update_engine[1232]:
Jul 2 08:47:42.897431 update_engine[1232]: I0702 08:47:42.897287 1232 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 2 08:47:42.923898 update_engine[1232]: I0702 08:47:42.923789 1232 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 2 08:47:42.924129 update_engine[1232]: E0702 08:47:42.924086 1232 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 2 08:47:42.924261 update_engine[1232]: I0702 08:47:42.924229 1232 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jul 2 08:47:42.944093 locksmithd[1282]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jul 2 08:47:52.818421 update_engine[1232]: I0702 08:47:52.817864 1232 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 2 08:47:52.818421 update_engine[1232]: I0702 08:47:52.818190 1232 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 2 08:47:52.818421 update_engine[1232]: E0702 08:47:52.818396 1232 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 2 08:47:52.819053 update_engine[1232]: I0702 08:47:52.818521 1232 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Jul 2 08:47:56.714302 systemd[1]: Started sshd@7-172.24.4.53:22-172.24.4.1:55240.service.
Jul 2 08:47:58.221242 sshd[3478]: Accepted publickey for core from 172.24.4.1 port 55240 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:47:58.224669 sshd[3478]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:47:58.238460 systemd-logind[1229]: New session 8 of user core.
Jul 2 08:47:58.240958 systemd[1]: Started session-8.scope.
Jul 2 08:47:59.558207 sshd[3478]: pam_unix(sshd:session): session closed for user core
Jul 2 08:47:59.564487 systemd[1]: sshd@7-172.24.4.53:22-172.24.4.1:55240.service: Deactivated successfully.
Jul 2 08:47:59.566985 systemd[1]: session-8.scope: Deactivated successfully.
Jul 2 08:47:59.567140 systemd-logind[1229]: Session 8 logged out. Waiting for processes to exit.
Jul 2 08:47:59.570033 systemd-logind[1229]: Removed session 8.
Jul 2 08:48:02.810440 update_engine[1232]: I0702 08:48:02.810237 1232 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 2 08:48:02.811224 update_engine[1232]: I0702 08:48:02.810699 1232 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 2 08:48:02.811224 update_engine[1232]: E0702 08:48:02.810858 1232 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 2 08:48:02.811224 update_engine[1232]: I0702 08:48:02.810984 1232 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jul 2 08:48:04.564605 systemd[1]: Started sshd@8-172.24.4.53:22-172.24.4.1:40624.service.
Jul 2 08:48:06.026111 sshd[3491]: Accepted publickey for core from 172.24.4.1 port 40624 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:48:06.032116 sshd[3491]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:48:06.046928 systemd-logind[1229]: New session 9 of user core.
Jul 2 08:48:06.048857 systemd[1]: Started session-9.scope.
Jul 2 08:48:07.090895 sshd[3491]: pam_unix(sshd:session): session closed for user core
Jul 2 08:48:07.097913 systemd-logind[1229]: Session 9 logged out. Waiting for processes to exit.
Jul 2 08:48:07.101073 systemd[1]: sshd@8-172.24.4.53:22-172.24.4.1:40624.service: Deactivated successfully.
Jul 2 08:48:07.103037 systemd[1]: session-9.scope: Deactivated successfully.
Jul 2 08:48:07.106427 systemd-logind[1229]: Removed session 9.
Jul 2 08:48:12.097525 systemd[1]: Started sshd@9-172.24.4.53:22-172.24.4.1:40634.service.
Jul 2 08:48:12.815034 update_engine[1232]: I0702 08:48:12.813445 1232 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 2 08:48:12.815034 update_engine[1232]: I0702 08:48:12.813871 1232 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 2 08:48:12.815034 update_engine[1232]: E0702 08:48:12.814009 1232 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 2 08:48:12.815034 update_engine[1232]: I0702 08:48:12.814095 1232 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 2 08:48:12.815034 update_engine[1232]: I0702 08:48:12.814105 1232 omaha_request_action.cc:621] Omaha request response:
Jul 2 08:48:12.815034 update_engine[1232]: E0702 08:48:12.814209 1232 omaha_request_action.cc:640] Omaha request network transfer failed.
Jul 2 08:48:12.815034 update_engine[1232]: I0702 08:48:12.814226 1232 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jul 2 08:48:12.815034 update_engine[1232]: I0702 08:48:12.814232 1232 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 2 08:48:12.815034 update_engine[1232]: I0702 08:48:12.814239 1232 update_attempter.cc:306] Processing Done.
Jul 2 08:48:12.815034 update_engine[1232]: E0702 08:48:12.814252 1232 update_attempter.cc:619] Update failed.
Jul 2 08:48:12.817380 update_engine[1232]: I0702 08:48:12.816581 1232 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jul 2 08:48:12.817380 update_engine[1232]: I0702 08:48:12.816610 1232 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jul 2 08:48:12.817380 update_engine[1232]: I0702 08:48:12.816618 1232 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jul 2 08:48:12.817380 update_engine[1232]: I0702 08:48:12.816764 1232 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 2 08:48:12.817380 update_engine[1232]: I0702 08:48:12.816802 1232 omaha_request_action.cc:270] Posting an Omaha request to disabled
Jul 2 08:48:12.817380 update_engine[1232]: I0702 08:48:12.816807 1232 omaha_request_action.cc:271] Request:
Jul 2 08:48:12.817380 update_engine[1232]:
Jul 2 08:48:12.817380 update_engine[1232]:
Jul 2 08:48:12.817380 update_engine[1232]:
Jul 2 08:48:12.817380 update_engine[1232]:
Jul 2 08:48:12.817380 update_engine[1232]:
Jul 2 08:48:12.817380 update_engine[1232]:
Jul 2 08:48:12.817380 update_engine[1232]: I0702 08:48:12.816815 1232 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 2 08:48:12.817380 update_engine[1232]: I0702 08:48:12.817128 1232 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 2 08:48:12.817380 update_engine[1232]: E0702 08:48:12.817272 1232 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 2 08:48:12.818187 locksmithd[1282]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jul 2 08:48:12.818930 update_engine[1232]: I0702 08:48:12.818766 1232 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 2 08:48:12.818930 update_engine[1232]: I0702 08:48:12.818789 1232 omaha_request_action.cc:621] Omaha request response:
Jul 2 08:48:12.818930 update_engine[1232]: I0702 08:48:12.818796 1232 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 2 08:48:12.818930 update_engine[1232]: I0702 08:48:12.818801 1232 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 2 08:48:12.818930 update_engine[1232]: I0702 08:48:12.818807 1232 update_attempter.cc:306] Processing Done.
Jul 2 08:48:12.818930 update_engine[1232]: I0702 08:48:12.818814 1232 update_attempter.cc:310] Error event sent.
Jul 2 08:48:12.819752 update_engine[1232]: I0702 08:48:12.819612 1232 update_check_scheduler.cc:74] Next update check in 42m24s
Jul 2 08:48:12.820313 locksmithd[1282]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jul 2 08:48:13.614750 sshd[3505]: Accepted publickey for core from 172.24.4.1 port 40634 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:48:13.617418 sshd[3505]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:48:13.632975 systemd[1]: Started session-10.scope.
Jul 2 08:48:13.634515 systemd-logind[1229]: New session 10 of user core.
Jul 2 08:48:14.577187 sshd[3505]: pam_unix(sshd:session): session closed for user core
Jul 2 08:48:14.583538 systemd-logind[1229]: Session 10 logged out. Waiting for processes to exit.
Jul 2 08:48:14.584626 systemd[1]: sshd@9-172.24.4.53:22-172.24.4.1:40634.service: Deactivated successfully.
Jul 2 08:48:14.587683 systemd[1]: session-10.scope: Deactivated successfully.
Jul 2 08:48:14.590537 systemd-logind[1229]: Removed session 10.
Jul 2 08:48:19.647973 systemd[1]: Started sshd@10-172.24.4.53:22-172.24.4.1:49392.service.
Jul 2 08:48:20.925121 sshd[3520]: Accepted publickey for core from 172.24.4.1 port 49392 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:48:20.928654 sshd[3520]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:48:20.939539 systemd-logind[1229]: New session 11 of user core.
Jul 2 08:48:20.939954 systemd[1]: Started session-11.scope.
Jul 2 08:48:21.746954 sshd[3520]: pam_unix(sshd:session): session closed for user core
Jul 2 08:48:21.751078 systemd[1]: Started sshd@11-172.24.4.53:22-172.24.4.1:49404.service.
Jul 2 08:48:21.767167 systemd[1]: sshd@10-172.24.4.53:22-172.24.4.1:49392.service: Deactivated successfully.
Jul 2 08:48:21.769633 systemd-logind[1229]: Session 11 logged out. Waiting for processes to exit.
Jul 2 08:48:21.769869 systemd[1]: session-11.scope: Deactivated successfully.
Jul 2 08:48:21.772701 systemd-logind[1229]: Removed session 11.
Jul 2 08:48:23.418282 sshd[3531]: Accepted publickey for core from 172.24.4.1 port 49404 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:48:23.421704 sshd[3531]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:48:23.433404 systemd-logind[1229]: New session 12 of user core.
Jul 2 08:48:23.434493 systemd[1]: Started session-12.scope.
Jul 2 08:48:25.479447 sshd[3531]: pam_unix(sshd:session): session closed for user core
Jul 2 08:48:25.479449 systemd[1]: Started sshd@12-172.24.4.53:22-172.24.4.1:44600.service.
Jul 2 08:48:25.497140 systemd[1]: sshd@11-172.24.4.53:22-172.24.4.1:49404.service: Deactivated successfully.
Jul 2 08:48:25.498021 systemd[1]: session-12.scope: Deactivated successfully.
Jul 2 08:48:25.500047 systemd-logind[1229]: Session 12 logged out. Waiting for processes to exit.
Jul 2 08:48:25.502610 systemd-logind[1229]: Removed session 12.
Jul 2 08:48:27.107434 sshd[3542]: Accepted publickey for core from 172.24.4.1 port 44600 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:48:27.110288 sshd[3542]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:48:27.122186 systemd[1]: Started session-13.scope.
Jul 2 08:48:27.122705 systemd-logind[1229]: New session 13 of user core.
Jul 2 08:48:28.038780 sshd[3542]: pam_unix(sshd:session): session closed for user core
Jul 2 08:48:28.044600 systemd[1]: sshd@12-172.24.4.53:22-172.24.4.1:44600.service: Deactivated successfully.
Jul 2 08:48:28.046301 systemd[1]: session-13.scope: Deactivated successfully.
Jul 2 08:48:28.046433 systemd-logind[1229]: Session 13 logged out. Waiting for processes to exit.
Jul 2 08:48:28.049524 systemd-logind[1229]: Removed session 13.
Jul 2 08:48:33.043060 systemd[1]: Started sshd@13-172.24.4.53:22-172.24.4.1:44610.service.
Jul 2 08:48:34.363363 sshd[3557]: Accepted publickey for core from 172.24.4.1 port 44610 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:48:34.368461 sshd[3557]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:48:34.382539 systemd-logind[1229]: New session 14 of user core.
Jul 2 08:48:34.383862 systemd[1]: Started session-14.scope.
Jul 2 08:48:35.301813 sshd[3557]: pam_unix(sshd:session): session closed for user core
Jul 2 08:48:35.306883 systemd[1]: Started sshd@14-172.24.4.53:22-172.24.4.1:43820.service.
Jul 2 08:48:35.314057 systemd-logind[1229]: Session 14 logged out. Waiting for processes to exit.
Jul 2 08:48:35.315414 systemd[1]: sshd@13-172.24.4.53:22-172.24.4.1:44610.service: Deactivated successfully.
Jul 2 08:48:35.318754 systemd[1]: session-14.scope: Deactivated successfully.
Jul 2 08:48:35.321899 systemd-logind[1229]: Removed session 14.
Jul 2 08:48:36.752370 sshd[3568]: Accepted publickey for core from 172.24.4.1 port 43820 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:48:36.756492 sshd[3568]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:48:36.773192 systemd-logind[1229]: New session 15 of user core.
Jul 2 08:48:36.774835 systemd[1]: Started session-15.scope.
Jul 2 08:48:39.783254 sshd[3568]: pam_unix(sshd:session): session closed for user core
Jul 2 08:48:39.790125 systemd[1]: Started sshd@15-172.24.4.53:22-172.24.4.1:43828.service.
Jul 2 08:48:39.791945 systemd[1]: sshd@14-172.24.4.53:22-172.24.4.1:43820.service: Deactivated successfully.
Jul 2 08:48:39.802803 systemd[1]: session-15.scope: Deactivated successfully.
Jul 2 08:48:39.805807 systemd-logind[1229]: Session 15 logged out. Waiting for processes to exit.
Jul 2 08:48:39.812937 systemd-logind[1229]: Removed session 15.
Jul 2 08:48:41.366314 sshd[3581]: Accepted publickey for core from 172.24.4.1 port 43828 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:48:41.372774 sshd[3581]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:48:41.385468 systemd-logind[1229]: New session 16 of user core.
Jul 2 08:48:41.388522 systemd[1]: Started session-16.scope.
Jul 2 08:48:43.565855 sshd[3581]: pam_unix(sshd:session): session closed for user core
Jul 2 08:48:43.576318 systemd[1]: Started sshd@16-172.24.4.53:22-172.24.4.1:43836.service.
Jul 2 08:48:43.578309 systemd[1]: sshd@15-172.24.4.53:22-172.24.4.1:43828.service: Deactivated successfully.
Jul 2 08:48:43.583456 systemd[1]: session-16.scope: Deactivated successfully.
Jul 2 08:48:43.585260 systemd-logind[1229]: Session 16 logged out. Waiting for processes to exit.
Jul 2 08:48:43.588959 systemd-logind[1229]: Removed session 16.
Jul 2 08:48:45.513417 sshd[3599]: Accepted publickey for core from 172.24.4.1 port 43836 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:48:45.518568 sshd[3599]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:48:45.533799 systemd[1]: Started session-17.scope.
Jul 2 08:48:45.534258 systemd-logind[1229]: New session 17 of user core.
Jul 2 08:48:47.409579 sshd[3599]: pam_unix(sshd:session): session closed for user core
Jul 2 08:48:47.414612 systemd[1]: Started sshd@17-172.24.4.53:22-172.24.4.1:52550.service.
Jul 2 08:48:47.423217 systemd[1]: sshd@16-172.24.4.53:22-172.24.4.1:43836.service: Deactivated successfully.
Jul 2 08:48:47.429301 systemd-logind[1229]: Session 17 logged out. Waiting for processes to exit.
Jul 2 08:48:47.430650 systemd[1]: session-17.scope: Deactivated successfully.
Jul 2 08:48:47.434969 systemd-logind[1229]: Removed session 17.
Jul 2 08:48:48.971512 sshd[3612]: Accepted publickey for core from 172.24.4.1 port 52550 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:48:48.976842 sshd[3612]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:48:48.990912 systemd-logind[1229]: New session 18 of user core.
Jul 2 08:48:48.991535 systemd[1]: Started session-18.scope.
Jul 2 08:48:49.609821 sshd[3612]: pam_unix(sshd:session): session closed for user core
Jul 2 08:48:49.615553 systemd[1]: sshd@17-172.24.4.53:22-172.24.4.1:52550.service: Deactivated successfully.
Jul 2 08:48:49.619123 systemd[1]: session-18.scope: Deactivated successfully.
Jul 2 08:48:49.620083 systemd-logind[1229]: Session 18 logged out. Waiting for processes to exit.
Jul 2 08:48:49.625065 systemd-logind[1229]: Removed session 18.
Jul 2 08:48:54.617758 systemd[1]: Started sshd@18-172.24.4.53:22-172.24.4.1:60482.service.
Jul 2 08:48:55.960150 sshd[3630]: Accepted publickey for core from 172.24.4.1 port 60482 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:48:55.963679 sshd[3630]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:48:55.977425 systemd[1]: Started session-19.scope.
Jul 2 08:48:55.979256 systemd-logind[1229]: New session 19 of user core.
Jul 2 08:48:56.711607 sshd[3630]: pam_unix(sshd:session): session closed for user core
Jul 2 08:48:56.717288 systemd[1]: sshd@18-172.24.4.53:22-172.24.4.1:60482.service: Deactivated successfully.
Jul 2 08:48:56.720964 systemd[1]: session-19.scope: Deactivated successfully.
Jul 2 08:48:56.726088 systemd-logind[1229]: Session 19 logged out. Waiting for processes to exit.
Jul 2 08:48:56.728879 systemd-logind[1229]: Removed session 19.
Jul 2 08:49:01.718694 systemd[1]: Started sshd@19-172.24.4.53:22-172.24.4.1:60492.service.
Jul 2 08:49:03.293543 sshd[3643]: Accepted publickey for core from 172.24.4.1 port 60492 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:49:03.295958 sshd[3643]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:49:03.304448 systemd-logind[1229]: New session 20 of user core.
Jul 2 08:49:03.306789 systemd[1]: Started session-20.scope.
Jul 2 08:49:04.063724 sshd[3643]: pam_unix(sshd:session): session closed for user core
Jul 2 08:49:04.069832 systemd[1]: sshd@19-172.24.4.53:22-172.24.4.1:60492.service: Deactivated successfully.
Jul 2 08:49:04.072255 systemd[1]: session-20.scope: Deactivated successfully.
Jul 2 08:49:04.078424 systemd-logind[1229]: Session 20 logged out. Waiting for processes to exit.
Jul 2 08:49:04.081093 systemd-logind[1229]: Removed session 20.
Jul 2 08:49:09.071715 systemd[1]: Started sshd@20-172.24.4.53:22-172.24.4.1:35934.service.
Jul 2 08:49:10.510703 sshd[3656]: Accepted publickey for core from 172.24.4.1 port 35934 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:49:10.513465 sshd[3656]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:49:10.525543 systemd-logind[1229]: New session 21 of user core.
Jul 2 08:49:10.527068 systemd[1]: Started session-21.scope.
Jul 2 08:49:11.288564 sshd[3656]: pam_unix(sshd:session): session closed for user core
Jul 2 08:49:11.300647 systemd[1]: Started sshd@21-172.24.4.53:22-172.24.4.1:35940.service.
Jul 2 08:49:11.304133 systemd[1]: sshd@20-172.24.4.53:22-172.24.4.1:35934.service: Deactivated successfully.
Jul 2 08:49:11.316175 systemd[1]: session-21.scope: Deactivated successfully.
Jul 2 08:49:11.319216 systemd-logind[1229]: Session 21 logged out. Waiting for processes to exit.
Jul 2 08:49:11.323017 systemd-logind[1229]: Removed session 21.
Jul 2 08:49:12.623261 sshd[3668]: Accepted publickey for core from 172.24.4.1 port 35940 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:49:12.625893 sshd[3668]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:49:12.634955 systemd[1]: Started session-22.scope.
Jul 2 08:49:12.635923 systemd-logind[1229]: New session 22 of user core.
Jul 2 08:49:14.842913 systemd[1]: run-containerd-runc-k8s.io-2cee991df7b71e3426e77733bf3e21a3fcf88026958f60ffdc1d0d41fb83580b-runc.6QajmI.mount: Deactivated successfully.
Jul 2 08:49:14.864843 env[1243]: time="2024-07-02T08:49:14.864658153Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 08:49:14.871060 env[1243]: time="2024-07-02T08:49:14.871025417Z" level=info msg="StopContainer for \"0a3da913a233cc458ee7df8f0f75ba850ad2fa12b8cda2f0a933570825875b17\" with timeout 30 (s)"
Jul 2 08:49:14.871350 env[1243]: time="2024-07-02T08:49:14.871294767Z" level=info msg="StopContainer for \"2cee991df7b71e3426e77733bf3e21a3fcf88026958f60ffdc1d0d41fb83580b\" with timeout 2 (s)"
Jul 2 08:49:14.871530 env[1243]: time="2024-07-02T08:49:14.871507410Z" level=info msg="Stop container \"0a3da913a233cc458ee7df8f0f75ba850ad2fa12b8cda2f0a933570825875b17\" with signal terminated"
Jul 2 08:49:14.871742 env[1243]: time="2024-07-02T08:49:14.871709595Z" level=info msg="Stop container \"2cee991df7b71e3426e77733bf3e21a3fcf88026958f60ffdc1d0d41fb83580b\" with signal terminated"
Jul 2 08:49:14.891862 systemd-networkd[1020]: lxc_health: Link DOWN
Jul 2 08:49:14.891870 systemd-networkd[1020]: lxc_health: Lost carrier
Jul 2 08:49:14.932298 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a3da913a233cc458ee7df8f0f75ba850ad2fa12b8cda2f0a933570825875b17-rootfs.mount: Deactivated successfully.
Jul 2 08:49:14.940997 env[1243]: time="2024-07-02T08:49:14.940945940Z" level=info msg="shim disconnected" id=0a3da913a233cc458ee7df8f0f75ba850ad2fa12b8cda2f0a933570825875b17
Jul 2 08:49:14.941217 env[1243]: time="2024-07-02T08:49:14.941196855Z" level=warning msg="cleaning up after shim disconnected" id=0a3da913a233cc458ee7df8f0f75ba850ad2fa12b8cda2f0a933570825875b17 namespace=k8s.io
Jul 2 08:49:14.941299 env[1243]: time="2024-07-02T08:49:14.941283666Z" level=info msg="cleaning up dead shim"
Jul 2 08:49:14.957357 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2cee991df7b71e3426e77733bf3e21a3fcf88026958f60ffdc1d0d41fb83580b-rootfs.mount: Deactivated successfully.
Jul 2 08:49:14.958912 env[1243]: time="2024-07-02T08:49:14.958870141Z" level=info msg="shim disconnected" id=2cee991df7b71e3426e77733bf3e21a3fcf88026958f60ffdc1d0d41fb83580b
Jul 2 08:49:14.959122 env[1243]: time="2024-07-02T08:49:14.959087694Z" level=warning msg="cleaning up after shim disconnected" id=2cee991df7b71e3426e77733bf3e21a3fcf88026958f60ffdc1d0d41fb83580b namespace=k8s.io
Jul 2 08:49:14.959216 env[1243]: time="2024-07-02T08:49:14.959200032Z" level=info msg="cleaning up dead shim"
Jul 2 08:49:14.960618 env[1243]: time="2024-07-02T08:49:14.960590578Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:49:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3733 runtime=io.containerd.runc.v2\n"
Jul 2 08:49:14.964606 env[1243]: time="2024-07-02T08:49:14.964560230Z" level=info msg="StopContainer for \"0a3da913a233cc458ee7df8f0f75ba850ad2fa12b8cda2f0a933570825875b17\" returns successfully"
Jul 2 08:49:14.965491 env[1243]: time="2024-07-02T08:49:14.965466068Z" level=info msg="StopPodSandbox for \"f4098fbca0eb4581e1a02d9c93be2b0726c8c94e71d79fcb3e0caf21cff6d094\""
Jul 2 08:49:14.965675 env[1243]: time="2024-07-02T08:49:14.965640000Z" level=info msg="Container to stop \"0a3da913a233cc458ee7df8f0f75ba850ad2fa12b8cda2f0a933570825875b17\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 08:49:14.967502 env[1243]: time="2024-07-02T08:49:14.967462586Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:49:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3755 runtime=io.containerd.runc.v2\n"
Jul 2 08:49:14.971183 env[1243]: time="2024-07-02T08:49:14.971139607Z" level=info msg="StopContainer for \"2cee991df7b71e3426e77733bf3e21a3fcf88026958f60ffdc1d0d41fb83580b\" returns successfully"
Jul 2 08:49:14.971952 env[1243]: time="2024-07-02T08:49:14.971926334Z" level=info msg="StopPodSandbox for \"b245d8e44fad538d1c9e7cb6f15fe3a59ab9d72d665d17ffde48affcae902b61\""
Jul 2 08:49:14.972126 env[1243]: time="2024-07-02T08:49:14.972088334Z" level=info msg="Container to stop \"5e6dfbcdc2b737d785683671c30c8c9e6db7ecc1690b9710db73682c0bd24ba8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 08:49:14.972224 env[1243]: time="2024-07-02T08:49:14.972203308Z" level=info msg="Container to stop \"b17b5bd77f2786a6532c8101026d1f00dab43ad03df345708f16a1c26ca082b0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 08:49:14.972322 env[1243]: time="2024-07-02T08:49:14.972301719Z" level=info msg="Container to stop \"ac0ac5dfcb3a54ace6df40cadab9608db92782fc23d6550e2866eebfb5cbca15\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 08:49:14.972536 env[1243]: time="2024-07-02T08:49:14.972473067Z" level=info msg="Container to stop \"c08bbe75454d26a650f399e74202da4eaa0f4aaca8e78d49985e6a7316ae5d1b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 08:49:14.972641 env[1243]: time="2024-07-02T08:49:14.972621703Z" level=info msg="Container to stop \"2cee991df7b71e3426e77733bf3e21a3fcf88026958f60ffdc1d0d41fb83580b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 08:49:15.007840 env[1243]: time="2024-07-02T08:49:15.007771264Z" level=info msg="shim disconnected" id=f4098fbca0eb4581e1a02d9c93be2b0726c8c94e71d79fcb3e0caf21cff6d094
Jul 2 08:49:15.008394 env[1243]: time="2024-07-02T08:49:15.008373419Z" level=warning msg="cleaning up after shim disconnected" id=f4098fbca0eb4581e1a02d9c93be2b0726c8c94e71d79fcb3e0caf21cff6d094 namespace=k8s.io
Jul 2 08:49:15.008513 env[1243]: time="2024-07-02T08:49:15.008497460Z" level=info msg="cleaning up dead shim"
Jul 2 08:49:15.016225 env[1243]: time="2024-07-02T08:49:15.016177420Z" level=info msg="shim disconnected" id=b245d8e44fad538d1c9e7cb6f15fe3a59ab9d72d665d17ffde48affcae902b61
Jul 2 08:49:15.017004 env[1243]: time="2024-07-02T08:49:15.016982502Z" level=warning msg="cleaning up after shim disconnected" id=b245d8e44fad538d1c9e7cb6f15fe3a59ab9d72d665d17ffde48affcae902b61 namespace=k8s.io
Jul 2 08:49:15.017086 env[1243]: time="2024-07-02T08:49:15.017070826Z" level=info msg="cleaning up dead shim"
Jul 2 08:49:15.020740 env[1243]: time="2024-07-02T08:49:15.020682728Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:49:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3808 runtime=io.containerd.runc.v2\n"
Jul 2 08:49:15.021276 env[1243]: time="2024-07-02T08:49:15.021250760Z" level=info msg="TearDown network for sandbox \"f4098fbca0eb4581e1a02d9c93be2b0726c8c94e71d79fcb3e0caf21cff6d094\" successfully"
Jul 2 08:49:15.021452 env[1243]: time="2024-07-02T08:49:15.021429772Z" level=info msg="StopPodSandbox for \"f4098fbca0eb4581e1a02d9c93be2b0726c8c94e71d79fcb3e0caf21cff6d094\" returns successfully"
Jul 2 08:49:15.032035 env[1243]: time="2024-07-02T08:49:15.031991993Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:49:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3823 runtime=io.containerd.runc.v2\n"
Jul 2 08:49:15.033015 env[1243]: time="2024-07-02T08:49:15.032958684Z" level=info msg="TearDown network for sandbox \"b245d8e44fad538d1c9e7cb6f15fe3a59ab9d72d665d17ffde48affcae902b61\" successfully"
Jul 2 08:49:15.033279 env[1243]: time="2024-07-02T08:49:15.033239234Z" level=info msg="StopPodSandbox for \"b245d8e44fad538d1c9e7cb6f15fe3a59ab9d72d665d17ffde48affcae902b61\" returns successfully"
Jul 2 08:49:15.250114 kubelet[2146]: I0702 08:49:15.250062 2146 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a035bb52-9628-4b0a-bb63-8208622a86a6-cni-path" (OuterVolumeSpecName: "cni-path") pod "a035bb52-9628-4b0a-bb63-8208622a86a6" (UID: "a035bb52-9628-4b0a-bb63-8208622a86a6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:49:15.251059 kubelet[2146]: I0702 08:49:15.251024 2146 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a035bb52-9628-4b0a-bb63-8208622a86a6-cni-path\") pod \"a035bb52-9628-4b0a-bb63-8208622a86a6\" (UID: \"a035bb52-9628-4b0a-bb63-8208622a86a6\") "
Jul 2 08:49:15.251913 kubelet[2146]: I0702 08:49:15.251299 2146 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a035bb52-9628-4b0a-bb63-8208622a86a6-hubble-tls\") pod \"a035bb52-9628-4b0a-bb63-8208622a86a6\" (UID: \"a035bb52-9628-4b0a-bb63-8208622a86a6\") "
Jul 2 08:49:15.252276 kubelet[2146]: I0702 08:49:15.252248 2146 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a035bb52-9628-4b0a-bb63-8208622a86a6-hostproc\") pod \"a035bb52-9628-4b0a-bb63-8208622a86a6\" (UID: \"a035bb52-9628-4b0a-bb63-8208622a86a6\") "
Jul 2 08:49:15.252535 kubelet[2146]: I0702 08:49:15.252509 2146 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a035bb52-9628-4b0a-bb63-8208622a86a6-xtables-lock\") pod \"a035bb52-9628-4b0a-bb63-8208622a86a6\" (UID: \"a035bb52-9628-4b0a-bb63-8208622a86a6\") "
Jul 2 08:49:15.252740 kubelet[2146]: I0702 08:49:15.252715 2146 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a035bb52-9628-4b0a-bb63-8208622a86a6-host-proc-sys-kernel\") pod \"a035bb52-9628-4b0a-bb63-8208622a86a6\" (UID: \"a035bb52-9628-4b0a-bb63-8208622a86a6\") "
Jul 2 08:49:15.252953 kubelet[2146]: I0702 08:49:15.252929 2146 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a035bb52-9628-4b0a-bb63-8208622a86a6-cilium-config-path\") pod \"a035bb52-9628-4b0a-bb63-8208622a86a6\" (UID: \"a035bb52-9628-4b0a-bb63-8208622a86a6\") "
Jul 2 08:49:15.253199 kubelet[2146]: I0702 08:49:15.253171 2146 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a035bb52-9628-4b0a-bb63-8208622a86a6-host-proc-sys-net\") pod \"a035bb52-9628-4b0a-bb63-8208622a86a6\" (UID: \"a035bb52-9628-4b0a-bb63-8208622a86a6\") "
Jul 2 08:49:15.253443 kubelet[2146]: I0702 08:49:15.253417 2146 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a035bb52-9628-4b0a-bb63-8208622a86a6-cilium-run\") pod \"a035bb52-9628-4b0a-bb63-8208622a86a6\" (UID: \"a035bb52-9628-4b0a-bb63-8208622a86a6\") "
Jul 2 08:49:15.253656 kubelet[2146]: I0702 08:49:15.253633 2146 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a035bb52-9628-4b0a-bb63-8208622a86a6-lib-modules\") pod \"a035bb52-9628-4b0a-bb63-8208622a86a6\" (UID: \"a035bb52-9628-4b0a-bb63-8208622a86a6\") "
Jul 2 08:49:15.253950 kubelet[2146]: I0702 08:49:15.253915 2146 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a035bb52-9628-4b0a-bb63-8208622a86a6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a035bb52-9628-4b0a-bb63-8208622a86a6" (UID: "a035bb52-9628-4b0a-bb63-8208622a86a6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:49:15.254155 kubelet[2146]: I0702 08:49:15.254122 2146 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a035bb52-9628-4b0a-bb63-8208622a86a6-hostproc" (OuterVolumeSpecName: "hostproc") pod "a035bb52-9628-4b0a-bb63-8208622a86a6" (UID: "a035bb52-9628-4b0a-bb63-8208622a86a6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:49:15.254378 kubelet[2146]: I0702 08:49:15.254310 2146 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a035bb52-9628-4b0a-bb63-8208622a86a6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a035bb52-9628-4b0a-bb63-8208622a86a6" (UID: "a035bb52-9628-4b0a-bb63-8208622a86a6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:49:15.254646 kubelet[2146]: I0702 08:49:15.254607 2146 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a035bb52-9628-4b0a-bb63-8208622a86a6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a035bb52-9628-4b0a-bb63-8208622a86a6" (UID: "a035bb52-9628-4b0a-bb63-8208622a86a6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:49:15.260221 kubelet[2146]: I0702 08:49:15.260176 2146 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a035bb52-9628-4b0a-bb63-8208622a86a6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a035bb52-9628-4b0a-bb63-8208622a86a6" (UID: "a035bb52-9628-4b0a-bb63-8208622a86a6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 08:49:15.260520 kubelet[2146]: I0702 08:49:15.260483 2146 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a035bb52-9628-4b0a-bb63-8208622a86a6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a035bb52-9628-4b0a-bb63-8208622a86a6" (UID: "a035bb52-9628-4b0a-bb63-8208622a86a6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:49:15.260738 kubelet[2146]: I0702 08:49:15.260705 2146 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a035bb52-9628-4b0a-bb63-8208622a86a6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a035bb52-9628-4b0a-bb63-8208622a86a6" (UID: "a035bb52-9628-4b0a-bb63-8208622a86a6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:49:15.260941 kubelet[2146]: I0702 08:49:15.260916 2146 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7d2a14a1-64c0-449a-8938-7fa96f316c29-cilium-config-path\") pod \"7d2a14a1-64c0-449a-8938-7fa96f316c29\" (UID: \"7d2a14a1-64c0-449a-8938-7fa96f316c29\") "
Jul 2 08:49:15.261176 kubelet[2146]: I0702 08:49:15.261150 2146 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxr94\" (UniqueName: \"kubernetes.io/projected/a035bb52-9628-4b0a-bb63-8208622a86a6-kube-api-access-kxr94\") pod \"a035bb52-9628-4b0a-bb63-8208622a86a6\" (UID: \"a035bb52-9628-4b0a-bb63-8208622a86a6\") "
Jul 2 08:49:15.261434 kubelet[2146]: I0702 08:49:15.261406 2146 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a035bb52-9628-4b0a-bb63-8208622a86a6-clustermesh-secrets\") pod \"a035bb52-9628-4b0a-bb63-8208622a86a6\" (UID: \"a035bb52-9628-4b0a-bb63-8208622a86a6\") "
Jul 2 08:49:15.261642 kubelet[2146]: I0702 08:49:15.261619 2146 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a035bb52-9628-4b0a-bb63-8208622a86a6-cilium-cgroup\") pod \"a035bb52-9628-4b0a-bb63-8208622a86a6\" (UID: \"a035bb52-9628-4b0a-bb63-8208622a86a6\") "
Jul 2 08:49:15.261830 kubelet[2146]: I0702 08:49:15.261808 2146 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a035bb52-9628-4b0a-bb63-8208622a86a6-bpf-maps\") pod \"a035bb52-9628-4b0a-bb63-8208622a86a6\" (UID: \"a035bb52-9628-4b0a-bb63-8208622a86a6\") "
Jul 2 08:49:15.262029 kubelet[2146]: I0702 08:49:15.262006 2146 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcrlg\" (UniqueName: \"kubernetes.io/projected/7d2a14a1-64c0-449a-8938-7fa96f316c29-kube-api-access-dcrlg\") pod \"7d2a14a1-64c0-449a-8938-7fa96f316c29\" (UID: \"7d2a14a1-64c0-449a-8938-7fa96f316c29\") "
Jul 2 08:49:15.262558 kubelet[2146]: I0702 08:49:15.262534 2146 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a035bb52-9628-4b0a-bb63-8208622a86a6-etc-cni-netd\") pod \"a035bb52-9628-4b0a-bb63-8208622a86a6\" (UID: \"a035bb52-9628-4b0a-bb63-8208622a86a6\") "
Jul 2 08:49:15.262799 kubelet[2146]: I0702 08:49:15.262775 2146 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a035bb52-9628-4b0a-bb63-8208622a86a6-cni-path\") on node \"ci-3510-3-5-4-c82a94ccd3.novalocal\" DevicePath \"\""
Jul 2 08:49:15.263000 kubelet[2146]: I0702 08:49:15.262978 2146 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a035bb52-9628-4b0a-bb63-8208622a86a6-hostproc\") on node \"ci-3510-3-5-4-c82a94ccd3.novalocal\" DevicePath \"\""
Jul 2 08:49:15.263171 kubelet[2146]: I0702 08:49:15.263150 2146 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a035bb52-9628-4b0a-bb63-8208622a86a6-xtables-lock\") on node \"ci-3510-3-5-4-c82a94ccd3.novalocal\" DevicePath \"\""
Jul 2 08:49:15.263524 kubelet[2146]: I0702 08:49:15.263317 2146 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a035bb52-9628-4b0a-bb63-8208622a86a6-host-proc-sys-kernel\") on node \"ci-3510-3-5-4-c82a94ccd3.novalocal\" DevicePath \"\""
Jul 2 08:49:15.263820 kubelet[2146]: I0702 08:49:15.263795 2146 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a035bb52-9628-4b0a-bb63-8208622a86a6-cilium-config-path\") on node \"ci-3510-3-5-4-c82a94ccd3.novalocal\" DevicePath \"\""
Jul 2 08:49:15.264147 kubelet[2146]: I0702 08:49:15.264123 2146 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a035bb52-9628-4b0a-bb63-8208622a86a6-host-proc-sys-net\") on node \"ci-3510-3-5-4-c82a94ccd3.novalocal\" DevicePath \"\""
Jul 2 08:49:15.264483 kubelet[2146]: I0702 08:49:15.264421 2146 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a035bb52-9628-4b0a-bb63-8208622a86a6-cilium-run\") on node \"ci-3510-3-5-4-c82a94ccd3.novalocal\" DevicePath \"\""
Jul 2 08:49:15.264720 kubelet[2146]: I0702 08:49:15.264686 2146 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a035bb52-9628-4b0a-bb63-8208622a86a6-lib-modules\") on node \"ci-3510-3-5-4-c82a94ccd3.novalocal\" DevicePath \"\""
Jul 2 08:49:15.264912 kubelet[2146]: I0702 08:49:15.264879 2146 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a035bb52-9628-4b0a-bb63-8208622a86a6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a035bb52-9628-4b0a-bb63-8208622a86a6" (UID: "a035bb52-9628-4b0a-bb63-8208622a86a6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:49:15.265102 kubelet[2146]: I0702 08:49:15.265070 2146 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a035bb52-9628-4b0a-bb63-8208622a86a6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a035bb52-9628-4b0a-bb63-8208622a86a6" (UID: "a035bb52-9628-4b0a-bb63-8208622a86a6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:49:15.265373 kubelet[2146]: I0702 08:49:15.265296 2146 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a035bb52-9628-4b0a-bb63-8208622a86a6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a035bb52-9628-4b0a-bb63-8208622a86a6" (UID: "a035bb52-9628-4b0a-bb63-8208622a86a6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:49:15.272205 kubelet[2146]: I0702 08:49:15.272133 2146 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a035bb52-9628-4b0a-bb63-8208622a86a6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a035bb52-9628-4b0a-bb63-8208622a86a6" (UID: "a035bb52-9628-4b0a-bb63-8208622a86a6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 08:49:15.273967 kubelet[2146]: I0702 08:49:15.273923 2146 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d2a14a1-64c0-449a-8938-7fa96f316c29-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7d2a14a1-64c0-449a-8938-7fa96f316c29" (UID: "7d2a14a1-64c0-449a-8938-7fa96f316c29"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 08:49:15.279096 kubelet[2146]: I0702 08:49:15.279015 2146 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d2a14a1-64c0-449a-8938-7fa96f316c29-kube-api-access-dcrlg" (OuterVolumeSpecName: "kube-api-access-dcrlg") pod "7d2a14a1-64c0-449a-8938-7fa96f316c29" (UID: "7d2a14a1-64c0-449a-8938-7fa96f316c29"). InnerVolumeSpecName "kube-api-access-dcrlg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 08:49:15.283924 kubelet[2146]: I0702 08:49:15.283863 2146 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a035bb52-9628-4b0a-bb63-8208622a86a6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a035bb52-9628-4b0a-bb63-8208622a86a6" (UID: "a035bb52-9628-4b0a-bb63-8208622a86a6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 2 08:49:15.287512 kubelet[2146]: I0702 08:49:15.287462 2146 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a035bb52-9628-4b0a-bb63-8208622a86a6-kube-api-access-kxr94" (OuterVolumeSpecName: "kube-api-access-kxr94") pod "a035bb52-9628-4b0a-bb63-8208622a86a6" (UID: "a035bb52-9628-4b0a-bb63-8208622a86a6"). InnerVolumeSpecName "kube-api-access-kxr94". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 08:49:15.365751 kubelet[2146]: I0702 08:49:15.365707 2146 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7d2a14a1-64c0-449a-8938-7fa96f316c29-cilium-config-path\") on node \"ci-3510-3-5-4-c82a94ccd3.novalocal\" DevicePath \"\""
Jul 2 08:49:15.366139 kubelet[2146]: I0702 08:49:15.366085 2146 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-kxr94\" (UniqueName: \"kubernetes.io/projected/a035bb52-9628-4b0a-bb63-8208622a86a6-kube-api-access-kxr94\") on node \"ci-3510-3-5-4-c82a94ccd3.novalocal\" DevicePath \"\""
Jul 2 08:49:15.366399 kubelet[2146]: I0702 08:49:15.366317 2146 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a035bb52-9628-4b0a-bb63-8208622a86a6-clustermesh-secrets\") on node \"ci-3510-3-5-4-c82a94ccd3.novalocal\" DevicePath \"\""
Jul 2 08:49:15.366644 kubelet[2146]: I0702 08:49:15.366589 2146 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a035bb52-9628-4b0a-bb63-8208622a86a6-cilium-cgroup\") on node \"ci-3510-3-5-4-c82a94ccd3.novalocal\" DevicePath \"\""
Jul 2 08:49:15.366903 kubelet[2146]: I0702 08:49:15.366878 2146 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a035bb52-9628-4b0a-bb63-8208622a86a6-bpf-maps\") on node \"ci-3510-3-5-4-c82a94ccd3.novalocal\" DevicePath \"\""
Jul 2 08:49:15.367145 kubelet[2146]: I0702 08:49:15.367100 2146 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-dcrlg\" (UniqueName: \"kubernetes.io/projected/7d2a14a1-64c0-449a-8938-7fa96f316c29-kube-api-access-dcrlg\") on node \"ci-3510-3-5-4-c82a94ccd3.novalocal\" DevicePath \"\""
Jul 2 08:49:15.367380 kubelet[2146]: I0702 08:49:15.367307 2146 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a035bb52-9628-4b0a-bb63-8208622a86a6-etc-cni-netd\") on node \"ci-3510-3-5-4-c82a94ccd3.novalocal\" DevicePath \"\""
Jul 2 08:49:15.367614 kubelet[2146]: I0702 08:49:15.367562 2146 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a035bb52-9628-4b0a-bb63-8208622a86a6-hubble-tls\") on node \"ci-3510-3-5-4-c82a94ccd3.novalocal\" DevicePath \"\""
Jul 2 08:49:15.586710 kubelet[2146]: I0702 08:49:15.583999 2146 scope.go:117] "RemoveContainer" containerID="0a3da913a233cc458ee7df8f0f75ba850ad2fa12b8cda2f0a933570825875b17"
Jul 2 08:49:15.590517 env[1243]: time="2024-07-02T08:49:15.590145244Z" level=info msg="RemoveContainer for \"0a3da913a233cc458ee7df8f0f75ba850ad2fa12b8cda2f0a933570825875b17\""
Jul 2 08:49:15.627348 env[1243]: time="2024-07-02T08:49:15.627267549Z" level=info msg="RemoveContainer for \"0a3da913a233cc458ee7df8f0f75ba850ad2fa12b8cda2f0a933570825875b17\" returns successfully"
Jul 2 08:49:15.631526 kubelet[2146]: I0702 08:49:15.631506 2146 scope.go:117] "RemoveContainer" containerID="0a3da913a233cc458ee7df8f0f75ba850ad2fa12b8cda2f0a933570825875b17"
Jul 2 08:49:15.632698 env[1243]: time="2024-07-02T08:49:15.631955886Z" level=error msg="ContainerStatus for \"0a3da913a233cc458ee7df8f0f75ba850ad2fa12b8cda2f0a933570825875b17\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0a3da913a233cc458ee7df8f0f75ba850ad2fa12b8cda2f0a933570825875b17\": not found"
Jul 2 08:49:15.632890 kubelet[2146]: E0702 08:49:15.632875 2146 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0a3da913a233cc458ee7df8f0f75ba850ad2fa12b8cda2f0a933570825875b17\": not found" containerID="0a3da913a233cc458ee7df8f0f75ba850ad2fa12b8cda2f0a933570825875b17"
Jul 2 08:49:15.657700 kubelet[2146]: I0702 08:49:15.657670 2146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0a3da913a233cc458ee7df8f0f75ba850ad2fa12b8cda2f0a933570825875b17"} err="failed to get container status \"0a3da913a233cc458ee7df8f0f75ba850ad2fa12b8cda2f0a933570825875b17\": rpc error: code = NotFound desc = an error occurred when try to find container \"0a3da913a233cc458ee7df8f0f75ba850ad2fa12b8cda2f0a933570825875b17\": not found"
Jul 2 08:49:15.657880 kubelet[2146]: I0702 08:49:15.657868 2146 scope.go:117] "RemoveContainer" containerID="2cee991df7b71e3426e77733bf3e21a3fcf88026958f60ffdc1d0d41fb83580b"
Jul 2 08:49:15.660005 env[1243]: time="2024-07-02T08:49:15.659833711Z" level=info msg="RemoveContainer for \"2cee991df7b71e3426e77733bf3e21a3fcf88026958f60ffdc1d0d41fb83580b\""
Jul 2 08:49:15.707398 env[1243]: time="2024-07-02T08:49:15.707290445Z" level=info msg="RemoveContainer for \"2cee991df7b71e3426e77733bf3e21a3fcf88026958f60ffdc1d0d41fb83580b\" returns successfully"
Jul 2 08:49:15.708055 kubelet[2146]: I0702 08:49:15.708019 2146 scope.go:117] "RemoveContainer" containerID="b17b5bd77f2786a6532c8101026d1f00dab43ad03df345708f16a1c26ca082b0"
Jul 2 08:49:15.710853 env[1243]: time="2024-07-02T08:49:15.710793665Z" level=info msg="RemoveContainer for \"b17b5bd77f2786a6532c8101026d1f00dab43ad03df345708f16a1c26ca082b0\""
Jul 2 08:49:15.738170 env[1243]: time="2024-07-02T08:49:15.738014835Z" level=info msg="RemoveContainer for \"b17b5bd77f2786a6532c8101026d1f00dab43ad03df345708f16a1c26ca082b0\" returns successfully"
Jul 2 08:49:15.738951 kubelet[2146]: I0702 08:49:15.738912 2146 scope.go:117] "RemoveContainer" containerID="5e6dfbcdc2b737d785683671c30c8c9e6db7ecc1690b9710db73682c0bd24ba8"
Jul 2 08:49:15.741798 env[1243]: time="2024-07-02T08:49:15.741734486Z" level=info msg="RemoveContainer for \"5e6dfbcdc2b737d785683671c30c8c9e6db7ecc1690b9710db73682c0bd24ba8\""
Jul 2 08:49:15.771226 env[1243]: time="2024-07-02T08:49:15.771120125Z" level=info msg="RemoveContainer for \"5e6dfbcdc2b737d785683671c30c8c9e6db7ecc1690b9710db73682c0bd24ba8\" returns successfully"
Jul 2 08:49:15.771985 kubelet[2146]: I0702 08:49:15.771929 2146 scope.go:117] "RemoveContainer" containerID="c08bbe75454d26a650f399e74202da4eaa0f4aaca8e78d49985e6a7316ae5d1b"
Jul 2 08:49:15.775409 env[1243]: time="2024-07-02T08:49:15.775286766Z" level=info msg="RemoveContainer for \"c08bbe75454d26a650f399e74202da4eaa0f4aaca8e78d49985e6a7316ae5d1b\""
Jul 2 08:49:15.798052 env[1243]: time="2024-07-02T08:49:15.797975867Z" level=info msg="RemoveContainer for \"c08bbe75454d26a650f399e74202da4eaa0f4aaca8e78d49985e6a7316ae5d1b\" returns successfully"
Jul 2 08:49:15.798553 kubelet[2146]: I0702 08:49:15.798508 2146 scope.go:117] "RemoveContainer" containerID="ac0ac5dfcb3a54ace6df40cadab9608db92782fc23d6550e2866eebfb5cbca15"
Jul 2 08:49:15.800435 env[1243]: time="2024-07-02T08:49:15.800383939Z" level=info msg="RemoveContainer for \"ac0ac5dfcb3a54ace6df40cadab9608db92782fc23d6550e2866eebfb5cbca15\""
Jul 2 08:49:15.831912 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4098fbca0eb4581e1a02d9c93be2b0726c8c94e71d79fcb3e0caf21cff6d094-rootfs.mount: Deactivated successfully.
Jul 2 08:49:15.832189 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f4098fbca0eb4581e1a02d9c93be2b0726c8c94e71d79fcb3e0caf21cff6d094-shm.mount: Deactivated successfully.
Jul 2 08:49:15.832461 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b245d8e44fad538d1c9e7cb6f15fe3a59ab9d72d665d17ffde48affcae902b61-rootfs.mount: Deactivated successfully.
Jul 2 08:49:15.832664 systemd[1]: var-lib-kubelet-pods-7d2a14a1\x2d64c0\x2d449a\x2d8938\x2d7fa96f316c29-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddcrlg.mount: Deactivated successfully.
Jul 2 08:49:15.832958 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b245d8e44fad538d1c9e7cb6f15fe3a59ab9d72d665d17ffde48affcae902b61-shm.mount: Deactivated successfully.
Jul 2 08:49:15.833222 systemd[1]: var-lib-kubelet-pods-a035bb52\x2d9628\x2d4b0a\x2dbb63\x2d8208622a86a6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkxr94.mount: Deactivated successfully.
Jul 2 08:49:15.833505 systemd[1]: var-lib-kubelet-pods-a035bb52\x2d9628\x2d4b0a\x2dbb63\x2d8208622a86a6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jul 2 08:49:15.833801 systemd[1]: var-lib-kubelet-pods-a035bb52\x2d9628\x2d4b0a\x2dbb63\x2d8208622a86a6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 2 08:49:15.839095 env[1243]: time="2024-07-02T08:49:15.838033240Z" level=info msg="RemoveContainer for \"ac0ac5dfcb3a54ace6df40cadab9608db92782fc23d6550e2866eebfb5cbca15\" returns successfully"
Jul 2 08:49:15.840374 kubelet[2146]: I0702 08:49:15.840356 2146 scope.go:117] "RemoveContainer" containerID="2cee991df7b71e3426e77733bf3e21a3fcf88026958f60ffdc1d0d41fb83580b"
Jul 2 08:49:15.840939 env[1243]: time="2024-07-02T08:49:15.840832266Z" level=error msg="ContainerStatus for \"2cee991df7b71e3426e77733bf3e21a3fcf88026958f60ffdc1d0d41fb83580b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2cee991df7b71e3426e77733bf3e21a3fcf88026958f60ffdc1d0d41fb83580b\": not found"
Jul 2 08:49:15.841245 kubelet[2146]: E0702 08:49:15.841215 2146 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2cee991df7b71e3426e77733bf3e21a3fcf88026958f60ffdc1d0d41fb83580b\": not found" containerID="2cee991df7b71e3426e77733bf3e21a3fcf88026958f60ffdc1d0d41fb83580b"
Jul 2 08:49:15.841309 kubelet[2146]: I0702 08:49:15.841289 2146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2cee991df7b71e3426e77733bf3e21a3fcf88026958f60ffdc1d0d41fb83580b"} err="failed to get container status \"2cee991df7b71e3426e77733bf3e21a3fcf88026958f60ffdc1d0d41fb83580b\": rpc error: code = NotFound desc = an error occurred when try to find container \"2cee991df7b71e3426e77733bf3e21a3fcf88026958f60ffdc1d0d41fb83580b\": not found"
Jul 2 08:49:15.842454 kubelet[2146]: I0702 08:49:15.841441 2146 scope.go:117] "RemoveContainer" containerID="b17b5bd77f2786a6532c8101026d1f00dab43ad03df345708f16a1c26ca082b0"
Jul 2 08:49:15.842454 kubelet[2146]: E0702 08:49:15.842007 2146 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b17b5bd77f2786a6532c8101026d1f00dab43ad03df345708f16a1c26ca082b0\": not found" containerID="b17b5bd77f2786a6532c8101026d1f00dab43ad03df345708f16a1c26ca082b0"
Jul 2 08:49:15.842454 kubelet[2146]: I0702 08:49:15.842056 2146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b17b5bd77f2786a6532c8101026d1f00dab43ad03df345708f16a1c26ca082b0"} err="failed to get container status \"b17b5bd77f2786a6532c8101026d1f00dab43ad03df345708f16a1c26ca082b0\": rpc error: code = NotFound desc = an error occurred when try to find container \"b17b5bd77f2786a6532c8101026d1f00dab43ad03df345708f16a1c26ca082b0\": not found"
Jul 2 08:49:15.842454 kubelet[2146]: I0702 08:49:15.842068 2146 scope.go:117] "RemoveContainer" containerID="5e6dfbcdc2b737d785683671c30c8c9e6db7ecc1690b9710db73682c0bd24ba8"
Jul 2 08:49:15.842595 env[1243]: time="2024-07-02T08:49:15.841792456Z" level=error msg="ContainerStatus for \"b17b5bd77f2786a6532c8101026d1f00dab43ad03df345708f16a1c26ca082b0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b17b5bd77f2786a6532c8101026d1f00dab43ad03df345708f16a1c26ca082b0\": not found"
Jul 2 08:49:15.842595 env[1243]: time="2024-07-02T08:49:15.842353024Z" level=error msg="ContainerStatus for \"5e6dfbcdc2b737d785683671c30c8c9e6db7ecc1690b9710db73682c0bd24ba8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5e6dfbcdc2b737d785683671c30c8c9e6db7ecc1690b9710db73682c0bd24ba8\": not found"
Jul 2 08:49:15.842746 kubelet[2146]: E0702 08:49:15.842733 2146 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5e6dfbcdc2b737d785683671c30c8c9e6db7ecc1690b9710db73682c0bd24ba8\": not found" containerID="5e6dfbcdc2b737d785683671c30c8c9e6db7ecc1690b9710db73682c0bd24ba8"
Jul 2 08:49:15.842857 kubelet[2146]: I0702 08:49:15.842846 2146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5e6dfbcdc2b737d785683671c30c8c9e6db7ecc1690b9710db73682c0bd24ba8"} err="failed to get container status \"5e6dfbcdc2b737d785683671c30c8c9e6db7ecc1690b9710db73682c0bd24ba8\": rpc error: code = NotFound desc = an error occurred when try to find container \"5e6dfbcdc2b737d785683671c30c8c9e6db7ecc1690b9710db73682c0bd24ba8\": not found"
Jul 2 08:49:15.842952 kubelet[2146]: I0702 08:49:15.842941 2146 scope.go:117] "RemoveContainer" containerID="c08bbe75454d26a650f399e74202da4eaa0f4aaca8e78d49985e6a7316ae5d1b"
Jul 2 08:49:15.843424 env[1243]: time="2024-07-02T08:49:15.843261167Z" level=error msg="ContainerStatus for \"c08bbe75454d26a650f399e74202da4eaa0f4aaca8e78d49985e6a7316ae5d1b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c08bbe75454d26a650f399e74202da4eaa0f4aaca8e78d49985e6a7316ae5d1b\": not found"
Jul 2 08:49:15.843612 kubelet[2146]: E0702 08:49:15.843589 2146 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c08bbe75454d26a650f399e74202da4eaa0f4aaca8e78d49985e6a7316ae5d1b\": not found" containerID="c08bbe75454d26a650f399e74202da4eaa0f4aaca8e78d49985e6a7316ae5d1b"
Jul 2 08:49:15.843707 kubelet[2146]: I0702 08:49:15.843696 2146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c08bbe75454d26a650f399e74202da4eaa0f4aaca8e78d49985e6a7316ae5d1b"} err="failed to get container status \"c08bbe75454d26a650f399e74202da4eaa0f4aaca8e78d49985e6a7316ae5d1b\": rpc error: code = NotFound desc = an error occurred when try to find container \"c08bbe75454d26a650f399e74202da4eaa0f4aaca8e78d49985e6a7316ae5d1b\": not found"
Jul 2 08:49:15.843797 kubelet[2146]: I0702 08:49:15.843786 2146 scope.go:117] "RemoveContainer" containerID="ac0ac5dfcb3a54ace6df40cadab9608db92782fc23d6550e2866eebfb5cbca15"
Jul 2 08:49:15.844284 env[1243]: time="2024-07-02T08:49:15.844192963Z" level=error msg="ContainerStatus for \"ac0ac5dfcb3a54ace6df40cadab9608db92782fc23d6550e2866eebfb5cbca15\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ac0ac5dfcb3a54ace6df40cadab9608db92782fc23d6550e2866eebfb5cbca15\": not found"
Jul 2 08:49:15.844498 kubelet[2146]: E0702 08:49:15.844487 2146 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ac0ac5dfcb3a54ace6df40cadab9608db92782fc23d6550e2866eebfb5cbca15\": not found" containerID="ac0ac5dfcb3a54ace6df40cadab9608db92782fc23d6550e2866eebfb5cbca15"
Jul 2 08:49:15.844607 kubelet[2146]: I0702 08:49:15.844597 2146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ac0ac5dfcb3a54ace6df40cadab9608db92782fc23d6550e2866eebfb5cbca15"} err="failed to get container status \"ac0ac5dfcb3a54ace6df40cadab9608db92782fc23d6550e2866eebfb5cbca15\": rpc error: code = NotFound desc = an error occurred when try to find container \"ac0ac5dfcb3a54ace6df40cadab9608db92782fc23d6550e2866eebfb5cbca15\": not found"
Jul 2 08:49:15.978128 kubelet[2146]: I0702 08:49:15.978073 2146 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7d2a14a1-64c0-449a-8938-7fa96f316c29" path="/var/lib/kubelet/pods/7d2a14a1-64c0-449a-8938-7fa96f316c29/volumes"
Jul 2 08:49:15.984641 kubelet[2146]: I0702 08:49:15.979479 2146 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a035bb52-9628-4b0a-bb63-8208622a86a6" path="/var/lib/kubelet/pods/a035bb52-9628-4b0a-bb63-8208622a86a6/volumes"
Jul 2 08:49:16.143873 kubelet[2146]: E0702 08:49:16.143701 2146 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 2 08:49:16.950986 sshd[3668]: pam_unix(sshd:session): session closed for user core
Jul 2 08:49:16.954750 systemd[1]: Started sshd@22-172.24.4.53:22-172.24.4.1:33590.service.
Jul 2 08:49:16.956665 systemd[1]: sshd@21-172.24.4.53:22-172.24.4.1:35940.service: Deactivated successfully.
Jul 2 08:49:16.959497 systemd-logind[1229]: Session 22 logged out. Waiting for processes to exit.
Jul 2 08:49:16.960178 systemd[1]: session-22.scope: Deactivated successfully.
Jul 2 08:49:16.967184 systemd-logind[1229]: Removed session 22.
Jul 2 08:49:18.175278 sshd[3842]: Accepted publickey for core from 172.24.4.1 port 33590 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:49:18.178071 sshd[3842]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:49:18.189139 systemd[1]: Started session-23.scope.
Jul 2 08:49:18.190123 systemd-logind[1229]: New session 23 of user core.
Jul 2 08:49:19.865954 kubelet[2146]: I0702 08:49:19.865907 2146 topology_manager.go:215] "Topology Admit Handler" podUID="5171f058-bd31-4cb4-a73c-757be0867eec" podNamespace="kube-system" podName="cilium-kj2nq"
Jul 2 08:49:19.868003 kubelet[2146]: E0702 08:49:19.867974 2146 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a035bb52-9628-4b0a-bb63-8208622a86a6" containerName="mount-bpf-fs"
Jul 2 08:49:19.868097 kubelet[2146]: E0702 08:49:19.868057 2146 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a035bb52-9628-4b0a-bb63-8208622a86a6" containerName="clean-cilium-state"
Jul 2 08:49:19.868097 kubelet[2146]: E0702 08:49:19.868071 2146 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a035bb52-9628-4b0a-bb63-8208622a86a6" containerName="cilium-agent"
Jul 2 08:49:19.868097 kubelet[2146]: E0702 08:49:19.868081 2146 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a035bb52-9628-4b0a-bb63-8208622a86a6" containerName="mount-cgroup"
Jul 2 08:49:19.868097 kubelet[2146]: E0702 08:49:19.868088 2146 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a035bb52-9628-4b0a-bb63-8208622a86a6" containerName="apply-sysctl-overwrites"
Jul 2 08:49:19.868097 kubelet[2146]: E0702 08:49:19.868097 2146 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7d2a14a1-64c0-449a-8938-7fa96f316c29" containerName="cilium-operator"
Jul 2 08:49:19.868251 kubelet[2146]: I0702 08:49:19.868134 2146 memory_manager.go:346] "RemoveStaleState removing state" podUID="a035bb52-9628-4b0a-bb63-8208622a86a6" containerName="cilium-agent"
Jul 2 08:49:19.868251 kubelet[2146]: I0702 08:49:19.868143 2146 memory_manager.go:346] "RemoveStaleState removing state" podUID="7d2a14a1-64c0-449a-8938-7fa96f316c29" containerName="cilium-operator"
Jul 2 08:49:20.010123 kubelet[2146]: I0702 08:49:20.010068 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5171f058-bd31-4cb4-a73c-757be0867eec-etc-cni-netd\") pod \"cilium-kj2nq\" (UID: \"5171f058-bd31-4cb4-a73c-757be0867eec\") " pod="kube-system/cilium-kj2nq"
Jul 2 08:49:20.010273 kubelet[2146]: I0702 08:49:20.010169 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5171f058-bd31-4cb4-a73c-757be0867eec-host-proc-sys-net\") pod \"cilium-kj2nq\" (UID: \"5171f058-bd31-4cb4-a73c-757be0867eec\") " pod="kube-system/cilium-kj2nq"
Jul 2 08:49:20.010273 kubelet[2146]: I0702 08:49:20.010232 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5171f058-bd31-4cb4-a73c-757be0867eec-host-proc-sys-kernel\") pod \"cilium-kj2nq\" (UID: \"5171f058-bd31-4cb4-a73c-757be0867eec\") " pod="kube-system/cilium-kj2nq"
Jul 2 08:49:20.010378 kubelet[2146]: I0702 08:49:20.010292 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5171f058-bd31-4cb4-a73c-757be0867eec-clustermesh-secrets\") pod \"cilium-kj2nq\" (UID: \"5171f058-bd31-4cb4-a73c-757be0867eec\") " pod="kube-system/cilium-kj2nq"
Jul 2 08:49:20.010421 kubelet[2146]: I0702 08:49:20.010389 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5171f058-bd31-4cb4-a73c-757be0867eec-cilium-run\") pod \"cilium-kj2nq\" (UID: \"5171f058-bd31-4cb4-a73c-757be0867eec\") " pod="kube-system/cilium-kj2nq"
Jul 2 08:49:20.010516 kubelet[2146]: I0702 08:49:20.010486 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5171f058-bd31-4cb4-a73c-757be0867eec-cilium-cgroup\") pod \"cilium-kj2nq\" (UID: \"5171f058-bd31-4cb4-a73c-757be0867eec\") " pod="kube-system/cilium-kj2nq"
Jul 2 08:49:20.010566 kubelet[2146]: I0702 08:49:20.010557 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5171f058-bd31-4cb4-a73c-757be0867eec-cni-path\") pod \"cilium-kj2nq\" (UID: \"5171f058-bd31-4cb4-a73c-757be0867eec\") " pod="kube-system/cilium-kj2nq"
Jul 2 08:49:20.010640 kubelet[2146]: I0702 08:49:20.010615 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5171f058-bd31-4cb4-a73c-757be0867eec-xtables-lock\") pod \"cilium-kj2nq\" (UID: \"5171f058-bd31-4cb4-a73c-757be0867eec\") " pod="kube-system/cilium-kj2nq"
Jul 2 08:49:20.010699 kubelet[2146]: I0702 08:49:20.010681 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5171f058-bd31-4cb4-a73c-757be0867eec-cilium-config-path\") pod \"cilium-kj2nq\" (UID: \"5171f058-bd31-4cb4-a73c-757be0867eec\") " pod="kube-system/cilium-kj2nq"
Jul 2 08:49:20.010787 kubelet[2146]: I0702 08:49:20.010762 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5171f058-bd31-4cb4-a73c-757be0867eec-cilium-ipsec-secrets\") pod \"cilium-kj2nq\" (UID: \"5171f058-bd31-4cb4-a73c-757be0867eec\") " pod="kube-system/cilium-kj2nq"
Jul 2 08:49:20.011928 kubelet[2146]: I0702 08:49:20.010829 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5171f058-bd31-4cb4-a73c-757be0867eec-hubble-tls\") pod \"cilium-kj2nq\" (UID: \"5171f058-bd31-4cb4-a73c-757be0867eec\") " pod="kube-system/cilium-kj2nq"
Jul 2 08:49:20.011928 kubelet[2146]: I0702 08:49:20.010887 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr85j\" (UniqueName: \"kubernetes.io/projected/5171f058-bd31-4cb4-a73c-757be0867eec-kube-api-access-kr85j\") pod \"cilium-kj2nq\" (UID: \"5171f058-bd31-4cb4-a73c-757be0867eec\") " pod="kube-system/cilium-kj2nq"
Jul 2 08:49:20.011928 kubelet[2146]: I0702 08:49:20.010947 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5171f058-bd31-4cb4-a73c-757be0867eec-bpf-maps\") pod \"cilium-kj2nq\" (UID: \"5171f058-bd31-4cb4-a73c-757be0867eec\") " pod="kube-system/cilium-kj2nq"
Jul 2 08:49:20.011928 kubelet[2146]: I0702 08:49:20.010997 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5171f058-bd31-4cb4-a73c-757be0867eec-hostproc\") pod \"cilium-kj2nq\" (UID: \"5171f058-bd31-4cb4-a73c-757be0867eec\") " pod="kube-system/cilium-kj2nq"
Jul 2 08:49:20.011928 kubelet[2146]: I0702 08:49:20.011049 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5171f058-bd31-4cb4-a73c-757be0867eec-lib-modules\") pod \"cilium-kj2nq\" (UID: \"5171f058-bd31-4cb4-a73c-757be0867eec\") " pod="kube-system/cilium-kj2nq"
Jul 2 08:49:20.020658 sshd[3842]: pam_unix(sshd:session): session closed for user core
Jul 2 08:49:20.026467 systemd[1]: Started sshd@23-172.24.4.53:22-172.24.4.1:33602.service.
Jul 2 08:49:20.029392 systemd[1]: sshd@22-172.24.4.53:22-172.24.4.1:33590.service: Deactivated successfully.
Jul 2 08:49:20.042567 systemd-logind[1229]: Session 23 logged out. Waiting for processes to exit.
Jul 2 08:49:20.042902 systemd[1]: session-23.scope: Deactivated successfully.
Jul 2 08:49:20.050605 systemd-logind[1229]: Removed session 23.
Jul 2 08:49:20.054315 kubelet[2146]: I0702 08:49:20.054254 2146 setters.go:552] "Node became not ready" node="ci-3510-3-5-4-c82a94ccd3.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T08:49:20Z","lastTransitionTime":"2024-07-02T08:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 2 08:49:20.201533 env[1243]: time="2024-07-02T08:49:20.200563174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kj2nq,Uid:5171f058-bd31-4cb4-a73c-757be0867eec,Namespace:kube-system,Attempt:0,}"
Jul 2 08:49:20.236107 env[1243]: time="2024-07-02T08:49:20.235551020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 08:49:20.236107 env[1243]: time="2024-07-02T08:49:20.236048302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 08:49:20.236506 env[1243]: time="2024-07-02T08:49:20.236071436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 08:49:20.236875 env[1243]: time="2024-07-02T08:49:20.236761546Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5a2ae80f8b76d0cd43ce2d02862712b201ff7e70c305e6485771eb9148ca420e pid=3869 runtime=io.containerd.runc.v2
Jul 2 08:49:20.283649 env[1243]: time="2024-07-02T08:49:20.283613350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kj2nq,Uid:5171f058-bd31-4cb4-a73c-757be0867eec,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a2ae80f8b76d0cd43ce2d02862712b201ff7e70c305e6485771eb9148ca420e\""
Jul 2 08:49:20.286576 env[1243]: time="2024-07-02T08:49:20.286539982Z" level=info msg="CreateContainer within sandbox \"5a2ae80f8b76d0cd43ce2d02862712b201ff7e70c305e6485771eb9148ca420e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 2 08:49:20.302645 env[1243]: time="2024-07-02T08:49:20.302606511Z" level=info msg="CreateContainer within sandbox \"5a2ae80f8b76d0cd43ce2d02862712b201ff7e70c305e6485771eb9148ca420e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cb375f5d84349c2c2671d900ce8384636994f272a273a04d9ab14dbc0a6c3ae3\""
Jul 2 08:49:20.303483 env[1243]: time="2024-07-02T08:49:20.303460866Z" level=info msg="StartContainer for \"cb375f5d84349c2c2671d900ce8384636994f272a273a04d9ab14dbc0a6c3ae3\""
Jul 2 08:49:20.352680 env[1243]: time="2024-07-02T08:49:20.352641562Z" level=info msg="StartContainer for \"cb375f5d84349c2c2671d900ce8384636994f272a273a04d9ab14dbc0a6c3ae3\" returns successfully"
Jul 2 08:49:20.421462 env[1243]: time="2024-07-02T08:49:20.421405174Z" level=info msg="shim disconnected" id=cb375f5d84349c2c2671d900ce8384636994f272a273a04d9ab14dbc0a6c3ae3
Jul 2 08:49:20.421682 env[1243]: time="2024-07-02T08:49:20.421661770Z" level=warning msg="cleaning up after shim disconnected" id=cb375f5d84349c2c2671d900ce8384636994f272a273a04d9ab14dbc0a6c3ae3 namespace=k8s.io
Jul 2 08:49:20.421773 env[1243]: time="2024-07-02T08:49:20.421757678Z" level=info msg="cleaning up dead shim"
Jul 2 08:49:20.429501 env[1243]: time="2024-07-02T08:49:20.429444675Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:49:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3954 runtime=io.containerd.runc.v2\n"
Jul 2 08:49:20.625430 env[1243]: time="2024-07-02T08:49:20.625231423Z" level=info msg="CreateContainer within sandbox \"5a2ae80f8b76d0cd43ce2d02862712b201ff7e70c305e6485771eb9148ca420e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 2 08:49:20.656388 env[1243]: time="2024-07-02T08:49:20.656260812Z" level=info msg="CreateContainer within sandbox \"5a2ae80f8b76d0cd43ce2d02862712b201ff7e70c305e6485771eb9148ca420e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5edbcec3173a596f8b12db9e86ee6f1babe1694cce45c5330303f25f8ed83f48\""
Jul 2 08:49:20.665781 env[1243]: time="2024-07-02T08:49:20.665675696Z" level=info msg="StartContainer for \"5edbcec3173a596f8b12db9e86ee6f1babe1694cce45c5330303f25f8ed83f48\""
Jul 2 08:49:20.769524 env[1243]: time="2024-07-02T08:49:20.768973781Z" level=info msg="StartContainer for \"5edbcec3173a596f8b12db9e86ee6f1babe1694cce45c5330303f25f8ed83f48\" returns successfully"
Jul 2 08:49:20.806144 env[1243]: time="2024-07-02T08:49:20.806073687Z" level=info msg="shim disconnected" id=5edbcec3173a596f8b12db9e86ee6f1babe1694cce45c5330303f25f8ed83f48
Jul 2 08:49:20.806627 env[1243]: time="2024-07-02T08:49:20.806585046Z" level=warning msg="cleaning up after shim disconnected" id=5edbcec3173a596f8b12db9e86ee6f1babe1694cce45c5330303f25f8ed83f48 namespace=k8s.io
Jul 2 08:49:20.806851 env[1243]: time="2024-07-02T08:49:20.806815323Z" level=info msg="cleaning up dead shim"
Jul 2 08:49:20.817518 env[1243]: time="2024-07-02T08:49:20.817448678Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:49:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4016 runtime=io.containerd.runc.v2\n"
Jul 2 08:49:21.145728 kubelet[2146]: E0702 08:49:21.145264 2146 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 2 08:49:21.308686 sshd[3853]: Accepted publickey for core from 172.24.4.1 port 33602 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q
Jul 2 08:49:21.312703 sshd[3853]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:49:21.325302 systemd-logind[1229]: New session 24 of user core.
Jul 2 08:49:21.328243 systemd[1]: Started session-24.scope.
Jul 2 08:49:21.645413 env[1243]: time="2024-07-02T08:49:21.639929533Z" level=info msg="CreateContainer within sandbox \"5a2ae80f8b76d0cd43ce2d02862712b201ff7e70c305e6485771eb9148ca420e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 2 08:49:21.700675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount240569630.mount: Deactivated successfully.
Jul 2 08:49:21.717898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3807223464.mount: Deactivated successfully.
Jul 2 08:49:21.743532 env[1243]: time="2024-07-02T08:49:21.743415717Z" level=info msg="CreateContainer within sandbox \"5a2ae80f8b76d0cd43ce2d02862712b201ff7e70c305e6485771eb9148ca420e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2fccadd0668b0d3f20fde3c46f7aab35b223a1d6454edec919725cca7276fdf4\""
Jul 2 08:49:21.744169 env[1243]: time="2024-07-02T08:49:21.743995774Z" level=info msg="StartContainer for \"2fccadd0668b0d3f20fde3c46f7aab35b223a1d6454edec919725cca7276fdf4\""
Jul 2 08:49:21.824262 env[1243]: time="2024-07-02T08:49:21.824214311Z" level=info msg="StartContainer for \"2fccadd0668b0d3f20fde3c46f7aab35b223a1d6454edec919725cca7276fdf4\" returns successfully"
Jul 2 08:49:21.870026 env[1243]: time="2024-07-02T08:49:21.869911238Z" level=info msg="shim disconnected" id=2fccadd0668b0d3f20fde3c46f7aab35b223a1d6454edec919725cca7276fdf4
Jul 2 08:49:21.870236 env[1243]: time="2024-07-02T08:49:21.870218398Z" level=warning msg="cleaning up after shim disconnected" id=2fccadd0668b0d3f20fde3c46f7aab35b223a1d6454edec919725cca7276fdf4 namespace=k8s.io
Jul 2 08:49:21.870307 env[1243]: time="2024-07-02T08:49:21.870293027Z" level=info msg="cleaning up dead shim"
Jul 2 08:49:21.884451 env[1243]: time="2024-07-02T08:49:21.884415114Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:49:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4081 runtime=io.containerd.runc.v2\n"
Jul 2 08:49:22.075122 systemd[1]: Started sshd@24-172.24.4.53:22-172.24.4.1:33614.service.
Jul 2 08:49:22.075323 sshd[3853]: pam_unix(sshd:session): session closed for user core
Jul 2 08:49:22.079465 systemd[1]: sshd@23-172.24.4.53:22-172.24.4.1:33602.service: Deactivated successfully.
Jul 2 08:49:22.081749 systemd-logind[1229]: Session 24 logged out. Waiting for processes to exit.
Jul 2 08:49:22.082019 systemd[1]: session-24.scope: Deactivated successfully.
Jul 2 08:49:22.085858 systemd-logind[1229]: Removed session 24.
Jul 2 08:49:22.638065 env[1243]: time="2024-07-02T08:49:22.637943467Z" level=info msg="StopPodSandbox for \"5a2ae80f8b76d0cd43ce2d02862712b201ff7e70c305e6485771eb9148ca420e\""
Jul 2 08:49:22.646438 env[1243]: time="2024-07-02T08:49:22.638084458Z" level=info msg="Container to stop \"5edbcec3173a596f8b12db9e86ee6f1babe1694cce45c5330303f25f8ed83f48\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 08:49:22.646438 env[1243]: time="2024-07-02T08:49:22.638123981Z" level=info msg="Container to stop \"2fccadd0668b0d3f20fde3c46f7aab35b223a1d6454edec919725cca7276fdf4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 08:49:22.646438 env[1243]: time="2024-07-02T08:49:22.638154118Z" level=info msg="Container to stop \"cb375f5d84349c2c2671d900ce8384636994f272a273a04d9ab14dbc0a6c3ae3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 08:49:22.648152 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5a2ae80f8b76d0cd43ce2d02862712b201ff7e70c305e6485771eb9148ca420e-shm.mount: Deactivated successfully.
Jul 2 08:49:22.718674 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a2ae80f8b76d0cd43ce2d02862712b201ff7e70c305e6485771eb9148ca420e-rootfs.mount: Deactivated successfully.
Jul 2 08:49:22.731189 env[1243]: time="2024-07-02T08:49:22.731109133Z" level=info msg="shim disconnected" id=5a2ae80f8b76d0cd43ce2d02862712b201ff7e70c305e6485771eb9148ca420e
Jul 2 08:49:22.731380 env[1243]: time="2024-07-02T08:49:22.731194441Z" level=warning msg="cleaning up after shim disconnected" id=5a2ae80f8b76d0cd43ce2d02862712b201ff7e70c305e6485771eb9148ca420e namespace=k8s.io
Jul 2 08:49:22.731380 env[1243]: time="2024-07-02T08:49:22.731219969Z" level=info msg="cleaning up dead shim"
Jul 2 08:49:22.739600 env[1243]: time="2024-07-02T08:49:22.739558064Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:49:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4121 runtime=io.containerd.runc.v2\n"
Jul 2 08:49:22.740056 env[1243]: time="2024-07-02T08:49:22.740029629Z" level=info msg="TearDown network for sandbox \"5a2ae80f8b76d0cd43ce2d02862712b201ff7e70c305e6485771eb9148ca420e\" successfully"
Jul 2 08:49:22.740157 env[1243]: time="2024-07-02T08:49:22.740138191Z" level=info msg="StopPodSandbox for \"5a2ae80f8b76d0cd43ce2d02862712b201ff7e70c305e6485771eb9148ca420e\" returns successfully"
Jul 2 08:49:22.846991 kubelet[2146]: I0702 08:49:22.846887 2146 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5171f058-bd31-4cb4-a73c-757be0867eec-cilium-ipsec-secrets\") pod \"5171f058-bd31-4cb4-a73c-757be0867eec\" (UID: \"5171f058-bd31-4cb4-a73c-757be0867eec\") "
Jul 2 08:49:22.846991 kubelet[2146]: I0702 08:49:22.846995 2146 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5171f058-bd31-4cb4-a73c-757be0867eec-hostproc\") pod \"5171f058-bd31-4cb4-a73c-757be0867eec\" (UID: \"5171f058-bd31-4cb4-a73c-757be0867eec\") "
Jul 2 08:49:22.847453 kubelet[2146]: I0702 08:49:22.847046 2146 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5171f058-bd31-4cb4-a73c-757be0867eec-hubble-tls\") pod \"5171f058-bd31-4cb4-a73c-757be0867eec\" (UID: \"5171f058-bd31-4cb4-a73c-757be0867eec\") "
Jul 2 08:49:22.847453 kubelet[2146]: I0702 08:49:22.847098 2146 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5171f058-bd31-4cb4-a73c-757be0867eec-clustermesh-secrets\") pod \"5171f058-bd31-4cb4-a73c-757be0867eec\" (UID: \"5171f058-bd31-4cb4-a73c-757be0867eec\") "
Jul 2 08:49:22.847453 kubelet[2146]: I0702 08:49:22.847144 2146 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5171f058-bd31-4cb4-a73c-757be0867eec-lib-modules\") pod \"5171f058-bd31-4cb4-a73c-757be0867eec\" (UID: \"5171f058-bd31-4cb4-a73c-757be0867eec\") "
Jul 2 08:49:22.847453 kubelet[2146]: I0702 08:49:22.847191 2146 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5171f058-bd31-4cb4-a73c-757be0867eec-cilium-run\") pod \"5171f058-bd31-4cb4-a73c-757be0867eec\" (UID: \"5171f058-bd31-4cb4-a73c-757be0867eec\") "
Jul 2 08:49:22.847453 kubelet[2146]: I0702 08:49:22.847237 2146 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5171f058-bd31-4cb4-a73c-757be0867eec-host-proc-sys-net\") pod \"5171f058-bd31-4cb4-a73c-757be0867eec\" (UID: \"5171f058-bd31-4cb4-a73c-757be0867eec\") "
Jul 2 08:49:22.847453 kubelet[2146]: I0702 08:49:22.847292 2146 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5171f058-bd31-4cb4-a73c-757be0867eec-bpf-maps\") pod \"5171f058-bd31-4cb4-a73c-757be0867eec\" (UID: \"5171f058-bd31-4cb4-a73c-757be0867eec\") "
Jul 2 08:49:22.847622 kubelet[2146]: I0702 08:49:22.847421 2146 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5171f058-bd31-4cb4-a73c-757be0867eec-etc-cni-netd\") pod \"5171f058-bd31-4cb4-a73c-757be0867eec\" (UID: \"5171f058-bd31-4cb4-a73c-757be0867eec\") "
Jul 2 08:49:22.847622 kubelet[2146]: I0702 08:49:22.847478 2146 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5171f058-bd31-4cb4-a73c-757be0867eec-cni-path\") pod \"5171f058-bd31-4cb4-a73c-757be0867eec\" (UID: \"5171f058-bd31-4cb4-a73c-757be0867eec\") "
Jul 2 08:49:22.847622 kubelet[2146]: I0702 08:49:22.847537 2146 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kr85j\" (UniqueName: \"kubernetes.io/projected/5171f058-bd31-4cb4-a73c-757be0867eec-kube-api-access-kr85j\") pod \"5171f058-bd31-4cb4-a73c-757be0867eec\" (UID: \"5171f058-bd31-4cb4-a73c-757be0867eec\") "
Jul 2 08:49:22.847622 kubelet[2146]: I0702 08:49:22.847591 2146 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5171f058-bd31-4cb4-a73c-757be0867eec-xtables-lock\") pod \"5171f058-bd31-4cb4-a73c-757be0867eec\" (UID: \"5171f058-bd31-4cb4-a73c-757be0867eec\") "
Jul 2 08:49:22.847737 kubelet[2146]: I0702 08:49:22.847644 2146 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5171f058-bd31-4cb4-a73c-757be0867eec-host-proc-sys-kernel\") pod \"5171f058-bd31-4cb4-a73c-757be0867eec\" (UID: \"5171f058-bd31-4cb4-a73c-757be0867eec\") "
Jul 2 08:49:22.847737 kubelet[2146]: I0702 08:49:22.847692 2146 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5171f058-bd31-4cb4-a73c-757be0867eec-cilium-cgroup\") pod \"5171f058-bd31-4cb4-a73c-757be0867eec\" (UID: \"5171f058-bd31-4cb4-a73c-757be0867eec\") "
Jul 2 08:49:22.847796 kubelet[2146]: I0702 08:49:22.847750 2146 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5171f058-bd31-4cb4-a73c-757be0867eec-cilium-config-path\") pod \"5171f058-bd31-4cb4-a73c-757be0867eec\" (UID: \"5171f058-bd31-4cb4-a73c-757be0867eec\") "
Jul 2 08:49:22.850423 kubelet[2146]: I0702 08:49:22.850393 2146 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5171f058-bd31-4cb4-a73c-757be0867eec-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5171f058-bd31-4cb4-a73c-757be0867eec" (UID: "5171f058-bd31-4cb4-a73c-757be0867eec"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 08:49:22.855097 systemd[1]: var-lib-kubelet-pods-5171f058\x2dbd31\x2d4cb4\x2da73c\x2d757be0867eec-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Jul 2 08:49:22.858854 kubelet[2146]: I0702 08:49:22.850536 2146 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5171f058-bd31-4cb4-a73c-757be0867eec-hostproc" (OuterVolumeSpecName: "hostproc") pod "5171f058-bd31-4cb4-a73c-757be0867eec" (UID: "5171f058-bd31-4cb4-a73c-757be0867eec"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:49:22.858854 kubelet[2146]: I0702 08:49:22.850807 2146 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5171f058-bd31-4cb4-a73c-757be0867eec-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5171f058-bd31-4cb4-a73c-757be0867eec" (UID: "5171f058-bd31-4cb4-a73c-757be0867eec"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:49:22.859007 kubelet[2146]: I0702 08:49:22.851103 2146 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5171f058-bd31-4cb4-a73c-757be0867eec-cni-path" (OuterVolumeSpecName: "cni-path") pod "5171f058-bd31-4cb4-a73c-757be0867eec" (UID: "5171f058-bd31-4cb4-a73c-757be0867eec"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:49:22.859007 kubelet[2146]: I0702 08:49:22.858643 2146 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5171f058-bd31-4cb4-a73c-757be0867eec-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5171f058-bd31-4cb4-a73c-757be0867eec" (UID: "5171f058-bd31-4cb4-a73c-757be0867eec"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:49:22.859007 kubelet[2146]: I0702 08:49:22.858727 2146 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5171f058-bd31-4cb4-a73c-757be0867eec-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5171f058-bd31-4cb4-a73c-757be0867eec" (UID: "5171f058-bd31-4cb4-a73c-757be0867eec"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:49:22.859007 kubelet[2146]: I0702 08:49:22.858758 2146 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5171f058-bd31-4cb4-a73c-757be0867eec-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5171f058-bd31-4cb4-a73c-757be0867eec" (UID: "5171f058-bd31-4cb4-a73c-757be0867eec"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:49:22.859007 kubelet[2146]: I0702 08:49:22.858786 2146 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5171f058-bd31-4cb4-a73c-757be0867eec-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5171f058-bd31-4cb4-a73c-757be0867eec" (UID: "5171f058-bd31-4cb4-a73c-757be0867eec"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:49:22.859350 kubelet[2146]: I0702 08:49:22.859035 2146 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5171f058-bd31-4cb4-a73c-757be0867eec-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "5171f058-bd31-4cb4-a73c-757be0867eec" (UID: "5171f058-bd31-4cb4-a73c-757be0867eec"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 2 08:49:22.859350 kubelet[2146]: I0702 08:49:22.859121 2146 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5171f058-bd31-4cb4-a73c-757be0867eec-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5171f058-bd31-4cb4-a73c-757be0867eec" (UID: "5171f058-bd31-4cb4-a73c-757be0867eec"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:49:22.859350 kubelet[2146]: I0702 08:49:22.859180 2146 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5171f058-bd31-4cb4-a73c-757be0867eec-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5171f058-bd31-4cb4-a73c-757be0867eec" (UID: "5171f058-bd31-4cb4-a73c-757be0867eec"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:49:22.859350 kubelet[2146]: I0702 08:49:22.859221 2146 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5171f058-bd31-4cb4-a73c-757be0867eec-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5171f058-bd31-4cb4-a73c-757be0867eec" (UID: "5171f058-bd31-4cb4-a73c-757be0867eec"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:49:22.862525 systemd[1]: var-lib-kubelet-pods-5171f058\x2dbd31\x2d4cb4\x2da73c\x2d757be0867eec-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 2 08:49:22.865122 kubelet[2146]: I0702 08:49:22.865059 2146 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5171f058-bd31-4cb4-a73c-757be0867eec-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5171f058-bd31-4cb4-a73c-757be0867eec" (UID: "5171f058-bd31-4cb4-a73c-757be0867eec"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 08:49:22.868044 kubelet[2146]: I0702 08:49:22.867998 2146 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5171f058-bd31-4cb4-a73c-757be0867eec-kube-api-access-kr85j" (OuterVolumeSpecName: "kube-api-access-kr85j") pod "5171f058-bd31-4cb4-a73c-757be0867eec" (UID: "5171f058-bd31-4cb4-a73c-757be0867eec"). InnerVolumeSpecName "kube-api-access-kr85j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 08:49:22.868784 kubelet[2146]: I0702 08:49:22.868737 2146 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5171f058-bd31-4cb4-a73c-757be0867eec-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5171f058-bd31-4cb4-a73c-757be0867eec" (UID: "5171f058-bd31-4cb4-a73c-757be0867eec"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 2 08:49:22.950142 kubelet[2146]: I0702 08:49:22.948432 2146 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5171f058-bd31-4cb4-a73c-757be0867eec-etc-cni-netd\") on node \"ci-3510-3-5-4-c82a94ccd3.novalocal\" DevicePath \"\""
Jul 2 08:49:22.950412 kubelet[2146]: I0702 08:49:22.950375 2146 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5171f058-bd31-4cb4-a73c-757be0867eec-cni-path\") on node \"ci-3510-3-5-4-c82a94ccd3.novalocal\" DevicePath \"\""
Jul 2 08:49:22.950492 kubelet[2146]: I0702 08:49:22.950440 2146 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-kr85j\" (UniqueName: \"kubernetes.io/projected/5171f058-bd31-4cb4-a73c-757be0867eec-kube-api-access-kr85j\") on node \"ci-3510-3-5-4-c82a94ccd3.novalocal\" DevicePath \"\""
Jul 2 08:49:22.950492 kubelet[2146]: I0702 08:49:22.950472 2146 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5171f058-bd31-4cb4-a73c-757be0867eec-xtables-lock\") on node \"ci-3510-3-5-4-c82a94ccd3.novalocal\" DevicePath \"\""
Jul 2 08:49:22.950636 kubelet[2146]: I0702 08:49:22.950502 2146 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5171f058-bd31-4cb4-a73c-757be0867eec-host-proc-sys-kernel\") on node \"ci-3510-3-5-4-c82a94ccd3.novalocal\" DevicePath \"\""
Jul 2 08:49:22.950636 kubelet[2146]: I0702 08:49:22.950532 2146 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5171f058-bd31-4cb4-a73c-757be0867eec-cilium-cgroup\") on node \"ci-3510-3-5-4-c82a94ccd3.novalocal\" DevicePath \"\""
Jul 2 08:49:22.950636 kubelet[2146]: I0702 08:49:22.950561 2146 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName:
\"kubernetes.io/configmap/5171f058-bd31-4cb4-a73c-757be0867eec-cilium-config-path\") on node \"ci-3510-3-5-4-c82a94ccd3.novalocal\" DevicePath \"\"" Jul 2 08:49:22.950636 kubelet[2146]: I0702 08:49:22.950589 2146 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5171f058-bd31-4cb4-a73c-757be0867eec-clustermesh-secrets\") on node \"ci-3510-3-5-4-c82a94ccd3.novalocal\" DevicePath \"\"" Jul 2 08:49:22.950636 kubelet[2146]: I0702 08:49:22.950617 2146 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5171f058-bd31-4cb4-a73c-757be0867eec-cilium-ipsec-secrets\") on node \"ci-3510-3-5-4-c82a94ccd3.novalocal\" DevicePath \"\"" Jul 2 08:49:22.950862 kubelet[2146]: I0702 08:49:22.950643 2146 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5171f058-bd31-4cb4-a73c-757be0867eec-hostproc\") on node \"ci-3510-3-5-4-c82a94ccd3.novalocal\" DevicePath \"\"" Jul 2 08:49:22.950862 kubelet[2146]: I0702 08:49:22.950674 2146 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5171f058-bd31-4cb4-a73c-757be0867eec-hubble-tls\") on node \"ci-3510-3-5-4-c82a94ccd3.novalocal\" DevicePath \"\"" Jul 2 08:49:22.950862 kubelet[2146]: I0702 08:49:22.950701 2146 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5171f058-bd31-4cb4-a73c-757be0867eec-lib-modules\") on node \"ci-3510-3-5-4-c82a94ccd3.novalocal\" DevicePath \"\"" Jul 2 08:49:22.950862 kubelet[2146]: I0702 08:49:22.950727 2146 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5171f058-bd31-4cb4-a73c-757be0867eec-cilium-run\") on node \"ci-3510-3-5-4-c82a94ccd3.novalocal\" DevicePath \"\"" Jul 2 08:49:22.950862 kubelet[2146]: I0702 08:49:22.950756 2146 reconciler_common.go:300] "Volume 
detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5171f058-bd31-4cb4-a73c-757be0867eec-host-proc-sys-net\") on node \"ci-3510-3-5-4-c82a94ccd3.novalocal\" DevicePath \"\"" Jul 2 08:49:22.950862 kubelet[2146]: I0702 08:49:22.950785 2146 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5171f058-bd31-4cb4-a73c-757be0867eec-bpf-maps\") on node \"ci-3510-3-5-4-c82a94ccd3.novalocal\" DevicePath \"\"" Jul 2 08:49:23.118748 systemd[1]: var-lib-kubelet-pods-5171f058\x2dbd31\x2d4cb4\x2da73c\x2d757be0867eec-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkr85j.mount: Deactivated successfully. Jul 2 08:49:23.118948 systemd[1]: var-lib-kubelet-pods-5171f058\x2dbd31\x2d4cb4\x2da73c\x2d757be0867eec-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 08:49:23.577394 sshd[4096]: Accepted publickey for core from 172.24.4.1 port 33614 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q Jul 2 08:49:23.582378 sshd[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:49:23.598612 systemd-logind[1229]: New session 25 of user core. Jul 2 08:49:23.600096 systemd[1]: Started session-25.scope. 
Jul 2 08:49:23.662705 kubelet[2146]: I0702 08:49:23.662590 2146 scope.go:117] "RemoveContainer" containerID="2fccadd0668b0d3f20fde3c46f7aab35b223a1d6454edec919725cca7276fdf4" Jul 2 08:49:23.670827 env[1243]: time="2024-07-02T08:49:23.670775830Z" level=info msg="RemoveContainer for \"2fccadd0668b0d3f20fde3c46f7aab35b223a1d6454edec919725cca7276fdf4\"" Jul 2 08:49:23.706917 env[1243]: time="2024-07-02T08:49:23.706878577Z" level=info msg="RemoveContainer for \"2fccadd0668b0d3f20fde3c46f7aab35b223a1d6454edec919725cca7276fdf4\" returns successfully" Jul 2 08:49:23.707948 kubelet[2146]: I0702 08:49:23.707286 2146 scope.go:117] "RemoveContainer" containerID="5edbcec3173a596f8b12db9e86ee6f1babe1694cce45c5330303f25f8ed83f48" Jul 2 08:49:23.709241 env[1243]: time="2024-07-02T08:49:23.709212993Z" level=info msg="RemoveContainer for \"5edbcec3173a596f8b12db9e86ee6f1babe1694cce45c5330303f25f8ed83f48\"" Jul 2 08:49:23.743447 env[1243]: time="2024-07-02T08:49:23.743409508Z" level=info msg="RemoveContainer for \"5edbcec3173a596f8b12db9e86ee6f1babe1694cce45c5330303f25f8ed83f48\" returns successfully" Jul 2 08:49:23.746353 kubelet[2146]: I0702 08:49:23.743775 2146 scope.go:117] "RemoveContainer" containerID="cb375f5d84349c2c2671d900ce8384636994f272a273a04d9ab14dbc0a6c3ae3" Jul 2 08:49:23.746443 env[1243]: time="2024-07-02T08:49:23.745031080Z" level=info msg="RemoveContainer for \"cb375f5d84349c2c2671d900ce8384636994f272a273a04d9ab14dbc0a6c3ae3\"" Jul 2 08:49:23.814725 env[1243]: time="2024-07-02T08:49:23.814684371Z" level=info msg="RemoveContainer for \"cb375f5d84349c2c2671d900ce8384636994f272a273a04d9ab14dbc0a6c3ae3\" returns successfully" Jul 2 08:49:23.817361 kubelet[2146]: I0702 08:49:23.817315 2146 topology_manager.go:215] "Topology Admit Handler" podUID="d9fe28d5-fd9d-4633-bc21-b9618316792b" podNamespace="kube-system" podName="cilium-rd82z" Jul 2 08:49:23.817566 kubelet[2146]: E0702 08:49:23.817552 2146 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="5171f058-bd31-4cb4-a73c-757be0867eec" containerName="apply-sysctl-overwrites" Jul 2 08:49:23.817663 kubelet[2146]: E0702 08:49:23.817653 2146 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5171f058-bd31-4cb4-a73c-757be0867eec" containerName="mount-bpf-fs" Jul 2 08:49:23.817755 kubelet[2146]: E0702 08:49:23.817745 2146 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5171f058-bd31-4cb4-a73c-757be0867eec" containerName="mount-cgroup" Jul 2 08:49:23.817860 kubelet[2146]: I0702 08:49:23.817850 2146 memory_manager.go:346] "RemoveStaleState removing state" podUID="5171f058-bd31-4cb4-a73c-757be0867eec" containerName="mount-bpf-fs" Jul 2 08:49:23.958752 kubelet[2146]: I0702 08:49:23.958624 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9fe28d5-fd9d-4633-bc21-b9618316792b-xtables-lock\") pod \"cilium-rd82z\" (UID: \"d9fe28d5-fd9d-4633-bc21-b9618316792b\") " pod="kube-system/cilium-rd82z" Jul 2 08:49:23.958752 kubelet[2146]: I0702 08:49:23.958710 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d9fe28d5-fd9d-4633-bc21-b9618316792b-host-proc-sys-net\") pod \"cilium-rd82z\" (UID: \"d9fe28d5-fd9d-4633-bc21-b9618316792b\") " pod="kube-system/cilium-rd82z" Jul 2 08:49:23.959190 kubelet[2146]: I0702 08:49:23.958818 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d9fe28d5-fd9d-4633-bc21-b9618316792b-hostproc\") pod \"cilium-rd82z\" (UID: \"d9fe28d5-fd9d-4633-bc21-b9618316792b\") " pod="kube-system/cilium-rd82z" Jul 2 08:49:23.959190 kubelet[2146]: I0702 08:49:23.958888 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/d9fe28d5-fd9d-4633-bc21-b9618316792b-clustermesh-secrets\") pod \"cilium-rd82z\" (UID: \"d9fe28d5-fd9d-4633-bc21-b9618316792b\") " pod="kube-system/cilium-rd82z" Jul 2 08:49:23.959190 kubelet[2146]: I0702 08:49:23.958946 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d9fe28d5-fd9d-4633-bc21-b9618316792b-cilium-run\") pod \"cilium-rd82z\" (UID: \"d9fe28d5-fd9d-4633-bc21-b9618316792b\") " pod="kube-system/cilium-rd82z" Jul 2 08:49:23.959190 kubelet[2146]: I0702 08:49:23.959001 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d9fe28d5-fd9d-4633-bc21-b9618316792b-cilium-ipsec-secrets\") pod \"cilium-rd82z\" (UID: \"d9fe28d5-fd9d-4633-bc21-b9618316792b\") " pod="kube-system/cilium-rd82z" Jul 2 08:49:23.959190 kubelet[2146]: I0702 08:49:23.959056 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d9fe28d5-fd9d-4633-bc21-b9618316792b-hubble-tls\") pod \"cilium-rd82z\" (UID: \"d9fe28d5-fd9d-4633-bc21-b9618316792b\") " pod="kube-system/cilium-rd82z" Jul 2 08:49:23.959190 kubelet[2146]: I0702 08:49:23.959109 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9fe28d5-fd9d-4633-bc21-b9618316792b-lib-modules\") pod \"cilium-rd82z\" (UID: \"d9fe28d5-fd9d-4633-bc21-b9618316792b\") " pod="kube-system/cilium-rd82z" Jul 2 08:49:23.959454 kubelet[2146]: I0702 08:49:23.959215 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvptr\" (UniqueName: \"kubernetes.io/projected/d9fe28d5-fd9d-4633-bc21-b9618316792b-kube-api-access-lvptr\") pod \"cilium-rd82z\" (UID: 
\"d9fe28d5-fd9d-4633-bc21-b9618316792b\") " pod="kube-system/cilium-rd82z" Jul 2 08:49:23.959454 kubelet[2146]: I0702 08:49:23.959274 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d9fe28d5-fd9d-4633-bc21-b9618316792b-host-proc-sys-kernel\") pod \"cilium-rd82z\" (UID: \"d9fe28d5-fd9d-4633-bc21-b9618316792b\") " pod="kube-system/cilium-rd82z" Jul 2 08:49:23.959454 kubelet[2146]: I0702 08:49:23.959376 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d9fe28d5-fd9d-4633-bc21-b9618316792b-bpf-maps\") pod \"cilium-rd82z\" (UID: \"d9fe28d5-fd9d-4633-bc21-b9618316792b\") " pod="kube-system/cilium-rd82z" Jul 2 08:49:23.959454 kubelet[2146]: I0702 08:49:23.959442 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d9fe28d5-fd9d-4633-bc21-b9618316792b-cilium-config-path\") pod \"cilium-rd82z\" (UID: \"d9fe28d5-fd9d-4633-bc21-b9618316792b\") " pod="kube-system/cilium-rd82z" Jul 2 08:49:23.959577 kubelet[2146]: I0702 08:49:23.959496 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d9fe28d5-fd9d-4633-bc21-b9618316792b-cilium-cgroup\") pod \"cilium-rd82z\" (UID: \"d9fe28d5-fd9d-4633-bc21-b9618316792b\") " pod="kube-system/cilium-rd82z" Jul 2 08:49:23.959577 kubelet[2146]: I0702 08:49:23.959549 2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d9fe28d5-fd9d-4633-bc21-b9618316792b-cni-path\") pod \"cilium-rd82z\" (UID: \"d9fe28d5-fd9d-4633-bc21-b9618316792b\") " pod="kube-system/cilium-rd82z" Jul 2 08:49:23.959637 kubelet[2146]: I0702 08:49:23.959607 
2146 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d9fe28d5-fd9d-4633-bc21-b9618316792b-etc-cni-netd\") pod \"cilium-rd82z\" (UID: \"d9fe28d5-fd9d-4633-bc21-b9618316792b\") " pod="kube-system/cilium-rd82z" Jul 2 08:49:23.970699 kubelet[2146]: I0702 08:49:23.970670 2146 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="5171f058-bd31-4cb4-a73c-757be0867eec" path="/var/lib/kubelet/pods/5171f058-bd31-4cb4-a73c-757be0867eec/volumes" Jul 2 08:49:24.131242 env[1243]: time="2024-07-02T08:49:24.131191882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rd82z,Uid:d9fe28d5-fd9d-4633-bc21-b9618316792b,Namespace:kube-system,Attempt:0,}" Jul 2 08:49:24.145942 env[1243]: time="2024-07-02T08:49:24.145301059Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:49:24.145942 env[1243]: time="2024-07-02T08:49:24.145363806Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:49:24.145942 env[1243]: time="2024-07-02T08:49:24.145378073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:49:24.145942 env[1243]: time="2024-07-02T08:49:24.145598963Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8b2a021feef36d42afd755b1689bd280995dd778cfb5cde90310c167060f9829 pid=4156 runtime=io.containerd.runc.v2 Jul 2 08:49:24.167984 systemd[1]: run-containerd-runc-k8s.io-8b2a021feef36d42afd755b1689bd280995dd778cfb5cde90310c167060f9829-runc.X7zLSy.mount: Deactivated successfully. 
Jul 2 08:49:24.194739 env[1243]: time="2024-07-02T08:49:24.194692569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rd82z,Uid:d9fe28d5-fd9d-4633-bc21-b9618316792b,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b2a021feef36d42afd755b1689bd280995dd778cfb5cde90310c167060f9829\"" Jul 2 08:49:24.199972 env[1243]: time="2024-07-02T08:49:24.199925442Z" level=info msg="CreateContainer within sandbox \"8b2a021feef36d42afd755b1689bd280995dd778cfb5cde90310c167060f9829\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 08:49:24.237454 env[1243]: time="2024-07-02T08:49:24.237401640Z" level=info msg="CreateContainer within sandbox \"8b2a021feef36d42afd755b1689bd280995dd778cfb5cde90310c167060f9829\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c9aa2aea6c2d133a7be36c2a7eeb9dd8d9d05d1736ba38a04f734cef381f6b1d\"" Jul 2 08:49:24.238340 env[1243]: time="2024-07-02T08:49:24.238272018Z" level=info msg="StartContainer for \"c9aa2aea6c2d133a7be36c2a7eeb9dd8d9d05d1736ba38a04f734cef381f6b1d\"" Jul 2 08:49:24.301363 env[1243]: time="2024-07-02T08:49:24.301263428Z" level=info msg="StartContainer for \"c9aa2aea6c2d133a7be36c2a7eeb9dd8d9d05d1736ba38a04f734cef381f6b1d\" returns successfully" Jul 2 08:49:24.346238 env[1243]: time="2024-07-02T08:49:24.346190191Z" level=info msg="shim disconnected" id=c9aa2aea6c2d133a7be36c2a7eeb9dd8d9d05d1736ba38a04f734cef381f6b1d Jul 2 08:49:24.346570 env[1243]: time="2024-07-02T08:49:24.346537877Z" level=warning msg="cleaning up after shim disconnected" id=c9aa2aea6c2d133a7be36c2a7eeb9dd8d9d05d1736ba38a04f734cef381f6b1d namespace=k8s.io Jul 2 08:49:24.346661 env[1243]: time="2024-07-02T08:49:24.346644785Z" level=info msg="cleaning up dead shim" Jul 2 08:49:24.354855 env[1243]: time="2024-07-02T08:49:24.354807594Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:49:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4241 runtime=io.containerd.runc.v2\n" Jul 2 
08:49:24.685391 env[1243]: time="2024-07-02T08:49:24.684715146Z" level=info msg="CreateContainer within sandbox \"8b2a021feef36d42afd755b1689bd280995dd778cfb5cde90310c167060f9829\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 08:49:24.725623 env[1243]: time="2024-07-02T08:49:24.725525981Z" level=info msg="CreateContainer within sandbox \"8b2a021feef36d42afd755b1689bd280995dd778cfb5cde90310c167060f9829\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"676449b7a27a207a0843927b3fd07924359c02b5a63d3bb00d7e38d63f075e3f\"" Jul 2 08:49:24.729897 env[1243]: time="2024-07-02T08:49:24.729811353Z" level=info msg="StartContainer for \"676449b7a27a207a0843927b3fd07924359c02b5a63d3bb00d7e38d63f075e3f\"" Jul 2 08:49:24.830075 env[1243]: time="2024-07-02T08:49:24.830032845Z" level=info msg="StartContainer for \"676449b7a27a207a0843927b3fd07924359c02b5a63d3bb00d7e38d63f075e3f\" returns successfully" Jul 2 08:49:24.858320 env[1243]: time="2024-07-02T08:49:24.858277950Z" level=info msg="shim disconnected" id=676449b7a27a207a0843927b3fd07924359c02b5a63d3bb00d7e38d63f075e3f Jul 2 08:49:24.858622 env[1243]: time="2024-07-02T08:49:24.858592495Z" level=warning msg="cleaning up after shim disconnected" id=676449b7a27a207a0843927b3fd07924359c02b5a63d3bb00d7e38d63f075e3f namespace=k8s.io Jul 2 08:49:24.858707 env[1243]: time="2024-07-02T08:49:24.858691288Z" level=info msg="cleaning up dead shim" Jul 2 08:49:24.866704 env[1243]: time="2024-07-02T08:49:24.866678460Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:49:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4301 runtime=io.containerd.runc.v2\n" Jul 2 08:49:25.144440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3942034829.mount: Deactivated successfully. 
Jul 2 08:49:25.698892 env[1243]: time="2024-07-02T08:49:25.698818968Z" level=info msg="CreateContainer within sandbox \"8b2a021feef36d42afd755b1689bd280995dd778cfb5cde90310c167060f9829\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 08:49:25.746875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1642993762.mount: Deactivated successfully. Jul 2 08:49:25.782205 env[1243]: time="2024-07-02T08:49:25.782063968Z" level=info msg="CreateContainer within sandbox \"8b2a021feef36d42afd755b1689bd280995dd778cfb5cde90310c167060f9829\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1f35386a17208246b33f757694508788945caf55791da1004f2d178ae4dbfb8d\"" Jul 2 08:49:25.783919 env[1243]: time="2024-07-02T08:49:25.783851852Z" level=info msg="StartContainer for \"1f35386a17208246b33f757694508788945caf55791da1004f2d178ae4dbfb8d\"" Jul 2 08:49:25.879673 env[1243]: time="2024-07-02T08:49:25.879633654Z" level=info msg="StartContainer for \"1f35386a17208246b33f757694508788945caf55791da1004f2d178ae4dbfb8d\" returns successfully" Jul 2 08:49:25.911423 env[1243]: time="2024-07-02T08:49:25.911375594Z" level=info msg="shim disconnected" id=1f35386a17208246b33f757694508788945caf55791da1004f2d178ae4dbfb8d Jul 2 08:49:25.911703 env[1243]: time="2024-07-02T08:49:25.911669220Z" level=warning msg="cleaning up after shim disconnected" id=1f35386a17208246b33f757694508788945caf55791da1004f2d178ae4dbfb8d namespace=k8s.io Jul 2 08:49:25.911788 env[1243]: time="2024-07-02T08:49:25.911771941Z" level=info msg="cleaning up dead shim" Jul 2 08:49:25.919107 env[1243]: time="2024-07-02T08:49:25.919069135Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:49:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4360 runtime=io.containerd.runc.v2\n" Jul 2 08:49:26.150393 kubelet[2146]: E0702 08:49:26.149374 2146 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" Jul 2 08:49:26.149443 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f35386a17208246b33f757694508788945caf55791da1004f2d178ae4dbfb8d-rootfs.mount: Deactivated successfully. Jul 2 08:49:26.708260 env[1243]: time="2024-07-02T08:49:26.708192986Z" level=info msg="CreateContainer within sandbox \"8b2a021feef36d42afd755b1689bd280995dd778cfb5cde90310c167060f9829\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 08:49:26.739102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1700887132.mount: Deactivated successfully. Jul 2 08:49:26.752914 env[1243]: time="2024-07-02T08:49:26.752501867Z" level=info msg="CreateContainer within sandbox \"8b2a021feef36d42afd755b1689bd280995dd778cfb5cde90310c167060f9829\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"45f47f0091b88516eef566f90fcc7188b5006d5a6a296b4b7738deb2a36805f3\"" Jul 2 08:49:26.757833 env[1243]: time="2024-07-02T08:49:26.757774611Z" level=info msg="StartContainer for \"45f47f0091b88516eef566f90fcc7188b5006d5a6a296b4b7738deb2a36805f3\"" Jul 2 08:49:26.854115 env[1243]: time="2024-07-02T08:49:26.854046276Z" level=info msg="StartContainer for \"45f47f0091b88516eef566f90fcc7188b5006d5a6a296b4b7738deb2a36805f3\" returns successfully" Jul 2 08:49:26.880169 env[1243]: time="2024-07-02T08:49:26.880124202Z" level=info msg="shim disconnected" id=45f47f0091b88516eef566f90fcc7188b5006d5a6a296b4b7738deb2a36805f3 Jul 2 08:49:26.880451 env[1243]: time="2024-07-02T08:49:26.880431393Z" level=warning msg="cleaning up after shim disconnected" id=45f47f0091b88516eef566f90fcc7188b5006d5a6a296b4b7738deb2a36805f3 namespace=k8s.io Jul 2 08:49:26.880532 env[1243]: time="2024-07-02T08:49:26.880516812Z" level=info msg="cleaning up dead shim" Jul 2 08:49:26.888784 env[1243]: time="2024-07-02T08:49:26.888740522Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:49:26Z\" level=info 
msg=\"starting signal loop\" namespace=k8s.io pid=4418 runtime=io.containerd.runc.v2\n" Jul 2 08:49:27.145250 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45f47f0091b88516eef566f90fcc7188b5006d5a6a296b4b7738deb2a36805f3-rootfs.mount: Deactivated successfully. Jul 2 08:49:27.716912 env[1243]: time="2024-07-02T08:49:27.716439412Z" level=info msg="CreateContainer within sandbox \"8b2a021feef36d42afd755b1689bd280995dd778cfb5cde90310c167060f9829\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 08:49:27.770257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount326339750.mount: Deactivated successfully. Jul 2 08:49:27.866029 env[1243]: time="2024-07-02T08:49:27.865931698Z" level=info msg="CreateContainer within sandbox \"8b2a021feef36d42afd755b1689bd280995dd778cfb5cde90310c167060f9829\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b197ecb90047326e87ec9e26cafcd777dbfcce6317e7c938138734d6e336f677\"" Jul 2 08:49:27.869775 env[1243]: time="2024-07-02T08:49:27.869642569Z" level=info msg="StartContainer for \"b197ecb90047326e87ec9e26cafcd777dbfcce6317e7c938138734d6e336f677\"" Jul 2 08:49:27.973183 env[1243]: time="2024-07-02T08:49:27.972902411Z" level=info msg="StartContainer for \"b197ecb90047326e87ec9e26cafcd777dbfcce6317e7c938138734d6e336f677\" returns successfully" Jul 2 08:49:28.764151 kubelet[2146]: I0702 08:49:28.764096 2146 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-rd82z" podStartSLOduration=5.764003141 podCreationTimestamp="2024-07-02 08:49:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:49:28.763976131 +0000 UTC m=+172.990590091" watchObservedRunningTime="2024-07-02 08:49:28.764003141 +0000 UTC m=+172.990617101" Jul 2 08:49:29.018386 kernel: cryptd: max_cpu_qlen set to 1000 Jul 2 08:49:29.068364 kernel: alg: No test for 
seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic)))) Jul 2 08:49:30.459047 systemd[1]: run-containerd-runc-k8s.io-b197ecb90047326e87ec9e26cafcd777dbfcce6317e7c938138734d6e336f677-runc.PBkwaF.mount: Deactivated successfully. Jul 2 08:49:31.960114 systemd-networkd[1020]: lxc_health: Link UP Jul 2 08:49:31.967695 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 08:49:31.967827 systemd-networkd[1020]: lxc_health: Gained carrier Jul 2 08:49:32.776757 systemd[1]: run-containerd-runc-k8s.io-b197ecb90047326e87ec9e26cafcd777dbfcce6317e7c938138734d6e336f677-runc.158Ur2.mount: Deactivated successfully. Jul 2 08:49:33.748475 systemd-networkd[1020]: lxc_health: Gained IPv6LL Jul 2 08:49:35.007190 systemd[1]: run-containerd-runc-k8s.io-b197ecb90047326e87ec9e26cafcd777dbfcce6317e7c938138734d6e336f677-runc.PZrBiV.mount: Deactivated successfully. Jul 2 08:49:35.988335 env[1243]: time="2024-07-02T08:49:35.988225245Z" level=info msg="StopPodSandbox for \"b245d8e44fad538d1c9e7cb6f15fe3a59ab9d72d665d17ffde48affcae902b61\"" Jul 2 08:49:35.988774 env[1243]: time="2024-07-02T08:49:35.988536997Z" level=info msg="TearDown network for sandbox \"b245d8e44fad538d1c9e7cb6f15fe3a59ab9d72d665d17ffde48affcae902b61\" successfully" Jul 2 08:49:35.988774 env[1243]: time="2024-07-02T08:49:35.988619561Z" level=info msg="StopPodSandbox for \"b245d8e44fad538d1c9e7cb6f15fe3a59ab9d72d665d17ffde48affcae902b61\" returns successfully" Jul 2 08:49:35.989838 env[1243]: time="2024-07-02T08:49:35.989721334Z" level=info msg="RemovePodSandbox for \"b245d8e44fad538d1c9e7cb6f15fe3a59ab9d72d665d17ffde48affcae902b61\"" Jul 2 08:49:35.989964 env[1243]: time="2024-07-02T08:49:35.989850665Z" level=info msg="Forcibly stopping sandbox \"b245d8e44fad538d1c9e7cb6f15fe3a59ab9d72d665d17ffde48affcae902b61\"" Jul 2 08:49:35.990157 env[1243]: time="2024-07-02T08:49:35.990109126Z" level=info msg="TearDown network for sandbox 
\"b245d8e44fad538d1c9e7cb6f15fe3a59ab9d72d665d17ffde48affcae902b61\" successfully" Jul 2 08:49:36.033994 env[1243]: time="2024-07-02T08:49:36.033901408Z" level=info msg="RemovePodSandbox \"b245d8e44fad538d1c9e7cb6f15fe3a59ab9d72d665d17ffde48affcae902b61\" returns successfully" Jul 2 08:49:36.035049 env[1243]: time="2024-07-02T08:49:36.034981070Z" level=info msg="StopPodSandbox for \"f4098fbca0eb4581e1a02d9c93be2b0726c8c94e71d79fcb3e0caf21cff6d094\"" Jul 2 08:49:36.035262 env[1243]: time="2024-07-02T08:49:36.035174010Z" level=info msg="TearDown network for sandbox \"f4098fbca0eb4581e1a02d9c93be2b0726c8c94e71d79fcb3e0caf21cff6d094\" successfully" Jul 2 08:49:36.035320 env[1243]: time="2024-07-02T08:49:36.035260781Z" level=info msg="StopPodSandbox for \"f4098fbca0eb4581e1a02d9c93be2b0726c8c94e71d79fcb3e0caf21cff6d094\" returns successfully" Jul 2 08:49:36.035945 env[1243]: time="2024-07-02T08:49:36.035888592Z" level=info msg="RemovePodSandbox for \"f4098fbca0eb4581e1a02d9c93be2b0726c8c94e71d79fcb3e0caf21cff6d094\"" Jul 2 08:49:36.036112 env[1243]: time="2024-07-02T08:49:36.035956158Z" level=info msg="Forcibly stopping sandbox \"f4098fbca0eb4581e1a02d9c93be2b0726c8c94e71d79fcb3e0caf21cff6d094\"" Jul 2 08:49:36.036262 env[1243]: time="2024-07-02T08:49:36.036212475Z" level=info msg="TearDown network for sandbox \"f4098fbca0eb4581e1a02d9c93be2b0726c8c94e71d79fcb3e0caf21cff6d094\" successfully" Jul 2 08:49:36.067322 env[1243]: time="2024-07-02T08:49:36.067241298Z" level=info msg="RemovePodSandbox \"f4098fbca0eb4581e1a02d9c93be2b0726c8c94e71d79fcb3e0caf21cff6d094\" returns successfully" Jul 2 08:49:36.068170 env[1243]: time="2024-07-02T08:49:36.068089770Z" level=info msg="StopPodSandbox for \"5a2ae80f8b76d0cd43ce2d02862712b201ff7e70c305e6485771eb9148ca420e\"" Jul 2 08:49:36.068428 env[1243]: time="2024-07-02T08:49:36.068278672Z" level=info msg="TearDown network for sandbox \"5a2ae80f8b76d0cd43ce2d02862712b201ff7e70c305e6485771eb9148ca420e\" successfully" Jul 2 
08:49:36.068488 env[1243]: time="2024-07-02T08:49:36.068429233Z" level=info msg="StopPodSandbox for \"5a2ae80f8b76d0cd43ce2d02862712b201ff7e70c305e6485771eb9148ca420e\" returns successfully" Jul 2 08:49:36.069489 env[1243]: time="2024-07-02T08:49:36.069436100Z" level=info msg="RemovePodSandbox for \"5a2ae80f8b76d0cd43ce2d02862712b201ff7e70c305e6485771eb9148ca420e\"" Jul 2 08:49:36.069552 env[1243]: time="2024-07-02T08:49:36.069498496Z" level=info msg="Forcibly stopping sandbox \"5a2ae80f8b76d0cd43ce2d02862712b201ff7e70c305e6485771eb9148ca420e\"" Jul 2 08:49:36.069684 env[1243]: time="2024-07-02T08:49:36.069641132Z" level=info msg="TearDown network for sandbox \"5a2ae80f8b76d0cd43ce2d02862712b201ff7e70c305e6485771eb9148ca420e\" successfully" Jul 2 08:49:36.103961 env[1243]: time="2024-07-02T08:49:36.103839924Z" level=info msg="RemovePodSandbox \"5a2ae80f8b76d0cd43ce2d02862712b201ff7e70c305e6485771eb9148ca420e\" returns successfully" Jul 2 08:49:37.195619 systemd[1]: run-containerd-runc-k8s.io-b197ecb90047326e87ec9e26cafcd777dbfcce6317e7c938138734d6e336f677-runc.0a26YI.mount: Deactivated successfully. Jul 2 08:49:37.579278 sshd[4096]: pam_unix(sshd:session): session closed for user core Jul 2 08:49:37.621787 systemd[1]: sshd@24-172.24.4.53:22-172.24.4.1:33614.service: Deactivated successfully. Jul 2 08:49:37.624679 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 08:49:37.625583 systemd-logind[1229]: Session 25 logged out. Waiting for processes to exit. Jul 2 08:49:37.629503 systemd-logind[1229]: Removed session 25.