
oneapi-containers's Introduction

Intel® oneAPI Containers

Intel® oneAPI products deliver the tools needed to deploy applications and solutions across scalar, vector, matrix, and spatial (SVMS) architectures. Its set of complementary toolkits, a base kit and specialty add-ons, simplifies programming and helps developers improve efficiency and drive innovation. oneAPI Details

Containers let you set up and configure environments for development and profiling, and distribute them using images:

  • You can install an image containing an environment pre-configured with all the tools you need, then develop within that environment.
  • You can save an environment and use the image to move that environment to another machine without additional setup.
  • You can prepare containers with different sets of languages and runtimes, analysis tools, or other tools, as needed.
  • You can use runtime containers to execute your applications built with oneAPI toolkits.

oneAPI Containers Get Started Guide

oneAPI Docker Hub

Explore more containers and models on the Intel® oneContainer Portal

License Agreement

By downloading and using this container and the included software, you agree to the terms and conditions of the software license agreements.

Intel® oneAPI Runtime Libraries

Get started running or deploying applications built with oneAPI toolkits.

image=intel/oneapi-runtime
docker pull "$image"
docker run --device=/dev/dri -it "$image"

Intel® oneAPI Base Toolkit

Get started with this foundational kit that enables developers of all types to build, test, and deploy performance-driven, data-centric applications across CPUs, GPUs, and FPGAs. Base Kit Details

image=intel/oneapi-basekit
docker pull "$image"
docker run --device=/dev/dri -it "$image"

Intel® oneAPI HPC Toolkit

Deliver fast C++, Fortran, OpenMP, and MPI applications that scale. HPC Kit Details

image=intel/hpckit
docker pull "$image"
docker run --device=/dev/dri -it "$image"

Using containers behind a proxy

If you are behind a proxy, you may need to add proxy settings to your docker run commands: -e http_proxy="$http_proxy" -e https_proxy="$https_proxy"

For example:

docker run --device=/dev/dri -e http_proxy="$http_proxy" -e https_proxy="$https_proxy" -it "$image"
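If you also build images behind the proxy, Docker's predefined build arguments can pass the same settings through at build time (a sketch; the tag my-oneapi-image is a placeholder):

docker build --build-arg http_proxy="$http_proxy" --build-arg https_proxy="$https_proxy" -t my-oneapi-image .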

Using Intel Advisor/Inspector/VTune with containers

When using these tools, extra capabilities have to be granted to the container: --cap-add=SYS_ADMIN --cap-add=SYS_PTRACE

docker run --cap-add=SYS_ADMIN --cap-add=SYS_PTRACE --device=/dev/dri -it "$image"
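Inside such a container, the tools can then profile processes as usual. For example, a minimal VTune command-line collection might look like this (./my_app is a placeholder for your binary):

vtune -collect hotspots -- ./my_app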


oneapi-containers's Issues

Missing level-zero-dev in basekit/Dockerfile.ubuntu-22.04

I used the image built from basekit/Dockerfile.ubuntu-22.04, but while building my application -lze_loader was not found, because the level-zero-dev package is not installed.
After installing it, everything works correctly. I can see that basekit/Dockerfile.ubuntu-20.04 installs it. Is this a mistake, or did you remove it on purpose, to save space or for another reason?
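A possible interim workaround (a sketch assuming the level-zero-dev package name from the 20.04 Dockerfile; the base tag is a placeholder) is to add the package in a derived image:

FROM intel/oneapi-basekit:devel-ubuntu22.04
RUN apt-get update && apt-get install -y --no-install-recommends level-zero-dev && rm -rf /var/lib/apt/lists/*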

intel/oneapi-basekit APT Repository not working

It appears there is a workaround for this, as shown in the commit below; however, you may want to consider regenerating your Docker image with corrections for your latest APT repository.

intel-analytics/ipex-llm@e0f401d (Credit to simonlui for the heads up)

It appears that you switched to https://repositories.intel.com/graphics/ubuntu over https://repositories.intel.com/gpu/ubuntu

This issue is currently affecting a few repositories so far.

The lines you may need to change are located at https://github.com/intel/oneapi-containers/blob/master/images/docker/basekit/Dockerfile.ubuntu-22.04#L20 and look like:

RUN curl -fsSL https://repositories.intel.com/gpu/intel-graphics.key | gpg --dearmor | tee /usr/share/keyrings/intel-graphics-archive-keyring.gpg
RUN echo "deb [signed-by=/usr/share/keyrings/intel-graphics-archive-keyring.gpg arch=amd64] https://repositories.intel.com/gpu/ubuntu jammy/lts/2350 unified" > /etc/apt/sources.list.d/intel-graphics.list

It's clear that we will need to update the URL and the GPG key for the above.



32-bit support?

Hi,

I'm using oneAPI to build a 32-bit library with CMake on Linux, inside the Docker image built from images/docker/basekit/Dockerfile.ubuntu-20.04.
But I get errors about incompatible third-party libraries, as below:
/usr/bin/ld: skipping incompatible /opt/intel/oneapi/compiler/2024.0/bin/compiler/../../lib/libsvml.a when searching for -lsvml
/usr/bin/ld: cannot find -lsvml
/usr/bin/ld: skipping incompatible /opt/intel/oneapi/compiler/2024.0/bin/compiler/../../lib/libirng.a when searching for -lirng
/usr/bin/ld: cannot find -lirng
/usr/bin/ld: skipping incompatible /opt/intel/oneapi/compiler/2024.0/bin/compiler/../../lib/libimf.a when searching for -limf
/usr/bin/ld: cannot find -limf
icpx: error: linker command failed with exit code 1 (use -v to see invocation)

Is this because oneAPI does not support building 32-bit libraries? If it does support them, how can I link against the 32-bit libraries?

Thanks!

Add zlib

Could the zlib-devel (or at least zlib) package be installed in the intel/oneapi-hpckit:devel-centos8 image? It's not available in a path that can be readily linked from C/C++, which makes it difficult to build software that links against it (-lz):

$ singularity pull docker://intel/oneapi-hpckit:devel-centos8
...
$ singularity exec --containall oneapi-hpckit_devel-centos8.sif find / -name libz.so 2>/dev/null
/opt/intel/oneapi/compiler/2021.1.1/linux/lib/oclfpga/host/linux64/lib/libz.so
/opt/intel/oneapi/intelpython/python3.7/envs/2021.1.1/lib/libz.so
/opt/intel/oneapi/intelpython/python3.7/lib/libz.so
/opt/intel/oneapi/intelpython/python3.7/pkgs/zlib-1.2.11.1-hb8a9d29_3/lib/libz.so

oneAPI Base Toolkit image no longer has cmake installed

Firstly, thanks for providing the images, they have been useful :)

I am not sure whether this was done intentionally; if it was, feel free to close the issue. Most oneMKL examples are built with CMake, but the Base Toolkit image no longer has cmake installed; it was removed in the last commit.
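A straightforward workaround until this is resolved (a sketch assuming a Debian/Ubuntu-based basekit image) is to add cmake in a derived image:

FROM intel/oneapi-basekit
RUN apt-get update && apt-get install -y --no-install-recommends cmake && rm -rf /var/lib/apt/lists/*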

Build docker hpckit container on M1 Mac.

Hello all,

I'm having some issues running MPI jobs within the Docker container on an M1 Mac. I've tried building an image myself to see if that rectifies it, but I can't get the Ubuntu containers to build; they generally fail with:
#13 5.127 Get:22 http://ports.ubuntu.com/ubuntu-ports focal-security/multiverse arm64 Packages [3253 B]
#13 5.170 Fetched 18.8 MB in 5s (3827 kB/s)
#13 5.170 Reading package lists...
#13 5.623 Reading package lists...
#13 6.045 Building dependency tree...
#13 6.140 Reading state information...
#13 6.153 E: Unable to locate package intel-oneapi-advisor
#13 6.153 E: Unable to locate package intel-oneapi-ccl-devel
#13 6.153 E: Unable to locate package intel-oneapi-compiler-dpcpp-cpp

Is arm64 supported? Or are there any tricks/tips?

Many thanks

William
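The apt errors above come from the arm64 Ubuntu ports repository, and Intel's oneAPI apt packages are published for amd64 only. One possible workaround (assuming Docker Desktop's QEMU/Rosetta emulation; oneapi-hpckit-local is a placeholder tag) is to force an amd64 build and run:

docker build --platform linux/amd64 -t oneapi-hpckit-local .
docker run --platform linux/amd64 -it oneapi-hpckit-local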

Link error: `llvm-foreach: No such file or directory`

I'm having trouble compiling (linking) code within the container; I see the following error:

[ 44%] Linking CXX executable KokkosExample_query_device
cd /build-test/example/query_device && /usr/local/bin/cmake -E cmake_link_script CMakeFiles/KokkosExample_query_device.dir/link.txt --verbose=1
/opt/intel/oneapi/compiler/2022.0.1/linux/bin/dpcpp -O2 -g -DNDEBUG -DKOKKOS_DEPENDENCE -fsycl -fno-sycl-id-queries-fit-in-int -fsycl-dead-args-optimization -fsycl-targets=spir64_gen -Xsycl-target-backend "-device gen12lp" CMakeFiles/KokkosExample_query_device.dir/query_device.cpp.o -o KokkosExample_query_device  ../../containers/src/libkokkoscontainers.a ../../core/src/libkokkoscore.a /usr/lib/x86_64-linux-gnu/libdl.so 
llvm-foreach: No such file or directory
dpcpp: error: gen compiler command failed with exit code 1 (use -v to see invocation)
example/query_device/CMakeFiles/KokkosExample_query_device.dir/build.make:99: recipe for target 'example/query_device/KokkosExample_query_device' failed
make[2]: *** [example/query_device/KokkosExample_query_device] Error 1
make[2]: Leaving directory '/build-test'

Steps to reproduce:

$ docker run -ti --device=/dev/dri intel/oneapi-basekit /bin/bash
$ git clone https://github.com/kokkos/kokkos.git -b develop
$ cmake -Bbuild-test -DKokkos_ARCH_INTEL_GEN12LP=ON -DKokkos_ENABLE_SYCL=ON -DCMAKE_CXX_STANDARD=17 -DCMAKE_CXX_COMPILER=dpcpp -DKokkos_ENABLE_EXAMPLES=ON kokkos
$ cmake --build build-test

I also tried adding llvm-foreach to the PATH as suggested by /opt/intel/oneapi/compiler/2022.0.1/linux/bin-llvm/README.txt but it didn't fix the issue.

Mentioning kokkos/kokkos#4679 and intel/llvm#2583 for reference.

entrypoint fails with non-root user

Launching the latest version of the containers fails if a user other than root is used.
The entrypoint script lives in /root/ and is unreachable for normal users, so the command fails.
Using --entrypoint '' is a workaround, but then the environment is not set up correctly (maybe the issue in #14?).

The script to source should be placed somewhere reachable for all users (/tmp, or with the other Intel stuff in /opt, with correct permissions).

I personally preferred the previous version with ENV commands in the Dockerfile: even if verbose, it left the entrypoint free for other uses in containers built on these as a base.
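Until the script is relocated, one workaround sketch (assuming the /opt/intel/oneapi/setvars.sh path used elsewhere in these images; the uid/gid values are placeholders) is to bypass the entrypoint and source the environment manually:

docker run -u 1000:100 --entrypoint /bin/bash -it "$image" -c '. /opt/intel/oneapi/setvars.sh && exec bash'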

failed to build docker image with hpckit (Dockerfile.ubuntu-22.04)

I got the following error message:

=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 9.26kB 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/ubuntu:22.04 0.8s
=> [internal] load build context 0.0s
=> => transferring context: 74.36kB 0.0s
=> CACHED [1/9] FROM docker.io/library/ubuntu:22.04@sha256:2b7412e6465c3c7fc5bb21d3e6f1917c167358449f 0.0s
=> [2/9] COPY third-party-programs.txt / 0.0s
=> [3/9] RUN apt-get update && apt-get upgrade -y && DEBIAN_FRONTEND=noninteractive apt-get instal 35.6s
=> [4/9] RUN curl -fsSL https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS-2023 0.4s
=> [5/9] RUN echo "deb [signed-by=/usr/share/keyrings/intel-oneapi-archive-keyring.gpg] https://apt.r 0.2s
=> [6/9] RUN apt-get update && apt-get upgrade -y && DEBIAN_FRONTEND=noninteractive apt-get install 9.5s
=> [7/9] RUN curl -fsSL https://repositories.intel.com/graphics/intel-graphics.key | gpg --dearmor | 0.6s
=> [8/9] RUN echo "deb [signed-by=/usr/share/keyrings/intel-graphics-archive-keyring.gpg arch=amd64] 0.3s
=> ERROR [9/9] RUN apt-get update && apt-get upgrade -y && DEBIAN_FRONTEND=noninteractive apt-get i 8.9s

[9/9] RUN apt-get update && apt-get upgrade -y && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends ca-certificates build-essential pkg-config gnupg libarchive13 openssh-server openssh-client wget net-tools git intel-basekit-getting-started intel-oneapi-advisor intel-oneapi-ccl-devel intel-oneapi-common-licensing intel-oneapi-common-vars intel-oneapi-compiler-dpcpp-cpp intel-oneapi-dal-devel intel-oneapi-dev-utilities intel-oneapi-dnnl-devel intel-oneapi-dpcpp-debugger intel-oneapi-ipp-devel intel-oneapi-ippcp-devel intel-oneapi-libdpstd-devel intel-oneapi-mkl-devel intel-oneapi-tbb-devel intel-oneapi-vtune intel-level-zero-gpu level-zero intel-hpckit-getting-started intel-oneapi-clck intel-oneapi-compiler-dpcpp-cpp-and-cpp-classic intel-oneapi-compiler-fortran intel-oneapi-inspector intel-oneapi-itac intel-oneapi-mpi-devel && rm -rf /var/lib/apt/lists/*:
#13 0.420 Get:1 https://apt.repos.intel.com/oneapi all InRelease [4451 B]
#13 0.491 Get:2 https://repositories.intel.com/graphics/ubuntu jammy InRelease [22.4 kB]
#13 0.511 Get:3 https://apt.repos.intel.com/oneapi all/main all Packages [103 kB]
#13 0.545 Get:4 http://ports.ubuntu.com/ubuntu-ports jammy InRelease [270 kB]
#13 0.680 Get:5 https://repositories.intel.com/graphics/ubuntu jammy/flex amd64 Packages [98.1 kB]
#13 1.286 Get:6 http://ports.ubuntu.com/ubuntu-ports jammy-updates InRelease [119 kB]
#13 1.462 Get:7 http://ports.ubuntu.com/ubuntu-ports jammy-backports InRelease [109 kB]
#13 1.626 Get:8 http://ports.ubuntu.com/ubuntu-ports jammy-security InRelease [110 kB]
#13 1.790 Get:9 http://ports.ubuntu.com/ubuntu-ports jammy/universe arm64 Packages [17.2 MB]
#13 4.835 Get:10 http://ports.ubuntu.com/ubuntu-ports jammy/restricted arm64 Packages [24.2 kB]
#13 4.838 Get:11 http://ports.ubuntu.com/ubuntu-ports jammy/multiverse arm64 Packages [224 kB]
#13 4.860 Get:12 http://ports.ubuntu.com/ubuntu-ports jammy/main arm64 Packages [1758 kB]
#13 5.090 Get:13 http://ports.ubuntu.com/ubuntu-ports jammy-updates/main arm64 Packages [1276 kB]
#13 5.580 Get:14 http://ports.ubuntu.com/ubuntu-ports jammy-updates/multiverse arm64 Packages [27.9 kB]
#13 5.583 Get:15 http://ports.ubuntu.com/ubuntu-ports jammy-updates/universe arm64 Packages [1166 kB]
#13 5.855 Get:16 http://ports.ubuntu.com/ubuntu-ports jammy-updates/restricted arm64 Packages [914 kB]
#13 6.744 Get:17 http://ports.ubuntu.com/ubuntu-ports jammy-backports/universe arm64 Packages [30.7 kB]
#13 6.747 Get:18 http://ports.ubuntu.com/ubuntu-ports jammy-backports/main arm64 Packages [77.8 kB]
#13 6.767 Get:19 http://ports.ubuntu.com/ubuntu-ports jammy-security/main arm64 Packages [1012 kB]
#13 6.898 Get:20 http://ports.ubuntu.com/ubuntu-ports jammy-security/restricted arm64 Packages [906 kB]
#13 7.025 Get:21 http://ports.ubuntu.com/ubuntu-ports jammy-security/multiverse arm64 Packages [23.4 kB]
#13 7.026 Get:22 http://ports.ubuntu.com/ubuntu-ports jammy-security/universe arm64 Packages [904 kB]
#13 7.461 Fetched 26.4 MB in 7s (3639 kB/s)
#13 7.461 Reading package lists...
#13 7.837 Reading package lists...
#13 8.183 Building dependency tree...
#13 8.263 Reading state information...
#13 8.275 Calculating upgrade...
#13 8.359 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
#13 8.382 Reading package lists...
#13 8.730 Building dependency tree...
#13 8.810 Reading state information...
#13 8.816 E: Unable to locate package intel-oneapi-advisor
#13 8.816 E: Unable to locate package intel-oneapi-ccl-devel
#13 8.816 E: Unable to locate package intel-oneapi-compiler-dpcpp-cpp
#13 8.816 E: Unable to locate package intel-oneapi-dal-devel
#13 8.816 E: Unable to locate package intel-oneapi-dev-utilities
#13 8.816 E: Unable to locate package intel-oneapi-dnnl-devel
#13 8.816 E: Unable to locate package intel-oneapi-dpcpp-debugger
#13 8.816 E: Unable to locate package intel-oneapi-ipp-devel
#13 8.816 E: Unable to locate package intel-oneapi-ippcp-devel
#13 8.816 E: Unable to locate package intel-oneapi-libdpstd-devel
#13 8.816 E: Unable to locate package intel-oneapi-mkl-devel
#13 8.816 E: Unable to locate package intel-oneapi-tbb-devel
#13 8.816 E: Unable to locate package intel-oneapi-vtune
#13 8.816 E: Unable to locate package intel-level-zero-gpu
#13 8.816 E: Unable to locate package level-zero
#13 8.816 E: Unable to locate package intel-oneapi-clck
#13 8.816 E: Unable to locate package intel-oneapi-compiler-dpcpp-cpp-and-cpp-classic
#13 8.816 E: Unable to locate package intel-oneapi-compiler-fortran
#13 8.816 E: Unable to locate package intel-oneapi-inspector
#13 8.816 E: Unable to locate package intel-oneapi-itac
#13 8.816 E: Unable to locate package intel-oneapi-mpi-devel


executor failed running [/bin/sh -c apt-get update && apt-get upgrade -y && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends ca-certificates build-essential pkg-config gnupg libarchive13 openssh-server openssh-client wget net-tools git intel-basekit-getting-started intel-oneapi-advisor intel-oneapi-ccl-devel intel-oneapi-common-licensing intel-oneapi-common-vars intel-oneapi-compiler-dpcpp-cpp intel-oneapi-dal-devel intel-oneapi-dev-utilities intel-oneapi-dnnl-devel intel-oneapi-dpcpp-debugger intel-oneapi-ipp-devel intel-oneapi-ippcp-devel intel-oneapi-libdpstd-devel intel-oneapi-mkl-devel intel-oneapi-tbb-devel intel-oneapi-vtune intel-level-zero-gpu level-zero intel-hpckit-getting-started intel-oneapi-clck intel-oneapi-compiler-dpcpp-cpp-and-cpp-classic intel-oneapi-compiler-fortran intel-oneapi-inspector intel-oneapi-itac intel-oneapi-mpi-devel && rm -rf /var/lib/apt/lists/*]: exit code: 100

Apt not working in 23.2.1 HPC container

Running apt update in the latest HPC oneAPI container doesn't work. It was fine about a month ago when I last tried, but not now:

$ docker pull intel/oneapi-hpckit:2023.2.1-devel-ubuntu22.04
2023.2.1-devel-ubuntu22.04: Pulling from intel/oneapi-hpckit
Digest: sha256:4cd3cf90f6495d80b42e687d3385fe159bb426e88510064d88e46937959d5cbc
Status: Image is up to date for intel/oneapi-hpckit:2023.2.1-devel-ubuntu22.04
docker.io/intel/oneapi-hpckit:2023.2.1-devel-ubuntu22.04

$ docker run -it intel/oneapi-hpckit:2023.2.1-devel-ubuntu22.04 bash
root@55f6e30993a5:/# apt update
Get:1 https://repositories.intel.com/graphics/ubuntu jammy InRelease [22.4 kB]
Get:2 https://apt.repos.intel.com/oneapi all InRelease [4451 B]
Get:3 http://security.ubuntu.com/ubuntu jammy-security InRelease [110 kB]
Get:4 http://archive.ubuntu.com/ubuntu jammy InRelease [270 kB]
Err:2 https://apt.repos.intel.com/oneapi all InRelease
  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY BAC6F0C353D04109
Get:5 http://archive.ubuntu.com/ubuntu jammy-updates InRelease [119 kB]
........
Reading package lists... Done
W: GPG error: https://apt.repos.intel.com/oneapi all InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY BAC6F0C353D04109
E: The repository 'https://apt.repos.intel.com/oneapi all InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
root@55f6e30993a5:/#
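A possible workaround while the image is broken (an assumption: this re-imports Intel's current oneAPI signing key over the keyring path used in these Dockerfiles; adjust the path to match your image) is:

curl -fsSL https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB | gpg --dearmor > /usr/share/keyrings/intel-oneapi-archive-keyring.gpg
apt update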

Compilers are not working correctly in hpckit

I am building a Docker container, and the compilers aren't available by default; I have to source setvars.sh to get them to work. Here is a snippet of my Dockerfile:

FROM intel/oneapi-hpckit:2021.4-devel-ubuntu18.04 as builder
LABEL maintainer "Tom Robinson"

#-------------------------------------------------------------
### Set up packages needed
RUN apt-get update -y
RUN apt-get install -y git
RUN apt-get install -y patch
RUN apt-get install -y wget
RUN apt-get install -y curl
RUN apt-get install -y m4
RUN apt-get install -y sudo

## Set compilers
ENV FC=ifort
ENV CC=icc
ENV CX=icx
ENV build=/opt
ENV IO_LIBS=${build}/io_libs
## Build zlib and szip and curl
RUN cd $build \
&& zlib="zlib-1.2.11" \
&& rm -rf zlib* \
&& wget http://www.zlib.net/zlib-1.2.11.tar.gz \
&& tar -zxvf zlib-1.2.11.tar.gz \
&& cd $zlib \
&& sudo ./configure --prefix=${IO_LIBS} \
&& make \
&& make -j 20 install
ENV CC "icc -fPIC"

RUN cd $build \
&& szip="szip-2.1.1" \
&& rm -rf szip* \
&& wget https://support.hdfgroup.org/ftp/lib-external/szip/2.1.1/src/szip-2.1.1.tar.gz \
&& tar xzf szip-2.1.1.tar.gz \
&& cd $szip \
&& ./configure FC=ifort CC=icc --prefix=${IO_LIBS} CPPDEFS="-fPIC" \
&& make \
&& make -j 20 install

This crashes when I try to build it. If I add . /opt/intel/oneapi/setvars.sh \ to the last RUN line:

RUN  . /opt/intel/oneapi/setvars.sh \
&& cd $build \
&& szip="szip-2.1.1" \
&& rm -rf szip* \
&& wget https://support.hdfgroup.org/ftp/lib-external/szip/2.1.1/src/szip-2.1.1.tar.gz \
&& tar xzf szip-2.1.1.tar.gz \
&& cd $szip \
&& ./configure FC=ifort CC=icc --prefix=${IO_LIBS} CPPDEFS="-fPIC" \
&& make \
&& make -j 20 install

Then it builds. Is this the expected default behavior? Is there a way to enable the compilers by default?
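One pattern that avoids repeating the compiler setup by hand (a sketch; Docker runs each RUN in a fresh /bin/sh, so the oneAPI environment has to be picked up per RUN) is to switch the build shell to bash and source setvars.sh at the start of each RUN:

SHELL ["/bin/bash", "-c"]
RUN source /opt/intel/oneapi/setvars.sh \
 && cd $build \
 && ./configure FC=ifort CC=icc --prefix=${IO_LIBS} \
 && make -j 20 install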

mpirun BAD TERMINATION (Segmentation fault) when my application needs more memory

I compile a simulation application in an Intel HPC container built from the Dockerfile below:

FROM intel/oneapi-hpckit:2023.0.0-devel-ubuntu20.04
ENV LANG=C.UTF-8 LC_ALL=C.UTF-8 \
    UBUNTU_CODENAME=focal \
    UBUNTU_MIRROR=http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ \
    GTC_VENDOR=intel \
    GTC_HOME=/opt/gtc

RUN echo "deb $UBUNTU_MIRROR $UBUNTU_CODENAME main restricted universe multiverse" > /etc/apt/sources.list \
    && echo "deb $UBUNTU_MIRROR $UBUNTU_CODENAME-updates main restricted universe multiverse" >> /etc/apt/sources.list \
    && apt-get update \
    && apt-get install -y --no-install-recommends --no-install-suggests \
        make libncurses5-dev python \
        gfortran \
        zlib1g-dev libcurl4-openssl-dev \
    && ln -s mpif90 /opt/intel/oneapi/mpi/2021.8.0/bin/mpifort \
    && apt-get -y autoremove && apt-get clean \
    && rm -rf /var/lib/apt/lists/* \
    && useradd -u 1000 -g 100 -m -d ${GTC_HOME} gtc
USER gtc
WORKDIR ${GTC_HOME}
CMD ["/bin/bash"]

Then I run: docker run --rm -i -t --name gtc_worker --shm-size=64gb XXX/image:tag bash, where --shm-size= is used to solve a Bus error.

The app works well when the grids are small, like 50x300, but it crashes when the grids are 50x310.

After setting I_MPI_DEBUG=10, I get some info:

[0] MPI startup(): shm segment size (128 MB per rank) * (32 local ranks) = 4125 MB total
[16] impi_shm_mbind_local(): mbind(p=0x7f92665cb000, size=1073741824) error=1 "Operation not permitted"
[0] impi_shm_mbind_local(): mbind(p=0x7f541cd0b000, size=1073741824) error=1 "Operation not permitted"
[0] MPI startup(): libfabric version: 1.13.2rc1-impi
[0] MPI startup(): max number of MPI_Request per vci: 67108864 (pools: 1)
[0] MPI startup(): libfabric provider: tcp;ofi_rxm

[0] MPI startup(): threading: mode: direct
[0] MPI startup(): threading: vcis: 1
[0] MPI startup(): threading: app_threads: -1
[0] MPI startup(): threading: runtime: generic
[0] MPI startup(): threading: progress_threads: 0
[0] MPI startup(): threading: async_progress: 0
[0] MPI startup(): threading: lock_level: global
[0] MPI startup(): tag bits available: 19 (TAG_UB value: 524287) 
[0] MPI startup(): source bits available: 20 (Maximal number of rank: 1048575) 
[0] MPI startup(): Rank    Pid      Node name     Pin cpu
[0] MPI startup(): 0       439      271de697c521  {0,1,48}

[0] MPI startup(): 30      469      271de697c521  {45,46,93}
[0] MPI startup(): 31      470      271de697c521  {47,94,95}
[0] MPI startup(): I_MPI_ROOT=/opt/intel/oneapi/mpi/2021.8.0
[0] MPI startup(): I_MPI_MPIRUN=mpirun
[0] MPI startup(): I_MPI_HYDRA_TOPOLIB=hwloc
[0] MPI startup(): I_MPI_INTERNAL_MEM_POLICY=default
[0] MPI startup(): I_MPI_DEBUG=10

===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   RANK 0 PID 439 RUNNING AT 271de697c521
=   KILLED BY SIGNAL: 11 (Segmentation fault)
===================================================================================

===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   RANK 16 PID 455 RUNNING AT 271de697c521
=   KILLED BY SIGNAL: 9 (Killed)
===================================================================================

Maybe changing the shm segment size (128 MB per rank) would solve the issue?
If so, how can I do that?
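One thing worth trying first (an assumption based on the impi_shm_mbind_local ... "Operation not permitted" lines above: mbind typically needs the CAP_SYS_NICE capability inside a container) is to grant that capability at startup:

docker run --rm -i -t --name gtc_worker --shm-size=64gb --cap-add=SYS_NICE XXX/image:tag bash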

Add support for Ubuntu 20.04

Please add a Dockerfile for building Ubuntu 20.04 Docker images.
Note that Intel GPUs are not well supported under Ubuntu 18.04, so an Ubuntu 18.04 Docker image is rather useless for anyone trying to use a GPU.

dpcpp not found

Pulled down the Docker image of the oneAPI Base Toolkit today (intel/oneapi-basekit:latest), and running the dpcpp command says "not found".
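If the image is recent, this may be because the standalone dpcpp driver was deprecated in favor of icpx (an assumption about the compiler layout in the latest image; check inside your container):

which icpx && icpx --version
icpx -fsycl hello.cpp -o hello   # hello.cpp is a placeholder source file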

Reduce image size

Hi,

I would like to use the oneapi-hpckit container for my GitLab CI pipeline, but your image is too large (~23 GB) for my registry.

Would it be possible to have a minimalist container with only the compiler (and the necessary libs) and not all the tools?

Thanks
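Until an official slim image exists, a sketch of a smaller custom image (assuming the apt repository and package names that appear in this repository's Dockerfiles, e.g. intel-oneapi-compiler-dpcpp-cpp) might look like:

FROM ubuntu:22.04
RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates curl gnupg \
 && curl -fsSL https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB | gpg --dearmor > /usr/share/keyrings/intel-oneapi-archive-keyring.gpg \
 && echo "deb [signed-by=/usr/share/keyrings/intel-oneapi-archive-keyring.gpg] https://apt.repos.intel.com/oneapi all main" > /etc/apt/sources.list.d/oneapi.list \
 && apt-get update && apt-get install -y --no-install-recommends intel-oneapi-compiler-dpcpp-cpp \
 && rm -rf /var/lib/apt/lists/*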

Cannot build Dockerfile.ubuntu-18.04

The following packages have unmet dependencies:
intel-level-zero-gpu : Depends: intel-igc-opencl (= 1.0.8517) but 1.0.1629709536 is to be installed
intel-opencl : Depends: intel-igc-opencl (= 1.0.8517) but 1.0.1629709536 is to be installed
E: Unable to correct problems, you have held broken packages.

sycl-ls for gpus fails on oneapi-basekit 20.04

The sycl-ls output differs between the 20.04 and 22.04 variants of the oneapi-basekit container:

$ sudo docker run -v /dev/dri/by-path:/dev/dri/by-path:rw --device /dev/dri --rm -it intel/oneapi-basekit:2024.0.1-devel-ubuntu20.04 sycl-ls
[opencl:acc:0] Intel(R) FPGA Emulation Platform for OpenCL(TM), Intel(R) FPGA Emulation Device OpenCL 1.2  [2023.16.12.0.12_195853.xmain-hotfix]
[opencl:cpu:1] Intel(R) OpenCL, Intel(R) Xeon(R) Platinum 8480+ OpenCL 3.0 (Build 0) [2023.16.12.0.12_195853.xmain-hotfix]

$ sudo docker run -v /dev/dri/by-path:/dev/dri/by-path:rw --device /dev/dri --rm -it intel/oneapi-basekit:2024.0.1-devel-ubuntu22.04 sycl-ls
[opencl:acc:0] Intel(R) FPGA Emulation Platform for OpenCL(TM), Intel(R) FPGA Emulation Device OpenCL 1.2  [2023.16.12.0.12_195853.xmain-hotfix]
[opencl:cpu:1] Intel(R) OpenCL, Intel(R) Xeon(R) Platinum 8480+ OpenCL 3.0 (Build 0) [2023.16.12.0.12_195853.xmain-hotfix]
[opencl:gpu:2] Intel(R) OpenCL Graphics, Intel(R) Data Center GPU Max 1100 OpenCL 3.0 NEO  [23.35.27191.42]
[opencl:gpu:3] Intel(R) OpenCL Graphics, Intel(R) Data Center GPU Max 1100 OpenCL 3.0 NEO  [23.35.27191.42]
[opencl:gpu:4] Intel(R) OpenCL Graphics, Intel(R) Data Center GPU Max 1100 OpenCL 3.0 NEO  [23.35.27191.42]
[opencl:gpu:5] Intel(R) OpenCL Graphics, Intel(R) Data Center GPU Max 1100 OpenCL 3.0 NEO  [23.35.27191.42]
[ext_oneapi_level_zero:gpu:0] Intel(R) Level-Zero, Intel(R) Data Center GPU Max 1100 1.3 [1.3.27191]
[ext_oneapi_level_zero:gpu:1] Intel(R) Level-Zero, Intel(R) Data Center GPU Max 1100 1.3 [1.3.27191]
[ext_oneapi_level_zero:gpu:2] Intel(R) Level-Zero, Intel(R) Data Center GPU Max 1100 1.3 [1.3.27191]
[ext_oneapi_level_zero:gpu:3] Intel(R) Level-Zero, Intel(R) Data Center GPU Max 1100 1.3 [1.3.27191]

Could this mean that GPU use with SYCL is not possible in the 20.04 image?

Noticed here: sylabs/singularity#1094

(SYCL) local_accessor problem (no template named 'local_accessor')

I am not sure if this is the right place, but I have a problem with Intel oneAPI. Here is the simple matrix multiplication code I am using:


/*
    Intel oneAPI DPC++
    dpcpp -Qstd=c++17 /EHsc hellocl.cpp -Qtbb opencl.lib -o d.exe

    Microsoft C++ Compiler
    cl /EHsc /std:c++17 hellocl.cpp opencl.lib  /Fe: m.exe

    clang++ -std=c++17 hellocl.cpp -ltbb -lopencl -o c.exe

    g++ -std=c++17 hellocl.cpp -ltbb -lopencl -o c.exe
*/
/*
    1. How to use Random Number Generator
    2. How to use std::vector as 2-dimensional array
    3. How to suppress warning in clang compiler
    4. How to use Tpf_FormatWidth, Tpf_FormatPrecision macros

    dpcpp naive.cpp tbbmalloc.lib -o d.exe
*/

#if defined(__clang__)
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wpass-failed"
#endif

#define Tpf_FormatWidth 6
#define Tpf_FormatPrecision 4

#include "tpf_linear_algebra.hpp"
#include <CL/sycl.hpp>

namespace chr = tpf::chrono_random;
namespace mtx = tpf::matrix;

tpf::sstream stream;
auto& nl = tpf::nl; // single carriage-return
auto& nL = tpf::nL; // two carriage-return

auto& endl = tpf::endl; // sing carriage-return and flush out to console
auto& endL = tpf::endL; // two carriage-returns and flush out to console

void test_random_number_generator()
{
    using element_t = double;
    using matrix_t = mtx::scalable_fast_matrix_t<element_t>;

    size_t N = 10; // number of rows
    size_t M = N; // number of columns

    matrix_t A{ N, M }; // N x M matrix
    matrix_t B{ N, M };

    // we created a random number generator
    // <int> means we generator integer
    // (-10, 10) means from -10 to 10, inclusive
    auto generator = chr::random_generator<int>(-10, 10);

    chr::random_parallel_fill(A.array(), generator);
    chr::random_parallel_fill(B.array(), generator);

    auto C = A * B; // matrix multiplication

    stream << "A = " << nl << A << endl;
    stream << "B = " << nl << B << endl;
    stream << "A x B = " << nl << C << endL;

}

void test_naive_matrix_multiplication()
{
    size_t N = 10;
    size_t M = N;

    using element_t = double;
    using vectrix_t = std::vector<element_t>;

    vectrix_t A(N * M);
    vectrix_t B(N * M);
    vectrix_t C(N * M);
    vectrix_t D(N * M);

    auto generator = chr::random_generator<int>(-10, 10);

    chr::random_parallel_fill(A, generator);
    chr::random_parallel_fill(B, generator);

    auto out_A = mtx::create_formatter(A, N, M);
    auto out_B = mtx::create_formatter(B, N, M);
    auto out_C = mtx::create_formatter(C, N, M);
    auto out_D = mtx::create_formatter(D, N, M);

    stream << "A = " << nl << out_A() << endl;
    stream << "B = " << nl << out_B() << endl;

    auto idx_A = mtx::create_indexer(A, N, M);
    auto idx_B = mtx::create_indexer(B, N, M);
    auto idx_C = mtx::create_indexer(C, N, M);

    for (int i = 0; i < (int)N; ++i)
    {
        for (int j = 0; j < (int)M; ++j)
        {
            for (int k = 0; k < (int)M; ++k)
                idx_C(i, j) += idx_A(i, k) * idx_B(k, j); // matrix multiplication
        }
    }



    stream << "CPU: A x B = " << nl << out_C() << endl;

    
    {
        sycl::queue queue{ sycl::gpu_selector{} };

        sycl::buffer buf_A{ &A[0], sycl::range{N, M} };
        sycl::buffer buf_B{ &B[0], sycl::range{N, M} };
        sycl::buffer buf_D{ &D[0], sycl::range{N, M} };

        queue.submit([&](sycl::handler& cgh)
            {
                auto a = buf_A.get_access<sycl::access::mode::read>(cgh);
                auto b = buf_B.get_access<sycl::access::mode::read>(cgh);
                auto d = buf_D.get_access<sycl::access::mode::read_write>(cgh);

                constexpr int tile_size = 16;
                sycl::local_accessor<int> tileA{ tile_size, cgh };

                cgh.parallel_for(
                    sycl::nd_range<2>{ {N, N}, {1, tile_size} }, [=](sycl::nd_item<2> it)
                    {
                        // Indices in the global index space:
                        int m = it.get_global_id()[0];
                        int n = it.get_global_id()[1];
                        // Index in the local index space:
                        int i = it.get_local_id()[1];

                        size_t sum = 0;
                        for (int kk = 0; kk < 496; kk += tile_size)
                        {
                            // Load the matrix tile from matrix A, and synchronize
                            // to ensure all work-items have a consistent view
                            // of the matrix tile in local memory.
                            tileA[i] = a[m][kk + i];
                            it.barrier();

                            // Perform computation using the local memory tile, and
                            // matrix B in global memory.
                            for (int k = 0; k < tile_size; k++)
                                sum += tileA[k] * b[kk + k][n];

                            // After computation, synchronize again, to ensure all
                            // reads from the local memory tile are complete.
                            it.barrier();
                        }

                        // Write the final result to global memory.
                        d[m][n] = sum;
                    });
            });

        // when this block goes out of scope,
        // the destructor of buf_D waits until the buffer is released by the queue
        // and copies the result back to its host memory D
    }

    stream << "GPU: A x B = " << nl << out_D() << endl;
}

#if defined(__clang__)
#pragma clang diagnostic pop
#endif 

int main()
{
    // test_random_number_generator();

    test_naive_matrix_multiplication();
}

Here is the error I am getting:

error: no template named 'local_accessor' in namespace 'sycl'; did you mean 'host_accessor'?
sycl::local_accessor<int> tileA{ tile_size, cgh };

I am using intel oneapi dpc++ compiler.

Is this the wrong way to declare a local_accessor, or does the Intel oneAPI SYCL implementation not support it?
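If the compiler in use predates sycl::local_accessor (it arrived with SYCL 2020 support in newer DPC++ releases; an assumption about the version installed here), the older SYCL 1.2.1 spelling of a local accessor should work instead:

sycl::accessor<int, 1, sycl::access::mode::read_write, sycl::access::target::local>
    tileA{ sycl::range<1>(tile_size), cgh };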

hpckit 2024.1.0 Ubuntu 22.04 GPG error

Hi! I am using the intel/hpckit:2024.1.0-devel-ubuntu22.04 container as my base build image. It was working just fine yesterday, but I got this error today:

GPG error: https://repositories.intel.com/gpu/ubuntu jammy/lts/2350 InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 28DA432DAAC8BAEA

The latest intel/hpckit:2024.1.1-devel-ubuntu22.04 does not have this issue. It may be a problem with the public key for that particular build. Can you please take a look at it? That container is the base image for many of our software packages.

Thank you very much!
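Until the 2024.1.0 image is regenerated, a possible workaround in a derived Dockerfile (reusing the key URL and keyring path from the Dockerfiles in this repository) is to re-import the graphics repository key before the first apt-get update:

RUN curl -fsSL https://repositories.intel.com/gpu/intel-graphics.key | gpg --dearmor > /usr/share/keyrings/intel-graphics-archive-keyring.gpg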

aocl install failed within container

aocl install intel_s10sx_pac
fails because the install script is not valid for use within a container.

docker image : intel/oneapi-basekit (Ubuntu 18.04)
host environment:
OS : Ubuntu 18.04.06 LTS (kernel : 5.4.0-99-generic)
FPGA : Intel FPGA PAC D5005 (intel_s10sx_pac)
docker version : 20.10.7
GPU : NVIDIA A100 (driver ver. 495.29.05, CUDA v11.5) // for just in case
nvidia-docker : ver. 20.10.7 // for just in case
I set up the host to use OpenCL and DPC++ for the Intel FPGA PAC D5005.
docker command
$ sudo docker run -it --gpus all --device=/dev/dri --privileged intel/oneapi-basekit bash

First, I met the errors below:
lspci : command not found
Error: unsupported OS: Ubuntu 18.04 kernel 5.4.0-99-generic

To fix them, I:
  • installed pciutils using apt install pciutils;
  • deleted the kernel version check, and the "sudo" keywords, in the "install" script under .../oclfpga/board/intel_s10sx_pac/linux/libexec/
// I might have installed the fpga addon via apt, but I'm not 100% sure.

Still, the setup_permissions.sh script failed, which led to the FCD and board packages being removed. (I'm not sure, but some of the opae packages seem to be installed.)
I copied the FCD from the host and modified the paths of the .so files.

Now I can run a simple pre-compiled vector-add program on the FPGA PAC within the basekit container.

Can you provide a valid aocl script for containers?
Thank you.

Please provide a simple Intel oneAPI HPC Toolkit MPI example

Basic multi-slot Intel MPI functionality is not working on a VM host; please provide documentation for a simple, functional MPI example.
Host operating system Ubuntu 20.04
Docker Engine Docker CE 20.10.6
Hardware CPU only
Image version "intel/oneapi-hpckit:devel-ubuntu18.04"
"intel/oneapi-hpckit@sha256:cd22c32dd04beab2ae21eb8f13402a79a7c2a91b2afc787905230099160c2bbe"

Reproduction steps

on a host VM, run the image as Docker container

docker run -it --rm intel/oneapi-hpckit@sha256:cd22c32dd04beab2ae21eb8f13402a79a7c2a91b2afc787905230099160c2bbe bash
. /opt/intel/oneapi/mpi/2021.2.0/env/vars.sh
env # confirm mpi root set, libfabric/bin in PATH

## single slot test works
cd /opt/intel/oneapi/mpi/2021.2.0/test
mpicc test.c -o test
echo $?  # expect 0 no error, we do see code 0 no compilation error
mpirun -n 1 ./test
# expect output like below
#     Hello world: rank 0 of 1 running on <container_id>
echo $?  # expect 0 success, we do see code 0

## multiple slot test fails
# in the same docker container, same directory, continue execution
mpicc -g test.c -o testdebug
mpirun --debug -n 2 ./testdebug
# failing output like below
echo $? # 255 failure exit code
[mpiexec@5fb8dcd9705b] Launch arguments: /opt/intel/oneapi/mpi/2021.2.0//bin//hydra_bstrap_proxy --upstream-host 5fb8dcd9705b --upstream-port 38303 --pgid 0 --launcher ssh --launcher-number 0 --base-path /opt/intel/oneapi/mpi/2021.2.0//bin/ --tree-width 16 --tree-level 1 --time-left -1 --launch-type 2 --debug --proxy-id 0 --node-id 0 --subtree-size 1 --upstream-fd 7 /opt/intel/oneapi/mpi/2021.2.0//bin//hydra_pmi_proxy --usize -1 --auto-cleanup 1 --abort-signal 9
[proxy:0:0@5fb8dcd9705b] pmi cmd from fd 6: cmd=init pmi_version=1 pmi_subversion=1
[proxy:0:0@5fb8dcd9705b] PMI response: cmd=response_to_init pmi_version=1 pmi_subversion=1 rc=0
[proxy:0:0@5fb8dcd9705b] pmi cmd from fd 6: cmd=get_maxes
[proxy:0:0@5fb8dcd9705b] PMI response: cmd=maxes kvsname_max=256 keylen_max=64 vallen_max=4096
[proxy:0:0@5fb8dcd9705b] pmi cmd from fd 6: cmd=get_appnum
[proxy:0:0@5fb8dcd9705b] PMI response: cmd=appnum appnum=0
[proxy:0:0@5fb8dcd9705b] pmi cmd from fd 6: cmd=get_my_kvsname
[proxy:0:0@5fb8dcd9705b] PMI response: cmd=my_kvsname kvsname=kvs_92_0
[proxy:0:0@5fb8dcd9705b] pmi cmd from fd 6: cmd=get kvsname=kvs_92_0 key=PMI_process_mapping
[proxy:0:0@5fb8dcd9705b] PMI response: cmd=get_result rc=0 msg=success value=(vector,(0,1,2))
[proxy:0:0@5fb8dcd9705b] pmi cmd from fd 9: cmd=init pmi_version=1 pmi_subversion=1
[proxy:0:0@5fb8dcd9705b] PMI response: cmd=response_to_init pmi_version=1 pmi_subversion=1 rc=0
[proxy:0:0@5fb8dcd9705b] pmi cmd from fd 9: cmd=get_maxes
[proxy:0:0@5fb8dcd9705b] PMI response: cmd=maxes kvsname_max=256 keylen_max=64 vallen_max=4096
[proxy:0:0@5fb8dcd9705b] pmi cmd from fd 6: cmd=put kvsname=kvs_92_0 key=-bcast-1-0 value=2F6465762F73686D2F496E74656C5F4D50495F414B6E366257
[proxy:0:0@5fb8dcd9705b] PMI response: cmd=put_result rc=0 msg=success
[proxy:0:0@5fb8dcd9705b] pmi cmd from fd 9: cmd=get_appnum
[proxy:0:0@5fb8dcd9705b] PMI response: cmd=appnum appnum=0
[proxy:0:0@5fb8dcd9705b] pmi cmd from fd 6: cmd=barrier_in
[proxy:0:0@5fb8dcd9705b] pmi cmd from fd 9: cmd=get_my_kvsname
[proxy:0:0@5fb8dcd9705b] PMI response: cmd=my_kvsname kvsname=kvs_92_0
[proxy:0:0@5fb8dcd9705b] pmi cmd from fd 9: cmd=get kvsname=kvs_92_0 key=PMI_process_mapping
[proxy:0:0@5fb8dcd9705b] PMI response: cmd=get_result rc=0 msg=success value=(vector,(0,1,2))
[proxy:0:0@5fb8dcd9705b] pmi cmd from fd 9: cmd=barrier_in
[proxy:0:0@5fb8dcd9705b] PMI response: cmd=barrier_out
[proxy:0:0@5fb8dcd9705b] PMI response: cmd=barrier_out
[proxy:0:0@5fb8dcd9705b] pmi cmd from fd 9: cmd=get kvsname=kvs_92_0 key=-bcast-1-0
[proxy:0:0@5fb8dcd9705b] PMI response: cmd=get_result rc=0 msg=success value=2F6465762F73686D2F496E74656C5F4D50495F414B6E366257

===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   RANK 0 PID 96 RUNNING AT 5fb8dcd9705b
=   KILLED BY SIGNAL: 9 (Killed)
===================================================================================

===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   RANK 1 PID 97 RUNNING AT 5fb8dcd9705b
=   KILLED BY SIGNAL: 7 (Bus error)
===================================================================================

output of env

CONDA_SHLVL=1
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:
LD_LIBRARY_PATH=/opt/intel/oneapi/mpi/2021.2.0//libfabric/lib:/opt/intel/oneapi/mpi/2021.2.0//lib/release:/opt/intel/oneapi/mpi/2021.2.0//lib:/opt/intel/oneapi/vpl/2021.2.2/lib:/opt/intel/oneapi/tbb/2021.2.0/env/../lib/intel64/gcc4.8:/opt/intel/oneapi/mpi/2021.2.0//libfabric/lib:/opt/intel/oneapi/mpi/2021.2.0//lib/release:/opt/intel/oneapi/mpi/2021.2.0//lib:/opt/intel/oneapi/mkl/latest/lib/intel64:/opt/intel/oneapi/itac/2021.2.0/slib:/opt/intel/oneapi/ippcp/2021.2.0/lib/intel64:/opt/intel/oneapi/ipp/2021.2.0/lib/intel64:/opt/intel/oneapi/dnnl/2021.2.0/cpu_dpcpp_gpu_dpcpp/lib:/opt/intel/oneapi/debugger/10.1.1/dep/lib:/opt/intel/oneapi/debugger/10.1.1/libipt/intel64/lib:/opt/intel/oneapi/debugger/10.1.1/gdb/intel64/lib:/opt/intel/oneapi/dal/2021.2.0/lib/intel64:/opt/intel/oneapi/compiler/2021.2.0/linux/lib:/opt/intel/oneapi/compiler/2021.2.0/linux/lib/x64:/opt/intel/oneapi/compiler/2021.2.0/linux/lib/emu:/opt/intel/oneapi/compiler/2021.2.0/linux/lib/oclfpga/host/linux64/lib:/opt/intel/oneapi/compiler/2021.2.0/linux/lib/oclfpga/linux64/lib:/opt/intel/oneapi/compiler/2021.2.0/linux/compiler/lib/intel64_lin:/opt/intel/oneapi/compiler/2021.2.0/linux/compiler/lib:/opt/intel/oneapi/ccl/2021.2.0/lib/cpu_gpu_dpcpp
CONDA_EXE=/opt/intel/oneapi/intelpython/latest/bin/conda
DAL_MINOR_BINARY=1
SETVARS_VARS_PATH=/opt/intel/oneapi/vtune/latest/env/vars.sh
OCL_ICD_FILENAMES=libintelocl_emu.so:libalteracl.so:/opt/intel/oneapi/compiler/2021.2.0/linux/lib/x64/libintelocl.so
VPL_INCLUDE=/opt/intel/oneapi/vpl/2021.2.2/include
VT_MPI=impi4
HOSTNAME=fa6179ca81cd
IPPROOT=/opt/intel/oneapi/ipp/2021.2.0
CLCK_ROOT=/opt/intel/oneapi/clck/2021.2.0
DAL_MAJOR_BINARY=1
ACL_BOARD_VENDOR_PATH=/opt/Intel/OpenCLFPGA/oneAPI/Boards
CONDA_PREFIX=/opt/intel/oneapi/intelpython/latest
VT_SLIB_DIR=/opt/intel/oneapi/itac/2021.2.0/slib
FI_PROVIDER_PATH=/opt/intel/oneapi/mpi/2021.2.0//libfabric/lib/prov:/usr/lib64/libfabric
_CE_M=
INTEL_PYTHONHOME=/opt/intel/oneapi/debugger/10.1.1/dep
CLASSPATH=/opt/intel/oneapi/mpi/2021.2.0//lib/mpi.jar:/opt/intel/oneapi/mpi/2021.2.0//lib/mpi.jar:/opt/intel/oneapi/dal/2021.2.0/lib/onedal.jar
ADVISOR_2021_DIR=/opt/intel/oneapi/advisor/2021.2.0
CCL_CONFIGURATION=cpu_gpu_dpcpp
VPL_BIN=/opt/intel/oneapi/vpl/2021.2.2/bin
PWD=/
DNNLROOT=/opt/intel/oneapi/dnnl/2021.2.0/cpu_dpcpp_gpu_dpcpp
INSPECTOR_2021_DIR=/opt/intel/oneapi/inspector/2021.2.0
HOME=/root
CCL_ROOT=/opt/intel/oneapi/ccl/2021.2.0
CONDA_PYTHON_EXE=/opt/intel/oneapi/intelpython/latest/bin/python
CMAKE_PREFIX_PATH=/opt/intel/oneapi/vpl:/opt/intel/oneapi/tbb/2021.2.0/env/..:/opt/intel/oneapi/dal/2021.2.0
CPATH=/opt/intel/oneapi/mpi/2021.2.0//include:/opt/intel/oneapi/vpl/2021.2.2/include:/opt/intel/oneapi/tbb/2021.2.0/env/../include:/opt/intel/oneapi/mpi/2021.2.0//include:/opt/intel/oneapi/mkl/latest/include:/opt/intel/oneapi/ippcp/2021.2.0/include:/opt/intel/oneapi/ipp/2021.2.0/include:/opt/intel/oneapi/dpl/2021.2.0/linux/include:/opt/intel/oneapi/dnnl/2021.2.0/cpu_dpcpp_gpu_dpcpp/lib:/opt/intel/oneapi/dev-utilities/2021.2.0/include:/opt/intel/oneapi/dal/2021.2.0/include:/opt/intel/oneapi/compiler/2021.2.0/linux/include:/opt/intel/oneapi/ccl/2021.2.0/include/cpu_gpu_dpcpp
INTELFPGAOCLSDKROOT=/opt/intel/oneapi/compiler/2021.2.0/linux/lib/oclfpga
DALROOT=/opt/intel/oneapi/dal/2021.2.0
_CE_CONDA=
NLSPATH=/opt/intel/oneapi/mkl/latest/lib/intel64/locale/%l_%t/%N
VPL_LIB=/opt/intel/oneapi/vpl/2021.2.2/lib
LIBRARY_PATH=/opt/intel/oneapi/mpi/2021.2.0//libfabric/lib:/opt/intel/oneapi/mpi/2021.2.0//lib/release:/opt/intel/oneapi/mpi/2021.2.0//lib:/opt/intel/oneapi/vpl/2021.2.2/lib:/opt/intel/oneapi/tbb/2021.2.0/env/../lib/intel64/gcc4.8:/opt/intel/oneapi/mpi/2021.2.0//libfabric/lib:/opt/intel/oneapi/mpi/2021.2.0//lib/release:/opt/intel/oneapi/mpi/2021.2.0//lib:/opt/intel/oneapi/mkl/latest/lib/intel64:/opt/intel/oneapi/ippcp/2021.2.0/lib/intel64:/opt/intel/oneapi/ipp/2021.2.0/lib/intel64:/opt/intel/oneapi/dnnl/2021.2.0/cpu_dpcpp_gpu_dpcpp/lib:/opt/intel/oneapi/dal/2021.2.0/lib/intel64:/opt/intel/oneapi/compiler/2021.2.0/linux/compiler/lib/intel64_lin:/opt/intel/oneapi/compiler/2021.2.0/linux/lib:/opt/intel/oneapi/clck/2021.2.0/lib/intel64:/opt/intel/oneapi/ccl/2021.2.0/lib/cpu_gpu_dpcpp
IPPCRYPTOROOT=/opt/intel/oneapi/ippcp/2021.2.0
SETVARS_COMPLETED=1
VT_LIB_DIR=/opt/intel/oneapi/itac/2021.2.0/lib
IPP_TARGET_ARCH=intel64
DAALROOT=/opt/intel/oneapi/dal/2021.2.0
VTUNE_PROFILER_2021_DIR=/opt/intel/oneapi/vtune/2021.2.0
INTEL_LICENSE_FILE=/opt/intel/licenses:/root/intel/licenses:/opt/intel/oneapi/clck/2021.2.0/licensing:/opt/intel/licenses:/root/intel/licenses:/Users/Shared/Library/Application Support/Intel/Licenses
CONDA_PROMPT_MODIFIER=(base)
IPPCP_TARGET_ARCH=intel64
VT_ROOT=/opt/intel/oneapi/itac/2021.2.0
TERM=xterm
VT_ADD_LIBS=-ldwarf -lelf -lvtunwind -lm -lpthread
APM=/opt/intel/oneapi/advisor/2021.2.0/perfmodels
VPL_ROOT=/opt/intel/oneapi/vpl/2021.2.2
SHLVL=1
PYTHONPATH=/opt/intel/oneapi/advisor/2021.2.0/pythonapi
MANPATH=/opt/intel/oneapi/mpi/2021.2.0/man:/opt/intel/oneapi/mpi/2021.2.0/man:/opt/intel/oneapi/itac/2021.2.0/man:/opt/intel/oneapi/debugger/10.1.1/documentation/man:/opt/intel/oneapi/clck/2021.2.0/man::/opt/intel/oneapi/compiler/2021.2.0/documentation/en/man/common::::
ONEAPI_ROOT=/opt/intel/oneapi
MKLROOT=/opt/intel/oneapi/mkl/latest
CPLUS_INCLUDE_PATH=/opt/intel/oneapi/clck/2021.2.0/include
PATH=/opt/intel/oneapi/mpi/2021.2.0//libfabric/bin:/opt/intel/oneapi/mpi/2021.2.0//bin:/opt/intel/oneapi/vtune/2021.2.0/bin64:/opt/intel/oneapi/vpl/2021.2.2/bin:/opt/intel/oneapi/mpi/2021.2.0//libfabric/bin:/opt/intel/oneapi/mpi/2021.2.0//bin:/opt/intel/oneapi/mkl/latest/bin/intel64:/opt/intel/oneapi/itac/2021.2.0/bin:/opt/intel/oneapi/itac/2021.2.0/bin:/opt/intel/oneapi/intelpython/latest/bin:/opt/intel/oneapi/intelpython/latest/condabin:/opt/intel/oneapi/inspector/2021.2.0/bin64:/opt/intel/oneapi/dev-utilities/2021.2.0/bin:/opt/intel/oneapi/debugger/10.1.1/gdb/intel64/bin:/opt/intel/oneapi/compiler/2021.2.0/linux/lib/oclfpga/llvm/aocl-bin:/opt/intel/oneapi/compiler/2021.2.0/linux/lib/oclfpga/bin:/opt/intel/oneapi/compiler/2021.2.0/linux/bin/intel64:/opt/intel/oneapi/compiler/2021.2.0/linux/bin:/opt/intel/oneapi/compiler/2021.2.0/linux/ioc/bin:/opt/intel/oneapi/clck/2021.2.0/bin/intel64:/opt/intel/oneapi/advisor/2021.2.0/bin64:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
TBBROOT=/opt/intel/oneapi/tbb/2021.2.0/env/..
PKG_CONFIG_PATH=/opt/intel/oneapi/vtune/2021.2.0/include/pkgconfig/lib64:/opt/intel/oneapi/vpl/2021.2.2/lib/pkgconfig:/opt/intel/oneapi/mkl/latest/tools/pkgconfig:/opt/intel/oneapi/inspector/2021.2.0/include/pkgconfig/lib64:/opt/intel/oneapi/advisor/2021.2.0/include/pkgconfig/lib64:
CONDA_DEFAULT_ENV=base
INFOPATH=/opt/intel/oneapi/debugger/10.1.1/documentation/info/
I_MPI_ROOT=/opt/intel/oneapi/mpi/2021.2.0
_=/usr/bin/env

References consulted
https://github.com/intel/oneapi-containers#intel-oneapi-hpc-toolkit
https://software.intel.com/content/www/us/en/develop/articles/intel-mpi-benchmarks.html
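The RANK 1 ... KILLED BY SIGNAL: 7 (Bus error) failure above is consistent with the container's default 64 MB /dev/shm being too small for Intel MPI's shared-memory segments, the same Bus error that --shm-size works around in the mpirun issue above. A sketch of the multi-slot run with a larger shared-memory allocation:

docker run -it --rm --shm-size=2g intel/oneapi-hpckit@sha256:cd22c32dd04beab2ae21eb8f13402a79a7c2a91b2afc787905230099160c2bbe bash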

Dockerfile is not updated

Thanks for the great work with these containers.

Now, the version of oneAPI was updated to 2023.2.1 two weeks ago, but the Dockerfile update seems to be late.
I remember the Dockerfile was also updated late for 2023.2.0.
When do you plan to release a Dockerfile for 2023.2.1?
