restore docs, tests
culhatsker committed Oct 16, 2024
1 parent 8564632 commit 44232ad
Showing 63 changed files with 6,681 additions and 0 deletions.
81 changes: 81 additions & 0 deletions docs/accelerators.md
# Using OpenVINO™ Toolkit containers with GPU accelerators


Containers can be used to execute inference operations with GPU acceleration or with [virtual devices](https://docs.openvino.ai/nightly/openvino_docs_Runtime_Inference_Modes_Overview.html).

The following prerequisites apply:

- Use a Linux kernel with support for your integrated or discrete GPU. Check the documentation at https://dgpu-docs.intel.com/driver/kernel-driver-types.html.
On a Linux host, confirm that the character device /dev/dri is available (see the quick check after this list).

- On Windows Subsystem for Linux (WSL2), refer to the guidelines at https://docs.openvino.ai/nightly/openvino_docs_install_guides_configurations_for_intel_gpu.html#
Note that on WSL2, the character device `/dev/dxg` must be present.

- The Docker image for the container must include the GPU runtime drivers, as described at https://docs.openvino.ai/nightly/openvino_docs_install_guides_configurations_for_intel_gpu.html#
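
A quick way to confirm these prerequisites on the host (device paths as listed above):
```
ls -l /dev/dri   # Linux host: the card and render nodes should be listed
ls -l /dev/dxg   # WSL2: the virtual GPU device
```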

Once the host and the preconfigured Docker engine are up and running, use the `docker run` parameters as described below.

## Linux

The command below should report both CPU and GPU devices available for inference execution:
```
export IMAGE=openvino/ubuntu20_dev:2023.0.0
docker run -it --device /dev/dri --group-add=$(stat -c "%g" /dev/dri/render*) $IMAGE ./samples/cpp/samples_bin/hello_query_device
```

`--device /dev/dri` - passes the GPU device to the container
`--group-add` - adds the group owning the GPU device to the container user, granting permission to use the device
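
For illustration, on a host where the render nodes belong to group ID 134 (an example value; check yours with the `stat` command above), the command expands to:
```
docker run -it --device /dev/dri --group-add=134 $IMAGE ./samples/cpp/samples_bin/hello_query_device
```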

## Windows Subsystem for Linux

On WSL2, use the command to start the container:

```
export IMAGE=openvino/ubuntu20_dev:2023.0.0
docker run -it --device=/dev/dxg -v /usr/lib/wsl:/usr/lib/wsl $IMAGE ./samples/cpp/samples_bin/hello_query_device
```
`--device /dev/dxg` - passes the virtual GPU device to the container
`-v /usr/lib/wsl:/usr/lib/wsl` - mounts the required WSL libraries into the container


## Usage examples

Run the benchmark app on the GPU accelerator with the `-use_device_mem` parameter, showcasing inference without copies between CPU and GPU memory:
```
docker run --device /dev/dri --group-add=$(stat -c "%g" /dev/dri/render*) $IMAGE bash -c " \
curl -O https://storage.openvinotoolkit.org/repositories/open_model_zoo/2023.0/models_bin/1/resnet50-binary-0001/FP16-INT1/resnet50-binary-0001.xml && \
curl -O https://storage.openvinotoolkit.org/repositories/open_model_zoo/2023.0/models_bin/1/resnet50-binary-0001/FP16-INT1/resnet50-binary-0001.bin && \
./samples/cpp/samples_bin/benchmark_app -m resnet50-binary-0001.xml -d GPU -use_device_mem -inference_only=false"
```
In the benchmark app, the `-use_device_mem` parameter employs `ov::RemoteTensor` as the input buffer. It demonstrates the gain achieved when no data is copied between the host and the GPU device.
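
For comparison, the same benchmark can be run without the flag, so regular host-side input tensors are used (a sketch with the same image and model as above):
```
docker run --device /dev/dri --group-add=$(stat -c "%g" /dev/dri/render*) $IMAGE bash -c " \
curl -O https://storage.openvinotoolkit.org/repositories/open_model_zoo/2023.0/models_bin/1/resnet50-binary-0001/FP16-INT1/resnet50-binary-0001.xml && \
curl -O https://storage.openvinotoolkit.org/repositories/open_model_zoo/2023.0/models_bin/1/resnet50-binary-0001/FP16-INT1/resnet50-binary-0001.bin && \
./samples/cpp/samples_bin/benchmark_app -m resnet50-binary-0001.xml -d GPU -inference_only=false"
```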

Run the benchmark app using both the GPU and the CPU. The load will be distributed across both device types:
```
docker run --device /dev/dri --group-add=$(stat -c "%g" /dev/dri/render*) $IMAGE bash -c " \
curl -O https://storage.openvinotoolkit.org/repositories/open_model_zoo/2023.0/models_bin/1/resnet50-binary-0001/FP16-INT1/resnet50-binary-0001.xml && \
curl -O https://storage.openvinotoolkit.org/repositories/open_model_zoo/2023.0/models_bin/1/resnet50-binary-0001/FP16-INT1/resnet50-binary-0001.bin && \
./samples/cpp/samples_bin/benchmark_app -m resnet50-binary-0001.xml -d MULTI:GPU,CPU"
```
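
The [virtual devices](https://docs.openvino.ai/nightly/openvino_docs_Runtime_Inference_Modes_Overview.html) linked above also include `AUTO`, which selects the most suitable available device at load time; a sketch using the same model:
```
docker run --device /dev/dri --group-add=$(stat -c "%g" /dev/dri/render*) $IMAGE bash -c " \
curl -O https://storage.openvinotoolkit.org/repositories/open_model_zoo/2023.0/models_bin/1/resnet50-binary-0001/FP16-INT1/resnet50-binary-0001.xml && \
curl -O https://storage.openvinotoolkit.org/repositories/open_model_zoo/2023.0/models_bin/1/resnet50-binary-0001/FP16-INT1/resnet50-binary-0001.bin && \
./samples/cpp/samples_bin/benchmark_app -m resnet50-binary-0001.xml -d AUTO"
```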


**Check also:**

[Prebuilt images](#prebuilt-images)

[Working with OpenVINO Containers](containers.md)

[Generating dockerfiles and building the images in Docker_CI tools](openvino_docker.md)

[OpenVINO GPU Plugin](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_GPU.html)

116 changes: 116 additions & 0 deletions docs/configure_gpu_ubuntu20.md
# Configuration Guide for the Intel® Graphics Compute Runtime for OpenCL™ on Ubuntu* 20.04

Intel® Graphics Compute Runtime for OpenCL™ driver components are required to use the GPU plugin and write custom layers for Intel® Integrated Graphics.
The driver is installed in the OpenVINO™ Docker image, but if your host runs Ubuntu 20.04 you need to enable it in the container for a non-root user.
To access GPU capabilities, you need the correct permissions on both the host and the Docker container.
Run the following command to list the group that owns the render nodes on your host:

```bash
$ stat -c "group_name=%G group_id=%g" /dev/dri/render*
group_name=render group_id=134
```

OpenVINO™ Docker images do not contain a render group for the openvino non-root user, because the render group does not have a fixed group ID, unlike the video group.
Choose one of the options below to set up access to a GPU device from a container.

## 1. Configure a Host Non-Root User to Use a GPU Device from an OpenVINO Container on Ubuntu 20 Host [RECOMMENDED]

To run an OpenVINO container with the default non-root user (openvino) that has access to a GPU device, the host needs a non-root user with the same ID as the `openvino` user inside the container.
By default, the `openvino` user has user ID 1000.
Create a non-root user, for example host_openvino, on the host with the same user ID and membership in the video, render, and docker groups:

```bash
$ sudo useradd -u 1000 -G users,video,render,docker host_openvino
```

Now you can use the OpenVINO container with GPU access under the non-root user.

```bash
$ docker run -it --rm --device /dev/dri <image_name>
```
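
To verify GPU access from inside the container, a quick check of the user's groups and the device nodes (with `<image_name>` as a placeholder for your image):

```bash
$ docker run -it --rm --device /dev/dri <image_name> bash -c "id && ls -l /dev/dri"
```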

## 2. Configure a Container to Use a GPU Device on Ubuntu 20 Host Under a Non-Root User

To run an OpenVINO container as non-root with access to a GPU device, specify the render group ID from your host:

```bash
$ docker run -it --rm --device /dev/dri --group-add=<render_group_id_on_host> <image_name>
```

For example, you can retrieve the render group ID from your host inline:

```bash
$ docker run -it --rm --device /dev/dri --group-add=$(stat -c "%g" /dev/dri/render*) <image_name>
```

Now you can use the container with GPU access under the non-root user.

## 3. Configure an Image to Use a GPU Device on Ubuntu 20 Host and Save It

To run an OpenVINO container as root with access to a GPU device, use the command below:

```bash
$ docker run -it --rm --user root --device /dev/dri --name my_container <image_name>
```

Check groups for the GPU device in the container:

```bash
$ ls -l /dev/dri/
```

The output should look like the following:

```bash
crw-rw---- 1 root video 226, 0 Feb 20 14:28 card0
crw-rw---- 1 root 134 226, 128 Feb 20 14:28 renderD128
```

Create a render group in the container with the same group ID as on your host:

```bash
$ addgroup --gid 134 render
```

Check groups for the GPU device in the container:

```bash
$ ls -l /dev/dri/
```

The output should look like the following:

```bash
crw-rw---- 1 root video 226, 0 Feb 20 14:28 card0
crw-rw---- 1 root render 226, 128 Feb 20 14:28 renderD128
```

Add the non-root user to the render group:

```bash
$ usermod -a -G render openvino
$ id openvino
```

The output should show that the user now belongs to the render group:

```bash
uid=1000(openvino) gid=1000(openvino) groups=1000(openvino),44(video),100(users),134(render)
```

Then log in again as the non-root user:

```bash
$ su openvino
```

Now you can use the container with GPU access under the non-root user, or you can save the container as an image and push it to your registry.
Open another terminal and run the commands below:

```bash
$ docker commit my_container my_image
$ docker run -it --rm --device /dev/dri --user openvino my_image
```

---
\* Other names and brands may be claimed as the property of others.
74 changes: 74 additions & 0 deletions docs/containers.md
# Working with OpenVINO™ Toolkit Images

## Runtime images

The runtime images include the OpenVINO toolkit with all dependencies required to run inference, and expose the OpenVINO API in both Python and C++.
No development tools are installed.
Here are examples of how the runtime image can be used:

```
export IMAGE=openvino/ubuntu20_runtime:2023.0.0
```

### Building and using the OpenVINO samples

```
docker run -it -u root $IMAGE bash -c "/opt/intel/openvino/install_dependencies/install_openvino_dependencies.sh -y -c dev && ./samples/cpp/build_samples.sh && \
/root/openvino_cpp_samples_build/intel64/Release/hello_query_device"
```

### Using Python samples
```
docker run -it $IMAGE python3 samples/python/hello_query_device/hello_query_device.py
```

## Development images

Development images include the OpenVINO runtime components as well as the development tools, providing a complete environment for experimenting with OpenVINO.
Here are examples of how the development container can be used:

```
export IMAGE=openvino/ubuntu20_dev:2023.0.0
```

### Listing OpenVINO Model Zoo Models
```
docker run $IMAGE omz_downloader --print_all
```

### Download a model
```
mkdir model
docker run -u $(id -u) --rm -v $(pwd)/model:/tmp/model $IMAGE omz_downloader --name mozilla-deepspeech-0.6.1 -o /tmp/model
```

### Convert the model to IR format
```
docker run -u $(id -u) --rm -v $(pwd)/model:/tmp/model $IMAGE omz_converter --name mozilla-deepspeech-0.6.1 -d /tmp/model -o /tmp/model/converted/
```

### Run benchmark app to test the model performance
```
docker run -u $(id -u) --rm -v $(pwd)/model:/tmp/model $IMAGE benchmark_app -m /tmp/model/converted/public/mozilla-deepspeech-0.6.1/FP32/mozilla-deepspeech-0.6.1.xml
```

### Run a demo from the OpenVINO Model Zoo
```
docker run $IMAGE bash -c "git clone --depth=1 --recurse-submodules --shallow-submodules https://github.com/openvinotoolkit/open_model_zoo.git && \
cd open_model_zoo/demos/classification_demo/python && \
curl -O https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/3/resnet50-binary-0001/FP32-INT1/resnet50-binary-0001.xml && \
curl -O https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/3/resnet50-binary-0001/FP32-INT1/resnet50-binary-0001.bin && \
curl -O https://raw.githubusercontent.com/openvinotoolkit/model_server/main/demos/common/static/images/zebra.jpeg && \
python3 classification_demo.py -m resnet50-binary-0001.xml -i zebra.jpeg --labels ../../../data/dataset_classes/imagenet_2012.txt --no_show -nstreams 1 -r"
```

**Check also:**

[Prebuilt images](#prebuilt-images)

[Deployment with GPU accelerator](accelerators.md)

[Generating dockerfiles and building the images in Docker_CI tools](openvino_docker.md)

[OpenVINO GPU Plugin](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_GPU.html)
68 changes: 68 additions & 0 deletions docs/get-started.md
# Getting Started with OpenVINO™ Toolkit Images

You can easily get started by using the precompiled and published Docker images.
To start using them, you need to meet the following prerequisites (a quick way to verify your setup is shown after this list):
- Linux operating system or Windows Subsystem for Linux (WSL2)
- Installed docker engine or compatible container engine
- Permissions to start containers (sudo or docker group membership)
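
A minimal way to verify the last two prerequisites, using Docker's standard `hello-world` test image:

```bash
docker run --rm hello-world
```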

## Pull a docker image

```
docker pull openvino/ubuntu20_dev:latest
```

## Start the container with an interactive session

```bash
export IMAGE=openvino/ubuntu20_dev:latest
docker run -it --rm $IMAGE /bin/bash
```

Inside the interactive session, you can run all OpenVINO samples and tools.
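
For example, a few commands worth trying inside the session (assuming the dev image exported above):

```bash
python3 samples/python/hello_query_device/hello_query_device.py  # list available devices
benchmark_app -h              # benchmark tool options
omz_downloader --print_all    # models available from the Open Model Zoo
```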

## Run a Python sample
If you want to try the samples, run the image with a command like the one below:

```bash
docker run -it --rm $IMAGE /bin/bash -c "python3 samples/python/hello_query_device/hello_query_device.py"
```

## Download a model via omz_downloader
```
docker run -it -u $(id -u):$(id -g) -v $(pwd)/:/model/ --rm $IMAGE \
/bin/bash -c "omz_downloader --name googlenet-v1 --precisions FP32 -o /model"
```
## Convert the model to IR format
```
docker run -it -u $(id -u):$(id -g) -v $(pwd)/:/model/ --rm $IMAGE \
/bin/bash -c "omz_converter --name googlenet-v1 --precision FP32 -d /model -o /model"
```
As a result, the converted model is placed in the `public/googlenet-v1/FP32` folder in the current directory:
```
tree public/googlenet-v1/
public/googlenet-v1/
├── FP32
│   ├── googlenet-v1.bin
│   └── googlenet-v1.xml
├── googlenet-v1.caffemodel
├── googlenet-v1.prototxt
└── googlenet-v1.prototxt.orig
```

## Run a benchmark app

```
docker run -it -u $(id -u):$(id -g) -v $(pwd)/:/model/ --rm $IMAGE benchmark_app -m /model/public/googlenet-v1/FP32/googlenet-v1.xml
```

**Check also:**

[Prebuilt images](#prebuilt-images)

[Working with OpenVINO Containers](containers.md)

[Deployment with GPU accelerator](accelerators.md)

[Generating dockerfiles and building the images in Docker_CI tools](openvino_docker.md)

Binary file added docs/img/dockerfile_name.png
