docs(ml): hardware acceleration (#6821)
This commit is contained in: parent ada3eeb777 · commit efdbe790ee
5 changed files with 91 additions and 31 deletions
docker/hwaccel.ml.yml

@@ -16,7 +16,7 @@ services:
       - /lib/firmware/mali_csffw.bin:/lib/firmware/mali_csffw.bin:ro # Mali firmware for your chipset (not always required depending on the driver)
       - /usr/lib/libmali.so:/usr/lib/libmali.so:ro # Mali driver for your chipset (always required)

-  cpu:
+  cpu: {}

   cuda:
     deploy:
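The `{}` matters here: a service with no keys must still be a YAML mapping to remain a valid `extends` target, whereas a bare `cpu:` parses as null rather than as an empty mapping, which Compose may reject. A minimal illustration:

```yaml
services:
  cpu: {} # explicit empty mapping - a valid (empty) service definition
  # cpu:  # a bare key parses as null, not a mapping
```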
docker/hwaccel.transcoding.yml

@@ -9,7 +9,7 @@ version: "3.8"
 # See https://immich.app/docs/features/hardware-transcoding for more info on using hardware transcoding.

 services:
-  cpu:
+  cpu: {}

   nvenc:
     deploy:
docs/docs/features/hardware-transcoding.md

@@ -1,17 +1,15 @@
 # Hardware Transcoding [Experimental]

-This feature allows you to use a GPU or Intel Quick Sync to accelerate transcoding and reduce CPU load.
+This feature allows you to use a GPU to accelerate transcoding and reduce CPU load.
 Note that hardware transcoding is much less efficient for file sizes.
 As this is a new feature, it is still experimental and may not work on all systems.

 ## Supported APIs

-- NVENC
-  - NVIDIA GPUs
-- Quick Sync
-  - Intel CPUs
-- VAAPI
-  - GPUs
+- NVENC (NVIDIA)
+- Quick Sync (Intel)
+- RKMPP (Rockchip)
+- VAAPI (AMD / NVIDIA / Intel)

 ## Limitations
@@ -20,8 +18,7 @@ As this is a new feature, it is still experimental and may not work on all syste
 - WSL2 does not support Quick Sync.
-- Raspberry Pi is currently not supported.
 - Two-pass mode is only supported for NVENC. Other APIs will ignore this setting.
-- Only encoding is currently hardware accelerated, so the CPU is still used for software decoding.
+- Only encoding is currently hardware accelerated, so the CPU is still used for software decoding and tone-mapping.
   - This is mainly because the original video may not be hardware-decodable.
 - Hardware dependent
   - Codec support varies, but H.264 and HEVC are usually supported.
   - Notably, NVIDIA and AMD GPUs do not support VP9 encoding.
@@ -43,34 +40,45 @@ As this is a new feature, it is still experimental and may not work on all syste
 ## Setup

-#### Initial Setup
+#### Basic Setup

-1. If you do not already have it, download the latest [`hwaccel.yml`][hw-file] file and ensure it's in the same folder as the `docker-compose.yml`.
-2. Uncomment the lines that apply to your system and desired usage.
-3. In the `docker-compose.yml` under `immich-microservices`, uncomment the lines relating to the `hwaccel.yml` file.
-4. Redeploy the `immich-microservices` container with these updated settings.
-5. In the Admin page under `FFmpeg settings`, change the hardware acceleration setting to the appropriate option and save.
+1. If you do not already have it, download the latest [`hwaccel.transcoding.yml`][hw-file] file and ensure it's in the same folder as the `docker-compose.yml`.
+2. In the `docker-compose.yml` under `immich-microservices`, uncomment the `extends` section and change `cpu` to the appropriate backend.
+   - For VAAPI on WSL2, be sure to use `vaapi-wsl` rather than `vaapi`
+3. Redeploy the `immich-microservices` container with these updated settings.
+4. In the Admin page under `Video transcoding settings`, change the hardware acceleration setting to the appropriate option and save.
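Step 2 above refers to a commented-out `extends` block in the stock `docker-compose.yml`. Uncommented, it would look roughly like the sketch below; the exact comment text in your compose file may differ, and `qsv` stands in for whichever backend applies (the service names mentioned on this page include `cpu`, `nvenc`, `qsv`, `vaapi`, and `vaapi-wsl`):

```yaml
# Sketch only - assumes the stock docker-compose.yml layout.
services:
  immich-microservices:
    # ... existing image/volumes/env configuration ...
    extends:
      file: hwaccel.transcoding.yml
      service: qsv # pick the backend matching your hardware; `cpu` leaves acceleration off
```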
 #### All-In-One - Unraid Setup

 ##### NVENC - NVIDIA GPUs

-- If you are using other backends. You will still need to implement [`hwaccel.yml`][hw-file] file into the `immich-microservices` service directly, please see the "Initial Setup" section above on how to do that.
-1. Assuming you already have the Nvidia Driver Plugin installed on your Unraid Server. Please confirm that your Nvida GPU is showing up with its GPU ID in the Nvidia Driver Plugin. The ID will be `GPU-LONG_STRING_OF_CHARACTERS`. Copy the GPU ID.
-2. In the Imagegenius/Immich Docker Container app, add two new variables: Key=`NVIDIA_VISIBLE_DEVICES` Value=`GPU-LONG_STRING_OF_CHARACTERS` and Key=`NVIDIA_DRIVER_CAPABILITIES` Value=`all`
-3. While you are in the docker container app, change the Container from Basic Mode to Advanced Mode and add the following parameter to the Extra Parameters field: `--runtime=nvidia`
-4. Restart the Imagegenius/Immich Docker Container app.
-5. In the Admin page under FFmpeg settings, change the hardware acceleration setting to the appropriate option and save.
+- As of v1.92.0, steps 1 and 2 are no longer necessary. If your version of Immich is below that or missing the environment variables, please follow these steps. Otherwise, skip to step 3.
+- Please note that `NVIDIA_DRIVER_CAPABILITIES` no longer needs to be entered as a variable.
+1. In the container app, add this environment variable: Key=`NVIDIA_VISIBLE_DEVICES` Value=`all`
+2. While still in the container app, change the container from Basic Mode to Advanced Mode and add the following parameter to the Extra Parameters field: `--runtime=nvidia`
+3. Restart the container app.
+4. Continue to step 4 of "Basic Setup".
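As a rough Compose-level sketch of what those two Unraid UI settings correspond to:

```yaml
# Sketch: approximate docker-compose equivalent of the Unraid NVENC settings.
services:
  immich-microservices:
    runtime: nvidia # what `--runtime=nvidia` in Extra Parameters selects
    environment:
      NVIDIA_VISIBLE_DEVICES: all
```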
+##### Other APIs
+
+Unraid does not currently support multiple Compose files. As an alternative, you can "inline" the relevant contents of the [`hwaccel.transcoding.yml`][hw-file] file into the `immich-microservices` service directly.
+
+For example, the `qsv` section in this file is:
+
+```
+devices:
+  - /dev/dri:/dev/dri
+```
+
+You can add this to the `immich-microservices` service instead of extending from `hwaccel.transcoding.yml`.
+Once this is done, you can continue to step 3 of "Basic Setup".
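Inlined rather than extended, the service would look something like this sketch (assuming the stock `docker-compose.yml` layout):

```yaml
# Sketch: the `qsv` device mapping inlined into the service.
services:
  immich-microservices:
    # ... existing configuration ...
    devices:
      - /dev/dri:/dev/dri
```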
 ## Tips

 - You may want to choose a slower preset than for software transcoding to maintain quality and efficiency
-- While you can use VAAPI with Nvidia GPUs and Intel CPUs, prefer the more specific APIs since they're more optimized for their respective devices
+- While you can use VAAPI with NVIDIA and Intel devices, prefer the more specific APIs since they're more optimized for their respective devices

-[hw-file]: https://github.com/immich-app/immich/releases/latest/download/hwaccel.yml
+[hw-file]: https://github.com/immich-app/immich/releases/latest/download/hwaccel.transcoding.yml
 [nvcr]: https://github.com/NVIDIA/nvidia-container-runtime/
 [jellyfin-lp]: https://jellyfin.org/docs/general/administration/hardware-acceleration/intel/#configure-and-verify-lp-mode-on-linux
 [jellyfin-kernel-bug]: https://jellyfin.org/docs/general/administration/hardware-acceleration/intel/#known-issues-and-limitations
docs/docs/features/ml-hardware-acceleration.md (new file, 49 lines)
@@ -0,0 +1,49 @@
+# Hardware-Accelerated Machine Learning [Experimental]
+
+This feature allows you to use a GPU to accelerate machine learning tasks, such as Smart Search and Facial Recognition, while reducing CPU load.
+As this is a new feature, it is still experimental and may not work on all systems.
+
+## Supported APIs
+
+- ARM NN (Mali)
+- CUDA (NVIDIA)
+- OpenVINO (Intel)
+
+## Limitations
+
+- The instructions and configurations here are specific to Docker Compose. Other container engines may require different configuration.
+- Only Linux and Windows (through WSL2) servers are supported.
+- ARM NN is only supported on devices with Mali GPUs. Other Arm devices are not supported.
+- The OpenVINO backend has only been tested on an iGPU. Arc GPUs may not work without other changes.
+
+## Prerequisites
+
+#### ARM NN
+
+- Make sure you have the appropriate Linux kernel driver installed
+  - This is usually pre-installed on the device vendor's Linux images
+- `/dev/mali0` must be available on the host server
+  - You may confirm this by running `ls /dev` to check that it exists
+- You must have the closed-source `libmali.so` firmware (possibly with an additional firmware file)
+  - Where and how you can get this file depends on the device and vendor, but typically the device vendor also supplies these
+  - The `hwaccel.ml.yml` file assumes the path to it is `/usr/lib/libmali.so`, so update accordingly if it is elsewhere
+  - The `hwaccel.ml.yml` file assumes an additional file `/lib/firmware/mali_csffw.bin`, so update accordingly if your device's driver does not require this file
+
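If the driver or firmware lives somewhere else on your host, only the host side of the bind mounts should change; the container-side paths stay as the file expects. A sketch, assuming the `armnn` service follows the volume layout shown in the `hwaccel.ml.yml` hunk above (the `devices` entry is an assumption):

```yaml
# Sketch: armnn service with a non-standard host path for libmali.so.
services:
  armnn:
    devices:
      - /dev/mali0:/dev/mali0 # assumed device mapping
    volumes:
      - /lib/firmware/mali_csffw.bin:/lib/firmware/mali_csffw.bin:ro
      - /usr/local/lib/libmali.so:/usr/lib/libmali.so:ro # host path differs; container path unchanged
```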
+#### CUDA
+
+- You must have the official NVIDIA driver installed on the server.
+- On Linux (except for WSL2), you also need to have [NVIDIA Container Runtime][nvcr] installed.
+
+## Setup
+
+1. If you do not already have it, download the latest [`hwaccel.ml.yml`][hw-file] file and ensure it's in the same folder as the `docker-compose.yml`.
+2. In the `docker-compose.yml` under `immich-machine-learning`, uncomment the `extends` section and change `cpu` to the appropriate backend.
+3. Redeploy the `immich-machine-learning` container with these updated settings.
+
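As with transcoding, the uncommented `extends` section would look roughly like this sketch; `cuda` stands in for whichever backend applies, and service names beyond those listed under Supported APIs are assumptions:

```yaml
# Sketch only - assumes the stock docker-compose.yml layout.
services:
  immich-machine-learning:
    # ... existing configuration ...
    extends:
      file: hwaccel.ml.yml
      service: cuda # or armnn / openvino to match your hardware; `cpu` leaves acceleration off
```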
+[hw-file]: https://github.com/immich-app/immich/releases/latest/download/hwaccel.ml.yml
+[nvcr]: https://github.com/NVIDIA/nvidia-container-runtime/
+
+## Tips
+
+- You may want to increase concurrency past the default for higher utilization. However, keep in mind that this will also increase VRAM consumption.
+- Larger models benefit more from hardware acceleration, if you have the VRAM for them.
docs/docs/install/docker-compose.md

@@ -28,8 +28,12 @@ wget https://github.com/immich-app/immich/releases/latest/download/docker-compos
 wget -O .env https://github.com/immich-app/immich/releases/latest/download/example.env
 ```

-```bash title="(Optional) Get hwaccel.yml file"
-wget https://github.com/immich-app/immich/releases/latest/download/hwaccel.yml
+```bash title="(Optional) Get hwaccel.transcoding.yml file"
+wget https://github.com/immich-app/immich/releases/latest/download/hwaccel.transcoding.yml
 ```
+
+```bash title="(Optional) Get hwaccel.ml.yml file"
+wget https://github.com/immich-app/immich/releases/latest/download/hwaccel.ml.yml
+```

 or by downloading from your browser and moving the files to the directory that you created.
@ -37,7 +41,7 @@ or by downloading from your browser and moving the files to the directory that y
|
|||
Note: If you downloaded the files from your browser, also ensure that you rename `example.env` to `.env`.
|
||||
|
||||
:::info
|
||||
Optionally, you can use the [`hwaccel.yml`][hw-file] file to enable hardware acceleration for transcoding. See the [Hardware Transcoding](/docs/features/hardware-transcoding.md) guide for info on how to set this up.
|
||||
Optionally, you can enable hardware acceleration for machine learning and transcoding. See the [Hardware Transcoding](/docs/features/hardware-transcoding.md) and [Hardware-Accelerated Machine Learning](/docs/features/ml-hardware-acceleration.md) guides for info on how to set these up.
|
||||
:::
|
||||
|
||||
### Step 2 - Populate the .env file with custom values
|
||||
|
@@ -98,5 +102,4 @@ Immich is currently under heavy development, which means you can expect breaking

 [compose-file]: https://github.com/immich-app/immich/releases/latest/download/docker-compose.yml
 [env-file]: https://github.com/immich-app/immich/releases/latest/download/example.env
-[hw-file]: https://github.com/immich-app/immich/releases/latest/download/hwaccel.yml
 [watchtower]: https://containrrr.dev/watchtower/