# How to build?

```{note}
This page describes how to build a Jami plugin and its dependencies.
```

When you create a Jami plugin, you may need some libraries, such as OpenCV to modify video frames. If everyone used different library versions, plugins would not be maintainable. That is why we include some libraries with fixed versions in the daemon; you can easily use them in your plugin.

Before working on the plugin, you must first build its dependencies. You can find them in `daemon/contrib/src/`.

## Dependencies

Here we give you the steps to build [OpenCV](https://opencv.org/) and [ONNX](https://www.onnxruntime.ai/), but do not feel limited to these libraries. Other libraries should work as long as they and the plugin are correctly built!

### Requirements

- git
- docker

### Common dependencies

We are going to see how to build the most commonly used dependencies in plugins. ONNX can take a long time to build, so if you don't need it, feel free to disable it. Examples of dependencies built include: FFmpeg, fmt, msgpack, OpenCV, OpenDHT, ONNX, opus, and FreeType.

### Update daemon

Before anything else, update the submodule `daemon` (from [jami-daemon](https://git.jami.net/savoirfairelinux/jami-daemon)) in `jami-plugins`:

```bash
git submodule update --init
```

### Windows

```bash
set DAEMON=
cd ${DAEMON}/compat/msvc
python3 winmake.py -fb opencv
```

### Linux

With Docker (recommended):

```bash
docker build -f docker/Dockerfile_ubuntu_20.04 -t jami-plugins-docker .
docker run -t --rm \
    -v $(pwd):/root/jami/:rw \
    -w /root/ \
    -e BATCH_MODE=1 \
    jami-plugins-docker /bin/bash -c "
    cd ./jami/daemon/contrib
    mkdir -p native
    cd native
    ../bootstrap --disable-x264 --disable-ffmpeg --disable-dhtnet \
        --disable-webrtc-audio-processing --disable-argon2 \
        --disable-asio --disable-fmt --disable-gcrypt --disable-gmp \
        --disable-gnutls --disable-gpg-error --disable-gsm \
        --disable-http_parser --disable-jack --disable-jsoncpp \
        --disable-libarchive --disable-libressl --disable-msgpack \
        --disable-natpmp --disable-nettle --enable-opencv --disable-opendht \
        --disable-pjproject --disable-portaudio --disable-restinio \
        --disable-secp256k1 --disable-speex --disable-speexdsp --disable-upnp \
        --disable-uuid --disable-yaml-cpp --disable-onnx --disable-opus
    make list
    make fetch opencv opencv_contrib
    make -j$(nproc)
"
```

Using your own system (not recommended):

```bash
cd ./daemon/contrib/
mkdir native
cd native
../bootstrap --enable-opencv --disable-ffmpeg --disable-argon2 --disable-asio \
    --disable-fmt --disable-gcrypt --disable-gmp --disable-gnutls \
    --disable-gpg-error --disable-gsm --disable-http_parser --disable-iconv \
    --disable-jack --disable-jsoncpp --disable-libarchive \
    --disable-msgpack --disable-natpmp --disable-nettle --disable-libressl \
    --disable-opendht --disable-pjproject --disable-portaudio \
    --disable-restinio --disable-secp256k1 --disable-speexdsp \
    --disable-upnp --disable-uuid --disable-yaml-cpp --disable-zlib
make list
make fetch opencv opencv_contrib
make -j$(nproc)
```

### Android

Using Docker (recommended):

Change the Android ABI between `arm64-v8a`, `armeabi-v7a`, and `x86_64`. `arm64-v8a` is by far the most common ABI.

```bash
docker build -f docker/Dockerfile_android -t jami-plugins-android .
docker run -t --rm \
    -v $(pwd):/home/gradle/plugins:rw \
    -w /home/ \
    -e BATCH_MODE=1 \
    jami-plugins-android /bin/bash -c "
    cd ./gradle/plugins/contrib
    ANDROID_ABI='arm64-v8a' sh build-dependencies.sh
"
```

If you want to build other dependencies, update `build-dependencies.sh` accordingly.

```{note}
If an error occurs while running ONNX with root permissions, add `--allow_running_as_root` at the end of the build line of your configuration in `daemon/contrib/src/onnx/rules.mak`.
```

## ONNX Runtime 1.6.0

A difficulty for a lot of people working with deep learning models is how to deploy them. With that in mind, we give the user the possibility of using the ONNX Runtime. There are several development libraries to train and test models, but they are usually too heavy to deploy. TensorFlow with CUDA support, for instance, can easily surpass 400 MB. The GreenScreen plugin uses the ONNX Runtime because it is lighter (a library size of around 140 MB with CUDA support) and supports model conversion from several development libraries (TensorFlow, PyTorch, Caffe, etc.).

To build ONNX Runtime based plugins for Linux and Android, we strongly recommend using the Docker files available under `/docker/`. We don't offer a Windows Docker file, but here we carefully guide you through the proper build of this library for our three supported platforms.

If you want to build ONNX Runtime with Nvidia GPU support, be sure to have a CUDA-capable GPU, to have followed all installation steps for the Nvidia drivers, CUDA Toolkit, and cuDNN, and to have matching versions of all three. The following links may be very helpful:

* https://developer.nvidia.com/cuda-gpus
* https://developer.nvidia.com/cuda-toolkit-archive
* https://developer.nvidia.com/cudnn

### Linux and Android

We added ONNX Runtime as a contrib in the [daemon](https://git.jami.net/savoirfairelinux/jami-daemon/tree/master/contrib). This way you can easily build ONNX Runtime for Linux and Android.
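Before attempting a CUDA-accelerated build, it can save time to confirm the pieces are installed and their versions match. The sketch below checks for the CUDA tools and prints the cuDNN version; the `cudnn_version.h` path is a common Ubuntu default (an assumption — adjust it for your distribution):

```shell
# Sanity-check the CUDA toolchain before building ONNX Runtime with GPU support.
check_tool() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "$1: found"
    else
        echo "$1: NOT found"
    fi
}

check_tool nvcc        # CUDA compiler, ships with the CUDA Toolkit
check_tool nvidia-smi  # driver utility; reports the CUDA version the driver supports

# cuDNN records its version in a header file (path is an Ubuntu default).
CUDNN_HDR=/usr/include/cudnn_version.h
if [ -f "$CUDNN_HDR" ]; then
    grep -E 'CUDNN_(MAJOR|MINOR|PATCHLEVEL)' "$CUDNN_HDR"
else
    echo "cudnn_version.h not found at $CUDNN_HDR"
fi
```

If any of these report "NOT found", revisit the Nvidia links above before proceeding.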
* Linux - Without acceleration:

```bash
export DAEMON=
cd ${DAEMON}/contrib/native
../bootstrap
make .onnx
```

Alternatively, enable ONNX with `--enable-onnx` in the common dependencies build.

* Linux - With CUDA acceleration (CUDA 10.2):

```bash
export CUDA_PATH=/usr/local/cuda/
export CUDA_HOME=${CUDA_PATH}
export CUDNN_PATH=/usr/lib/x86_64-linux-gnu/
export CUDNN_HOME=${CUDNN_PATH}
export CUDA_VERSION=10.2
export USE_NVIDIA=True
export DAEMON=
cd ${DAEMON}/contrib/native
../bootstrap
make .onnx
```

* Android - With NNAPI acceleration:

```bash
export DAEMON=
cd ${DAEMON}
export ANDROID_NDK=
export ANDROID_ABI=arm64-v8a
export ANDROID_API=29
export TOOLCHAIN=$ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64
export TARGET=aarch64-linux-android
export CC=$TOOLCHAIN/bin/$TARGET$ANDROID_API-clang
export CXX=$TOOLCHAIN/bin/$TARGET$ANDROID_API-clang++
export AR=$TOOLCHAIN/bin/$TARGET-ar
export LD=$TOOLCHAIN/bin/$TARGET-ld
export RANLIB=$TOOLCHAIN/bin/$TARGET-ranlib
export STRIP=$TOOLCHAIN/bin/$TARGET-strip
export PATH=$PATH:$TOOLCHAIN/bin
cd contrib
mkdir native-${TARGET}
cd native-${TARGET}
../bootstrap --build=x86_64-pc-linux-gnu --host=$TARGET$ANDROID_API
make .onnx
```

### Windows

* Pre-build:

```bash
mkdir pluginsEnv
export PLUGIN_ENV=
cd pluginsEnv
mkdir onnxruntime
mkdir onnxruntime/cpu
mkdir onnxruntime/nvidia-gpu
mkdir onnxruntime/include
git clone https://github.com/microsoft/onnxruntime.git onnx
cd onnx
git checkout v1.6.0 && git checkout -b v1.6.0
```

* Without acceleration:

```bash
.\build.bat --config Release --build_shared_lib --parallel --cmake_generator "Visual Studio 16 2019"
cp ./build/Windows/Release/Release/onnxruntime.dll ../onnxruntime/cpu/onnxruntime.dll
```

* With CUDA acceleration (CUDA 10.2):

```bash
.\build.bat --config Release --build_shared_lib --parallel --cmake_generator "Visual Studio 16 2019" --use_cuda --cudnn_home --cuda_home --cuda_version 10.2
cp ./build/Windows/Release/Release/onnxruntime.dll ../onnxruntime/nvidia-gpu/onnxruntime.dll
```

* Post-build:
```bash
cp -r ./include/onnxruntime/core/ ../onnxruntime/include/
```

For further build instructions, please visit the official ONNX Runtime instructions on [GitHub](https://github.com/microsoft/onnxruntime/blob/master/BUILD.md).

## Plugin

To exemplify a plugin build, we will use the GreenScreen plugin available [here](https://git.jami.net/savoirfairelinux/jami-plugins).

### Linux/Android

First you need to go to the plugins repository in your cloned jami-plugins:

- Linux - Nvidia GPU

```bash
PROCESSOR=NVIDIA PLATFORM_TYPE="LINUX" AUTHOR='SFL' DIVISION='Internal' python3 build-plugin.py --projects='GreenScreen'
```

- Linux - CPU

Using Docker (recommended):

```bash
docker run -it --rm \
    -v $(pwd):/root/jami/:rw \
    -w /root/ \
    -e BATCH_MODE=1 \
    jami-plugins-docker /bin/bash -c "
    cd jami
    PLATFORM_TYPE="LINUX" AUTHOR='SFL' DIVISION='Internal' python3 build-plugin.py --projects='GreenScreen'"
```

Without using Docker:

```bash
PLATFORM_TYPE="LINUX" AUTHOR='SFL' DIVISION='Internal' python3 build-plugin.py --projects='GreenScreen'
```

- Android

Using Docker (recommended):

Change the Android ABI between `arm64-v8a`, `armeabi-v7a`, and `x86_64`. `arm64-v8a` is by far the most common ABI.
```bash
docker run -t --rm \
    -v $(pwd):/home/gradle/plugins:rw \
    -w /home/gradle \
    -e BATCH_MODE=1 \
    jami-plugins-android /bin/bash -c "
    export DAEMON=/home/gradle/plugins
    cd ./plugins
    PLATFORM_TYPE="ANDROID" ANDROID_ABI="arm64-v8a" python3 build-plugin.py --projects=GreenScreen --distribution=android"
```

Without using Docker:

```bash
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-amd64/jre
export ANDROID_HOME=/home/${USER}/Android/Sdk
export ANDROID_SDK=${ANDROID_HOME}
export ANDROID_NDK=${ANDROID_HOME}/ndk/21.1.6352462
export ANDROID_NDK_ROOT=${ANDROID_NDK}
export PATH=${PATH}:${ANDROID_HOME}/tools:${ANDROID_HOME}/platform-tools:${ANDROID_NDK}:${JAVA_HOME}/bin
PLATFORM_TYPE="ANDROID" ANDROID_ABI="" python3 build-plugin.py --projects=GreenScreen --distribution=android
```

The GreenScreen.jpl file will be available under ``.

### Windows

Windows builds of plugins are linked with the daemon repository and its build scripts, so to build our example plugin you have to:

```bash
cd daemon/compat/msvc
python3 winmake.py -fb GreenScreen
```

The GreenScreen.jpl file will be available under ``.
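Once the build finishes, you can sanity-check the resulting package. A `.jpl` file is a zip-format archive bundling the plugin manifest and libraries, so listing its contents confirms everything was packaged (a sketch; the path to `GreenScreen.jpl` and its exact layout depend on your build):

```shell
# List the contents of the generated plugin package without extracting it.
# Python's standard zipfile module can read it since .jpl is zip-format.
python3 -m zipfile -l GreenScreen.jpl
```

You should see the plugin manifest and the per-platform shared libraries in the listing; an error here usually means the package path is wrong or the build did not complete.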