TensorFlow extension

NOTE: This page describes how to build the TensorFlow and LiteRT C++ APIs for Linux, Windows, and Android. LiteRT (short for Lite Runtime) is the new name for TensorFlow Lite (TFLite).

TensorFlow 2.1.0

One of the main challenges of working with TensorFlow is building it correctly. To reduce this burden, Docker images with the CUDA and TensorFlow libraries preinstalled have been created for GNU/Linux and Android builds, and they can be used to build extensions for those platforms. Docker cannot, however, cover Windows. The following guide describes how to build LiteRT Native and the TensorFlow C++ API for each supported platform.

Requirements:

  • Python 3

  • Bazel 0.29.1

  • TensorFlow 2.1.0 repository:

    git clone https://github.com/tensorflow/tensorflow.git
    cd tensorflow
    git checkout v2.1.0
    

The TensorFlow headers required to build extensions have already been assembled: extract the libs.tar.gz archive found in jami-project/plugins/contrib to access them. If a different version of TensorFlow is needed, or TensorFlow must be assembled from source, instructions for assembling LiteRT Native and the TensorFlow C++ API are available in the README_ASSEMBLE file under gitlab:jami-plugins.
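
For example, assuming the archive sits at the path above, it can be extracted in place with:

    cd jami-project/plugins/contrib
    tar -xzf libs.tar.gz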

GNU/Linux

LiteRT does not support desktop GPUs. Consider using the TensorFlow C++ API if desktop GPU support is required.

If the TensorFlow C++ API with GPU support is required, ensure that a CUDA-capable GPU is available, that all installation steps for the Nvidia drivers, the CUDA Toolkit, and cuDNN have been followed, and that their versions match the ones expected by the TensorFlow version being built.
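
As a quick sanity check, the installed versions can be inspected and compared against the ones TensorFlow 2.1.0 was tested with (CUDA 10.1 and cuDNN 7.6); note that the cudnn.h path below is the usual install location and may differ on some systems:

    nvidia-smi          # driver version and highest supported CUDA version
    nvcc --version      # installed CUDA Toolkit version
    grep -A 2 CUDNN_MAJOR /usr/local/cuda/include/cudnn.h   # cuDNN version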

Set up the build options with ./configure.
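
The script runs interactively; for the builds below, the relevant answers look roughly like this (prompt wording varies slightly across TensorFlow versions), with CUDA enabled only when the TensorFlow C++ API is built with GPU support:

./configure
        >> Do you wish to build TensorFlow with XLA JIT support? [Y/n]: n
        >> Do you wish to build TensorFlow with CUDA support? [y/N]: y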

  • LiteRT Native

    bazel build //tensorflow/lite:libtensorflowlite.so
    
  • TensorFlow C++ API

    bazel build --config=v1 --define framework_shared_object=false --define=no_tensorflow_py_deps=true //tensorflow:libtensorflow_cc.so
    
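
Both targets write the resulting shared library under bazel-bin, mirroring the target path: bazel-bin/tensorflow/lite/libtensorflowlite.so and bazel-bin/tensorflow/libtensorflow_cc.so respectively. As a rough sketch, a hypothetical extension source (plugin.cpp and the include path are only placeholders) could then be linked against LiteRT with:

    g++ -std=c++11 -shared -fPIC plugin.cpp \
        -I/path/to/tensorflow \
        -Lbazel-bin/tensorflow/lite -ltensorflowlite \
        -o libplugin.so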

Windows

LiteRT does not support desktop GPUs. Consider using the TensorFlow C++ API if desktop GPU support is required.

If the TensorFlow C++ API with GPU support is required, ensure that a CUDA-capable GPU is available, that all installation steps for the Nvidia drivers, the CUDA Toolkit, and cuDNN have been followed, and that their versions match the ones expected by the TensorFlow version being built.

Set up the build options with python3 configure.py.
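
The questions mirror the GNU/Linux ./configure run; since the C++ API build below passes --config=cuda, CUDA support should be enabled here:

python3 configure.py
        >> Do you wish to build TensorFlow with CUDA support? [y/N]: y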

  • LiteRT Native

    bazel build //tensorflow/lite:tensorflowlite.dll
    
  • TensorFlow C++ API

    bazel build --config=v1 --define framework_shared_object=false --config=cuda --define=no_tensorflow_py_deps=true //tensorflow:tensorflow_cc.dll
    

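As on GNU/Linux, bazel writes the binaries under bazel-bin, mirroring each target's package path:

    bazel-bin/tensorflow/lite/tensorflowlite.dll
    bazel-bin/tensorflow/tensorflow_cc.dll
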
There may be some missing references when compiling an extension against the TensorFlow C++ API. If this occurs, the missing symbols must be explicitly exported and TensorFlow rebuilt. Fortunately, TensorFlow has an easy workaround: add the required symbols to the definition-file filter template (tensorflow/tools/def_file_filter/def_file_filter.py.tpl) that controls which symbols the DLL exports, then rebuild.

Android - LiteRT Native

For mobile applications, LiteRT is the only option to consider for successfully building TensorFlow. Additional requirements are:

  • Android NDK r18

Set up the build options with:

./configure
        >> Do you wish to build TensorFlow with XLA JIT support? [Y/n]: n
        >> Do you wish to download a fresh release of clang? (Experimental) [y/N]: y
        >> Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: y
        >> Please specify the home path of the Android NDK to use. [Default is /home/<username>/Android/Sdk/ndk-bundle]: <path to NDK r18>

And build as required:

  • armeabi-v7a

    bazel build //tensorflow/lite:libtensorflowlite.so --crosstool_top=//external:android/crosstool --cpu=armeabi-v7a --host_crosstool_top=@bazel_tools//tools/cpp:toolchain --cxxopt="-std=c++11"
    
  • arm64-v8a

    bazel build //tensorflow/lite:libtensorflowlite.so --crosstool_top=//external:android/crosstool --cpu=arm64-v8a --host_crosstool_top=@bazel_tools//tools/cpp:toolchain --cxxopt="-std=c++11"
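
Note that bazel-bin points at the most recent build configuration, so the armeabi-v7a library is replaced once the arm64-v8a build runs. Copy each library out between builds, for example into ABI-specific folders (the destination layout here is only illustrative):

    mkdir -p libs/armeabi-v7a libs/arm64-v8a
    cp bazel-bin/tensorflow/lite/libtensorflowlite.so libs/arm64-v8a/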