Building TensorFlow 1.12.2 on Jetson Nano

Quick link: jkjung-avt/jetson_nano

I wrote a script for building and installing tensorflow-1.12.2 on Jetson Nano. It should also work on Jetson TX2 and other Jetson platforms, with some adjustments if you are not on JetPack-4.2.



Setting up a swap file on the Jetson Nano is essential; otherwise, the tensorflow build process would likely fail with out-of-memory errors. You could refer to my Setting up Jetson Nano: The Basics post for how to do that.
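For reference, swap setup on Ubuntu usually boils down to a few commands like the following. This is a sketch, not the exact procedure from my other post: the 8 GB size and the /mnt/8GB.swap path are my own choices here. It is wrapped in a function so nothing runs until you call it.

```shell
# Sketch of creating an 8 GB swap file (size and path are illustrative).
# Call setup_swap to actually apply it; all commands require sudo.
setup_swap() {
  sudo fallocate -l 8G /mnt/8GB.swap    # allocate the backing file
  sudo chmod 600 /mnt/8GB.swap          # swap files must not be world-readable
  sudo mkswap /mnt/8GB.swap             # format it as swap space
  sudo swapon /mnt/8GB.swap             # enable it for the current boot
  # To make it persistent across reboots, add this line to /etc/fstab:
  #   /mnt/8GB.swap  none  swap  sw  0  0
}
```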

I’m assuming that “pip3” is already properly installed on the Jetson Nano. If you’ve followed my Installing OpenCV 3.4.6 on Jetson Nano and built/installed opencv-3.4.6 on the Jetson Nano, then you’re good. Otherwise, you could install pip3 on the system by doing either wget; sudo python3 or sudo apt-get install -y python3-pip.

In case you are building/installing tensorflow on a Jetson TX2 or another Jetson platform, it is still a good idea to use some swap. In addition, you could raise the --local_resources setting in the installation script, since more RAM and more CPU cores are available on those platforms.


  1. Uninstall tensorboard and tensorflow if a previous version has been installed.

    $ sudo pip3 uninstall -y tensorboard tensorflow
  2. Clone my ‘jetson_nano’ repository from GitHub, which contains all the scripts.

    $ cd ${HOME}/project
    $ git clone
    $ cd jetson_nano
  3. (Optional, but highly recommended) Update libprotobuf (3.6.1). This solves the “extremely long model loading time problem” of TF-TRT.

    $ ./

    This script takes 1 hour or so to finish on the Jetson Nano.

  4. Install bazel (0.15.2), the build tool for tensorflow.

    $ ./
  5. Build and install tensorflow-1.12.2 by executing the script. More specifically, the script would install the requirements, download the tensorflow-1.12.2 source, configure/build the code, build the pip3 wheel, and install it on the system.

    $ ./

    Note this script would take a very long time (>12 hours) to run. Since building tensorflow requires a lot of resources (memory & disk I/O), it is suggested that all other applications and tasks (such as the web browser) be terminated while tensorflow is being built.

    During the bazel build process, the Jetson Nano system might appear locked up from time to time. Even worse, the Ubuntu “System Program Problem Detected” message could pop up. I checked the errors with journalctl; they all appeared to be “XXX timeout” events. Regardless, the tensorflow-1.12.2 pip3 package would finally get built, and based on my testing, it worked OK.

  6. The ‘pip3 install tensorflow’ process would likely update the python3 ‘protobuf’ module to the latest version (which we do not want). Assuming you’ve followed step 3 above and compiled/installed protobuf-3.6.1, you need to uninstall the newer version and re-install version 3.6.1 (cpp_implementation) of the python ‘protobuf’ module.

    $ sudo pip3 uninstall -y protobuf
    $ cd ${HOME}/src/protobuf-3.6.1/python
    $ sudo python3 install --cpp_implementation
  7. Test tensorflow-1.12.2 with ‘’. (Reference)

    $ cd ${HOME}/project
    $ git clone
    $ cd benchmarks
    $ git checkout cnn_tf_v1.12_compatible
    $ python3 scripts/tf_cnn_benchmarks/ --data_format=NHWC --device=gpu

    Just for reference, I got ‘total images/sec: 1203.95’ when I did the test on my Jetson Nano DevKit.
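Once the steps above complete, a quick sanity check can confirm that both the tensorflow and protobuf installs took effect. This is a sketch under the assumption that steps 3–6 were followed (the ‘cpp’ check only applies if protobuf was installed with --cpp_implementation); it is wrapped in a function so sourcing it does nothing by itself.

```shell
# Sanity-check sketch (assumes tensorflow-1.12.2 and the cpp_implementation
# protobuf were installed per the steps above). Call verify_install to run it.
verify_install() {
  # should print 1.12.2
  python3 -c "import tensorflow as tf; print(tf.__version__)"
  # should print 'cpp' (not 'python') if step 6 was done correctly
  python3 -c "from google.protobuf.internal import api_implementation; print(api_implementation.Type())"
}
```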

Additional notes

  • Thanks to peterlee0127 for publishing tensorflow1.12.patch.

  • I chose protobuf version “3.6.1” since 3.6.x is the matching version in tensorflow-1.12 source code.

  • I chose bazel version “0.15.2” for tensorflow-1.12.2 based on tensorflow’s official documentation: Tested build configurations.

  • In case you encounter problems (e.g. out-of-memory or bazel crashing) when running the script, you could try setting --local_resources (reference) to lower values. For example, replace 2048.0 with 1536.0 (MB of RAM used by bazel for building the code).

  • In the script, I enabled the GPU, CUDNN and TENSORRT settings, while disabling most of the other features in tensorflow. This reduces the size of the compiled tensorflow binary and only enables the functionalities I actually use. You could refer to the environment variable settings (starting from the line PYTHON_BIN_PATH) in the script for details. In case one of those disabled features matters to you, you could turn it on by simply setting the corresponding environment variable to 1 (instead of 0).

  • In the script, I set TF_CUDA_COMPUTE_CAPABILITIES to either 5.3, 6.2, or 7.2, depending on whether the script is invoked on a Jetson Nano/TX1, a TX2, or an AGX Xavier. In case you’d like to build one tensorflow wheel that works on all Jetson platforms, you could hard-code the TF_CUDA_COMPUTE_CAPABILITIES setting to 5.3,6.2,7.2.

  • If you are not using JetPack-4.2, you’d likely only need to adjust the following environment variables in the script: TF_CUDA_VERSION and TF_TENSORRT_VERSION.
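As an illustration of the --local_resources note above: in bazel 0.15, the flag takes RAM (in MB), CPU cores, and I/O capacity as a comma-separated triple. A reduced invocation might look like the following (the flag values are examples, not the ones in my script):

```shell
# --local_resources=<RAM in MB>,<CPU cores>,<I/O capacity>
# Reducing RAM from 2048.0 to 1536.0 relieves memory pressure on the Nano.
bazel build --local_resources=1536.0,1.0,1.0 \
    //tensorflow/tools/pip_package:build_pip_package
```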
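The per-device TF_CUDA_COMPUTE_CAPABILITIES selection described above could be sketched as a small helper like this. The function name and the model-string matching are my own illustration, not code from the actual script:

```shell
# Hypothetical helper illustrating per-device selection of
# TF_CUDA_COMPUTE_CAPABILITIES, keyed on the Jetson model name.
pick_compute_capability() {
  case "$1" in
    *Nano*|*TX1*)  echo "5.3" ;;
    *TX2*)         echo "6.2" ;;
    *Xavier*)      echo "7.2" ;;
    *)             echo "5.3,6.2,7.2" ;;  # one wheel that runs on all Jetsons
  esac
}
```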
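To make the last two notes concrete, here is a hedged sketch of the kind of environment-variable block the script sets up for a non-interactive tensorflow configure. The TF_NEED_*/TF_*_VERSION names are variables that tensorflow-1.12’s configure script reads; the versions shown are the JetPack-4.2 values (CUDA 10.0, TensorRT 5) and would change on other JetPack releases. This excerpt is illustrative, not copied from the actual script:

```shell
# Illustrative excerpt: tensorflow-1.12's configure script reads these
# environment variables when run non-interactively.
export PYTHON_BIN_PATH=$(which python3)
export TF_NEED_CUDA=1           # keep GPU support enabled
export TF_NEED_TENSORRT=1       # keep TensorRT integration enabled
export TF_NEED_AWS=0            # example of a disabled feature; set to 1 to enable
export TF_CUDA_VERSION=10.0     # JetPack-4.2 ships CUDA 10.0; adjust otherwise
export TF_TENSORRT_VERSION=5    # JetPack-4.2 ships TensorRT 5; adjust otherwise
```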

