JetPack-4.6

This post summarizes how I set up my Jetson Nano with JetPack-4.6 and run my tensorrt_demos samples.

Here’s a quick link to the GitHub repository for the scripts I use to set up my Jetson software development environment: jkjung-avt/jetson_nano.

1. Basic set-up (microSD card)

I recommend using a microSD card of at least 128GB in size.

I downloaded the JetPack-4.6 image, “jetson-nano-jp46-sd-card-image.zip”, from https://developer.nvidia.com/jetson-nano-sd-card-image and “etched” the image onto my microSD card (reference: Getting Started with Jetson Nano Developer Kit). I then inserted the microSD card and booted my Jetson Nano DevKit. During the initial Ubuntu set-up, I selected “MAXN” mode for the Nano, and I created an account named “nvidia” (you could replace it with your preferred user name).

Tip #1: When the Jetson Nano boots up to the desktop, I would hit Ctrl-Alt-T to bring up a terminal and then lock the terminal onto the Launcher (on the left-hand side).

Tip #2: In the command terminal, I would type and execute “chromium-browser” to bring up the web browser. Again, I would lock it onto the Launcher for easier use later on.

Then I’d run the following commands to properly set up the CUDA-related environment variables.

$ sudo jetson_clocks
$ sudo apt update
###
### Set proper environment variables
$ mkdir ${HOME}/project
$ cd ${HOME}/project
$ git clone https://github.com/jkjung-avt/jetson_nano.git
$ cd jetson_nano
$ ./install_basics.sh
$ source ${HOME}/.bashrc
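
The install_basics.sh script adds the CUDA directories to the relevant environment variables in ${HOME}/.bashrc. To double-check that the variables took effect, a quick python3 sanity check like the sketch below helps (it assumes the script has put /usr/local/cuda/bin on PATH and the CUDA libraries on LD_LIBRARY_PATH):

import os, shutil

# "nvcc" should resolve once /usr/local/cuda/bin is on PATH
print('nvcc:', shutil.which('nvcc'))
# LD_LIBRARY_PATH should include the CUDA library directory
print('LD_LIBRARY_PATH:', os.environ.get('LD_LIBRARY_PATH', '(not set)'))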

2. Making sure python3 “cv2” is working

I’d just use the built-in OpenCV-4.1.1 provided by NVIDIA (no need to build OpenCV from source myself). I’d install the following system library and python module dependencies. NOTE: Unlike in my previous posts, I just used the stock “protobuf” libraries in Ubuntu. This might not work well for TF-TRT; check out my previous posts for details.

### Install packages required for building code
$ sudo apt update
$ sudo apt install -y build-essential make cmake cmake-curses-gui \
                      git g++ pkg-config curl libfreetype6-dev
### Install dependencies for python3 "cv2"
$ sudo apt install -y libcanberra-gtk-module libcanberra-gtk3-module \
                      protobuf-compiler libprotoc-dev \
                      python3-dev python3-pip
$ sudo pip3 install -U pip Cython testresources setuptools
$ sudo pip3 install protobuf numpy==1.19.4 matplotlib==3.2.2
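
Before moving on, it’s worth a quick check that python3 picks up the JetPack-provided OpenCV (the expected version on JetPack-4.6 is 4.1.1):

import cv2
print(cv2.__version__)   # expect 4.1.1 on JetPack-4.6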

Then I’d test my tegra-cam.py script with a USB webcam to make sure the python3 “cv2” module could capture and display images properly.

### Test tegra-cam.py (using a USB webcam)
$ cd ${HOME}/project
$ wget https://gist.githubusercontent.com/jkjung-avt/86b60a7723b97da19f7bfa3cb7d2690e/raw/3dd82662f6b4584c58ba81ecba93dd6f52c3366c/tegra-cam.py
$ python3 tegra-cam.py --usb --vid 0
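
For reference, the USB code path of tegra-cam.py boils down to a standard “cv2” capture loop, roughly like the sketch below (the actual script additionally handles command-line options and the onboard CSI camera via GStreamer):

import cv2

cap = cv2.VideoCapture(0)            # video device 0, i.e. the USB webcam
if not cap.isOpened():
    raise SystemExit('failed to open the camera')
while True:
    ret, frame = cap.read()          # grab one frame
    if not ret:
        break
    cv2.imshow('USB Camera Demo', frame)
    if cv2.waitKey(1) == 27:         # press Esc to quit
        break
cap.release()
cv2.destroyAllWindows()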

3. Installing tensorflow 2

Reference (official documentation from NVIDIA): Installing TensorFlow For Jetson Platform

To install tensorflow, I just followed the instructions in the official documentation.

$ sudo apt install -y libhdf5-serial-dev hdf5-tools libhdf5-dev zlib1g-dev \
                      zip libjpeg8-dev liblapack-dev libblas-dev gfortran
$ sudo pip3 install -U --no-deps numpy==1.19.4 future==0.18.2 mock==3.0.5 \
                                 keras_preprocessing==1.1.2 keras_applications==1.0.8 \
                                 gast==0.4.0 protobuf pybind11 cython pkgconfig
$ sudo env H5PY_SETUP_REQUIRES=0 pip3 install -U h5py==3.1.0
$ sudo pip3 install --pre --extra-index-url \
                    https://developer.download.nvidia.com/compute/redist/jp/v46 \
                    'tensorflow>=2'

And I verified the installation as follows.

$ TF_CPP_MIN_LOG_LEVEL=3 python3 -c "import tensorflow as tf; tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR); print('tensorflow version: %s' % tf.__version__); print('tensorflow.test.is_built_with_cuda(): %s' % tf.test.is_built_with_cuda()); print('tensorflow.test.is_gpu_available(): %s' % tf.test.is_gpu_available(cuda_only=False, min_cuda_compute_capability=None))"
tensorflow version: 2.6.2
tensorflow.test.is_built_with_cuda(): True
tensorflow.test.is_gpu_available(): True
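
Note that tf.test.is_gpu_available() is deprecated in tensorflow 2. As an additional sanity check, one could run a small computation and verify it actually executes on the GPU. A minimal sketch:

import tensorflow as tf

# run one matmul on the GPU and check which device produced the result
with tf.device('/GPU:0'):
    a = tf.random.normal((1024, 1024))
    b = tf.random.normal((1024, 1024))
    c = tf.matmul(a, b)
print(c.device)   # expect a string ending in 'device:GPU:0'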

4. Testing TensorRT GoogLeNet and MTCNN

Reference: Demo #1: GoogLeNet and Demo #2: MTCNN

### Clone the tensorrt_demos code
$ cd ${HOME}/project
$ git clone https://github.com/jkjung-avt/tensorrt_demos.git
###
### Build TensorRT engine for GoogLeNet
$ cd ${HOME}/project/tensorrt_demos/googlenet
$ make
$ ./create_engine
###
### Build TensorRT engines for MTCNN
$ cd ${HOME}/project/tensorrt_demos/mtcnn
$ make
$ ./create_engines
###
### Build the "pytrt" Cython module
$ cd ${HOME}/project/tensorrt_demos
$ sudo pip3 install Cython
$ make

I tested TensorRT GoogLeNet using a USB webcam,

$ python3 trt_googlenet.py --usb 0

and also tested TensorRT MTCNN using a USB webcam.

$ python3 trt_mtcnn.py --usb 0

5. Skipping TensorRT SSD models

My Demo #3: SSD example only works with tensorflow-1.x, so I skipped this test.

6. Testing TensorRT YOLOv3 and YOLOv4 models

Reference: Demo #5: YOLOv4

I installed dependencies and built the TensorRT yolov3/yolov4 engines.

### Install dependencies and build "yolov3-416" and "yolov4-416" TensorRT engines
$ sudo pip3 install onnx==1.9.0
$ cd ${HOME}/project/tensorrt_demos/plugins
$ make
$ cd ${HOME}/project/tensorrt_demos/yolo
$ ./install_pycuda.sh
$ ./download_yolo.sh
$ python3 yolo_to_onnx.py -m yolov3-416
$ python3 onnx_to_tensorrt.py -v -m yolov3-416
$ python3 yolo_to_onnx.py -m yolov4-416
$ python3 onnx_to_tensorrt.py -v -m yolov4-416

Next, I tested the TensorRT engines with the “dog.jpg” image.

$ cd ${HOME}/project/tensorrt_demos
$ python3 trt_yolo.py --image yolo/dog.jpg -m yolov3-416
$ python3 trt_yolo.py --image yolo/dog.jpg -m yolov4-416
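
To poke at a generated engine directly, the TensorRT python API can deserialize it and list its input/output bindings. Below is a minimal sketch, assuming onnx_to_tensorrt.py saved the engine as yolo/yolov4-416.trt (check the actual file name in the yolo/ directory):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
with open('yolo/yolov4-416.trt', 'rb') as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
# list the engine's input/output bindings and their shapes
for i in range(engine.num_bindings):
    print(engine.get_binding_name(i), engine.get_binding_shape(i))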

7. Testing TensorRT MODNet

Reference: Demo #7: MODNet

### Build TensorRT engine for MODNet
$ cd ${HOME}/project/tensorrt_demos/modnet
$ python3 onnx_to_tensorrt.py modnet.onnx modnet.engine

I tested the TensorRT engine with the “image.jpg” image.

$ cd ${HOME}/project/tensorrt_demos
$ python3 trt_modnet.py --image modnet/image.jpg --demo_mode

Conclusion

JetPack-4.6 features an updated version of TensorRT, i.e. TensorRT 8. I have verified that my tensorrt_demos samples work with TensorRT 8 on Jetson Nano.
