
TAO CV Sample Workflows

Train Adapt Optimize (TAO) Toolkit is a Python-based AI toolkit for taking purpose-built pre-trained AI models and customizing them with your own data. TAO adapts popular network architectures and backbones to your data, allowing you to train, fine-tune, prune, and export highly optimized and accurate AI models for edge deployment.


The pre-trained models accelerate the AI training process and reduce the costs associated with large-scale data collection, labeling, and training models from scratch. Transfer learning with pre-trained models can be used for AI applications in smart cities, retail, healthcare, industrial inspection, and more.

Build end-to-end services and solutions for transforming pixels and sensor data to actionable insights using TAO, DeepStream SDK, and TensorRT. TAO can train models for common vision AI tasks such as object detection, classification, and instance segmentation, as well as more complex tasks such as facial landmark estimation, gaze estimation, heart rate estimation, and others.

This resource lists several sample notebooks to walk you through the full training workflow using TAO 3.0.

Getting Started

To get started, first choose the model architecture that you want to build, then select the appropriate model card on NGC, and then choose one of the supported backbones.



  1. Set up your Python environment using python virtualenv and virtualenvwrapper.
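A minimal sketch of this step, using Python's built-in venv module as a stand-in if virtualenvwrapper is unavailable; the environment name "tao-env" and its location are placeholders:

```shell
# Create and activate an isolated Python environment for the TAO launcher.
# "tao-env" is a placeholder name; pick any path you like.
python3 -m venv "$HOME/.venvs/tao-env"
. "$HOME/.venvs/tao-env/bin/activate"
python3 -m pip install --upgrade pip
```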

  2. TAO Toolkit provides an abstraction above the container: you launch all your training jobs from the launcher, so there is no need to manually pull the appropriate container; tao-launcher handles that. You can install the launcher using pip with the following command.

pip3 install nvidia-tao
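Once installed, the launcher exposes each network as a sub-command that pulls and runs the matching container behind the scenes. A sketch of typical usage, with placeholder spec-file and results paths (exact sub-commands and flags depend on the TAO version you install):

```shell
# Confirm the launcher is on PATH and list available tasks
# (output depends on the installed TAO version).
tao --help

# Example: kick off DetectNet_v2 training through the launcher.
# The spec file, results directory, and $KEY are placeholders.
tao detectnet_v2 train -e /workspace/specs/detectnet_v2_train.txt \
                       -r /workspace/results \
                       -k $KEY
```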

  3. Download the sample Jupyter notebooks using the command mentioned in the CLI tab, then start a Jupyter server using the following command.


jupyter notebook --ip 0.0.0.0 --port 8888 --allow-root

Purpose-built Model          Jupyter notebook
PeopleNet                    detectnet_v2/detectnet_v2.ipynb
TrafficCamNet                detectnet_v2/detectnet_v2.ipynb
DashCamNet                   detectnet_v2/detectnet_v2.ipynb
FaceDetectIR                 detectnet_v2/detectnet_v2.ipynb
VehicleMakeNet               classification/classification.ipynb
VehicleTypeNet               classification/classification.ipynb
PeopleSegNet                 mask_rcnn/mask_rcnn.ipynb
PeopleSemSegNet              unet/unet_isbi.ipynb
Bodypose Estimation          bpnet/bpnet.ipynb
License Plate Detection      detectnet_v2/detectnet_v2.ipynb
License Plate Recognition    lprnet/lprnet.ipynb
Gaze Estimation              gazenet/gazenet.ipynb
Facial Landmark              fpenet/fpenet.ipynb
Heart Rate Estimation        heartratenet/heartratenet.ipynb
Gesture Recognition          gesturenet/gesturenet.ipynb
Emotion Recognition          emotionnet/emotionnet.ipynb
FaceDetect                   facenet/facenet.ipynb
ActionRecognitionNet         action_recognition_net/actionrecognitionnet.ipynb
PoseClassificationNet        pose_classification_net/pose_classificationnet.ipynb
Pointpillars                 pointpillars/pointpillars.ipynb
Open model architecture      Jupyter notebook
DetectNet_v2                 detectnet_v2/detectnet_v2.ipynb
FasterRCNN                   faster_rcnn/faster_rcnn.ipynb
YOLOV3                       yolo_v3/yolo_v3.ipynb
YOLOV4                       yolo_v4/yolo_v4.ipynb
YOLOv4-Tiny                  yolo_v4_tiny/yolo_v4_tiny.ipynb
SSD                          ssd/ssd.ipynb
DSSD                         dssd/dssd.ipynb
RetinaNet                    retinanet/retinanet.ipynb
MaskRCNN                     mask_rcnn/mask_rcnn.ipynb
UNET                         unet/unet_isbi.ipynb
Image Classification         classification/classification.ipynb
EfficientDet                 efficientdet/efficientdet.ipynb