New York City BubbleLife -
Blog Offers a Comprehensive Understanding of Object Tracking

August 17, 2019 – A blog post written by Shishira R Maiya offers a detailed explanation of object tracking. It explains how to use deep learning to track custom objects in a video, and discusses object detection and tracking technology that can capture unique patterns across video frames.

According to the blog, the scientific community has placed a great deal of emphasis on detecting objects in an image, but detecting objects in a video remains a less explored area. Tracking objects in a video is quite different from detecting objects in an image. A video may call for single-object tracking or multiple-object tracking. Single-object tracking is relatively easy, but multiple-object tracking is a more complex problem and requires more sophisticated algorithms.

The author covers both traditional methods and deep-learning-based approaches to building an object tracker. One prominent deep learning method explained in the blog is ROLO. This method focuses on capturing spatio-temporal features for tracking an object in a video. The model has a simple architecture and produces output that reflects both spatial and temporal information, which lets it track objects through many common tracking challenges.
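The spatio-temporal idea behind ROLO, a detector's per-frame spatial output fed through a recurrent network that carries temporal state across frames, can be sketched as below. This is a minimal illustration, not the trained ROLO model: the tiny LSTM cell, the 4-number box features, and the linear read-out head are assumptions with random weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Minimal hand-rolled LSTM cell; weights are random, for illustration only."""
    def __init__(self, in_dim, hid_dim):
        self.W = rng.standard_normal((4 * hid_dim, in_dim + hid_dim)) * 0.1
        self.b = np.zeros(4 * hid_dim)

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, o, g = np.split(z, 4)
        i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
        c = f * c + i * g          # temporal state carried across frames
        h = o * np.tanh(c)
        return h, c

# ROLO-style loop (sketch): each frame's detector output (here a
# hypothetical [x, y, w, h] box) is fed to the LSTM, which accumulates
# temporal context; a linear head reads a smoothed box back out.
feat_dim, hid_dim = 4, 8
cell = LSTMCell(feat_dim, hid_dim)
head = rng.standard_normal((4, hid_dim)) * 0.1   # hidden state -> box

h, c = np.zeros(hid_dim), np.zeros(hid_dim)
for frame_box in [np.array([10., 10., 5., 5.]),
                  np.array([11., 10., 5., 5.]),
                  np.array([12., 11., 5., 5.])]:
    h, c = cell.step(frame_box, h, c)
    smoothed_box = head @ h
```

In the real method the per-frame input is a learned feature vector from the detector rather than the raw box, and the LSTM weights are trained so the read-out predicts the object's location.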

Shishira, however, notes that DeepSORT is the most widely used video tracking framework and can deliver better results. In this method, a track is created for each detection, and each track carries the information needed to follow its object across frames. The author maintains that deep learning is essential when it comes to tracking objects in a video.
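The idea that a track is created for each new detection and carries the state needed for association can be sketched in a DeepSORT-style loop. The field names and the greedy IoU matching below are simplifications for illustration, not the actual DeepSORT API; the real framework uses a Kalman filter, Mahalanobis and cosine appearance distances, and the Hungarian algorithm for assignment.

```python
from dataclasses import dataclass, field
from itertools import count
import numpy as np

_track_ids = count(1)

@dataclass
class Track:
    """Per-object state a DeepSORT-style tracker maintains (illustrative)."""
    box: np.ndarray        # [x, y, w, h] of the latest matched detection
    embedding: np.ndarray  # appearance feature of that detection
    track_id: int = field(default_factory=lambda: next(_track_ids))
    misses: int = 0        # frames since the last match

def iou(a, b):
    """Intersection-over-union of two [x, y, w, h] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2])
    y2 = min(a[1] + a[3], b[1] + b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    return inter / (a[2] * a[3] + b[2] * b[3] - inter)

def update(tracks, detections, iou_thresh=0.3):
    """Greedy association sketch (hypothetical helper, not the real API)."""
    unmatched = list(detections)
    for t in tracks:
        if unmatched:
            best = max(unmatched, key=lambda d: iou(t.box, d["box"]))
            if iou(t.box, best["box"]) >= iou_thresh:
                t.box, t.embedding, t.misses = best["box"], best["emb"], 0
                unmatched.remove(best)
                continue
        t.misses += 1
    # a new track is created for every detection left unmatched
    tracks += [Track(d["box"], d["emb"]) for d in unmatched]
    return tracks

tracks = update([], [{"box": np.array([0., 0., 10., 10.]),
                      "emb": np.zeros(128)}])
tracks = update(tracks, [{"box": np.array([1., 0., 10., 10.]),
                          "emb": np.zeros(128)}])
# the second detection overlaps the first track, so it is matched
# rather than spawning a new track
```

Tracks that go unmatched for too many frames are deleted in the real framework; the `misses` counter here stands in for that bookkeeping.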

For a complete treatment of object tracking in a video, the blog can be accessed here:


Nanonets makes machine learning simple. With Nanonets, building deep learning models is as simple as uploading data: no parameter tuning, and no need to find the right infrastructure to host models. One just needs to show the model a few samples to learn from and wait for the magic. Nanonets builds, trains, and hosts the model, so all one needs to do is add two lines of code to the codebase to get things running.

Media Contact:

Contact Person: Sumit Bhagat




Wednesday, August 21, 2019