Liviu, Bogdan and Sorin at the Int. Conf. on Robotic Computing 2019 in Naples, Italy. Liviu presented our Deep Grid Net (DGN) system, showing how we classify the driving context in an autonomous vehicle. Bogdan presented our occupancy grid simulator GridSim, designed for fast prototyping of autonomous driving controllers.
Deep Grid Net (DGN) is a deep learning system designed for understanding the context in which an autonomous car is driving. DGN incorporates a learned driving environment representation based on Occupancy Grids (OG) obtained from raw Lidar data and constructed on top of the Dempster-Shafer (DS) theory.
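To illustrate the Dempster-Shafer grounding of the occupancy grids, here is a minimal sketch (not the DGN code itself; all names are illustrative) of how a single grid cell's belief masses over {free}, {occupied} and the ignorance set {free, occupied} could be fused with a new Lidar measurement using Dempster's rule of combination:

```python
# Hypothetical Dempster-Shafer update of one occupancy-grid cell.
# A cell is a tuple (m_free, m_occupied, m_unknown) of belief masses.

def dempster_combine(cell, meas):
    """Fuse two basic belief assignments with Dempster's rule."""
    f1, o1, u1 = cell
    f2, o2, u2 = meas
    # Conflict mass: one source says "free" while the other says "occupied".
    k = f1 * o2 + o1 * f2
    norm = 1.0 - k
    f = (f1 * f2 + f1 * u2 + u1 * f2) / norm
    o = (o1 * o2 + o1 * u2 + u1 * o2) / norm
    u = (u1 * u2) / norm
    return (f, o, u)

# An unknown cell fused with a confident "occupied" measurement:
cell = (0.0, 0.0, 1.0)                      # total ignorance
cell = dempster_combine(cell, (0.1, 0.8, 0.1))
print(cell)                                  # mass shifts toward "occupied"
```

The explicit ignorance mass is what distinguishes this from a plain Bayesian occupancy update: an unobserved cell stays "unknown" rather than defaulting to a 0.5 occupancy probability.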
We are happy to introduce GridSim, an autonomous driving simulator engine that uses a car-like robot architecture to generate occupancy grids from simulated sensors. It allows for multiple scenarios to be easily represented and loaded into the simulator as backgrounds. You can use GridSim to design and evaluate End2End and Deep Reinforcement learning based autonomous driving control systems.
The Generative One-shot Learning (GOL) algorithm is a generative framework which takes as input a single object instance, or generic pattern, and a small set of so-called regularization samples used to drive the generative process. New synthetic data is generated from the single object instance using a set of generalization functions. The proposed system encompasses a Deep Neural Network classifier that is updated with each data generation iteration. The GOL training procedure follows a multi-objective optimization approach, in which a generalization energy, given by the distance between the generated synthetic data and the set of regularization samples, is maximized, while simultaneously maximizing the classification accuracy across object classes.
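The generation step above can be sketched as follows. This is an illustrative toy version, not the GOL implementation: the template, the perturbation-based generalization function and the nearest-sample energy are all assumptions made for the example.

```python
# Toy sketch of GOL-style synthetic data generation: a single template
# is perturbed by a generalization function, and a generalization
# energy measures how far a synthetic sample lies from the closest
# regularization sample. All names here are illustrative.
import random

def generalize(template, noise=0.1):
    """One generalization function: additive perturbation of the template."""
    return [x + random.uniform(-noise, noise) for x in template]

def generalization_energy(sample, regularizers):
    """Distance from a synthetic sample to its nearest regularization sample."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(dist(sample, r) for r in regularizers)

template = [0.2, 0.5, 0.9]                      # the single object instance
regularizers = [[0.1, 0.4, 0.8], [0.3, 0.6, 1.0]]

# Generate candidates and keep the one with the highest energy,
# mimicking the maximization step of the multi-objective procedure.
synthetic = [generalize(template) for _ in range(32)]
best = max(synthetic, key=lambda s: generalization_energy(s, regularizers))
```

In the full system this energy term is traded off against the classifier's accuracy, so the generated data is pushed away from the regularization samples only as far as the classifier can still separate the object classes.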
The Traffic Participants Detection algorithm is a component of the broader road detection system and is able to detect neighboring vehicles on the road. Using the model obtained by the road detection system, it can estimate the real-world distance between the ego-vehicle and the other cars present in the scene.
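As a rough illustration of how such a distance estimate can be obtained (this is a generic pinhole-camera sketch under assumed values, not the ROVIS method), the detected vehicle's bounding-box height can be converted into a longitudinal distance when the camera focal length and a nominal real vehicle height are known:

```python
# Pinhole similar-triangles sketch: d = f * H / h, where f is the
# focal length in pixels, H the assumed real object height in metres,
# and h the observed bounding-box height in pixels. Values are
# illustrative, not calibration data from the actual system.

def distance_from_bbox(focal_px, real_height_m, bbox_height_px):
    """Estimate the longitudinal distance to a detected vehicle."""
    return focal_px * real_height_m / bbox_height_px

d = distance_from_bbox(focal_px=1000.0, real_height_m=1.5, bbox_height_px=50.0)
print(d)  # 30.0 metres
```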
The ROVIS Robust Gaze Estimator (RGazE) is a fast and accurate 3D human gaze estimation algorithm which uses a collaborative tracking framework composed of a cascade of region and spatial classifiers for the extraction of the facial Regions of Interest (ROI), followed by a Gaussian Mixture Model (GMM) point estimator for calculating the facial feature points. One key concept behind this work is to control the parameters of the classifiers with respect to a feedback variable describing the quality of feature extraction.
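A minimal sketch of the GMM point-estimation idea (not the RGazE code; the synthetic candidate pixels and component count are assumptions for the example): pixel candidates inside a detected facial ROI are clustered with a Gaussian Mixture Model, and each component mean is read off as one feature-point estimate.

```python
# Toy GMM point estimator: cluster candidate pixels inside an ROI and
# take each Gaussian component's mean as a facial feature point.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic candidate pixels around two hypothetical feature points
# (e.g. the two eye corners) inside the ROI.
pts = np.vstack([
    rng.normal([20.0, 30.0], 1.5, size=(100, 2)),
    rng.normal([60.0, 32.0], 1.5, size=(100, 2)),
])

gmm = GaussianMixture(n_components=2, random_state=0).fit(pts)
feature_points = gmm.means_          # one (x, y) estimate per component
```

The feedback idea described above would then adjust the upstream classifier parameters whenever the mixture fit degrades, e.g. when the components overlap or the candidate set becomes too sparse.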
The ROVIS Human-Robot Interaction and Tracking System
10 September 2014
First experiments with the ROVIS human-robot interaction and tracking system on a Neobotix MP 500 mobile platform. Performance evaluation at the Department of Information Technology, Széchenyi István University, Győr, Hungary.
The purpose of a 3D Generic Fitted Primitive (GFP) is to fully reconstruct a 3D object from sparse visual data. A modelling step then particularizes the obtained primitive volume in order to determine safe and robust grasp actions in service robotics.
The 2D-3D Collaborative Tracking (23CT) method for tracking rigid bodies in the context of mobile robotic manipulation is illustrated in this video. The approach is based on a collaborative tracking framework built around a 2D multi-class Region of Interest tracking system and a 3D model-based tracker, where each tracker benefits from the other's results. The goal of the algorithm is to improve the motion planning and object handling capabilities of service robotics platforms that operate in complex and cluttered human environments.