Join RovisLab on June 25 at the 2021 IEEE Computer Vision and Pattern Recognition conference (CVPR), Workshop on Embedded Vision, for a live demonstration of our Rovis.AI software stack. Our showcase demonstrates the perception capabilities of Rovis.AI for controlling autonomous vehicles navigating on forestry roads.
We are proud to organize the first edition of the Romanian AI Days conference. This edition will be held online, under the auspices of Transilvania University of Brașov, and will mark the 10-year anniversary of its ROVIS Lab (Robotics, Vision and Control Laboratory).
Our paper Cloud2Edge Elastic AI Framework for Prototyping and Deployment of AI Inference Engines in Autonomous Vehicles has been accepted for publication in the Sensors journal, special issue on Communication, Positioning, and Sensing Solutions for Autonomous Vehicles (impact factor 3.275).
The list of publications used in our survey of Deep Learning Techniques for Autonomous Driving is now available. We grouped all the references by content, following the structure of the article.
Our GFPNet algorithm for 3D shape completion has been accepted for publication in IEEE Robotics and Automation Letters (RA-L).
An interactive demo of autonomous vehicle navigation was presented at the “Sergiu T. Chiriacescu” Aula during an event called “IESC și companiile” (Electrical Engineering Faculty and the Companies), organized by the Transilvania University of Brasov.
The latest technologies and advanced robot prototypes, combined with many workshops and presentations on robotics, were some of the highlights of the European Robotics Forum, held in Malaga, Spain, on 3–6 March 2020.
Our work on the NeuroTrajectory local state estimation approach, also published in the IEEE Robotics and Automation Letters (RA-L), was presented at IROS 2019 in Macao, China.
Congratulations to our ROVISLab team, which managed to get 3rd place at the first autonomous drone contest held at the Technical University of Cluj-Napoca.
Our article entitled A Survey of Deep Learning Techniques for Autonomous Driving has been accepted for publication in the Journal of Field Robotics, one of the highest-ranked robotics journals worldwide.
The ROVIS team attended the East European Summer School 2019 in Bucharest. It was a great week full of core topics in machine learning and artificial intelligence.
Congratulations to our ROVISLab team, which managed to get 3rd place at RObotX, the National Robotics Contest, and qualified for the international stage of EuroBot in Paris!
To share the experience gathered at the University, the ROVIS team is currently visiting high schools in Brasov and other towns.
The ROVIS team attended the European Robotics Forum. During the event, the team presented multiple showcases of AI usage in the context of autonomous driving.
Liviu, Bogdan and Sorin at the Int. Conf. on Robotic Computing 2019 in Naples, Italy. Liviu presented our Deep Grid Net (DGN) system, showing how we classify the driving context in an autonomous vehicle. Bogdan presented our occupancy grid simulator GridSim, designed for fast prototyping of autonomous driving controllers.
Deep Grid Net (DGN) is a deep learning system designed for understanding the context in which an autonomous car is driving. DGN incorporates a learned driving environment representation based on Occupancy Grids (OG) obtained from raw Lidar data and constructed on top of the Dempster-Shafer (DS) theory.
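To make the Dempster-Shafer occupancy representation concrete, here is a minimal sketch of combining two lidar observations of a single grid cell with Dempster's rule. The mass values and the three-element `(free, occupied, unknown)` encoding are illustrative assumptions, not DGN's actual implementation.

```python
# Minimal sketch of a Dempster-Shafer occupancy cell update. Each
# basic belief assignment puts mass on {free}, {occupied} and the
# whole frame Theta (unknown); values are illustrative only.

def ds_combine(m1, m2):
    """Combine two belief assignments (free, occupied, unknown)
    with Dempster's rule of combination."""
    f1, o1, u1 = m1
    f2, o2, u2 = m2
    conflict = f1 * o2 + o1 * f2          # mass assigned to the empty set
    norm = 1.0 - conflict                 # renormalization factor
    free = (f1 * f2 + f1 * u2 + u1 * f2) / norm
    occ  = (o1 * o2 + o1 * u2 + u1 * o2) / norm
    unk  = (u1 * u2) / norm
    return (free, occ, unk)

# Two measurements of the same cell: weak "free" evidence followed
# by stronger "occupied" evidence.
cell = ds_combine((0.6, 0.1, 0.3), (0.1, 0.7, 0.2))
```

After combination the occupied mass dominates, while residual conflict between the two readings is absorbed by the normalization term.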
We are happy to introduce GridSim, an autonomous driving simulator engine that uses a car-like robot architecture to generate occupancy grids from simulated sensors. It allows for multiple scenarios to be easily represented and loaded into the simulator as backgrounds. You can use GridSim to design and evaluate End2End and Deep Reinforcement learning based autonomous driving control systems.
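As a toy illustration of the kind of occupancy grid a simulator like GridSim produces, the sketch below ray-casts a simulated range sensor against a small 2D world and marks cells free, occupied, or unknown. The world layout, function names, and parameters are invented for illustration and are not GridSim's API.

```python
import math

# Toy 2D world: '#' marks obstacles, '.' marks free space.
WORLD = [
    "..........",
    "....##....",
    "....##....",
    "..........",
]

def ray_cast(grid, x0, y0, angle, max_range=10):
    """March along a ray; return traversed free cells and the hit cell."""
    free, hit = [], None
    for r in range(1, max_range):
        x = int(round(x0 + r * math.cos(angle)))
        y = int(round(y0 + r * math.sin(angle)))
        if not (0 <= y < len(grid) and 0 <= x < len(grid[0])):
            break                     # ray left the world
        if grid[y][x] == "#":
            hit = (x, y)              # ray stopped by an obstacle
            break
        free.append((x, y))
    return free, hit

def occupancy_grid(grid, pose, n_rays=16):
    """Build an occupancy grid from a fan of simulated sensor rays."""
    og = [[0.5] * len(grid[0]) for _ in grid]   # 0.5 = unknown
    for k in range(n_rays):
        free, hit = ray_cast(grid, pose[0], pose[1],
                             2 * math.pi * k / n_rays)
        for (x, y) in free:
            og[y][x] = 0.0            # observed free
        if hit:
            og[hit[1]][hit[0]] = 1.0  # observed occupied
    return og

og = occupancy_grid(WORLD, (1, 1))
```

Grids like this one are the input representation that End2End and Deep Reinforcement Learning controllers consume during training.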

Happy New Year!

01 January 2019
Happy New Year from team ROVIS!
The Generative One-shot Learning (GOL) algorithm is a generative framework which takes as input a single object instance, or generic pattern, and a small set of so-called regularization samples used to drive the generative process. New synthetic data is generated from the single object instance using a set of generalization functions. The proposed system encompasses a Deep Neural Network classifier which gets updated with each data generation iteration. The GOL training procedure follows a multi-objective optimization approach that jointly maximizes a generalization energy, given by the distance between the generated synthetic data and the set of regularization samples, and the classification accuracy across the object classes.
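The generation loop described above can be sketched as follows. This is a hypothetical, simplified version: the transforms stand in for GOL's generalization functions, and a plain distance score stands in for both the generalization energy and the deep classifier update, neither of which is reproduced here.

```python
import random

def generalization_functions():
    # Illustrative transforms; GOL's actual generalization
    # functions are richer than jitter and scaling.
    return [
        lambda x: [v + random.gauss(0, 0.1) for v in x],      # jitter
        lambda x: [v * random.uniform(0.8, 1.2) for v in x],  # scale
    ]

def distance(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

def generate(instance, regularizers, n_iters=50):
    """Expand a single instance into synthetic samples, scoring each
    by its distance to the regularization set."""
    samples = []
    for _ in range(n_iters):
        g = random.choice(generalization_functions())
        candidate = g(instance)
        # Generalization energy: distance to the regularization set.
        energy = min(distance(candidate, r) for r in regularizers)
        samples.append((energy, candidate))
    # Keep the most "general" samples (largest energy), mimicking
    # maximization of the generalization energy.
    samples.sort(key=lambda s: -s[0])
    return [c for _, c in samples[: n_iters // 2]]

random.seed(0)
synthetic = generate([1.0, 2.0, 3.0],
                     [[0.9, 2.1, 2.9], [1.2, 1.8, 3.1]])
```

In the full system, each retained batch would also be evaluated by the classifier, so samples that hurt class separability are discarded.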
The traffic participants detection algorithm is a component within the more comprehensive road detection system, which is able to detect neighboring vehicles on the road. Using the model obtained by the road detection system, the Traffic Participants Detection algorithm can estimate the "real world" distance between the ego-vehicle and other cars present in the scene.
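A back-of-the-envelope sketch of how such a "real world" distance can be derived from a monocular detection, assuming a pinhole camera model. The focal length and the assumed vehicle width below are illustrative values, not the calibration used by the actual system.

```python
# Monocular range estimate from the apparent width of a detected car,
# using the pinhole projection Z = f * W / w. All parameter values
# are illustrative assumptions.

def distance_to_vehicle(bbox_width_px, focal_length_px=1000.0,
                        real_width_m=1.8):
    """Estimate range Z from the bounding-box width in pixels (w),
    the focal length in pixels (f), and an assumed car width (W)."""
    return focal_length_px * real_width_m / bbox_width_px

# A car whose bounding box spans 90 pixels is roughly 20 m away.
d = distance_to_vehicle(90.0)
```

The same relation also explains why the estimate degrades for partially occluded vehicles: a truncated bounding box inflates the apparent distance.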
The ROVIS Robust Gaze Estimator (RGazE) is a fast and accurate 3D human gaze estimation algorithm which uses a collaborative tracking framework composed of a cascade of region and spatial classifiers for the extraction of the facial Regions of Interest (ROI), followed by a Gaussian Mixture Model (GMM) point estimator for calculating the facial feature points. One key concept behind this work is to control the parameters of the classifiers with respect to a feedback variable describing the quality of feature extraction.
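The feedback idea behind RGazE can be illustrated with a minimal proportional controller: a quality measure of the last feature extraction nudges a classifier parameter (here a detection threshold) toward better operating conditions. The variable names, gain, and target value are invented for illustration and are not RGazE's actual parameters.

```python
# Sketch of feedback-controlled classifier tuning: the extraction
# quality of the previous frame adjusts a detection threshold via a
# simple proportional rule. Gains and targets are illustrative.

def adjust_threshold(threshold, quality, target=0.8, gain=0.1):
    """Lower the threshold when extraction quality falls below the
    target, raise it again once quality recovers."""
    return threshold + gain * (quality - target)

t = 0.5
for q in [0.6, 0.7, 0.9]:   # extraction quality recovering over frames
    t = adjust_threshold(t, q)
```

A controller of this shape keeps the ROI classifiers permissive when tracking is weak without letting false positives accumulate once it stabilizes.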

The ROVIS Human-Robot Interaction and Tracking System

10 September 2014
First experiments with the ROVIS human-robot interaction and tracking system on a Neobotix MP 500 mobile platform. Performance evaluation at the Department of Information Technology, Széchenyi István University, Győr, Hungary.
The purpose of a 3D Generic Fitted Primitive (GFP) is to fully reconstruct a 3D object from sparse visual data. A modelling step then particularizes the obtained primitive volume in order to determine safe and robust grasp actions in service robotics.
The 2D-3D Collaborative Tracking (23CT) method for tracking rigid bodies in the context of mobile robotic manipulation is illustrated in this video. The tracking approach is based on a collaborative framework built around a 2D multi-class Region of Interest tracking system and a 3D model-based tracker, where both trackers benefit from each other's results. The goal of the algorithm is to improve the motion planning and object handling capabilities of service robotics platforms that operate in complex and cluttered human environments.
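The collaboration between the two trackers can be sketched schematically: each tracker produces a state estimate with a confidence, and the fused state is fed back to both as the next prior. The fusion rule and all names below are illustrative assumptions, not 23CT's actual formulation.

```python
# Schematic confidence-weighted fusion of two tracker estimates, as a
# stand-in for the 2D/3D collaboration in 23CT. Values and weighting
# are illustrative only.

def fuse(est_2d, conf_2d, est_3d, conf_3d):
    """Confidence-weighted average of two 3D position estimates."""
    total = conf_2d + conf_3d
    return [(conf_2d * a + conf_3d * b) / total
            for a, b in zip(est_2d, est_3d)]

# The 2D tracker (lifted to 3D) is less certain than the
# model-based 3D tracker, so the fused prior leans toward the latter.
prior = fuse([1.0, 2.0, 0.0], 0.3, [1.2, 1.8, 0.1], 0.7)
```

Feeding the fused prior back into both trackers is what lets the 2D ROI tracker recover from occlusions the 3D tracker handles well, and vice versa.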