
### Generative One-shot Learning (GOL): A Semi-parametric Approach to One-shot Learning

An important goal of artificial intelligence systems is one-shot learning, where the task is to recognize objects, or patterns, after a single object instance has been used to learn its representation. Despite the last decade's breakthroughs in Deep Neural Networks (DNNs), one-shot learning remains an unsolved challenge, since DNNs require large amounts of labeled data for training.

In this work, a semi-parametric approach to one-shot learning, coined Generative One-shot Learning (GOL), is proposed, with its block diagram presented in Fig. 1. GOL is a generative framework which takes as input a single object instance, or generic pattern, together with a small set of so-called regularization samples used to drive the generative process. New synthetic data is generated from the single object instance using a set of generalization functions. The system encompasses a DNN classifier which is updated at each data generation iteration. GOL training follows a multiobjective optimization approach, in which a generalization energy, given by the similarity between the generated synthetic data and the regularization samples, is maximized, while the classification accuracy between the object classes is also maximized. GOL has been evaluated on a one-dimensional one-shot learning problem, on the MNIST and Omniglot character benchmark datasets, as well as on environment perception challenges for autonomous driving.
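The generation step above can be sketched in a few lines. The concrete family of generalization functions is task-dependent and not specified here, so the one below (amplitude scaling, circular shift, and additive noise on a 1-D pattern, echoing the one-dimensional experiment mentioned above) is purely illustrative:

```python
import numpy as np

def generalization_function(s, theta):
    """Illustrative generalization function: maps the one-shot 1-D
    pattern s to a synthetic variant using parameters theta =
    (amplitude scale, circular shift, noise std). The real GOL
    function set is task-dependent."""
    scale, shift, sigma = theta
    rng = np.random.default_rng(0)  # fixed seed for reproducibility
    return scale * np.roll(s, shift) + rng.normal(0.0, sigma, size=s.shape)

# A single one-shot 1-D pattern (a Gaussian bump).
s = np.exp(-0.5 * np.linspace(-3, 3, 64) ** 2)

# Each parameter vector theta yields one synthetic sample x_hat.
thetas = [(1.0, 0, 0.01), (0.8, 3, 0.02), (1.2, -5, 0.05)]
synthetic = np.stack([generalization_function(s, t) for t in thetas])
print(synthetic.shape)  # (3, 64)
```

Sweeping or optimizing over the parameter vectors is what turns a single instance into a training set for the classifier.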

Fig. 1 Block diagram of the Generative One-shot Learning (GOL) algorithm.

##### Generative One-shot Learning (GOL)

The GOL framework takes as input a set of one-shot objects $s$, along with a set of regularization samples $e$. The one-shot objects are used by the Generalization Generator to generate artificial samples $\hat{x} \sim P(x)$ for each object class, mimicking the real probability density function $P(x)$. The sample generation process is governed by a set of generalization functions $G(s, \Theta, J, a)$, where $\Theta$ are the parameters of the functions, $J(\hat{x}, \Theta)$ is the Generalization Energy, given by the similarity between the generated samples $\hat{x}$ and the regularization samples $e$, and $a(\hat{x}, \Theta)$ is the accuracy of the DNN classifier $c(\hat{x})$, trained via backpropagation. The training of GOL is performed through a Pareto biobjective optimization technique, which aims at maximizing at the same time the generalization energy $J(\cdot)$ and the classification accuracy $a(\cdot)$. The optimal classifier $c(\hat{x}^*)$ is trained on the optimal artificial samples $\hat{x}^*$, obtained once the optimization process converges.
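The Pareto biobjective selection over $J(\cdot)$ and $a(\cdot)$ can be illustrated with a minimal non-domination filter. The candidate scores below are invented placeholders, standing in for (generalization energy, classifier accuracy) pairs evaluated for different parameter vectors $\Theta$:

```python
def pareto_front(points):
    """Return indices of non-dominated points when both objectives
    (here: generalization energy J and classifier accuracy a) are
    maximized. A point is dominated if another point is at least as
    good in both objectives and strictly better in one."""
    front = []
    for i, (j_i, a_i) in enumerate(points):
        dominated = any(
            j_k >= j_i and a_k >= a_i and (j_k > j_i or a_k > a_i)
            for k, (j_k, a_k) in enumerate(points) if k != i
        )
        if not dominated:
            front.append(i)
    return front

# Hypothetical (J, a) scores for four candidate parameter sets Theta.
scores = [(0.9, 0.60), (0.7, 0.80), (0.5, 0.95), (0.6, 0.70)]
print(pareto_front(scores))  # [0, 1, 2] -- the last point is dominated
```

The converged front yields the optimal samples $\hat{x}^*$ from which the final classifier $c(\hat{x}^*)$ is trained; how GOL picks a single point from the front is not detailed here.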

##### References

Sorin Grigorescu, "Generative One-Shot Learning (GOL): A Semi-Parametric Approach to One-Shot Learning in Autonomous Vision", Int. Conf. on Robotics and Automation (ICRA) 2018, Brisbane, Australia, May 21-25, 2018.
Sorin Grigorescu, Gigel Macesanu, Tiberiu Cocias, Bogdan Trasnea and Cosmin Ginerica, "Generating training images for machine learning-based object recognition systems" E.P. Patent EP3343432A1, U.S. Patent 2018/0189607A1, July 4, 2018.

##### Latex Bibtex Citation
    @inproceedings{Grigorescu2018,
      author    = {Sorin Mihai Grigorescu},
      title     = {{G}enerative {O}ne-{S}hot {L}earning ({GOL}): {A} {S}emi-{P}arametric {A}pproach to {O}ne-{S}hot {L}earning in {A}utonomous {V}ision},
      booktitle = {Int. Conf. on Robotics and Automation (ICRA)},
      year      = {2018},
      month     = may,
    }