Karol Zieba, Joshua Bongard.
An embodied approach for evolving robust visual classifiers.
Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation (GECCO '15), pp. 201-208, 2015.
Abstract: Despite recent demonstrations that deep learning methods can successfully recognize and categorize objects from high-dimensional visual input, other recent work has shown that these methods can fail when presented with novel input. However, a robot that is free to interact with objects should be able to reduce spurious differences between objects belonging to the same class through motion, and thus reduce the likelihood of overfitting. Here we demonstrate a robot that achieves more robust categorization when it first evolves to use proprioceptive sensors and is then trained to rely increasingly on vision, compared to a similar robot trained to categorize using visual sensors alone. This work thus suggests that embodied methods may help scaffold the eventual achievement of robust visual classification.