Real-time hand gesture recognition, which is important for safety applications in production halls or for self-driving vehicles, requires low latency, fast data transfer and high-speed cameras. In this scenario, frame-based cameras produce large amounts of redundant data, since the useful information stems only from the hands' movement. Event-based cameras, on the other hand, activate and transmit information only for the pixels where a change has occurred, which drastically reduces the computational load and, in turn, the energy consumption.
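To make this contrast concrete, the sketch below compares the raw data rate of a high-speed frame camera against an event stream that carries only changed pixels. Only the 346x260 DAVIS346 resolution is a real specification; the frame rate, event rate and byte sizes are illustrative assumptions, not measured figures.

```python
# Rough data-volume comparison: frame-based vs. event-based capture.
# Only the 346x260 resolution is a real DAVIS346 spec; everything else
# here is an illustrative assumption.

WIDTH, HEIGHT = 346, 260        # DAVIS346 sensor resolution
FPS = 1000                      # assumed high-speed frame camera
BYTES_PER_PIXEL = 1             # assumed 8-bit grayscale

# Frame camera: every pixel is transmitted in every frame.
frame_rate_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL * FPS

EVENTS_PER_SECOND = 500_000     # assumed rate for a moving-hand scene
BYTES_PER_EVENT = 8             # assumed packing of x, y, timestamp, polarity

# Event camera: only pixels that changed produce data.
event_rate_bytes = EVENTS_PER_SECOND * BYTES_PER_EVENT

print(f"frame camera : {frame_rate_bytes / 1e6:.1f} MB/s")   # ~90 MB/s
print(f"event camera : {event_rate_bytes / 1e6:.1f} MB/s")   # ~4 MB/s
```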

The setup consists of a DAVIS346 event-based camera connected to a PC, where the events are captured and processed in real time by a convolutional neural network in Python. This environment was created in SAL's Embedded AI team in order to conduct research on event-based algorithms and hardware accelerators.
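A minimal sketch of such an event-to-CNN pipeline is shown below. It is not the team's actual code: events are assumed to arrive as (x, y, timestamp, polarity) tuples, the read_events() stub stands in for the real DAVIS346 driver interface, and the network architecture is a hypothetical placeholder.

```python
# Minimal sketch of an event-to-CNN pipeline (not SAL's actual implementation).
import numpy as np
import torch
import torch.nn as nn

WIDTH, HEIGHT = 346, 260                 # DAVIS346 sensor resolution
GESTURES = ["dog", "rabbit", "pigeon"]

def read_events(window_ms=10, n_batches=3):
    """Stand-in event source: yields batches of random synthetic events
    instead of real camera data."""
    rng = np.random.default_rng(0)
    for _ in range(n_batches):
        n = int(rng.integers(1_000, 5_000))
        xs = rng.integers(0, WIDTH, n)
        ys = rng.integers(0, HEIGHT, n)
        ts = np.sort(rng.integers(0, window_ms * 1_000, n))  # microseconds
        ps = rng.integers(0, 2, n)                           # OFF=0, ON=1
        yield list(zip(xs, ys, ts, ps))

def events_to_tensor(events):
    """Accumulate one event batch into a 2-channel (OFF/ON) pixel histogram."""
    hist = np.zeros((2, HEIGHT, WIDTH), dtype=np.float32)
    for x, y, _t, polarity in events:
        hist[int(polarity), int(y), int(x)] += 1.0
    return torch.from_numpy(hist).unsqueeze(0)   # shape (1, 2, H, W)

class GestureNet(nn.Module):
    """Small illustrative CNN: 2 polarity channels in, 3 gesture scores out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, len(GESTURES))

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = GestureNet().eval()
with torch.no_grad():
    for batch in read_events(window_ms=10):      # one 10 ms event window
        logits = model(events_to_tensor(batch))
        print("predicted gesture:", GESTURES[logits.argmax(dim=1).item()])
```

Accumulating events into a short time window before inference is one common design choice; it keeps the CNN input dense and fixed-size while preserving the low latency of the event stream.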

For this setup, which was created and developed by Diego Gigena-Ivanovich and Chunlei Xu, DHSD data for three hand gestures (dog, rabbit, pigeon) were recorded under different combinations of illumination, backgrounds and hand movements; the gestures are captured by the event-based camera and then recognized in real time.
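Systematically covering such combinations is what makes the recognition robust across conditions. The sketch below illustrates enumerating a recording plan; the three gesture names come from the text, while the specific illumination, background and movement values are hypothetical examples.

```python
# Sketch of enumerating recording conditions for a gesture dataset.
# Gesture names are from the text; all other values are assumed examples.
from itertools import product

gestures     = ["dog", "rabbit", "pigeon"]
illumination = ["bright", "dim", "flickering"]     # assumed light levels
backgrounds  = ["plain wall", "cluttered desk"]    # assumed scenes
movements    = ["static", "slow", "fast"]          # assumed hand motion

recordings = [
    {"gesture": g, "light": l, "background": b, "movement": m}
    for g, l, b, m in product(gestures, illumination, backgrounds, movements)
]
print(f"{len(recordings)} recording conditions")   # 3 * 3 * 2 * 3 = 54
```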

Diego is from Argentina, where he studied electrical engineering and then began his PhD studies under the supervision of Dr. Pedro Julian. In 2021, he joined SAL together with Pedro Julian, where he continues his scientific work in embedded AI and is part of the SAL-DC, a doctoral training program for researchers focusing on EBS.