This proposal addresses the prototype device Talking Hands, recently developed by Limix, a spin-off of Unicam. The original idea was to create an assistive device able to translate pre-recorded hand gestures into speech in real time. This improved communication access was intended to positively affect the lives of speech-impaired people, easing their interactions in the workplace, in healthcare services and at school. Talking Hands has a number of potential strengths: it is the first device of its kind, it is strongly user-oriented, and it is easily customizable for different contexts and various types of disability.

The previous study led to a functional prototype (TRL 4) in which the core ideas and concepts of the project were roughly implemented. Nonetheless, the current prototype is far from being usable in practice, particularly in the critical contexts for which it was first conceived (support for speech-impaired people). Two important issues must be properly addressed. First, for the device to be truly usable and have an impact on the user's life, the gesture recognition algorithm must be as accurate and reliable as possible. This calls for a thorough methodological and experimental study to equip the device with a high-quality, reliable dynamic gesture recognition algorithm, possibly through advanced optimization approaches involving, for instance, machine learning or deep learning. The second issue concerns the usability and wearability requirements that are crucial for the acceptance of Talking Hands by the final user, who is typically an individual affected by one of a range of pathologies, from autism to speech disabilities.

Once satisfactory solutions to these problems have been found, the proposers intend to implement them in a final prototype to be tested in an operational environment.
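To make the dynamic gesture recognition problem concrete, the sketch below shows a minimal classical baseline: one-nearest-neighbour classification with dynamic time warping (DTW), which matches a live sensor trace against pre-recorded gesture templates of possibly different lengths. This is an illustrative sketch only, not the proposers' method; all gesture names and sensor values are hypothetical placeholders.

```python
# Minimal baseline for dynamic gesture recognition: 1-nearest-neighbour
# classification under dynamic time warping (DTW). Hypothetical data;
# a real system would use multi-channel glove sensor streams.

def dtw_distance(a, b):
    """DTW distance between two sequences of feature vectors."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = minimal cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Euclidean distance between the two feature vectors
            d = sum((x - y) ** 2 for x, y in zip(a[i - 1], b[j - 1])) ** 0.5
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # match step
    return cost[n][m]

def classify(query, templates):
    """Return the label of the template closest to `query` under DTW."""
    return min(templates, key=lambda t: dtw_distance(query, t[1]))[0]

# Toy example: 1-D "flex sensor" traces for two pre-recorded gestures.
templates = [
    ("hello",     [[0.0], [0.5], [1.0], [0.5], [0.0]]),
    ("thank_you", [[1.0], [1.0], [0.2], [0.2], [0.2]]),
]
query = [[0.1], [0.6], [0.9], [0.4]]  # a shorter, noisy "hello"
print(classify(query, templates))     # → hello
```

DTW tolerates the speed variations typical of human gestures, which makes it a reasonable reference point against which learned models (e.g. recurrent or convolutional networks) would later be evaluated.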