This is a Socially Assistive Robot (SAR) built on top of the Q-BO project, based on a Raspberry Pi (RPi) and an Arduino. It currently uses Google Assistant for voice recognition and text-to-speech synthesis. The hardware has a central controller on the RPi and a microcontroller on an Arduino board. It has a capacitive sensor on each side of the face that reacts to touch, an LED array for the mouth, and a multicolour LED for the nose. It can pan and tilt its head using Dynamixel servos. It has two speakers and two microphones. It supports Speech-To-Text (STT) and, in the other direction, Text-To-Speech (TTS) with a traditional metallic-sounding synthesizer; QBo already ships with both and they work fairly well. Finally, it has a low-resolution USB camera in each eye, calibrated for stereo depth and programmed to follow faces. Additionally, the robot has an external basic 7-DOF arm with a payload of 0.5 kg that can be controlled from the serial bus of the Raspberry Pi.
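As a sketch of how the RPi might command the head servos, the snippet below builds a Dynamixel Protocol 1.0 WRITE_DATA packet for a goal position. This is an assumption: the exact servo model and register map are not stated in this README, so servo IDs, the GOAL_POSITION register address (30, as on AX-12-class servos), and the serial device name are hypothetical.

```python
def dynamixel_write_packet(servo_id: int, address: int, value: int) -> bytes:
    """Build a Dynamixel Protocol 1.0 WRITE_DATA packet for a 2-byte register.

    Packet layout: 0xFF 0xFF <id> <length> <instruction> <params...> <checksum>
    """
    params = [address, value & 0xFF, (value >> 8) & 0xFF]  # little-endian value
    length = len(params) + 2                # instruction byte + checksum byte
    body = [servo_id, length, 0x03] + params  # 0x03 = WRITE_DATA instruction
    checksum = (~sum(body)) & 0xFF          # ones-complement of the byte sum
    return bytes([0xFF, 0xFF] + body + [checksum])

# Example: command servo 1 (hypothetically the pan axis) to the centre
# position 512, writing register 30 (GOAL_POSITION on AX-12-class servos).
packet = dynamixel_write_packet(1, 30, 512)
```

On the robot, the resulting bytes would be written to the servo bus with pySerial, e.g. `serial.Serial("/dev/ttyUSB0", 1000000).write(packet)` (device path and baud rate are assumptions).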
This project involves the development of an Embodied Conversational Agent (ECA): a SAR robot to assist in gerontology settings. It can be used in research towards a workable product for elderly assistance, both from the caregivers' point of view and for home-care therapy.
Pending work: fix and implement the new scheme, and integrate the arm into the Embodied Conversational Agent (ECA). This project may be conducted jointly with the company https://www.deep-talk.ai/, whose software extracts meaningful information from chats and provides chatbots and chat-flow managers. This is an excellent fit for connecting with ChatGPT to drive the interaction.
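To illustrate how the STT/TTS pipeline could hand off to an external chat service such as Deep Talk or ChatGPT, here is a minimal sketch of one dialogue turn with a pluggable backend. The backend here is a stub; the real Deep Talk or OpenAI API calls, endpoints, and credentials are not specified in this README and would replace `echo_backend`.

```python
from typing import Callable

def respond(user_text: str, chat_backend: Callable[[str], str]) -> str:
    """One interaction turn: STT transcript in, text for the TTS engine out."""
    text = user_text.strip()
    if not text:
        # Empty or silent recognition result: ask the user to repeat.
        return "I did not catch that, could you repeat?"
    return chat_backend(text)

def echo_backend(prompt: str) -> str:
    """Stand-in for a real chat service (Deep Talk / ChatGPT); hypothetical."""
    return f"You said: {prompt}"

print(respond("hello", echo_backend))
```

Keeping the backend behind a simple `Callable[[str], str]` interface would let the robot swap between a local chatbot and a remote service without touching the audio pipeline.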