assistive-rehab Documentation

The assistive-rehab project

Assistive-rehab is a framework for developing the assistive intelligence of the R1 robot for clinical rehabilitation and tests. The project is being developed within the Joint Lab between IIT and Fondazione Don Carlo Gnocchi Onlus.

Library

The assistive-rehab library provides basic functionalities for handling skeletons. The library has definitions for the following (a minimal sketch is given after the list):

  • creating a skeleton as a series of keypoints linked together with a predefined structure;
  • importing/exporting a skeleton's structure from/into a yarp Property;
  • normalizing and scaling a skeleton;
  • optimizing skeletons to deal with keypoints that cannot be observed;
  • transforming a skeleton's keypoints into the desired reference frame.
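
To make the skeleton abstraction concrete, below is a minimal, self-contained C++ sketch. The names (KeyPoint, Skeleton, normalize) are illustrative placeholders and do not reflect the library's actual API:

```cpp
// Minimal sketch of a skeleton as keypoints linked in a predefined
// structure. All names here are illustrative, not the library's API.
#include <array>
#include <cmath>
#include <string>
#include <vector>

struct KeyPoint {
    std::string tag;             // e.g. "shoulderCenter", "elbowLeft"
    std::array<double, 3> p{};   // 3D position of the keypoint
    bool updated = false;        // false when the keypoint is unobserved
    std::vector<int> children;   // indices of linked child keypoints
};

struct Skeleton {
    std::vector<KeyPoint> keypoints;

    // Normalize: scale the whole skeleton so that the segment between
    // two reference keypoints (e.g. shoulder center -> hip center)
    // has unit length.
    void normalize(int from, int to) {
        const auto& a = keypoints[from].p;
        const auto& b = keypoints[to].p;
        const double len = std::sqrt((a[0] - b[0]) * (a[0] - b[0]) +
                                     (a[1] - b[1]) * (a[1] - b[1]) +
                                     (a[2] - b[2]) * (a[2] - b[2]));
        if (len > 0.0)
            for (auto& k : keypoints)
                for (auto& c : k.p)
                    c /= len;
    }
};
```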

Additional functionalities are also included for filtering depth images and for aligning two mono- or multidimensional time series.
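
A standard technique for aligning two time series is dynamic time warping (DTW). The sketch below is a minimal 1-D illustration in plain C++ and does not reproduce the library's own implementation:

```cpp
// Minimal 1-D dynamic time warping (DTW): returns the cost of the
// best monotonic alignment between two series. Illustrative only.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

double dtw_distance(const std::vector<double>& s,
                    const std::vector<double>& t) {
    const std::size_t n = s.size(), m = t.size();
    const double inf = std::numeric_limits<double>::infinity();
    // d[i][j]: cost of aligning the first i samples of s with the
    // first j samples of t.
    std::vector<std::vector<double>> d(n + 1, std::vector<double>(m + 1, inf));
    d[0][0] = 0.0;
    for (std::size_t i = 1; i <= n; ++i)
        for (std::size_t j = 1; j <= m; ++j)
            d[i][j] = std::fabs(s[i - 1] - t[j - 1]) +
                      std::min({d[i - 1][j], d[i][j - 1], d[i - 1][j - 1]});
    return d[n][m];
}
```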

Modules

Assistive-rehab modules allow the user to:

  • retrieve 3D skeletons: given the depth image from the camera and the 2D skeleton data from yarpOpenPose, skeletonRetriever produces 3D skeletons and adds them to a yarp-oriented database through objectsPropertiesCollector (a back-projection sketch follows this list);
  • lock a 3D skeleton: given a 3D skeleton along with its tag, retrieved by means of skeletonRetriever, skeletonLocker allows the user to track the selected skeleton based on its spatiotemporal consistency;
  • visualize 3D skeletons: the output of skeletonRetriever can be visualized in real time through skeletonViewer;
  • analyze human motion: the quality of the movement can be evaluated in real time through motionAnalyzer, by specifying the tag of the metric under analysis. Metrics such as the range of motion, the speed of the end-point (see the sketch after this section) and walking parameters (step length and width, speed and number of steps) are currently implemented;
  • recognize human actions: 2D skeleton keypoints can feed actionRecognizer to predict the label of the exercise being performed;
  • produce verbal feedback: feedback can be produced by feedbackProducer and rendered verbally through feedbackSynthetizer;
  • replay and manipulate a recorded skeleton: a skeleton recorded by means of yarpdatadumper can be played back through skeletonPlayer;
  • detect ArUco lines: lines composed of ArUco boards can be visually detected by means of lineDetector;
  • navigate the environment while avoiding obstacles: a reactive navigation system is provided by navController, which allows the robot to reach fixed points in the environment and follow users;
  • recognize and interpret a set of questions: exploiting the Google services API, a question asked through an external microphone is transcribed by googleSpeech, and the transcript is then analyzed by googleSpeechProcess to retrieve the sentence structure and meaning.
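
As anticipated in the skeletonRetriever item, recovering a 3D keypoint from its 2D image coordinates and the corresponding depth reading is essentially a pinhole-camera back-projection. The sketch below assumes known camera intrinsics (fx, fy, cx, cy); the function name and signature are illustrative, not skeletonRetriever's actual code:

```cpp
// Pinhole back-projection: map a pixel (u, v) with depth z (meters)
// to a 3D point in the camera frame. Intrinsics fx, fy (focal lengths
// in pixels) and cx, cy (principal point) are assumed known.
// Illustrative sketch, not skeletonRetriever's actual code.
#include <array>

std::array<double, 3> backproject(double u, double v, double z,
                                  double fx, double fy,
                                  double cx, double cy) {
    return {(u - cx) * z / fx,   // X
            (v - cy) * z / fy,   // Y
            z};                  // Z
}
```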

Additional details can be found in the related Modules section.
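
As an illustration of the motion metrics listed above, the end-point speed can be estimated by finite differences over consecutive 3D positions of a keypoint (e.g. the hand). This is a hypothetical helper, not motionAnalyzer's implementation:

```cpp
// End-point speed by finite differences over the last two samples of
// a keypoint trajectory, given the sampling period dt (seconds).
// Hypothetical helper, not motionAnalyzer's implementation.
#include <array>
#include <cmath>
#include <vector>

double endpoint_speed(const std::vector<std::array<double, 3>>& traj,
                      double dt) {
    if (traj.size() < 2 || dt <= 0.0)
        return 0.0;
    const auto& a = traj[traj.size() - 2];
    const auto& b = traj.back();
    const double dx = b[0] - a[0], dy = b[1] - a[1], dz = b[2] - a[2];
    return std::sqrt(dx * dx + dy * dy + dz * dz) / dt;  // m/s
}
```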

Applications for the robot R1

Assistive-rehab applications are listed below:

  • AssistiveRehab.xml and AssistiveRehab-faces.xml: for running the upper limbs demo without and with the face recognition pipeline. Tutorials for these applications can be found here;
  • AssistiveRehab-TUG.xml and AssistiveRehab-TUG_SIM.xml: for running the TUG demo with the real robot and within the Gazebo simulation environment. Tutorials for these applications can be found here;
  • skeletonDumper.xml, skeletonDumper-faces.xml and AssistiveRehab-replay.xml: for saving data without and with faces and for replaying a saved experiment. Tutorials for these applications can be found here.

Datasets

Datasets used to train the LSTM for the action recognition pipeline of the upper limbs demo can be found here.