
Latest news

May 4, 2021: Check out our latest release v0.6.0!

What's new?

  • We developed the obstacleDetector module, which clusters the data received from the front and back lasers by means of Euclidean distance and stops the navigation when the robot comes within a threshold distance (1.5 m) of the closest obstacle (a sketch of the idea follows this entry):

[Animation: clusters detected from the front laser]

  • Obstacle detection has been included in the clinical test Timed Up and Go (TUG)! If an obstacle is found within a radius of 1.5 m around the robot (within the rear and front laser FOVs), the interaction is frozen and the robot asks the user to remove the obstacle. The interaction does not resume until the obstacle is removed (within a timeout).

Check out the video:

[Video: TUG interaction with obstacle detection]
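
The clustering itself boils down to grouping laser returns whose mutual Euclidean distance is small and checking whether the nearest group is closer than 1.5 m. The following Python snippet is only a minimal illustration of that idea, not the actual obstacleDetector code; the data layout, function names and the 0.2 m gap are assumptions.

```python
# Illustrative sketch of Euclidean-distance clustering of laser points
# (not the actual obstacleDetector code): consecutive points closer than
# a gap threshold end up in the same cluster; navigation is stopped when
# the nearest cluster falls within 1.5 m of the robot.
import numpy as np

def euclidean_clusters(points, max_gap=0.2):
    """Group 2D laser points (N x 2, robot frame) into clusters.

    Two consecutive returns belong to the same cluster if their
    Euclidean distance is below max_gap (value chosen for illustration).
    """
    clusters, current = [], [points[0]]
    for prev, curr in zip(points[:-1], points[1:]):
        if np.linalg.norm(curr - prev) < max_gap:
            current.append(curr)
        else:
            clusters.append(np.array(current))
            current = [curr]
    clusters.append(np.array(current))
    return clusters

def obstacle_within(clusters, threshold=1.5):
    """True if the closest cluster is nearer than threshold (meters)."""
    return any(np.linalg.norm(c, axis=1).min() < threshold for c in clusters)

# Example: a fake scan with one nearby object and one farther away
scan = np.array([[1.0, 0.0], [1.0, 0.1], [1.0, 0.2], [4.0, 1.0], [4.0, 1.1]])
if obstacle_within(euclidean_clusters(scan)):
    print("Obstacle within 1.5 m: stop the navigation")
```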

May 22, 2020: Check out our latest release v0.5.0!

What's new?

  • The clinical test Timed Up and Go (TUG) is now ready both for the real robot and for the Gazebo simulation environment:

[Image: TUG demo video preview]

Follow the tutorial to run the demo in Gazebo!

Tip

Click on the image to open the video!

  • The motion analysis has been extended to the lower limbs: now we can evaluate walking parameters, such as step length and width, walking speed and number of steps.

  • We developed the lineDetector module to visually detect the start and finish lines on the floor, which are composed of ArUco markers.

  • We developed a reactive navigation system, which allows the robot to navigate based on the perceptual stimuli it receives: the robot can reach fixed points in the environment (such as the start and finish lines) and follow the user while maintaining a fixed distance along a straight path.

Note

The environment is assumed to be free of obstacles.

  • We integrated the Google Cloud services APIs into our application to provide a simple natural-language question-and-answer mechanism (see the sketch after this list):

    1. the googleSpeech module receives the audio from a microphone and retrieves the speech transcript from the Google Cloud Speech services;
    2. the googleSpeechProcess module receives the transcript and analyses the sentence to retrieve its structure and meaning, relying on the Google Cloud Language services.
  • The speech system can be triggered by a myStrom Wi-Fi button, which avoids having the system always listening and thus reacting to background noise: whenever the user presses the button, the robot is ready to answer questions in Italian!

Note

The robot can answer a selected set of questions related to the TUG. The interaction is still flexible, as questions can be posed by the user in natural language, thanks to the capability of the system to interpret the question rather than simply recognize it.
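
For illustration, the two stages above can be sketched with the Google Cloud Python clients; the real googleSpeech and googleSpeechProcess modules are YARP modules with their own interfaces, so the function names, parameters and the lemma-based output below are assumptions.

```python
# Illustrative sketch of the two-stage question-answer pipeline
# (speech transcription + language analysis) using the Google Cloud
# Python clients; not the project's actual modules.
from google.cloud import speech, language_v1

def transcribe(wav_bytes):
    """Ask Google Cloud Speech for an Italian transcript of raw audio."""
    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="it-IT",  # the robot answers questions in Italian
    )
    audio = speech.RecognitionAudio(content=wav_bytes)
    response = client.recognize(config=config, audio=audio)
    return response.results[0].alternatives[0].transcript

def analyse(transcript):
    """Retrieve the sentence structure via the Google Cloud Language API."""
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=transcript, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    syntax = client.analyze_syntax(document=document)
    # A real module would map the parsed structure onto the set of
    # TUG-related questions; here we just return the lemmas.
    return [token.lemma for token in syntax.tokens]
```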

March 19, 2020: Added a new tutorial on detecting ArUco boards in Gazebo. Check it out!

July 10, 2019: Check out our latest release v0.4.0!

What's new?

  • the feedback can now be provided using the robot skeleton template, rather than the pre-recorded one. The new module robotSkeletonPublisher publishes the robot skeleton, which represents the configuration of R1's limbs, as follows:

[Image: the robot skeleton published by robotSkeletonPublisher]

The robot skeleton is remapped onto the observed skeleton internally within feedbackProducer for further analysis (skeletonScaler and skeletonPlayer are thus bypassed). This modality ensures full synchronization between the robot movement and the skeleton template, which was not guaranteed with the pre-recorded template (a rough sketch of the remapping idea is given at the end of this entry).

Note

The modality with the pre-recorded template is still available and can be selected through interactionManager by setting the flag use-robot-template to false. In that case, the pipeline including skeletonScaler and skeletonPlayer is used.

Tip

The robot skeleton can be replayed offline by saving the robot joints specified in this app. A tutorial for replaying a full experiment can be found in the Tutorial section.

  • the Train With Me study aims at comparing users' engagement during a physical training session with a real robot versus a virtual agent. Preliminary experiments were designed to compare R1 with its virtual counterpart, and the developed infrastructure is now available. interactionManager can handle the following three phases:

    1. observation: the real/virtual robot shows the exercise and the user observes it;
    2. imitation: the real/virtual robot performs the exercise and the user imitates it;
    3. occlusion: the real/virtual robot keeps performing the exercise behind a panel and the user keeps imitating it, without receiving any feedback.

    The scripts used during the experiments can be found here, namely AssistiveRehab-TWM-robot.xml.template and AssistiveRehab-TWM-virtual.xml.template, which load the parameters defined in the train-with-me context. A tutorial for running the demo with the virtual R1 can be found in the Tutorial section.
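
As a purely illustrative sketch of the remapping mentioned above, the robot skeleton template can be translated to the observed skeleton's root and scaled to a comparable size; the actual transformation performed inside feedbackProducer may well differ, and the keypoint-dictionary layout is an assumption.

```python
# Purely illustrative: align a template skeleton to an observed one by
# translating it to the observed hip_center and scaling it to a comparable
# size. Not the actual feedbackProducer implementation.
import numpy as np

def remap(template, observed, root="hip_center"):
    """template/observed: dicts mapping keypoint names to 3D numpy arrays."""
    t_root, o_root = template[root], observed[root]
    # crude size estimate: mean distance of the keypoints from the root
    t_size = np.mean([np.linalg.norm(p - t_root) for p in template.values()])
    o_size = np.mean([np.linalg.norm(p - o_root) for p in observed.values()])
    scale = o_size / t_size if t_size > 0 else 1.0
    return {name: o_root + scale * (p - t_root) for name, p in template.items()}
```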

May 6, 2019: Check out our latest release v0.3.0!

This is a major change that refactors the entire framework to also deal with feet, following the adoption of the BODY_25 model of OpenPose. The following is an example of a skeleton with feet in 2D and 3D:

[Video: 2D and 3D skeleton with feet]

Changes include:

  • SkeletonStd now includes hip_center, foot_left, foot_right:
    • hip_center is used directly if observed; otherwise it is estimated as the midpoint between hip_left and hip_right (the same holds for shoulder_center; see the sketch after this list);
    • foot_left and foot_right are taken to be the big toe; if the big toe is not available, the small toe is used as a fallback;
  • SkeletonWaist has been removed in favor of the new SkeletonStd;
  • the optimization performed by skeletonRetriever has now been extended to the lower limbs as well;
  • modules previously relying on SkeletonWaist have been updated to use the new SkeletonStd;
  • the new framework is compatible with the Y1M5 demo, which was successfully tested online on the robot;
  • the new framework is compatible with datasets recorded before the release, which can be reproduced by means of yarpdataplayer.
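
As a tiny illustration of the midpoint fallback described above (illustrative only, not the actual SkeletonStd code):

```python
# Fallback used when hip_center is not directly observed: take the
# midpoint of hip_left and hip_right (same idea for shoulder_center).
import numpy as np

def estimate_center(left, right):
    """left/right: 3D keypoints as array-likes; returns their midpoint."""
    return 0.5 * (np.asarray(left) + np.asarray(right))

hip_center = estimate_center([0.1, -0.9, 2.0], [-0.1, -0.9, 2.0])  # -> [0.0, -0.9, 2.0]
```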

May 6, 2019: Check out our new release v0.2.1!

What's new?

  • the action recognition is now robust to rotations! The original network was trained with skeletons facing the camera frontally, which is not guaranteed during a real interaction. The network has been re-trained on a wider training set, comprising synthetic rotations applied to the real data around each axis, in steps of 10 degrees within a range of [-20, 20] degrees. Variability in speed was also introduced into the training set, by considering each action performed at normal, double and half speed (see the sketch at the end of this entry). We compared the accuracy of the previous and the new model for different rotations of the skeleton, and the results show high accuracy over a wider range for all axes:

[Plots: accuracy of the original vs. the new model for rotations around the X, Y and Z axes]

  • the skeleton now also stores the pixels alongside the 3D points! This is very useful when using the skeleton for gaze tracking, as it avoids the transformation from the camera frame to the root frame of the robot, required when using the 3D information;
  • the offline report is now interactive! The user can navigate through the plots, zoom, pan and save them:

[Image: interactive offline report]
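
The augmentation described for this release can be illustrated as follows: rotate each skeleton sequence about every axis in 10-degree steps within [-20, 20] degrees and resample it at half and double speed. The array layout, helper names and use of SciPy are assumptions, not the project's training code.

```python
# Illustrative augmentation of skeleton sequences (T frames x J joints x 3):
# synthetic rotations of +/-10 and +/-20 degrees about each axis, plus
# half- and double-speed versions obtained by resampling the frames.
import numpy as np
from scipy.spatial.transform import Rotation as R

def rotations(seq):
    """Yield rotated copies of a (T, J, 3) skeleton sequence."""
    for axis in ("x", "y", "z"):
        for deg in (-20, -10, 10, 20):  # 10-degree steps in [-20, 20]
            rot = R.from_euler(axis, deg, degrees=True).as_matrix()
            yield seq @ rot.T

def resample(seq, factor):
    """Play the sequence at a different speed by resampling its frames."""
    idx = np.clip(np.round(np.arange(0, len(seq), factor)).astype(int), 0, len(seq) - 1)
    return seq[idx]

seq = np.random.rand(50, 25, 3)  # fake BODY_25-like sequence
augmented = list(rotations(seq)) + [resample(seq, 0.5), resample(seq, 2.0)]
```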

January 28, 2019: Check out our latest tutorial on how to run the main applications on the robot R1!

January 25, 2019: We are public now!

December 21, 2018: Check out the latest release and the comparison with the first release!