Archives for category: robots

The human ability to grasp objects is an amazing feature of our bodies, one that we seamlessly integrate into our daily lives. Google is conducting research on how to replicate this ability, and as expected, replicating what we do easily is not so simple.

This post will not highlight a specific product; instead, I would like to review a few important terms and concepts, as so much of robotics is currently shifting toward replicating what we can do with our hands.

Stereognosis: When you reach into your bag and look for your wallet, how do you determine, in seconds, that it is your wallet without having to look at it? Now, when you look for a coin in the wallet, how do you know that you are about to pull out a dime versus a penny? The ability to recognize 3D objects from the sensory feedback of our hands is stereognosis. We know what we are holding without having to use visual cues. It’s phenomenal, and extremely difficult to reproduce because of the sensory and neural feedback involved.

Weight anticipation: Anticipating forces is a very important concept in lifting and grasping. You may go through the same motion when lifting a heavy suitcase or a light grocery bag, but the amount of force that you recruit will be very different. Without much effort, we size objects up before we lift them, and our brains tell our muscles to recruit the appropriate amount of force to move something. It is how we conserve energy; you don’t need the full force of your biceps to lift a light pencil. It is also how we move efficiently and protect our bodies from injury.

In robots, this anticipation is difficult to achieve because of their limited experience, limited vision, and comparatively simple neural networks.
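To make the idea concrete, here is a minimal sketch of feedforward force anticipation, assuming a robot that guesses an object’s mass from its apparent volume and a learned density prior before it ever makes contact. All constants, names, and numbers here are illustrative assumptions, not any particular robot’s parameters.

```python
# A minimal sketch of feedforward force anticipation: guess the mass from
# apparent size and a learned density prior, then pre-plan the grip force.
# All constants and names here are illustrative assumptions.

GRAVITY = 9.81          # m/s^2
SAFETY_MARGIN = 1.5     # grip somewhat harder than the bare minimum
FRICTION_COEFF = 0.6    # assumed hand-object friction coefficient

def anticipated_grip_force(volume_m3, density_prior_kg_m3):
    """Predict the grip force needed before contact, the way a person
    sizes up a suitcase versus a grocery bag."""
    estimated_mass = volume_m3 * density_prior_kg_m3       # kg
    load_force = estimated_mass * GRAVITY                  # N
    # Grip force must produce enough friction to support the load.
    return SAFETY_MARGIN * load_force / FRICTION_COEFF

# The same one-litre object, assumed water-dense versus foam-dense:
heavy = anticipated_grip_force(0.001, 1000.0)
light = anticipated_grip_force(0.001, 50.0)
```

Like a person hoisting a suitcase, the planned force differs enormously for the two density guesses even though the object looks the same size.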

Grasp: The human ability to use our fingers to pick something up is complicated and involved. Our precision, ability to use tactile cues, the involved sensation and neural network connected to our skin, and our quick ability to adapt and respond to objects means that a seemingly simple task is actually very difficult for a robot to replicate.

Robotics is currently in a very exciting time, with its applications growing. And as we use more products to enhance our work and daily activities, we find that we are the models and the gold standard for the products being created.





One of the most exciting aspects of current robotics is the openness and exchange of ideas that can occur around a single product, and the remarkable communities that form as a result. I discussed this previously, specifically for open source 3D printing in prosthetics, for the 3-D Heals community.

But what if an intelligent arm could be programmed to carry out a wide array of tasks? KATIA from Carbon Robotics is designed for just this: to be a functional, affordable robot with an open platform that allows versatility. The company has 3D printing and a camera in mind as functions, but is opening up its creator space to the community to give the intelligent arm more functions.

KATIA is, according to the site, ‘Kickass, Trainable, and Intelligent.’ The trainability is a standout feature: once the arm is guided through a motion, it can recall the same motion with ease. Designed with motion sensors and attachments in mind, the possibilities for KATIA are great, with potentially huge implications for those needing extra assistance in daily tasks.
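The “guide it through a motion, then replay it” idea is a classic teach-by-demonstration pattern. Here is a minimal sketch of that general technique; the class and method names are my own invention for illustration, not Carbon Robotics’ actual API.

```python
# A minimal sketch of teach-by-demonstration, as described for KATIA:
# record joint-angle snapshots while a person guides the arm, then replay
# the trajectory on command. Names here are illustrative, not KATIA's API.

class TrainableArm:
    def __init__(self):
        self.recorded = []      # joint-angle snapshots from the demonstration
        self.executed = []      # poses actually played back

    def record_pose(self, joint_angles):
        """Called at a fixed rate while a person moves the arm by hand."""
        self.recorded.append(tuple(joint_angles))

    def replay(self):
        """Drive the joints back through the demonstrated trajectory."""
        for pose in self.recorded:
            self._move_to(pose)

    def _move_to(self, pose):
        # A real arm would command its motor controllers here.
        self.executed.append(pose)

arm = TrainableArm()
for pose in [(0, 10, 20), (5, 15, 25), (10, 20, 30)]:
    arm.record_pose(pose)
arm.replay()
# arm.executed now matches the demonstrated trajectory
```

The appeal of this pattern is that the “programming” requires no code at all from the user, only a physical demonstration.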

Go to the site for updates on this project, and contribute ideas if you are a developer who would like to take part in its growth.

More details in the video below:



Termed a ‘collaborative robot’ and now starting its commercialization phase, C-Bot from Spain-based FisioBot is designed as an automated physical therapy room that includes two robotic arms for administering treatment. So far the C-Bot is designed mostly for simple procedures and modalities: vacuum (suction) therapy, hot air therapy, electrotherapy, and laser therapy. These treatments can be adjusted for depth and intensity, and the robot is deemed safe for human use, as there is a limit on how much physical pressure it can apply.
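The pressure limit is the heart of the safety claim, and the underlying idea is simple: whatever intensity is requested, the commanded force is clamped to a hard ceiling before it reaches the arm. Here is a minimal sketch of that idea; the limit value is an illustrative assumption, not FisioBot’s actual specification.

```python
# A minimal sketch of a force safety limit for a treatment robot: the
# requested intensity is clamped to a ceiling before being commanded.
# The limit value is a made-up illustration, not FisioBot's specification.

MAX_SAFE_FORCE_N = 30.0   # hypothetical ceiling on applied pressure

def commanded_force(requested_n):
    """Clamp the requested force into the safe range before commanding it."""
    return min(max(requested_n, 0.0), MAX_SAFE_FORCE_N)
```

A real system would enforce this in hardware as well as software, but the clamp captures why the robot cannot press harder than its design allows.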

For use, a 3D scan of the patient’s body is performed, giving each patient an identification card with a map of their body. The treatment of choice is then administered, with the possibility of simultaneous treatments.

As robotics grows in healthcare, the implications of the C-Bot for PT are interesting, and it seems only a matter of time before robots are assisting in more involved procedures during manual therapy.

See the videos below for a demonstration (video in Spanish), and an automated video.


There is no doubt that robotics is changing and improving the field of healthcare. While there are many brilliant products being introduced in this field, it is the robotic exoskeleton that I personally find the most amazing. To think that one day we could completely eradicate the long-term use of wheelchairs for people with neurological injuries, and replace them with a wearable robot that allows them to stand and walk, is absolutely inspiring.

The Indego is one of these devices. Weighing in at 26 pounds, this modular device comes in five pieces that are put on over the legs, hips, and torso. The light frame allows users to keep the device on even while sitting in a wheelchair before use. The device responds to weight shifts to guide movement: a forward lean helps users stand and walk, while leaning backward stops movement. Once movement is initiated, modular components at the hips and legs propel the joints forward.
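The lean-based control described above amounts to a small state machine: torso lean angle in, gait command out. Here is a minimal sketch of that idea; the thresholds and state names are illustrative assumptions, not Indego’s actual parameters.

```python
# A minimal sketch of lean-based intent detection, as described for the
# Indego: forward lean starts walking, backward lean stops. Thresholds
# and state names are illustrative assumptions, not Indego's parameters.

FORWARD_THRESHOLD_DEG = 5.0     # lean further forward than this -> walk
BACKWARD_THRESHOLD_DEG = -5.0   # lean further backward than this -> stop

def next_state(current_state, lean_deg):
    """Map torso lean angle (forward positive) to a gait command."""
    if lean_deg > FORWARD_THRESHOLD_DEG:
        return "WALK"
    if lean_deg < BACKWARD_THRESHOLD_DEG:
        return "STOP"
    return current_state   # small leans: keep doing what we were doing

state = "STOP"
for lean in [0.0, 8.0, 2.0, -7.0]:
    state = next_state(state, lean)
# state ends at "STOP" after the final backward lean
```

Keeping the current state for small leans gives the wearer a dead band, so ordinary postural sway doesn’t start or stop the gait unintentionally.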

Currently available only for research purposes in rehabilitation centers, the device is expected, according to the website, to go on commercial sale in the US in 2016.

See the video below for demonstration and more information:


Printing is currently extremely inconvenient if you do not have regular access to a printer. Which is why it is so exciting that a mobile printer is in production and will be available for sale as early as next year. The Zuta Labs Mini Mobile Robotic Printer is a 10 x 11.5 centimeter pocket printer, essentially an inkjet that rolls over whatever paper you need to print on. The printer just needs a wireless connection and is recognized by computers as a regular printer; it supports iOS, Android, Linux, and Windows. The mobile printer is designed to start at the top of the page, with an inkjet resting on multidirectional wheels so that it can cover the surface on which it is printing.
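A printer that drives itself across the page needs a coverage path: sweep one row, shift down, sweep back the other way. Here is a minimal sketch of that back-and-forth (boustrophedon) pattern; the dimensions, step size, and function names are illustrative, not Zuta Labs’ implementation.

```python
# A minimal sketch of the back-and-forth coverage path a page-roving
# printer needs: sweep each row, reversing direction on every pass.
# Dimensions, step size, and names are illustrative assumptions.

def coverage_path(width, height, step):
    """Return (x, y) waypoints that sweep the page boustrophedon-style,
    starting at the top-left corner (0, 0)."""
    path = []
    y = 0
    left_to_right = True
    while y <= height:
        row = list(range(0, width + 1, step))
        if not left_to_right:
            row.reverse()
        path.extend((x, y) for x in row)
        left_to_right = not left_to_right
        y += step
    return path

# A tiny 2 x 1 "page" with step 1 yields a zig-zag over six waypoints:
path = coverage_path(2, 1, 1)
```

Reversing direction on each pass means the robot never wastes a return trip across paper it has already covered.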

The Pocket Printer’s Kickstarter page has met its goal, but is still accepting backers in order to add more features.

See the videos below for a demonstration of how the device will look when it is working and their informational Kickstarter video:


For those with diminished strength or function of the hand, daily tasks that we often take for granted may become difficult, essentially disabling someone in their daily life. To address this and enhance the hand’s performance, researchers at MIT have developed “Supernumerary Robotic Fingers,” a wearable robotic device with two extra fingers that complement the grasping function of a regular hand.

In normal human movement, we have muscles that work synergistically, meaning that a central signal from the brain allows them to contract together to create a certain movement. For example, when the biceps contracts to bend the elbow, the brachialis muscle contracts as well to help facilitate this movement. This allows our bodies to perform tasks efficiently.

An article titled Bio-Artificial Synergies for Grasp Postural Control of Supernumerary Robotic Fingers explains how the researchers developed an algorithm to allow the robotic fingers to work synergistically with human hands. That is, the extra fingers are designed to move in correlation with the human hand, working as an extension of it to form essentially a seven-fingered hand. The researchers call this concept “Bio-Artificial Synergy”: extra fingers that essentially replicate the coordinated movements of the muscles of the human hand.
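In its simplest form, a synergy like this can be expressed as a linear map: each robot finger joint is commanded as a weighted sum of the human hand’s measured joint angles, so the extra fingers move in concert with the hand. Here is a minimal sketch of that general idea; the weights and dimensions are invented for illustration, and the paper’s actual model is more involved.

```python
# A minimal sketch of a linear synergy: each robot finger joint is a
# weighted sum of the sensed human joint angles. The weights below are
# made up for illustration; the paper's actual model is more involved.

def robot_finger_angles(synergy, human_angles):
    """Each row of `synergy` weights the human joints for one robot joint."""
    return [sum(w * a for w, a in zip(row, human_angles))
            for row in synergy]

# Two robot joints driven by three sensed human joints (made-up weights):
SYNERGY = [
    [0.5, 0.2, 0.0],
    [0.0, 0.3, 0.4],
]
commands = robot_finger_angles(SYNERGY, [10.0, 20.0, 30.0])
# commands is approximately [9.0, 18.0]
```

Because the mapping is driven entirely by the sensed hand posture, the wearer never controls the extra fingers explicitly; they simply move their own hand, which is exactly the synergy behavior the article describes.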

The device is mounted on the wrist, and through a sensor glove receives a signal from the hand and works alongside the five fingers to assist with grasping objects. The robotic fingers are longer than human digits, making it easier to grasp larger objects. Each robotic finger can move in 3 different directions. For those that have difficulty holding onto objects or performing coordinated movements, this can be an invaluable tool to perform tasks independently.

Because of these extra fingers, the user is able to perform tasks that are normally difficult to perform single-handedly, such as twisting open a bottle cap or holding a tablet while typing. This product is still in the development phase, and though the researchers have impressively been able to correlate the robotic finger angles with human hand angles for grasping, they have not yet completed algorithms for fingertip forces.

The article mentions that this device has implications not only for elder care, but also for construction and manufacturing.

See the video below for more description of this amazing device:






There have been awful events happening in the world lately, and sometimes you just need the distraction of a giant robot that juggles cars. Still in the investment and development process, the BugJuggler is a 70-foot-tall robot designed to juggle cars using a diesel engine that generates energy via hydraulic pressure. To invest or learn more, go to the website or see the video below.