Archives for category: robots

The human ability to grasp objects is an amazing feature of our bodies, one that we seamlessly integrate into our daily lives. Google is conducting research on how to replicate this ability, and as expected, replicating what we do easily is not so simple.

This post will not highlight a specific product, but I would like to review a few important terms and concepts, as so much of robotics is currently shifting toward replicating what we can do with our hands.

Stereognosis: When you reach into your bag and look for your wallet, how do you determine, in seconds, that it is your wallet without having to look at it? Now, when you look for a coin in the wallet, how do you know that you are about to pull out a dime versus a penny? The ability to recognize 3D objects through the sensory feedback from our hands is stereognosis. We know what we are holding without having to use visual cues. It is phenomenal, and extremely difficult to reproduce because of the sensory and neural feedback involved.

Weight anticipation: Anticipation of forces is a very important concept in lifting and grasping. You may go through the same motion to lift a heavy suitcase or a light grocery bag, but the amount of force you recruit will be very different. Without much effort, we size objects up before we lift them, and our brains tell our muscles to recruit the appropriate amount of force to move something. It is how we conserve energy; you don't need the full force of your biceps to lift a light pencil. It is also how we move efficiently and protect our bodies from injury.

In robots, this anticipation is difficult to achieve because of a robot's limited experience, limited vision, and comparatively simple neural network.
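To make the idea concrete, here is a minimal sketch of how a robot might approximate weight anticipation: guess the object's mass from its size and likely density, compute a feedforward grip force before the lift, and then correct with feedback if the object slips. This is purely my own illustration; the numbers, names, and friction model are assumptions, not anything from Google's research.

```python
# Hypothetical sketch of feedforward force anticipation for a robot gripper.
# All constants and function names are illustrative assumptions, not a real API.

GRAVITY = 9.81          # m/s^2
FRICTION_COEFF = 0.6    # assumed friction between gripper pads and object
SAFETY_MARGIN = 1.5     # grip a little harder than the bare minimum

def anticipate_grip_force(estimated_volume_m3, estimated_density_kg_m3):
    """Feedforward estimate: guess mass from size and density, then the grip
    force needed to keep the object from slipping under its own weight."""
    estimated_mass = estimated_volume_m3 * estimated_density_kg_m3
    weight = estimated_mass * GRAVITY
    # Two opposing pads share the load; friction must cancel the weight.
    required_normal_force = weight / (2 * FRICTION_COEFF)
    return required_normal_force * SAFETY_MARGIN

def correct_on_slip(current_force, slip_detected, step=0.5):
    """Feedback correction: if tactile sensors report slip, squeeze harder."""
    return current_force + step if slip_detected else current_force

# A light pencil and a heavy suitcase recruit very different forces up front.
pencil_force = anticipate_grip_force(estimated_volume_m3=7e-6,
                                     estimated_density_kg_m3=900)
suitcase_force = anticipate_grip_force(estimated_volume_m3=0.08,
                                       estimated_density_kg_m3=250)
print(f"pencil:   {pencil_force:.2f} N")
print(f"suitcase: {suitcase_force:.2f} N")
```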

Grasp: The human ability to use our fingers to pick something up is complicated. Our precision, our use of tactile cues, the rich sensation and neural network connected to our skin, and our ability to quickly adapt and respond to objects mean that a seemingly simple task is actually very difficult for a robot to replicate.

Robotics is in a very exciting period, with its applications growing. And as we use more products to enhance our work and daily activities, we find that we ourselves are the model and gold standard for the products being created.

source

One of the most exciting aspects of current robotics is the openness and exchange of ideas that can occur around a single product. Both the remarkable ideas and the communities that form around them are inspiring. I discussed this previously, specifically open-source 3D printing in prosthetics, for the 3-D Heals community.

But what if an intelligent arm could be programmed to carry out a wide array of tasks? KATIA from Carbon Robotics is designed for just this: to be a functional, affordable robot with an open platform that allows versatility. The company has 3D printing and a camera in mind as initial functions, but is opening up its creator space to the community to give the intelligent arm more capabilities.

KATIA is, according to the site, ‘Kickass, Trainable, and Intelligent.’ The trainability is a distinctive feature: once the arm is guided through a motion, it appears able to recall the same motion with ease. Designed with motion sensors and attachments in mind, KATIA offers great possibilities, with potentially huge implications for those needing extra assistance in daily tasks.
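Carbon Robotics hasn't published how the training works, but teaching by demonstration is typically some form of record-and-replay: the arm logs its joint angles while a person guides it, then plays the trajectory back. A minimal sketch of that idea (the class and method names here are hypothetical, not KATIA's actual software):

```python
import time

class RecordReplayArm:
    """Minimal record-and-replay sketch for a trainable arm.
    Joint angles are stored while a human guides the arm, then played back."""

    def __init__(self, num_joints=6):
        self.num_joints = num_joints
        self.trajectory = []          # list of (timestamp, joint_angles)

    def record_sample(self, joint_angles):
        """Called at a fixed rate while the arm is being guided by hand."""
        self.trajectory.append((time.time(), list(joint_angles)))

    def replay(self, send_command):
        """Replay the demonstration with the original timing.
        `send_command` is whatever function drives the real joints."""
        if not self.trajectory:
            return
        previous_time = self.trajectory[0][0]
        for timestamp, angles in self.trajectory:
            time.sleep(max(0.0, timestamp - previous_time))
            previous_time = timestamp
            send_command(angles)

# Usage sketch: record two poses, then "replay" them to a stand-in function.
arm = RecordReplayArm()
arm.record_sample([0.0, 0.5, 1.0, 0.0, 0.2, 0.0])
arm.record_sample([0.1, 0.6, 0.9, 0.0, 0.3, 0.1])
arm.replay(send_command=lambda angles: print("move to", angles))
```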

Go to the site for updates on this project, and contribute ideas if you are a developer who would like to take part in its growth.

More details in the video below:

source

Termed a ‘collaborative robot’ and now entering its commercialization phase, the C-Bot from Spain-based FisioBot is designed as an automated physical therapy room that includes two robotic arms to administer treatment. So far the C-Bot handles mostly simple procedures and modalities: vacuum (suction) therapy, hot air therapy, electrotherapy, and laser therapy. These treatments can be adjusted for depth and intensity, and the robot is deemed safe for human use because there is a limit on how much physical pressure it can apply.

For use, a 3D scan of the patient’s body is performed, giving each patient an identification card containing a map of their body. The treatment of choice is then administered, with the possibility of simultaneous treatments.
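FisioBot's control software isn't public, so the following is only a sketch of the general workflow described above: look a treatment region up in the patient's stored body map, then clamp the requested intensity and pressure to the device's safety limits before the arms are allowed to run. The region names, limits, and map format are all assumptions for illustration.

```python
# Illustrative sketch only: names, limits, and the body-map format are assumptions.

MAX_PRESSURE_N = 30.0       # hypothetical hard limit on applied force
MAX_INTENSITY = 10          # hypothetical treatment intensity scale

# A "body map" built from the 3D scan: region name -> depth of target tissue (cm).
patient_body_map = {
    "lumbar_left": 3.5,
    "lumbar_right": 3.4,
    "upper_trapezius": 2.0,
}

def plan_treatment(region, modality, requested_pressure_n, requested_intensity):
    """Return a treatment command with pressure and intensity clamped to safe limits."""
    if region not in patient_body_map:
        raise ValueError(f"region '{region}' not in this patient's scan")
    return {
        "region": region,
        "target_depth_cm": patient_body_map[region],
        "modality": modality,
        "pressure_n": min(requested_pressure_n, MAX_PRESSURE_N),
        "intensity": min(requested_intensity, MAX_INTENSITY),
    }

print(plan_treatment("lumbar_left", "vacuum",
                     requested_pressure_n=55, requested_intensity=12))
# pressure and intensity come back capped at 30.0 N and 10
```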

As robotics grows in healthcare, the implications of the C-Bot for PT are interesting, and it seems only a matter of time before robots are assisting in more involved procedures during manual therapy.

See the videos below for a demonstration (video in Spanish), and an automated video.

photo source

There is no doubt that robotics is changing and improving the field of healthcare. While there are many brilliant products being introduced in this field, it is the robotic exoskeleton that I personally find the most amazing. To think that one day we could completely eradicate the long-term use of wheelchairs for people with neurological injuries, and replace them with a wearable robot that allows them to stand and walk, is absolutely inspiring.

The Indego is one of these devices. Weighing in at 26 pounds, this modular device comes in five pieces that are put on over the legs, hips, and torso. The light frame allows users to keep the device on even while in a wheelchair prior to use. The device responds to weight shifts in order to guide movement: a forward lean helps users stand and walk, while leaning backward stops movement. Once movement is initiated, modular components at the hips and legs propel the joints forward.
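The Indego's controller is proprietary, but the lean-to-move behavior described above can be pictured as a simple state machine driven by the wearer's trunk lean. A hypothetical sketch (the thresholds and state names are my own, not Indego's):

```python
# Hypothetical lean-driven state machine, loosely modeled on the behavior
# described above. Thresholds are made up for illustration.

FORWARD_LEAN_DEG = 5.0    # lean past this to stand up / keep walking
BACKWARD_LEAN_DEG = -3.0  # lean back past this to stop / sit

def next_state(current_state, trunk_lean_deg):
    """Map the wearer's trunk lean onto a movement state."""
    if current_state == "sitting":
        return "standing" if trunk_lean_deg > FORWARD_LEAN_DEG else "sitting"
    if current_state == "standing":
        if trunk_lean_deg > FORWARD_LEAN_DEG:
            return "walking"
        if trunk_lean_deg < BACKWARD_LEAN_DEG:
            return "sitting"
        return "standing"
    if current_state == "walking":
        return "standing" if trunk_lean_deg < BACKWARD_LEAN_DEG else "walking"
    return current_state

# Usage sketch: a forward lean starts movement, a backward lean stops it.
state = "sitting"
for lean in [2.0, 7.0, 8.0, 1.0, -5.0]:
    state = next_state(state, lean)
    print(f"lean {lean:+.1f} deg -> {state}")
```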

The device is currently available only for research purposes in rehabilitation centers; the website anticipates commercial sales in the US in 2016.

See the video below for demonstration and more information:

source

Printing is currently extremely inconvenient if you do not have regular access to a printer, which is why it is so exciting that a mobile printer is in production and will be available for sale as early as next year. The Zuta Labs Mini Mobile Robotic Printer is a 10 x 11.5 centimeter pocket printer, essentially an inkjet that rolls over whatever paper you need to print on. The printer just needs a wireless connection and can be recognized by computers as a regular printer; it supports iOS, Android, Linux, and Windows. The printer is designed to start at the top of the page, and its inkjet rests on multidirectional wheels so it can cover the surface on which it is printing.
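Zuta Labs hasn't detailed how the robot plans its motion, but covering a page with a small rolling print head amounts to a back-and-forth (boustrophedon) sweep. A rough sketch of that coverage pattern, with page dimensions and swath width assumed for illustration:

```python
# Rough sketch of a back-and-forth (boustrophedon) sweep a small rolling
# printer might use to cover a page. Dimensions and step size are assumptions.

PAGE_WIDTH_MM = 210.0    # A4
PAGE_HEIGHT_MM = 297.0
SWATH_MM = 10.0          # assumed width the print head covers per pass

def coverage_path(width=PAGE_WIDTH_MM, height=PAGE_HEIGHT_MM, swath=SWATH_MM):
    """Yield (x, y) waypoints that sweep the page from top to bottom."""
    y = 0.0
    left_to_right = True
    while y <= height:
        if left_to_right:
            yield (0.0, y)
            yield (width, y)
        else:
            yield (width, y)
            yield (0.0, y)
        left_to_right = not left_to_right
        y += swath

waypoints = list(coverage_path())
print(f"{len(waypoints)} waypoints, first few: {waypoints[:4]}")
```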

The Pocket Printer’s Kickstarter page has met its goal, but is still accepting backers in order to add more features.

See the videos below for a demonstration of how the device will look when it is working and their informational Kickstarter video:

For those with diminished strength or function of the hand, daily tasks that we often take for granted may become difficult, essentially disabling someone in their daily life. To address this and increase the functional efficiency of the hand, researchers at MIT have developed “Supernumerary Robotic Fingers,” a wearable robotic device with two extra fingers that complement the grasping function of a regular hand.

In normal human movement we have muscles that work synergistically, meaning that a central signal from the brain allows them to contract together to create a certain movement. For example, when the biceps contracts to bend the elbow, the brachialis muscle contracts as well to help facilitate this movement. This allows our bodies to perform tasks efficiently.

An article titled Bio-Artificial Synergies for Grasp Postural Control of Supernumerary Robotic Fingers explains how the researchers developed an algorithm that allows the robotic fingers to work synergistically with the human hand. That is, the extra fingers are designed to move in correlation with the human fingers, working as an extension of the hand to form essentially a seven-fingered hand. The researchers call this concept “Bio-Artificial Synergy”: they have essentially developed extra fingers that replicate the coordinated movements of the muscles in the human hand.
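The paper's algorithm is more involved than this, but the heart of a "bio-artificial synergy" is a learned mapping from the posture of the human fingers (read from the sensor glove) to the posture of the two robotic fingers. Below is a minimal least-squares sketch of that mapping; the data, dimensions, and joint counts are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Invented example: 7 joint angles read from the sensor glove (human hand)
# mapped to 4 joint angles of the two extra robotic fingers.
rng = np.random.default_rng(0)
human_postures = rng.uniform(0.0, 1.2, size=(50, 7))     # training demonstrations
# Pretend the demonstrated robot postures follow some unknown linear synergy.
true_synergy = rng.normal(size=(7, 4))
robot_postures = human_postures @ true_synergy

# Fit the synergy matrix by least squares: robot ≈ human @ W
W, *_ = np.linalg.lstsq(human_postures, robot_postures, rcond=None)

def robot_fingers_for(human_joint_angles):
    """Predict the extra fingers' joint angles from the glove reading."""
    return np.asarray(human_joint_angles) @ W

new_grasp = rng.uniform(0.0, 1.2, size=7)
print("predicted robotic finger angles:", robot_fingers_for(new_grasp))
```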

The device is mounted on the wrist and, through a sensor glove, receives a signal from the hand and works alongside the five fingers to assist with grasping objects. The robotic fingers are longer than human digits, making it easier to grasp larger objects, and each robotic finger can move in three different directions. For those who have difficulty holding onto objects or performing coordinated movements, this can be an invaluable tool for performing tasks independently.

Because of these extra fingers, the user is able to perform tasks that are normally difficult to do single-handedly, such as twisting open a bottle cap or holding a tablet while typing. This product is still in the development phase, and though the researchers have amazingly been able to correlate the robotic hand angles with human hand angles for grasping, they have not yet completed algorithms for fingertip forces.

The article mentions that this device has implications not only for elder care, but also for construction and manufacturing.

See the video below for more description of this amazing device:

photo source

source

There have been awful events happening in the world lately, and sometimes you just need the distraction of a giant robot that juggles cars. Still in the investment and development process, the BugJuggler is a 70-foot-tall robot designed to juggle cars using a diesel engine that generates energy via hydraulic pressure. To invest or learn more, go to the website or see the video below.

source

As our rate of multitasking increases, we may as well embrace the age of interactive robots in our homes. The crowd-funded JIBO, by Cynthia Breazeal, is a personal robot with a variety of functions. According to the website, JIBO can see, hear, learn, help, speak, and relate. It is a personalized robot that can take orders, tell interactive stories, make video calls, and sense social and emotional cues in order to respond appropriately to its user. As you walk around a room, its face recognition allows it to track you and respond. While we have seen components of these functions in other devices, JIBO is more of a polished home companion that can interact with both other devices and humans.

Available at the end of 2015, JIBO can be pre-ordered for $499.

See the promotional video below:

photo source

The future of healthcare includes robotic devices that mimic living tissue and may help to target and deliver drugs with superior accuracy and efficiency. A 2014 study by Cvetkovic et al. presented the first engineered skeletal muscle machine that moved unattached to any other device. These small, soft robotic devices were able to contract and crawl on their own, mimicking the function of skeletal muscle tissue. The shell of the machine is made from hydrogel to encase the tissue, while engineered muscle cells, along with proteins such as collagen or fibrin, were printed on a 3D printer and encased in this shell to make the tissue functional.

The many systems that must be coordinated for muscle contraction are difficult enough just to understand, so to be able to engineer something that mimics this function is amazing. For muscle tissue to contract and produce force requires an intricate network of neural input and the coordination of responsive tissue. There must be enough elasticity in the muscle to produce force; ultimately, it is the change in length as the muscle tissue contracts that produces the force for movement. The tissue must be kept slightly stretched at all times so it is ready to contract, but not stretched so far that it loses the ability to contract. All of this is driven by electrical cues that signal the muscle tissue to contract.
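One way to picture the length-tension idea in that paragraph is a bell-shaped active force curve: the engineered muscle strip produces the most force near a preferred amount of stretch, much less when it is too slack or overstretched, and none at all without an electrical pulse. A toy model with invented constants:

```python
import math

# Toy length-tension model: active force peaks at an optimal stretch and
# falls off on either side. All constants are invented for illustration only.

OPTIMAL_STRETCH = 1.05   # strip produces the most force slightly pre-stretched
WIDTH = 0.15             # how quickly force falls off away from the optimum
MAX_FORCE_UN = 200.0     # assumed peak active force, in micronewtons

def active_force(stretch_ratio, stimulated):
    """Active force of the muscle strip; zero without an electrical pulse."""
    if not stimulated:
        return 0.0
    return MAX_FORCE_UN * math.exp(-((stretch_ratio - OPTIMAL_STRETCH) / WIDTH) ** 2)

for stretch in (0.8, 1.05, 1.3):
    print(f"stretch {stretch:.2f}: {active_force(stretch, stimulated=True):6.1f} uN")
```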

The future of these small biological machines has many implications, as the article explains. Future uses include drug screening, drug delivery, medical implants, and biomimetic machines.

See the video below for demonstration:

Most emergency medical training involves lifeless torsos, videos, and noninvasive simulated work on a live partner, in which you haphazardly practice what you would actually have to do in an emergency. From my personal experience of many CPR classes as well as a course of emergency medical training, I can attest that none of this prepares you very well for what you would actually have to do in a life-threatening situation. You can never fully prepare for having to rescue someone, but what gives you the confidence to act on the spot is having practiced something similar beforehand.

Kernerworks has developed a realistic robotic mannequin that breathes, bleeds, and responds to procedures, giving feedback on whether they have been performed correctly. The company, based in San Rafael, CA, includes a team of former special-effects artists who used to work for film studios. The mannequins were molded from real people, given realistic features, and fitted with an internal computer system that includes sensors which respond to procedures.

Used for military trauma-response training, one of the products is a double-amputee mannequin that allows trainees to practice relieving a pneumothorax with a needle (sensors respond if it is done correctly). Another feature is a well-developed throat with air-differential sensors. Medics can practice placing a laryngoscope into the throat, which has a camera so you can see the placement of a breathing tube. An endotracheal tube can be placed in the throat for use with an Ambu bag; if it is placed correctly, the chest rises, and if the tube is accidentally placed in the esophagus, the chest does not rise. Unlike most practice torsos, these responses come from sensors reacting to the procedures, which makes them much more precise. Watch the video above for a tour shared by Tested, which explains more about the company and the incredible work behind the mannequins.
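Kerner's internal software isn't public, but the airway behavior described above boils down to simple sensor-driven feedback: air flow detected in the trachea drives the chest-rise actuator, while flow in the esophagus does not. A hypothetical sketch (sensor names and thresholds are assumptions, not Kernerworks' actual design):

```python
# Hypothetical sketch of the airway feedback described above; sensor names
# and thresholds are assumptions.

FLOW_THRESHOLD = 0.5   # arbitrary units from an air-differential sensor

def chest_should_rise(tracheal_flow, esophageal_flow):
    """Chest rises only if ventilation reaches the trachea, not the esophagus."""
    return tracheal_flow > FLOW_THRESHOLD and esophageal_flow <= FLOW_THRESHOLD

# Correct endotracheal placement: air moves through the trachea -> chest rises.
print(chest_should_rise(tracheal_flow=2.0, esophageal_flow=0.0))   # True
# Tube accidentally in the esophagus: no tracheal flow -> chest stays still.
print(chest_should_rise(tracheal_flow=0.0, esophageal_flow=2.0))   # False
```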

source