Deep Reinforcement Learning Algorithms for Robots
These days, Deep Reinforcement Learning algorithms are used in factory robots so that they can learn from their own mistakes, without anyone writing a new AI/ML program. Robots are claimed to teach themselves overnight: after roughly eight hours of practice, a robot can reach 90% accuracy or better, which is about what an expert programmer would achieve after several days of manual programming.
What is Deep Reinforcement Learning, and how does it apply to robotics?
The term comes from the world of Artificial Intelligence (AI) and Machine Learning (ML). It combines the principles of Deep Learning (DL) and Reinforcement Learning (RL), hence the name Deep Reinforcement Learning. In our previous blog we saw that Deep Learning (DL) is a family of machine learning methods based on artificial neural networks. Reinforcement Learning (RL) is an area of machine learning concerned with how software agents should take actions so as to maximize reward, that is, to choose the best possible behavior for a particular situation.
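The RL idea described above (an agent tries actions and learns from rewards) can be sketched in a few lines. This is a minimal illustration with an invented toy environment, not any particular robot's algorithm:

```python
import random

# Toy environment: one hidden "correct" action earns reward 1, others earn 0.
ACTIONS = [0, 1, 2]
CORRECT = 2

def step(action):
    return 1.0 if action == CORRECT else 0.0

# The agent's estimate of how good each action is, refined by experience.
values = {a: 0.0 for a in ACTIONS}
counts = {a: 0 for a in ACTIONS}

random.seed(0)
for episode in range(500):
    # Explore 10% of the time; otherwise exploit the best-known action.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: values[a])
    reward = step(action)
    counts[action] += 1
    # Incremental average of the rewards observed for this action.
    values[action] += (reward - values[action]) / counts[action]

best = max(ACTIONS, key=lambda a: values[a])
print(best)  # the agent discovers the rewarded action: 2
```

No one tells the agent which action is correct; it discovers the answer purely from the rewards it receives, which is the core idea behind RL.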
By combining DL and RL principles, we can create efficient algorithms that apply to robotics in manufacturing. This helps existing machine learning models scale up automatically, enabling robots to solve new problems on their own.
Industrial robots are generally capable of extreme precision and speed, but they must be programmed very carefully to do something as simple as picking up or dropping an object. This is difficult and time-consuming, and it means such robots can usually work only in tightly controlled environments.
FANUC (Tokyo, Japan), the world's largest industrial robot maker, is developing robots that use deep reinforcement learning to figure out how to do things on their own. FANUC's factory robots have used deep reinforcement learning to learn how to move objects from one container to another. Typically this would require extensive programming and time-consuming trial and error. With deep reinforcement learning, however, a robot can learn the task on its own overnight: with no task-specific programming, the robotic arm took only about eight hours to reach a 90% success rate, roughly the same as an arm programmed by an expert over several days.
As FANUC's robot practices picking up and dropping objects, it captures video footage of the process. Each time it succeeds or fails, it remembers how the object looked, and that knowledge is used to refine the deep learning model (a large neural network) that controls its actions. Day by day, the robot improves both its accuracy (the probability of doing things right) and its efficiency (doing things faster with fewer mistakes). FANUC's adoption of deep reinforcement learning allows machines to perform tasks without a human needing to program every detail of what to do.
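The loop described here (attempt a grasp, observe success or failure, remember how the object looked) can be caricatured as keeping success statistics per object appearance. The appearances and success probabilities below are invented for illustration; FANUC's actual system uses camera images and a large neural network:

```python
import random

random.seed(1)

# Hypothetical: each object "appearance" has a different chance that a
# fixed grasp succeeds. The robot does not know these probabilities.
TRUE_SUCCESS = {"upright": 0.9, "tilted": 0.5, "flat": 0.2}

# The robot's memory: [successes, attempts] per appearance.
memory = {k: [0, 0] for k in TRUE_SUCCESS}

for attempt in range(3000):
    appearance = random.choice(list(TRUE_SUCCESS))
    succeeded = random.random() < TRUE_SUCCESS[appearance]
    s, n = memory[appearance]
    memory[appearance] = [s + succeeded, n + 1]

# Estimated success rate per appearance, refined by practice.
estimates = {k: s / n for k, (s, n) in memory.items()}
print(estimates)
```

After a night of practice, the estimates approach the true success rates, so the robot "knows" which situations its current grasp handles well; a real system would use this signal to adjust the grasp itself.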
FANUC also understands that if machines work together in a system to perform a task, they can learn from each other, resulting in higher workplace efficiency. For instance, if one robotic arm takes eight hours to learn a task, then eight robotic arms working together, with the ability to communicate with each other, could learn it to the same success rate in about an hour. If a factory had one hundred arms working as a team, teaching each other and learning from one another, their precision and speed could produce some of the highest efficiency ever seen in an industrial workplace. This form of distributed learning, sometimes called "cloud robotics," is shaping up to be a big trend in manufacturing.
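The claimed speedup from arms sharing what they learn amounts to pooling experience: eight arms collecting attempts in parallel accumulate the same training data in one eighth of the time. The numbers below are illustrative back-of-the-envelope figures chosen to match the article's eight-hour example, not FANUC's measurements:

```python
# Hypothetical figures: one arm collects 125 grasp attempts per hour,
# and the task is "learned" once 1000 shared attempts are accumulated.
ATTEMPTS_PER_HOUR = 125
ATTEMPTS_NEEDED = 1000

def hours_to_learn(num_arms):
    # Every arm shares every attempt, so experience accumulates in parallel.
    return ATTEMPTS_NEEDED / (num_arms * ATTEMPTS_PER_HOUR)

print(hours_to_learn(1))  # 8.0 hours for a single arm
print(hours_to_learn(8))  # 1.0 hour when eight arms pool their experience
```

In practice the speedup is rarely perfectly linear (communication overhead, correlated experience), but pooled data is the basic mechanism behind the "cloud robotics" trend.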
To conclude, reinforcement learning algorithms are goal-oriented: they learn to reach a complex goal, or to maximize reward along a particular dimension, over many steps, for example maximizing the points won in a game over many moves. They can start from a blank slate and, under the right conditions, achieve superhuman performance. These algorithms are penalized when they make a wrong decision and rewarded when they make a right one. Reinforcement learning evaluates actions by the results they produce; its goal is to learn sequences of actions that help the robot achieve its goal, or maximize the probability of achieving it (increased accuracy).
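The reward/penalty idea over many steps is captured by the classic Q-learning update. Here it is on a tiny invented corridor world: the agent earns a reward for reaching the goal cell and a small penalty for every step, and must learn the sequence of actions (always step right) that maximizes total reward:

```python
import random

random.seed(2)

N_STATES = 5            # corridor cells 0..4; the goal is cell 4
ACTIONS = [-1, +1]      # step left or step right
alpha, gamma, eps = 0.5, 0.9, 0.2

# Q-table: estimated future reward for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally; otherwise take the best-known action.
        if random.random() < eps:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt = min(max(state + action, 0), N_STATES - 1)
        # Reward for reaching the goal, a small penalty for every step.
        reward = 1.0 if nxt == N_STATES - 1 else -0.05
        # Q-learning update: nudge the estimate toward reward plus the
        # discounted value of the best action available in the next state.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

# After training, stepping right should be preferred in every cell.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

The agent is never told the route; the penalties for wandering and the reward at the goal are enough for the learned policy to point right everywhere.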
Other real-world use cases of Deep Reinforcement Learning:
- Tesla’s factory uses more than 160 robots that do a significant part of the work on its cars, reducing the possibility of human error.
- The warehousing facilities of several eCommerce companies and supermarkets use these robots to sort millions of products every day and help deliver the right product to the right customer.
- Self-driving cars already utilize deep learning technology.
How Does Deep Reinforcement Learning Work?
Humans can recognize places, plants, and animals without much effort at all. Even a five-year-old child can tell a fish apart from a dog, but computers cannot yet make such distinctions as quickly and easily. Deep Reinforcement Learning relies on algorithms inspired by how the human brain works. The biological brain is made of individual cells called neurons. When recognizing something like a fish, these neurons work independently to decide what they are seeing, but they also compare their findings with those of other neurons to reach a conclusion.
Deep Reinforcement Learning uses artificial neural networks to mimic this process. An artificial neural network is built up step by step:
- Tiny mathematical formulas are grouped together, like neurons.
- These groupings are called a “net” and are instructed to work together to “learn”.
- As these groupings are expanded, the net becomes bigger and deeper.
- The net learns from the practice runs the machine performs.
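The steps above can be sketched with the smallest possible "net": a single artificial neuron (one tiny formula) that learns from practice runs by nudging its weights after each mistake. This toy example learns the logical AND function; everything here is illustrative, far smaller than the deep networks real robots use:

```python
# A single artificial "neuron": a weighted sum passed through a threshold.
def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Practice data: the logical AND of two inputs.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights, bias, lr = [0.0, 0.0], 0.0, 0.1

# The net "learns from practice": after each run, nudge the weights in
# the direction that reduces the error (the classic perceptron rule).
for epoch in range(20):
    for inputs, target in examples:
        error = target - neuron(inputs, weights, bias)
        weights = [w + lr * error * i for w, i in zip(weights, inputs)]
        bias += lr * error

print([neuron(x, weights, bias) for x, _ in examples])  # [0, 0, 0, 1]
```

A deep network is this same idea repeated: many such neurons grouped into layers, with the error signal propagated back through all of them.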
Instead of relying on humans to spend hours writing complex, tedious instructions that let machines perform tasks at 60-70% accuracy, programmers can now use simple formulas and instructions to make a machine learn to solve problems from examples or test runs. This can make machines more capable than a hand-written program would, since hand-written programs can always be flawed.
Note: FANUC’s robot is used above as an example of how Deep Reinforcement Learning works in manufacturing. FANUC did this research around 2015-2016, investing millions of dollars, and it was demonstrated at the International Robot Exhibition in Tokyo in December 2015. Today, however, many other companies have also developed (or done R&D on) robots that use Deep Reinforcement Learning.
Don’t worry, robots won’t be taking over the world anytime soon, but deep learning may soon be a reality in many industrial workplaces, where efficiency and productivity could shoot through the roof. Most of us already interact with AI almost every day: Google Translate, Google Search, product recommendations in eCommerce applications, and speech recognition assistants such as Amazon Alexa and Apple Siri. Going forward, Deep Reinforcement Learning will let applications make smarter decisions on their own through self-learning. Big thanks to the significant advances in artificial intelligence!