Facebook Research is developing curious and sensitive robots

Facebook, a social media platform with global reach, makes extensive use of its artificial intelligence and machine learning systems to keep its site online and (at least some of the time) free of malicious content. Having announced work on self-supervised learning, computer vision, and natural language processing at the beginning of the month, Facebook on Monday shared details of three additional areas of research that could ultimately lead to more capable and curious AI.

“Much of our work in robotics is focused on self-supervised learning, in which systems learn directly from raw data so they can adapt to new tasks and new circumstances,” a team of researchers wrote on the FAIR (Facebook AI Research) blog. “In robotics, we are developing techniques such as model-based reinforcement learning (RL) to enable robots to teach themselves through trial and error using direct sensory input.”

In particular, the team set out to get its six-legged robot, Daisy, to teach itself to walk without assistance.

“Locomotion in general is a very difficult task in robotics, and that is what makes it so exciting from our perspective,” Roberto Calandra, a researcher at FAIR, told Engadget. “We were able to develop AI algorithms and actually test them on a genuinely hard task that we otherwise would not know how to solve.”

The hexapod begins its existence as a jumble of legs with no understanding of its surroundings. Using a reinforcement learning algorithm, the robot gradually works out a controller that helps it achieve its goal of forward movement. And because the algorithm incorporates recursive self-improvement, the robot can keep track of the information it collects and further optimize its behavior over time. In short, the more experience the robot gains, the better it performs.
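
That trial-and-error loop is easiest to see in code. Below is a minimal, self-contained sketch under toy assumptions: a one-dimensional “walker” whose true dynamics are hidden from the agent, a crude learned model, and a re-fit on all accumulated experience before each trial. The function names and the dynamics are illustrative, not FAIR's actual code.

```python
# Minimal model-based trial-and-error loop (toy assumptions throughout).
import numpy as np

rng = np.random.default_rng(0)

def true_dynamics(state, action):
    # Hidden from the agent: forward progress depends nonlinearly on the action.
    return state + np.sin(action) * 0.1

data = []  # (state, action, next_state) transitions gathered by trial and error

def fit_model(data):
    # Fit a crude linear model next = s + w * a from experience.
    if not data:
        return lambda s, a: s  # know-nothing prior
    s, a, ns = map(np.array, zip(*data))
    w = np.sum((ns - s) * a) / (np.sum(a * a) + 1e-8)
    return lambda s, a: s + w * a

state = 0.0
for trial in range(20):
    model = fit_model(data)                       # recursive self-improvement:
    candidates = rng.uniform(-1.0, 1.0, size=64)  # the model is re-fit on all data
    # Pick the action the current model predicts moves the walker furthest forward.
    action = max(candidates, key=lambda a: model(state, a))
    next_state = true_dynamics(state, action)     # execute on the "real" robot
    data.append((state, action, next_state))      # record the outcome and repeat
    state = next_state

print(f"distance walked after 20 trials: {state:.3f}")
```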

This is easier said than done, given that the robot must work out not only its location and orientation in space but also its balance and momentum, all from an array of sensors mounted on the machine. By optimizing the robot's behavior and concentrating on getting it moving as quickly as possible, Facebook taught the robot to “walk” within hours rather than days.
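
For a flavor of the state-estimation side of that problem, here is a generic complementary filter: one textbook way to fuse a gyroscope's angular rate with an accelerometer's gravity reading into a single pitch estimate. This is standard robotics practice, not a description of the hexapod's actual estimator.

```python
# Illustrative complementary filter for a single pitch angle.
import math

def complementary_filter(pitch, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """Blend the integrated gyro rate (smooth, but drifts) with the
    accelerometer's gravity direction (noisy, but drift-free)."""
    gyro_pitch = pitch + gyro_rate * dt          # short-term: integrate angular rate
    accel_pitch = math.atan2(accel_x, accel_z)   # long-term: gravity as a reference
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch

# One update at 100 Hz:
pitch = 0.0
pitch = complementary_filter(pitch, gyro_rate=0.02, accel_x=0.1, accel_z=9.8, dt=0.01)
```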

But what does the hexapod do once it has figured out how to move? Go exploring, obviously. Instilling wanderlust in robots is not so easy, though, since they are usually trained to achieve a narrowly defined goal. Yet that is exactly what Facebook is attempting, with some help from colleagues at New York University and a robotic arm.

Previous research on instilling curiosity in AI has aimed at reducing uncertainty. Facebook's latest effort pursues the same goal, but in a more structured way.

“In essence, we start with a model that knows very little about itself,” FAIR researcher Franziska Meier told Engadget. “At that point, the robot knows how its arm is positioned, but it does not actually know what actions it needs to take to achieve a specific goal.” But as the robot learns which joint positions move its hand toward the next target configuration, it can eventually begin to optimize its planning.

“We use this model that we are learning to plan ahead, over a sequence of steps,” Meier continued. “And we try to use this planning procedure to optimize the sequence of actions to achieve the task.” To keep the robot from over-optimizing its plans and getting stuck, the research team rewarded it for actions that resolved uncertainty.
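
A minimal sketch of that idea, under assumed details: keep a small ensemble of forward models of the arm, plan a short action sequence with a random-shooting planner, and score each candidate plan by task progress plus a bonus for ensemble disagreement, so that actions which resolve uncertainty are rewarded. All names and sizes here are hypothetical.

```python
# Curiosity-aware planning with an ensemble of tiny learned forward models.
import numpy as np

rng = np.random.default_rng(1)

class LinearModel:
    """Tiny forward model: predicts the next joint state from state and action."""
    def __init__(self, dim):
        self.W = rng.normal(scale=0.1, size=(dim, 2 * dim))
    def predict(self, s, a):
        return self.W @ np.concatenate([s, a])
    def update(self, s, a, s_next, lr=0.05):
        x = np.concatenate([s, a])
        err = self.predict(s, a) - s_next
        self.W -= lr * np.outer(err, x)  # one SGD step on squared error

dim, horizon = 2, 3
ensemble = [LinearModel(dim) for _ in range(5)]
goal = np.array([1.0, -0.5])  # hypothetical target hand configuration

def score(s, actions, beta=0.1):
    """Task reward (get close to the goal) plus an uncertainty bonus:
    how much the ensemble disagrees about where this plan leads."""
    finals = []
    for m in ensemble:
        state = s
        for a in actions:
            state = m.predict(state, a)
        finals.append(state)
    finals = np.array(finals)
    task = -np.linalg.norm(finals.mean(axis=0) - goal)  # reach the target
    curiosity = finals.std(axis=0).sum()                # resolve uncertainty
    return task + beta * curiosity

state = np.zeros(dim)
plans = rng.uniform(-1, 1, size=(128, horizon, dim))    # candidate action sequences
best = max(plans, key=lambda seq: score(state, seq))
# Executing best[0] on the real arm would yield an observed transition, with
# which each ensemble member is then updated via LinearModel.update.
```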

“By carrying out this exploration, we effectively learn a better model faster, solve the task faster, and learn a model that generalizes better to new tasks,” Meier concluded.

Finally, Facebook has been diligently teaching robots to feel. Not emotionally, but physically. For this it uses a deep-learning predictive model originally developed for video. “In essence, it is a method that can predict video from the current state, that is, from the current image and an action,” Calandra explained.
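
In the spirit of that description, the sketch below is an action-conditioned predictor: encode the current (tactile) frame, fuse in the action, and decode a predicted next frame, trained on plain pixel error. The architecture and sizes are illustrative assumptions, not FAIR's published model.

```python
# Hypothetical action-conditioned frame predictor (PyTorch).
import torch
import torch.nn as nn

class ActionConditionedPredictor(nn.Module):
    def __init__(self, action_dim=4):
        super().__init__()
        # Encode the current tactile frame (1 channel, 32x32 assumed).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
        )
        self.action_fc = nn.Linear(action_dim, 32 * 8 * 8)
        # Decode back to a predicted next frame.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),              # 16 -> 32
        )

    def forward(self, frame, action):
        z = self.encoder(frame)                             # (B, 32, 8, 8)
        z = z + self.action_fc(action).view(-1, 32, 8, 8)   # fuse in the action
        return self.decoder(z)                              # predicted next frame

# One training step: minimize pixel error between the prediction and the frame
# the sensor actually produced after the action was executed.
model = ActionConditionedPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
frame, action = torch.rand(8, 1, 32, 32), torch.rand(8, 4)
next_frame = torch.rand(8, 1, 32, 32)
loss = nn.functional.mse_loss(model(frame, action), next_frame)
opt.zero_grad(); loss.backward(); opt.step()
```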

The team trained the AI to work directly on raw data, in this case the readings of a high-resolution tactile sensor, rather than on a hand-built model. “Our work shows that such policies can be learned entirely without rewards, through diverse unsupervised exploratory interactions with the environment,” the researchers concluded. In experiments, the robot was able to successfully manipulate a joystick, roll a ball, and identify the correct face of a 20-sided die.
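
Once such a predictor has been learned from unsupervised play, it can be used for control without any reward signal, roughly as follows: sample candidate actions, imagine each outcome with the model, and pick the action whose predicted tactile frame is closest to a goal reading. This continues the hypothetical predictor sketched above.

```python
# Goal-reaching control with the learned (hypothetical) tactile predictor.
import torch

def choose_action(model, frame, goal_frame, n_candidates=256, action_dim=4):
    actions = torch.rand(n_candidates, action_dim) * 2 - 1       # candidates in [-1, 1]
    frames = frame.expand(n_candidates, -1, -1, -1)              # repeat current frame
    with torch.no_grad():
        predicted = model(frames, actions)                       # imagined outcomes
    errors = ((predicted - goal_frame) ** 2).flatten(1).mean(1)  # distance to goal touch
    return actions[errors.argmin()]                              # best imagined action

# E.g. goal_frame could be the sensor reading for "ball centered under the fingertip".
action = choose_action(model, frame[:1], next_frame[:1])
```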

“We show that we can have a robot that manipulates small objects without supervision,” Calandra said. “And in practice this means that … we can actually accurately predict what the result of [this] action will be. This allows us to start planning into the future. We can optimize the sequence of actions that will actually produce the desired result.”

Combining visual and tactile input could significantly improve the capabilities of future robotic platforms and sharpen their learning methods. “To build machines that can learn by interacting with the world around them on their own, we need robots that can draw on data from multiple senses,” the team concluded. We can only guess at what Facebook is building toward; the company declined to comment on potential practical applications of this research.


Image Credit: Facebook
