For better or for worse, science may be close to creating self-aware robots, according to an article in Science Robotics. This means machines may soon think, act, and learn as humans do, adapting to the most diverse situations throughout their lives.
Until now, artificial beings could only perform these actions using simulators and models supplied by humans. The Columbia Engineering researchers, however, made a breakthrough in the field of robotics: they created a robot that learns about itself entirely on its own.
The artificial being has no prior knowledge of physics, geometry, or motor dynamics, nor any idea of what it is: it cannot even recognize its own shape. Yet after gathering some data and a day of intensive computation, the robot built its own self-model, which it can project into future situations to evaluate conditions and possibilities.
This self-simulation runs internally, and with it the robot can observe and adapt to different situations, as well as take on new tasks and detect and repair damage to its own body. Until now, robots could only operate thanks to a model built by humans.
But to become independent and deal with the unforeseen (including situations their creators never anticipated), machines first have to learn to simulate those situations themselves, right? That was the question posed by Hod Lipson, a professor of mechanical engineering and director of the laboratory where the research was conducted.
The study used an articulated robotic arm that initially moved at random, merely accumulating trajectories without much progress. At this stage the robot was very imprecise and did not even know what it was or how its joints were connected.
To create the "self-simulation", the robot used deep learning. After less than 35 hours of training, the artificial being's internal model became consistent, and it was able to internally simulate pick-and-place tasks repeatedly, all using a closed-loop system.
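The core idea of learning a self-model can be illustrated with a toy sketch. This is a minimal, hypothetical illustration only: the toy "arm" below has linear dynamics and the model is fit with least squares, standing in for the deep network and real robot used in the actual study. All names and numbers here are invented for the example.

```python
import numpy as np

# Toy "arm": the next state is a fixed linear function of state and action.
# The robot does NOT know these dynamics; it only records transitions.
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
B_true = np.array([[0.5], [1.0]])

def step(state, action):
    return A_true @ state + B_true @ action

# 1) Babbling phase: move at random and record (state, action, next_state).
states, actions, nexts = [], [], []
s = np.zeros(2)
for _ in range(200):
    a = rng.uniform(-1, 1, size=1)
    s_next = step(s, a)
    states.append(s); actions.append(a); nexts.append(s_next)
    s = s_next

# 2) Fit a self-model: next_state ~ W @ [state; action]. Least squares
#    stands in here for the deep network trained in the real study.
X = np.hstack([np.array(states), np.array(actions)])
Y = np.array(nexts)
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# 3) Use the learned self-model to predict an unseen transition.
s_test, a_test = np.array([0.3, -0.2]), np.array([0.4])
pred = np.hstack([s_test, a_test]) @ W
true = step(s_test, a_test)
err = float(np.max(np.abs(pred - true)))
print(err)  # prediction error of the self-model
```

Once such a model exists, the robot can "imagine" the outcome of a candidate movement internally before executing it, which is the essence of planning with a self-simulation.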
A closed loop, it is worth noting, is the kind of system used in engines with electronic fuel injection: sensors send information to the engine controller, which sets a base injection time, while an oxygen sensor continuously recalibrates the fuel quantity to improve combustion and save fuel. An open loop works similarly, but without the oxygen sensor's feedback.
The closed-loop system was used so that, at the end of each trajectory in which the arm grasped an object and moved it to a container, the robot could readjust itself before repeating the movement. After some time, it was able to deposit all the items into a glass using its own learned movement pattern, with 100% success.
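The difference between the two control modes the article describes can be sketched in a few lines. This is a hypothetical one-dimensional toy, not the study's controller: the "actuator error" gain of 0.8 and the feedback gain of 0.5 are invented to show why feedback matters.

```python
# Toy 1-D reach: the arm should move its tip to target = 1.0, but every
# commanded step is executed with a systematic error the planner
# doesn't know about (only 80% of the command is realized).
def execute(command):
    return 0.8 * command

target = 1.0

# Open loop: plan all 10 steps of 0.1 in advance from the internal
# model alone, expecting to land exactly on 1.0 -- but never checking.
pos_open = 0.0
for _ in range(10):
    pos_open += execute(0.1)

# Closed loop: after each step, measure the real position (feedback)
# and command a correction proportional to the remaining error.
pos_closed = 0.0
for _ in range(10):
    error = target - pos_closed
    pos_closed += execute(0.5 * error)

print(round(pos_open, 3), round(pos_closed, 3))
```

The open-loop arm stops well short of the target because its unmodeled actuator error accumulates, while the closed-loop arm converges almost exactly, mirroring the 44% versus 100% gap reported in the study.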
When the study used an open-loop system for the task, relying only on the robot's internally created self-model with no external feedback, the same grab-and-deposit task achieved a success rate of 44%. The robot was also assigned another task: writing text with a marker.
For this stage, the researchers also added a dose of improvisation to assess whether the mechanical arm could detect changes or damage to itself. They 3D-printed a part and installed it on the robot. The mechanical arm detected the change to its own extension and readjusted its patterns and self-simulations accordingly.
Thereafter, even with a "deformed" body, the arm managed to reconfigure itself and performed its task with little loss of performance. Lipson noted that self-image may be the key to allowing robots to expand their knowledge beyond so-called applied AI, performing more general movements and actions.
"That's maybe what a newborn child does in its crib, as it learns what it is," Lipson said. "We believe that this advantage may also have been the evolutionary origin of self-consciousness in humans."
Lipson added, "While our robot's ability to model itself is still modest compared to humans, we believe that this technique will lead down the path to self-awareness."
For the researcher, robotics and artificial intelligence can offer a new window onto the evolution of consciousness in machines. "Philosophers, psychologists, and cognitive scientists have been questioning self-consciousness for millennia, but relatively little progress has been made." Lipson also says that "we still hide our lack of understanding behind subjective terms such as the 'fabric of reality', but robots now compel us to translate these vague conceptions into concrete algorithms and mechanisms." At the same time, Lipson and his team are aware of the ethical implications their study may raise.
"Self-awareness will lead to more resilient and adaptive systems, but it can also mean some loss of control. It's a powerful technology, but it has to be handled with care."
The next step of the study is to understand whether robots can create models not only of their bodies, but also of their "minds," and thus think the way humans can.