Tuesday, December 17, 2019

Mounting a camera on a robotic arm for quick creation of 3D models

Researchers at the Robotics Institute at Carnegie Mellon University in the United States say that mounting a camera on a robot's arm can help it quickly create a 3D model of its environment and let the robot sense where its arm is.

When a robot performs a task, such as reaching its arm into a tight space or picking up a fragile object, it must know exactly where that arm is. According to researchers at the Carnegie Mellon University (CMU) Robotics Institute, attaching a camera to the arm lets the robot quickly build a 3D model of its surroundings and keep track of the arm's current position.

Because such a camera is not especially accurate and the arm it rides on is constantly moving, real-time localization and mapping is difficult. However, the CMU team found that by combining the camera data with the arm's joint angles, it could determine the camera's pose and thereby improve the accuracy of the map. Robotics Ph.D. student Matthew Klingensmith said that this is crucial for tasks such as exploration.
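
To make the idea concrete, here is a minimal sketch (not the authors' code) of how a camera pose can be read off the arm's joint angles through forward kinematics; the two-link planar arm, link lengths, and camera offset below are hypothetical values chosen only for illustration.

```python
# Minimal sketch: if the camera is rigidly mounted on the arm, its pose
# follows from the measured joint angles via forward kinematics.
# The 2-link planar arm and its dimensions are hypothetical.
import numpy as np

def rot_z(theta):
    """Homogeneous 2D rotation about a joint axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def trans_x(d):
    """Homogeneous 2D translation along a link of length d."""
    return np.array([[1.0, 0.0, d],
                     [0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])

def camera_pose(joint_angles, link_lengths, cam_offset=0.05):
    """Chain joint rotations and link translations, then the fixed
    camera-mount offset, to get the camera pose in the arm's base frame."""
    T = np.eye(3)
    for theta, length in zip(joint_angles, link_lengths):
        T = T @ rot_z(theta) @ trans_x(length)
    return T @ trans_x(cam_offset)  # camera sits just past the last link

# Example: read the joint encoders, get the camera pose "for free".
pose = camera_pose(joint_angles=[0.3, -0.6], link_lengths=[0.4, 0.3])
print("camera position:", pose[:2, 2])
```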

The researchers presented their findings at the IEEE International Conference on Robotics and Automation. Siddhartha Srinivasa, associate professor of robotics, and Michael Kaess, assistant research professor of robotics, both took part in the research.

Srinivasa said that placing a camera or other sensor on a robot arm has become feasible because today's sensors are smaller and more power-efficient. He explained that this matters because a robot's "head" often amounts to little more than a camera on a stick, so the robot cannot lean in the way a person can to get a better view of its workspace.

Simply putting an "eye" on the robot arm is not enough, however, if the robot cannot see its own hand: it still needs to know where its hand is relative to the objects in its environment. This is a familiar problem for mobile robots that operate in unknown environments. A common solution is simultaneous localization and mapping, abbreviated SLAM, in which the robot combines data from sensors such as cameras, lidar, and wheel odometry to build a 3D map of the new environment and work out its own position within that 3D world.
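
For readers unfamiliar with SLAM, the toy sketch below illustrates the general idea in 2D: the robot dead-reckons its pose from wheel odometry and registers each scan into a shared world-frame map. The class, the variable names, and the omission of any scan-matching correction step are simplifications for illustration, not part of the CMU system.

```python
# Toy SLAM sketch: dead-reckon the pose from wheel odometry and drop each
# range scan into a world-frame map. Real SLAM would also correct the pose
# by matching new scans against the map; that step is omitted for brevity.
import numpy as np

class ToySlam:
    def __init__(self):
        self.pose = np.array([0.0, 0.0, 0.0])  # x, y, heading
        self.map_points = []                   # accumulated world-frame points

    def predict(self, forward, turn):
        """Advance the pose estimate using wheel-odometry increments."""
        x, y, th = self.pose
        self.pose = np.array([x + forward * np.cos(th),
                              y + forward * np.sin(th),
                              th + turn])

    def integrate_scan(self, scan_xy):
        """Transform sensor-frame points into the world frame and store them."""
        x, y, th = self.pose
        R = np.array([[np.cos(th), -np.sin(th)],
                      [np.sin(th),  np.cos(th)]])
        for p in scan_xy:
            self.map_points.append(R @ p + np.array([x, y]))

slam = ToySlam()
slam.predict(forward=1.0, turn=0.1)
slam.integrate_scan([np.array([2.0, 0.0]), np.array([2.0, 0.5])])
print("pose:", slam.pose, "map size:", len(slam.map_points))
```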

"There are currently several algorithms that can aggregate these resources and build 3D space, but they have very strict requirements on the accuracy and computation of the sensors," Srinivasa said.

These algorithms typically assume that little is known about the sensor's pose, as would be the case if the camera were handheld, Klingensmith said. But when the camera is mounted on a robot arm, the geometry of the arm constrains how the camera can move.

Klingensmith explained: "Automatically tracking the changes in joint angles makes it possible to build a high-quality map of the surroundings, even when the camera is moving very quickly or some of the sensor data is missing or inaccurate."
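
As a rough illustration of how such a constraint might be used (a hedged sketch, not the CMU algorithm itself), the snippet below treats the encoder-derived camera pose as a strong prior, lets a noisy visual estimate nudge it, and falls back to the kinematic prediction when visual data is missing; the weighting value is invented for the example.

```python
# Sketch of the constraint described in the quote: the pose predicted from
# the joint encoders acts as a strong prior, a noisy visual estimate only
# nudges it, and the kinematic prediction stands alone when vision drops out.
import numpy as np

def fuse_pose(kinematic_xy, visual_xy=None, visual_weight=0.2):
    """Weighted blend of the encoder-derived pose and the visual estimate."""
    if visual_xy is None:            # sensor data missing: trust the encoders
        return kinematic_xy
    return (1.0 - visual_weight) * kinematic_xy + visual_weight * visual_xy

print(fuse_pose(np.array([0.62, 0.31]), np.array([0.60, 0.35])))
print(fuse_pose(np.array([0.62, 0.31])))  # camera moving too fast, no visual fix
```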

The researchers demonstrated the approach on a multi-joint robot arm: a depth camera mounted on a lightweight manipulator performed localization in real time, and when it was used to build a 3D model of a bookshelf, the reconstruction was comparable to or better than those produced by other mapping techniques.

"There are still many tasks to be done to perfect this approach, but we are convinced that it has great potential for advanced robotic operation," said Srinivasa. Toyota, the U.S. Naval Research Office, and the U.S. National Superstition Foundation also expressed support for this study.
