The dismantling of nuclear installations poses major challenges for operators. Whether through decommissioning or safe containment, the amount of nuclear waste awaiting disposal is growing at an overwhelming rate around the world. Automation is increasingly needed to manage nuclear waste, but the nuclear industry is reluctant to adopt fully autonomous robotic control methods for safety reasons, and remote-controlled industrial robots are preferred in hazardous environments. However, tasks as complex as remotely gripping or cutting up unfamiliar objects using joysticks and CCTV cameras are difficult to perform and sometimes even impossible.

To simplify this process, the National Centre for Nuclear Robotics, led by the Extreme Robotics Lab at the University of Birmingham in the UK, is investigating options for automated handling of nuclear waste. The robotic assistance system developed there enables “shared” control of complex manipulation tasks by means of haptic feedback and vision information provided by an Ensenso 3D camera. The operator, who always remains in the loop, retains control over the robot’s automated actions and can take over in the event of a system failure.

Application
Anyone who has tried a fairground grabbing machine can confirm that manual control of the gripper arms is anything but trivial. As harmless as it is to fail to catch a stuffed rabbit, unsuccessful attempts can be dramatic when handling radioactive waste. To avoid damage with serious consequences for humans and the environment, the robot must be able to detect the radioactive objects in the scene and act with extreme precision. It is literally in the hands of the operator to identify the correct gripping positions. At the same time, the operator must correctly evaluate the inverse kinematics (the backward transformation) and determine the joint angles of the robot arm so as to position it correctly and avoid collisions. The assistance system developed by the British researchers simplifies and speeds up this task enormously, using a standard industrial robot equipped with a parallel-jaw gripper and an Ensenso N35 3D camera.

The system autonomously scans unknown waste and creates a 3D model of it in the form of a point cloud. This model is extremely precise because Ensenso 3D cameras work on the principle of stereo vision, which is modelled on human vision. Two cameras view the object from different positions. Although the content of the two camera images looks identical, they show differences in the position of the objects viewed. Since the distance and viewing angle of the cameras as well as the focal length of the optics are known, the Ensenso software can determine the 3D coordinates of the object point for each individual image pixel. In this case, the scene is captured from several camera scan positions and combined to obtain a full 3D surface from all viewing angles. Ensenso’s calibration routines help transform the individual point clouds into a global coordinate system, which completes the full virtual image. The resulting point cloud thus contains all the spatial object information necessary to communicate the correct gripping or cutting position to the robot.
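As a rough illustration of the geometry involved, the following sketch computes depth from disparity for a rectified stereo pair and then transforms the camera-frame point into a global coordinate system using a known camera pose. It is a simplified illustration of the stereo principle, not the Ensenso software itself; the focal length, baseline and pose values are placeholder assumptions.

```cpp
// Illustrative sketch: depth from disparity for a rectified stereo pair,
// followed by transforming camera-frame points into a global coordinate
// system using a known camera pose. Values are placeholders, not Ensenso
// calibration data.
#include <array>
#include <cstdio>

struct Point3 { double x, y, z; };

// Depth from disparity:
//   Z = f * b / d,  X = (u - cx) * Z / f,  Y = (v - cy) * Z / f
Point3 triangulate(double u, double v, double disparity,
                   double f, double baseline, double cx, double cy) {
    double z = f * baseline / disparity;
    return { (u - cx) * z / f, (v - cy) * z / f, z };
}

// Apply a rigid transform (rotation R, translation t) that maps points from
// the camera frame of one scan position into the global frame.
Point3 toGlobal(const Point3& p,
                const std::array<std::array<double, 3>, 3>& R,
                const std::array<double, 3>& t) {
    return { R[0][0]*p.x + R[0][1]*p.y + R[0][2]*p.z + t[0],
             R[1][0]*p.x + R[1][1]*p.y + R[1][2]*p.z + t[1],
             R[2][0]*p.x + R[2][1]*p.y + R[2][2]*p.z + t[2] };
}

int main() {
    // Placeholder intrinsics: focal length (pixels) and baseline (metres).
    const double f = 1400.0, baseline = 0.1, cx = 640.0, cy = 512.0;
    // One pixel observed with a disparity of 35 pixels.
    Point3 p = triangulate(700.0, 500.0, 35.0, f, baseline, cx, cy);
    // Identity rotation and a 0.5 m offset stand in for a calibrated camera pose.
    std::array<std::array<double, 3>, 3> R{{{1,0,0},{0,1,0},{0,0,1}}};
    std::array<double, 3> t{0.5, 0.0, 0.0};
    Point3 g = toGlobal(p, R, t);
    std::printf("camera frame: %.3f %.3f %.3f  global: %.3f %.3f %.3f\n",
                p.x, p.y, p.z, g.x, g.y, g.z);
    return 0;
}
```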

With its software, the Ensenso 3D camera supports the operator in perceiving and evaluating depth information, considerably reducing the operator’s cognitive load. The assistance system combines the haptic characteristics of the object to be grasped with a special gripping algorithm.

“The scene cloud is used by our system to automatically generate several stable grasp positions. Since the point clouds captured by the 3D camera are high-resolution and dense, it is possible to generate very precise grasp positions for each object in the scene. From these, our ‘hypothesis ranking algorithm’ determines the next object to pick up, based on the current position of the robot,” explains Dr Naresh Marturi, senior researcher at the National Centre for Nuclear Robotics.

(The principle is similar to the pick-up-sticks game Mikado, where sticks must be removed one at a time without moving any of the others.)
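A minimal sketch of such a ranking step is shown below. The single cost function, which trades off travel distance from the current gripper pose against a stability score, and all names in it are illustrative assumptions, not the lab’s actual algorithm.

```cpp
// Minimal sketch of ranking grasp hypotheses by how easily the robot can
// reach them from its current pose. The distance/stability cost is an
// illustrative stand-in for the actual ranking criteria.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Pose { double x, y, z; };            // position only, for brevity

struct GraspHypothesis {
    int objectId;                           // which object in the scene cloud
    Pose grasp;                             // candidate gripper position
    double stability;                       // 0..1, higher is more stable
};

double distance(const Pose& a, const Pose& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

// Pick the hypothesis that best trades off stability against travel distance
// from the robot's current gripper pose (the 0.5 weight is a placeholder).
GraspHypothesis rankAndSelect(const std::vector<GraspHypothesis>& hypotheses,
                              const Pose& currentGripperPose) {
    return *std::min_element(
        hypotheses.begin(), hypotheses.end(),
        [&](const GraspHypothesis& a, const GraspHypothesis& b) {
            auto cost = [&](const GraspHypothesis& h) {
                return distance(h.grasp, currentGripperPose) - 0.5 * h.stability;
            };
            return cost(a) < cost(b);
        });
}

int main() {
    std::vector<GraspHypothesis> hyps = {
        {1, {0.4, 0.1, 0.3}, 0.9},
        {2, {0.2, 0.0, 0.3}, 0.7},
    };
    GraspHypothesis next = rankAndSelect(hyps, {0.0, 0.0, 0.5});
    std::printf("next object to pick: %d\n", next.objectId);
    return 0;
}
```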

The computed path guidance allows the robot to move smoothly and evenly along a desired path to the target grip position. Like a navigation system, it helps the operator guide the robot arm safely to the grip, if necessary also past other unknown and dangerous objects. The system calculates a safe corridor for this and uses haptic feedback to help the operator stay within it.
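The idea of keeping the operator inside a safe corridor can be sketched as a simple virtual fixture: find the closest point on the planned path and apply a spring-like force that pulls the hand back toward it. The stiffness gain and the piecewise-linear path below are assumptions for illustration, not the system’s actual controller.

```cpp
// Sketch of the haptic "stay in the corridor" idea: find the closest point
// on the planned path and pull the operator's hand toward it with a
// spring-like force. Gains and the example path are illustrative.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { double x, y, z; };

Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 scale(const Vec3& a, double s)    { return {a.x * s, a.y * s, a.z * s}; }
double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Closest point to p on the segment ab.
Vec3 closestOnSegment(const Vec3& p, const Vec3& a, const Vec3& b) {
    Vec3 ab = sub(b, a);
    double t = dot(sub(p, a), ab) / dot(ab, ab);
    t = std::fmax(0.0, std::fmin(1.0, t));
    return {a.x + ab.x * t, a.y + ab.y * t, a.z + ab.z * t};
}

// Force feedback pulling the hand back toward the planned (piecewise-linear)
// path. The stiffness is a placeholder gain for the haptic device.
Vec3 guidanceForce(const Vec3& hand, const std::vector<Vec3>& path,
                   double stiffness = 50.0) {
    Vec3 best = path.front();
    double bestDist = 1e30;
    for (size_t i = 0; i + 1 < path.size(); ++i) {
        Vec3 c = closestOnSegment(hand, path[i], path[i + 1]);
        Vec3 d = sub(c, hand);
        double dist = dot(d, d);
        if (dist < bestDist) { bestDist = dist; best = c; }
    }
    return scale(sub(best, hand), stiffness);   // F = k * (target - hand)
}

int main() {
    std::vector<Vec3> path = {{0, 0, 0}, {0.5, 0, 0.2}, {0.8, 0.1, 0.4}};
    Vec3 f = guidanceForce({0.3, 0.1, 0.0}, path);
    std::printf("corrective force: %.2f %.2f %.2f\n", f.x, f.y, f.z);
    return 0;
}
```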

The system maps the natural movements of the operator’s hand to the corresponding movements of the robot accurately and reliably in real time. The operator thus always retains manual control and can take over in the event of a component failure, simply switching off the AI and reverting to human intelligence by disabling “force feedback mode”. In accordance with the principle of shared control between human and machine, the system therefore always remains under control, which is essential in a highly hazardous environment.
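A minimal sketch of this shared-control loop, assuming a simple per-cycle mapping of hand displacement to robot displacement and a toggle for the assistance forces, might look like the following; the structure and names are hypothetical.

```cpp
// Sketch of the shared-control loop: operator hand motion is mapped to robot
// motion each cycle, and assistance forces can be switched off so the human
// keeps full manual control. Scaling factor and structure are assumptions.
#include <cstdio>

struct Vec3 { double x, y, z; };

struct SharedController {
    bool forceFeedbackEnabled = true;   // the "force feedback mode" toggle
    double motionScale = 1.0;           // hand-to-robot motion scaling

    // Map the operator's hand displacement in this cycle onto the robot.
    Vec3 robotIncrement(const Vec3& handDelta) const {
        return { handDelta.x * motionScale,
                 handDelta.y * motionScale,
                 handDelta.z * motionScale };
    }

    // Assistance force sent to the haptic device; zero when the operator has
    // switched the AI guidance off.
    Vec3 feedback(const Vec3& guidanceForce) const {
        if (!forceFeedbackEnabled) return {0.0, 0.0, 0.0};
        return guidanceForce;
    }
};

int main() {
    SharedController ctrl;
    ctrl.forceFeedbackEnabled = false;                  // operator takes over
    Vec3 step = ctrl.robotIncrement({0.01, 0.0, -0.005});
    Vec3 force = ctrl.feedback({2.0, 0.0, 0.0});        // suppressed to zero
    std::printf("robot step: %.3f %.3f %.3f, force: %.1f %.1f %.1f\n",
                step.x, step.y, step.z, force.x, force.y, force.z);
    return 0;
}
```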

Camera
“For all our tasks of autonomous grasp planning, remote control and visual object tracking, we use Ensenso N35 3D cameras with blue LEDs (465 nm), mounted on the end effector of the robots alongside other tools,” says Dr Naresh Marturi. Most of the Extreme Robotics Lab systems have so far been equipped with a single 3D camera. “However, recently, to speed up the process of creating 3D models, we have upgraded our systems to use three additional scene-mounted Ensenso 3D cameras as well as the one on board the robot.”

The Ensenso N series is predestined for this task. It has been specially designed for use in harsh environmental conditions. Thanks to its compact design, the N series is also suitable for space-saving stationary use or for mobile use on a robot arm for 3D detection of moving and static objects. Even in difficult lighting conditions, the built-in projector casts a high-contrast texture onto the object to be imaged by means of a pattern mask with a random dot pattern, thus supplementing structures that are weak or absent on the object’s surface. The aluminum housing of the N30 models ensures optimum heat dissipation from the electronic components and therefore a stable light output even under extreme ambient conditions. This ensures consistent quality and robustness of the 3D data.

The cameras of the Ensenso N family are easy to configure and operate via the Ensenso SDK. The SDK offers GPU-based image processing for even faster 3D data processing and allows the output of a single 3D point cloud from all cameras used in multi-camera operation, which is required in this case, as well as live composition of 3D point clouds from multiple viewing directions. For the assistance system, the researchers developed their own software in C++ to process the 3D point clouds captured by the cameras.
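A capture cycle with the SDK typically follows the pattern sketched below: execute commands, then read the resulting point map from the parameter tree. This is a hedged sketch, not the researchers’ code; the exact item and command names, the tree layout and the function signatures vary between SDK versions and should be checked against the Ensenso SDK manual.

```cpp
// Hedged sketch of a capture cycle with the Ensenso SDK (NxLib). The general
// pattern (execute commands, read the point map from the tree) is typical,
// but names, tree layout and signatures should be verified against the
// Ensenso SDK manual for the installed version.
#include "nxLib.h"
#include <iostream>
#include <vector>

int main() {
    try {
        nxLibInitialize();                                 // start the NxLib

        NxLibCommand open(cmdOpen);                        // open connected cameras
        open.execute();

        NxLibCommand(cmdCapture).execute();                // grab stereo images
        NxLibCommand(cmdComputeDisparityMap).execute();    // stereo matching
        NxLibCommand(cmdComputePointMap).execute();        // disparity -> 3D points

        // Read the point map (one XYZ triple per pixel) of the first camera.
        // The path below Cameras depends on the SDK version's tree layout.
        NxLibItem root;
        NxLibItem camera = root[itmCameras][0];
        std::vector<float> points;
        camera[itmImages][itmPointMap].getBinaryData(points, 0);
        std::cout << "Received " << points.size() / 3 << " 3D points\n";

        nxLibFinalize();
    } catch (NxLibException& e) {
        std::cerr << "NxLib error: " << e.getErrorText() << "\n";
    }
    return 0;
}
```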

“Our software uses the (multithreaded) Ensenso SDK and its calibration routines to overlay texture on high-resolution point clouds, then transform those textured point clouds into a global coordinate system,” says Dr Naresh Marturi. “The Ensenso SDK is quite easy to integrate with our C++ software. It offers a variety of functions and straightforward methods to capture and manage point clouds as well as camera images. Additionally, with CUDA support, the SDK routines allow us to register multiple high-resolution point clouds to generate high-quality scene clouds in an overall frame. This is very important for us, in particular to generate precise grasp hypotheses.”

Main advantages of the system

  • Operators do not have to worry about the depth of the scene, how to reach the object or where to pick it up. The system works all of this out in the background and guides the operator to exactly where the robot can best grip the object.
  • Thanks to haptic feedback, operators can feel the robot in their hand even when it is not in front of them.
  • By combining haptics and grasp planning, operators can move objects in a remote scene very easily and very quickly, with very low cognitive load. This saves time and money, prevents errors and increases safety.

Outlook
Researchers at the Extreme Robotics Lab in Birmingham are currently developing an extension of the method that allows a multi-fingered hand to be used instead of a parallel-jaw gripper. This should increase flexibility and reliability when gripping complex objects. In the future, the operator will also be able to feel the forces to which the fingers of the remotely controlled robot are exposed when gripping an object. Fully autonomous grasping methods are also under development, in which the robot arm is controlled by AI and guided by an automatic vision system. The team is also working on visualization tools to improve human-robot collaboration when controlling remote robots via a “shared control” system.

It is a promising approach for the safety and health of us all: the handling of hazardous objects such as nuclear waste is ultimately a matter that concerns everyone. By reliably capturing the relevant object information, Ensenso 3D cameras make an important contribution to this globally widespread and increasingly urgent task.

Client / University
The Extreme Robotics Lab at the University of Birmingham, UK, is a market leader in many of the components required for the growing effort to robotize nuclear operations. https://www.birmingham.ac.uk/research/activity/metallurgy-materials/robotics/our-technologies.aspx

