Background: Many vendors are proposing AMR-based picking and manipulation systems for warehouses and customization centers. Today most of these systems rely either on separating humans from the robot operating space or on power- and force-limited (PFL) collaborative robots. While potentially viable, these systems are often prohibitively costly and/or offer relatively low pick rates.
We have evaluated laser scanners and other technologies that can be used with positioning systems to monitor a safe envelope around mobile robotic systems. These strategies break down when reaching into material pick locations that may hold varying amounts of products, materials, pallets, packing materials, etc. The sensing system sees the occupied space, and because it cannot understand whether the objects in its field are materials or humans, it is obliged to slow or stop the robot system.
A highly reliable object detection and recognition system (e.g., YOLO on steroids) capable of reliably identifying and confirming the presence of cartons, pallets, racks, and other anticipated objects could use reverse logic to verify that the items occupying the space are not human. Ruling out humans in the occupied space would allow the manipulator to operate at speeds sufficient to justify the cost of automation.
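The reverse-logic gate described above can be sketched as follows. This is a minimal illustration, not a proposed implementation: the class names, confidence threshold, and speed values are all hypothetical, and a real system would need safety-rated sensing and certification behind any such decision.

```python
# Sketch of the "reverse logic" safety gate: allow full speed only when every
# occupied region is positively confirmed as a known, inert object class.
# All names and values below are illustrative assumptions.

KNOWN_INERT_CLASSES = {"carton", "pallet", "rack", "tote"}  # anticipated objects
CONFIRM_THRESHOLD = 0.99   # per-region confidence needed to rule out a human
FULL_SPEED = 1.0           # m/s, speed sufficient to justify automation cost
SAFE_SPEED = 0.25          # m/s, PFL-style reduced speed

def permitted_speed(detections):
    """Return the allowed manipulator speed.

    detections: list of (class_name, confidence) tuples, one per occupied
    region in the monitored envelope. Any region that cannot be confidently
    confirmed as a known inert object is treated as potentially human.
    """
    for cls, conf in detections:
        if cls not in KNOWN_INERT_CLASSES or conf < CONFIRM_THRESHOLD:
            return SAFE_SPEED  # cannot rule out a human: slow down
    return FULL_SPEED  # every occupied region is a confirmed inert object
```

The key property is that the logic fails safe: an unrecognized or low-confidence detection defaults to the reduced speed rather than requiring positive identification of a human.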
We are looking for:
We are NOT interested in:
Note: Veo Robotics, Waltham, MA is working on a stationary system to identify humans entering a stationary robot working envelope. However, this system requires multiple fixed-location cameras.
Background: The new generation of automatic guided vehicles (AGVs) and autonomous mobile robots (AMRs) increasingly uses SLAM-based (LIDAR) systems for vehicle localization in the environment. These non-deterministic vision technologies are sufficient for navigating and moving through various environments. However, SLAM-based systems are not reliable enough to confirm the position of mobile platforms to the level required by safety standards when carrying and operating mobile robot arms on a manufacturing/manipulation line.
Today we need to add secondary sensors when mating AMRs to equipment for docking, load transfer, and equipment interactions. A highly deterministic, safety-rated positioning system would enable vehicles to make safety-rated logical decisions when navigating and interacting. This capability would reduce the cost and complexity of secondary safety sensing and ultimately help streamline the deployment of advanced AMRs.
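The kind of safety-rated logical decision described above can be sketched as a docking check: the vehicle commits to load transfer only when a deterministic position source confirms its pose within tolerance of the dock target. The function name, pose representation, and tolerances below are illustrative assumptions only.

```python
import math

# Sketch of a safety-rated docking decision: permit docking only when the
# safety-rated pose estimate is within tolerance of the dock target.
# Tolerances are hypothetical, not drawn from any standard.

POSITION_TOL = 0.005   # m, allowed planar offset from the dock target
HEADING_TOL = 0.01     # rad, allowed heading error

def docking_permitted(safe_pose, dock_pose):
    """safe_pose / dock_pose: (x, y, heading) tuples in the facility frame.

    safe_pose is assumed to come from a deterministic, safety-rated
    positioning source rather than from the SLAM localization estimate.
    """
    dx = safe_pose[0] - dock_pose[0]
    dy = safe_pose[1] - dock_pose[1]
    dh = abs(safe_pose[2] - dock_pose[2])
    return math.hypot(dx, dy) <= POSITION_TOL and dh <= HEADING_TOL
```

With a safety-rated pose feeding such a check, the secondary docking sensors mentioned above (and their integration cost) could in principle be eliminated.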
Background: Overall operating efficiency of mobile platforms may be increased if vehicle safety systems can differentiate between humans and inanimate objects. Increased efficiency directly improves system cost and affordability. Additionally, increasing the capability of individual vehicles reduces fleet size, which minimizes traffic jams and further amplifies efficiency gains.
Current AGV and mobile device standards go to great lengths to describe vehicle sensing, stopping, and clearance requirements. In many cases these requirements have evolved with a focus on protecting humans in the operating environment. In fact, the test artifacts outlined in ANSI B56.5 were developed to approximate human legs and a prone torso.
Having more information regarding humans in the environment allows systems to make more sophisticated decisions in their path planning and stopping algorithms. If there are no humans in the scene, the system may choose to be less conservative. This could enable several features, including shorter turning radii, bi-directional traffic, and fewer superfluous slowdowns.
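One simple way to picture this is a switch between two parameter sets: conservative limits when humans may be present, relaxed limits when they are confirmed absent. The parameter values below are invented for illustration; real limits would come from the applicable standards and a stopping-distance analysis.

```python
from dataclasses import dataclass

@dataclass
class MotionLimits:
    max_speed: float        # m/s
    min_turn_radius: float  # m
    bidirectional: bool     # allow bi-directional traffic in this aisle

# Hypothetical parameter sets; real values would be derived from the
# applicable safety standards (e.g., ANSI B56.5 stopping requirements).
CONSERVATIVE = MotionLimits(max_speed=0.8, min_turn_radius=1.5, bidirectional=False)
RELAXED = MotionLimits(max_speed=1.6, min_turn_radius=0.6, bidirectional=True)

def select_limits(humans_in_scene: bool) -> MotionLimits:
    """Choose motion limits based on whether any human is detected."""
    return CONSERVATIVE if humans_in_scene else RELAXED
```

As with the detection gate, the safe default applies whenever human presence cannot be ruled out.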
Veo Robotics, Waltham, MA is working on a stationary system to identify humans entering a robot working envelope. Their system monitors humans and their trajectories relative to robot arm motion and makes slow-down and stopping decisions based on the anticipated interaction between the human and the robot. The Veo system is expected to substantially reduce cell footprint when working in a guard-free environment. Their current system requires multiple fixed-location cameras to monitor the 3D robot workspace. It is foreseeable that similar human detection algorithms could be adapted to and mounted on mobile platforms.
Disclaimer: MIT Startup Exchange can make introductions that ideally provide open-ended discussions in order to share mutual interests and potentially create common ground that incites the parties to collaborate. MIT Startup Exchange introductions may eventually lead to mutual partnerships, but that is not in any way guaranteed by MIT, MIT Corporate Relations, MIT Industrial Liaison Program (ILP), or MIT Startup Exchange, which takes no responsibility for these outcomes and no formal part in such discussions following our introduction. MIT Startup Exchange and its activities and events are not for purposes of soliciting investment or offering securities.