Authors:
Minh Trinh, David Kötter, Ariane Chu, Mohamed Behery, Gerhard Lakemeyer, Oliver Petrovic, Christian Brecher
Abstract
Human-robot collaboration combines the strengths of humans, such as
flexibility and dexterity, with the precision and efficiency of the cobot.
However, small and medium-sized businesses (SMBs) often lack the expertise
to plan and execute, e.g., collaborative assembly processes, which still
depend heavily on manual work. This paper introduces a framework using
behavior trees (BTs) and computer vision (CV) to simplify this process
while complying with safety standards. In this way, SMBs are able to
benefit from automation and become more resilient to global competition.

BTs organize the behavior of a system in a tree structure [1], [2]. They
are modular, since nodes can easily be added or removed. Condition nodes
check whether a certain condition holds before an action node is executed,
which makes the trees reactive. Finally, BTs are intuitive and
human-understandable and can therefore be used by non-experts [3].
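As an illustration of this condition-action pattern (not taken from the paper), the following sketch uses the PyTrees library, which the demonstrator described below also employs; the node names and the stubbed part check are assumptions:

```python
# Minimal behavior tree sketch: a condition node guarding an action node.
# Node names and the stubbed availability check are illustrative assumptions.
import py_trees


class PartAvailable(py_trees.behaviour.Behaviour):
    """Condition node: succeeds only if the required part is present."""

    def __init__(self, name="PartAvailable"):
        super().__init__(name)
        self.part_present = True  # stub; a real system would query CV/sensors

    def update(self):
        return (py_trees.common.Status.SUCCESS if self.part_present
                else py_trees.common.Status.FAILURE)


class PickPart(py_trees.behaviour.Behaviour):
    """Action node: placeholder for a cobot pick motion."""

    def __init__(self, name="PickPart"):
        super().__init__(name)

    def update(self):
        self.logger.info("picking part ...")
        return py_trees.common.Status.SUCCESS


# A sequence ticks its children left to right, so the action only runs
# while the condition holds; re-ticking every cycle yields the reactivity
# described above.
root = py_trees.composites.Sequence(name="PickIfAvailable", memory=False,
                                    children=[PartAvailable(), PickPart()])
root.tick_once()
print(py_trees.display.unicode_tree(root, show_status=True))
```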
In preliminary works, BTs have been implemented for the planning and
execution of a collaborative assembly process [4]. Furthermore, an
extension for efficient task sharing and communication between humans and
cobots was developed in [5] using Human Action Nodes (H-nodes). The H-node
is crucial for BTs to handle collaborative tasks and to reduce idle times.
It requires CV so that the cobot can recognize whether the human has
finished their sub-task and then continue with the next one. To do so, the
algorithm must be able to detect different assembly states and map them to
the corresponding tree nodes. A further use of CV is the detection of
assembly parts such as screws, which enables the cobot to autonomously
recognize and handle specific components.
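The paper leaves the detection algorithm itself open; as a hedged sketch, the check behind an H-node could compare the current camera frame against one reference image per known assembly state. OpenCV template matching stands in for the actual method here, and the state labels, file paths, and threshold are assumptions:

```python
# Hypothetical assembly-state check for an H-node: the live frame is
# compared against one reference image per known assembly state.
import cv2

STATE_TEMPLATES = {  # state label -> reference image (assumed paths)
    "base_mounted": "states/base_mounted.png",
    "screws_inserted": "states/screws_inserted.png",
}

def detect_assembly_state(frame_gray, threshold=0.8):
    """Return the best-matching assembly state label, or None if no match."""
    best_state, best_score = None, threshold
    for state, path in STATE_TEMPLATES.items():
        template = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        scores = cv2.matchTemplate(frame_gray, template, cv2.TM_CCOEFF_NORMED)
        _, score, _, _ = cv2.minMaxLoc(scores)
        if score > best_score:
            best_state, best_score = state, score
    return best_state
```

An H-node could then return SUCCESS once the state associated with the human's sub-task is observed, letting the cobot proceed with the next node.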
Collaboration is the highest level of interaction between humans and
cobots [4] due to the shared workspace and task. It therefore requires
strict safety standards, which are specified in DIN EN ISO 10218 and DIN
ISO/TS 15066 [6], [7] and, e.g., regulate speed limits for cobots. The
internal safety functions of cobots have been successfully extended with
sensors, cameras, and CV algorithms [8]–[10] to avoid collisions with the
human. The latter approach uses the object detection library OpenCV [11],
for instance. OpenCV offers a hand detection algorithm, which is
pretrained with more than 30,000 images of hands. In addition, it allows
for a high frame rate, which is essential for real-time safety.
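As a sketch of such a detector (the hand cascade file is an assumption, since pretrained hand cascades are distributed separately from OpenCV's stock models; the camera index and parameters are likewise placeholders):

```python
# Cascade-based hand detection sketch with a frame-rate measurement,
# since a high frame rate is what makes the check usable for safety.
import time
import cv2

hand_cascade = cv2.CascadeClassifier("hand_cascade.xml")  # assumed file
cap = cv2.VideoCapture(0)  # e.g. the color stream of a depth camera

for _ in range(300):  # a few seconds of frames, for illustration
    ok, frame = cap.read()
    if not ok:
        break
    start = time.perf_counter()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hands = hand_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                          minNeighbors=5)
    fps = 1.0 / (time.perf_counter() - start)
    if len(hands) > 0:
        print(f"hand detected ({fps:.0f} FPS) -> trigger safety reaction")
cap.release()
```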
In this paper, CV is used to enhance the CoboTrees (cobots and BTs)
demonstrator within the Cluster of Excellence 'Internet of Production'
[12]. The demonstrator consists of a six degree-of-freedom Doosan M1013
cobot, which is controlled by the Robot Operating System (ROS), and two
Intel RealSense D435 depth cameras. The BTs are modeled using the PyTrees
library [13]. Using OpenCV, an object and assembly state detection
algorithm is implemented, e.g., for use in the H-nodes. Since the majority
of accidents between robots and humans occur due to clamping or crushing
of the human hand [14], a hand detector is implemented and evaluated
regarding its compliance with existing safety standards. The resulting
integration of the safety subtree in ROS is shown in Fig. 1.
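Fig. 1 itself is not reproduced here. As a hedged sketch of how such a safety subtree could be structured in PyTrees, a selector lets the assembly task run only while no hand is detected and otherwise holds a protective stop; the node names, the stop behavior, and the flag standing in for a ROS detection topic are assumptions, not the authors' design:

```python
# Sketch of a safety subtree: if a hand is detected, hold a protective
# stop; otherwise fall through to normal operation.
import py_trees


class HandInWorkspace(py_trees.behaviour.Behaviour):
    """Condition: SUCCESS while a hand is detected in the shared workspace."""

    def __init__(self, name="HandInWorkspace"):
        super().__init__(name)
        self.hand_detected = False  # in ROS, set by a detection topic callback

    def update(self):
        return (py_trees.common.Status.SUCCESS if self.hand_detected
                else py_trees.common.Status.FAILURE)


class ProtectiveStop(py_trees.behaviour.Behaviour):
    """Action: keep the cobot stopped for as long as this node is ticked."""

    def __init__(self, name="ProtectiveStop"):
        super().__init__(name)

    def update(self):
        self.logger.warning("hand detected -> stopping cobot motion")
        return py_trees.common.Status.RUNNING


# The selector ticks the safety branch first; only if it fails (no hand
# detected) does normal operation run.
safety = py_trees.composites.Sequence(name="Safety", memory=False,
                                      children=[HandInWorkspace(),
                                                ProtectiveStop()])
root = py_trees.composites.Selector(name="SafeOperation", memory=False,
                                    children=[safety,
                                              py_trees.behaviours.Success(
                                                  name="RunAssemblyStep")])
py_trees.trees.BehaviourTree(root).tick_tock(period_ms=100,
                                             number_of_iterations=10)
```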
Cited by: 1 article.