
ICRA 2021

  1. Dynamic Projection of Human Motion for Safe and Efficient Human-Robot Collaboration

    • In the modern manufacturing process, novel technologies enable collaboration between humans and robots, which increases productivity while keeping flexibility. However, these technologies also lead to new challenges, e.g., maximizing Human-Robot Collaboration (HRC) performance while ensuring safety for the human in fenceless robot applications. In this paper, an approach for dynamic human motion projection is proposed for typical assembly tasks. The human upper body is simplified as a five-degree-of-freedom (5-DOF) rigid-body model. A control-oriented projection model is proposed, and its parameters are estimated from test data of human capability. Combined with a human-state estimator and a collision estimator, the "worst-case" collision motion is projected in the HRC scenario. The dynamic projection method runs online. Finally, the estimated collision time is used to raise the robot's speed limit, demonstrating improved HRC efficiency.
  2. A Human-Centered Dynamic Scheduling Architecture for Collaborative Application

    • In collaborative robotic applications, the human and the robot must work together over a whole shift to execute a sequence of jobs. The performance of the human-robot team can be enhanced by scheduling the right tasks to the human and the robot. The scheduling should consider the task execution constraints, the variability in the human's task execution, and the human's job quality. Therefore, it is necessary to dynamically schedule the assigned tasks. In this paper, we propose a two-layered architecture for task allocation and scheduling in a collaborative cell. Job quality is explicitly considered during the allocation of tasks and over a sequence of jobs. The tasks are dynamically scheduled based on real-time monitoring of the human's activities. The effectiveness of the proposed architecture is experimentally validated.
  3. Task Planning with a Weighted Functional Object-Oriented Network

    • In reality, there is still much to be done for robots to be able to perform manipulation actions with full autonomy. Complicated manipulation tasks, such as cooking, may still require a person to perform some actions that are very risky for a robot to perform. On the other hand, some other actions may be very risky for a human with physical disabilities to perform. Therefore, it is necessary to balance the workload of a robot and a human based on their limitations while minimizing the effort needed from the human in a collaborative robot (cobot) set-up. This paper proposes a new version of our functional object-oriented network (FOON) that integrates weights in its functional units to reflect a robot's chance of successfully executing an action of that functional unit. The paper also presents a task planning algorithm for the weighted FOON to allocate manipulation action load between the robot and the human to achieve optimal performance while minimizing human effort. Through a number of experiments, this paper shows several successful cases in which using the proposed weighted FOON and the task planning algorithm allows a robot and a human to complete complicated tasks together with higher success rates than a robot doing them alone.
  4. Effect of Robot Assistance, Operator Cognitive Fatigue, and Sex on Task Efficiency, Workload, and Situation Awareness in Human-Robot Collaboration

    • Advancements in robot technology are allowing for increasing integration of humans and robots in shared space manufacturing processes. While individual task performance of the robotic assistance and human operator can be separately optimized, the interaction between humans and robots can lead to emergent effects on collaborative performance. As such, this paper examines the interplay of operator sex, their fatigue states, and varying levels of automation on collaborative task performance, operator situation awareness, perceived workload, and physiological responses (heart rate variability; HRV). Sixteen participants, balanced by sex, performed metal polishing tasks directly with a UR10 robot under different fatigued states and with varying levels of robotic assistance. Perceived fatigue, situation awareness, and workload were measured periodically, in addition to continuous physiological monitoring, and three task performance metrics: task efficiency, accuracy, and precision were obtained. Higher robotic assistance demonstrated direct task performance benefits. However, unlike females, males did not perceive these improved performance benefits. A relationship between situation awareness and automation was observed in both the HRV signals and subjective measures, where increased robot assistance reduced the attentional supply and task engagement of participants. The consideration of the interplay between human and robot factors can lead to improved human-robot system designs.
  5. A Scalable Approach to Predict Multi-Agent Motion for Human-Robot Collaboration

    • Human motion prediction is considered a key component for enabling fluent human-robot collaboration. The ability to anticipate the motion and subsequent intent of the partner(s) remains a challenging task due to the complex and interpersonal nature of human behavior. In this work, we propose a novel sequence learning approach that learns a robust representation over the observed human motion and can condition future predictions over a subset of past sequences. Our approach works for both single and multi-agent settings and relies on an interpretable latent space that has the implicit benefit of improving human motion understanding. We evaluated the proposed approach by comparing its performance against state-of-the-art motion prediction methods on single-agent, multi-agent, and human-robot collaboration datasets. The results suggest that our approach outperforms other methods over all the evaluated temporal horizons, for both single-agent and multi-agent motion prediction. The improved performance of our approach for both single and multi-agent settings, coupled with an interpretable latent space, can enable close-proximity human-robot collaboration.
  6. Temporal Anticipation and Adaptation Methods for Fluent Human-Robot Teaming

    • As robots work with human teams, they will be expected to fluently coordinate with them. While people are adept at coordination and real-time adaptation, robots still lack this skill. In this paper, we introduce TANDEM: Temporal Anticipation and Adaptation for Machines, a series of neurobiologically-inspired algorithms that enable robots to fluently coordinate with people. TANDEM leverages a human-like understanding of external and internal temporal changes to facilitate coordination. We experimentally validated the approach via a human-robot collaborative drumming task across tempo-changing rhythmic conditions. We found that an adaptation process alone enables a robot to achieve human-level performance. Moreover, by combining anticipatory knowledge with an adaptation process, robots can potentially perform such tasks better than people. We hope this work will enable researchers to create robots more sensitive to changes in team dynamics.
  7. Human-Aware Robot Task Planning Based on a Hierarchical Task Model

    • Human-robot collaboration (HRC) is becoming increasingly important as the paradigm of manufacturing shifts from mass production to mass customization. When robots work with humans on collaborative tasks, they need to plan their actions by taking the humans' actions into account. The tasks considered are assembly tasks performed in industrial environments with various plans. However, due to the complexity of the tasks and the stochastic nature of human collaborators, it is quite challenging for the robot to collaborate efficiently with humans. To address this challenge, in this paper, we first propose an algorithm to automatically construct a hierarchical task model from single-agent demonstrations. The hierarchical task model explicitly captures the sequential and parallel relationships of the task at all levels of abstraction. We then propose an optimization-based planner, which exploits the parallel relationships in the task model and prioritizes actions that are parallel to the humans' actions. In this way, potential spatial interferences can be avoided, task completion time can be reduced, and the human's comfort can be improved. We conducted both simulations and experiments with a robot arm collaborating with a human on several collaborative tasks. Comparisons with several baselines showed that our proposed planner performs better in terms of efficiency, safety, and human comfort.

Assembly

  1. IKEA Furniture Assembly Environment for Long-Horizon Complex Manipulation Tasks

    • The IKEA Furniture Assembly Environment is one of the first benchmarks for testing and accelerating the automation of long-horizon and hierarchical manipulation tasks. The environment is designed to advance reinforcement learning and imitation learning from simple toy tasks to complex tasks requiring both long-term planning and sophisticated low-level control. Our environment features 60 furniture models, 6 robots, photorealistic rendering, and domain randomization. We evaluate reinforcement learning and imitation learning methods on the proposed environment. Our experiments show furniture assembly is a challenging task due to its long horizon and sophisticated manipulation requirements, which provides ample opportunities for future research. The environment is publicly available at https://clvrai.com/furniture.
  2. From Manual Operation to Collaborative Robot Assembly: An Integrated Model of Productivity and Ergonomic Performance

    • This paper presents a unified model to evaluate the productivity and ergonomic performance of both manual operations and collaborative robot assembly systems. Specifically, flow time or throughput is used to represent the productivity measurement and strain index characterizes the ergonomic performance. Models and solutions of both performance measures in manual operation and collaborative assembly processes are introduced. Then a unified model to integrate both performances, throughput rate per unit of work effort time, is proposed. In addition, a cylinder head assembly example is introduced to illustrate the applicability of the model. Such a work presents a quantitative tool to study productivity and ergonomic performance in assembly systems.
  3. Fine-Grained Activity Recognition for Assembly Videos

    • In this paper we address the task of recognizing assembly actions as a structure (e.g. a piece of furniture or a toy block tower) is built up from a set of primitive objects. Recognizing the full range of assembly actions requires perception at a level of spatial detail that has not been attempted in the action recognition literature to date. We extend the fine-grained activity recognition setting to address the task of assembly action recognition in its full generality by unifying assembly actions and kinematic structures within a single framework. We use this framework to develop a general method for recognizing assembly actions from observation sequences, along with observation features that take advantage of a spatial assembly's special structure. Finally, we evaluate our method empirically on two application-driven data sources: (1) An IKEA furniture-assembly dataset, and (2) A block-building dataset. On the first, our system recognizes assembly actions with an average framewise accuracy of 70% and an average normalized edit distance of 10%. On the second, which requires fine-grained geometric reasoning to distinguish between assemblies, our system attains an average normalized edit distance of 23% - a relative improvement of 69% over prior work.
  4. Learning Sequences of Manipulation Primitives for Robotic Assembly

  5. Robotic Imitation of Human Assembly Skills Using Hybrid Trajectory and Force Learning

Other

  • Anytime Game-Theoretic Planning with Active Reasoning about Humans' Latent States for Human-Centered Robots
  • Leveraging Neural Network Gradients within Trajectory Optimization for Proactive Human-Robot Interactions
  • Human-Robot Collaborative Multi-Agent Path Planning Using Monte Carlo Tree Search and Social Reward Sources

Safety

  1. 3D Collision-Force-Map for Safe Human-Robot Collaboration

  2. A Data-Driven Approach for Contact Detection, Classification and Reaction in Physical Human-Robot Collaboration

  3. Virtual Adversarial Humans Finding Hazards in Robot Workplaces

  4. A Safety-Aware Kinodynamic Architecture for Human-Robot Collaboration

    • The new paradigm of human-robot collaboration has led to shared work environments in which humans and robots work in close contact with each other. Consequently, safety regulations have been updated to address these new scenarios. The mere application of these regulations may lead to very inefficient robot behavior. In order to preserve safety for the human operators and allow the robot to reach a desired configuration safely and efficiently, a two-layer architecture for trajectory planning and scaling is proposed. The first layer computes the nominal trajectory and continuously adapts it based on the human's behavior. The second layer, which explicitly considers the safety regulations, scales the robot's velocity and requests a new trajectory if the robot's speed drops. The proposed architecture is experimentally validated on a Pilz PRBT manipulator.
  5. Towards Safe Motion Planning in Human Workspaces: A Robust Multi-Agent Approach

Learning

  1. Learning Human Objectives from Sequences of Physical Corrections

    • When personal, assistive, and interactive robots make mistakes, humans naturally and intuitively correct those mistakes through physical interaction. In simple situations, one correction is sufficient to convey what the human wants. But when humans work with multiple robots, or the robot is performing an intricate task, the human often must make several corrections to fix the robot's behavior. Prior research assumes that these physical corrections are independent events and learns from them one at a time. However, this misses crucial information: the interactions are interconnected and may only make sense when viewed together. Alternatively, other work reasons over the final trajectory produced by all of the human's corrections. But this method must wait until the end of the task to learn from corrections, as opposed to inferring from the corrections in an online fashion. In this paper we formalize an approach for learning from sequences of physical corrections during the current task. To do this we introduce an auxiliary reward that captures the human's trade-off between making corrections that improve the robot's immediate reward and its long-term performance. We evaluate the resulting algorithm in remote and in-person human-robot experiments, and compare against both independent and final baselines. Our results indicate that users are best able to convey their objective when the robot reasons over their sequence of corrections.
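    The core idea in this abstract — treating a sequence of corrections as interconnected rather than as independent events — can be sketched in a few lines. This is a minimal illustration under assumed simplifications, not the paper's algorithm: `update_objective`, the linear feature map, and the recency-decay weighting are all hypothetical names and choices.

    ```python
    import numpy as np

    def update_objective(theta, corrections, features, lr=0.5, decay=0.8):
        """Fold a whole sequence of physical corrections into one update.

        Unlike an 'independent events' baseline that applies each correction
        separately, the sequence is aggregated together, with more recent
        corrections weighted more heavily (decay < 1). All names here are
        illustrative, not taken from the paper.
        """
        grad = np.zeros_like(theta)
        weight = 1.0
        for delta in reversed(corrections):  # walk from most recent backwards
            grad += weight * features(delta)
            weight *= decay
        return theta + lr * grad / len(corrections)

    # Toy usage: 2-D objective weights, identity feature map.
    theta = np.zeros(2)
    corrections = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
    theta = update_objective(theta, corrections, features=lambda d: d)
    ```

    With the identity feature map, the second (more recent) correction contributes with full weight while the first is discounted, so the update leans toward the human's latest input while still using the whole sequence.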
  2. Human-Guided Robot Behavior Learning: A GAN-Assisted Preference-Based Reinforcement Learning Approach

User experience

  • Order Matters: Generating Progressive Explanations for Planning Tasks in Human-Robot Teaming

Object transfer

  • Human-Robot Collaborative Object Transfer Using Human Motion Prediction Based on Cartesian Pose Dynamic Movement Primitives
  • Evaluating Guided Policy Search for Human-Robot Handovers

Cool

  • A Robot Walks into a Bar: Automatic Robot Joke Success Assessment

RSS 2021

IROS 2021