IROS Workshop 2022
5th Ergonomic Human-Robot Collaboration: How Cognitive and Physical Aspects Come Together
Sunday, October 23rd, 2022
Work-related musculoskeletal disorders (WMSDs) account for a significant proportion of occupational morbidity, lost workdays, and costs in any workplace. To reduce potential occupational injuries, a safe working environment must be created, and all workers must understand, accept, and apply the principles of ergonomics. The field of ergonomics has been instrumental in developing methods, tools, and solutions that address both cognitive and physical aspects. Physical ergonomics is concerned with human anatomical, anthropometric, physiological, and biomechanical characteristics as they relate to physical work systems. To anticipate and mitigate the physical risk factors related to WMSDs, Human-Robot Collaboration (HRC) techniques such as collaborative robots (cobots) and wearable assistive systems have been deployed in workplaces. In particular, advanced technologies have recently been introduced that make HRC systems aware of the human co-worker's ergonomic status, so that the physical working conditions can be reconfigured and improved. On the topic of physical ergonomics in HRC, we have previously organised a successful series of workshops: introducing the field of ergonomic human-robot collaboration at ICRA 2018; reviewing the initial research progress against the goals set at the first workshop and identifying major ongoing research problems at IROS 2019; discussing emerging research opportunities and challenges at IROS 2020; and discussing potential applications and implications in real-world scenarios at IROS 2021.
So far, ergonomics research in HRC has focused mainly on physical aspects. However, every human action is orchestrated by a combination of brain activity and body interactions. The study of cognitive human factors is concerned with mental processes, such as perception, memory, reasoning, and motor response, as they affect interactions among humans and other elements of a system.
The time has therefore come to consider mental and cognitive processing in combination with physical effort: ideally, physical and cognitive demands should be examined together in HRC tasks.
The proposed workshop will first review the progress in research and development on physical ergonomic human-robot collaboration achieved since the first workshop in 2018. Next, we will focus on how the cognitive and physical aspects of ergonomics come together in the HRC field. This agenda requires experts from various research fields and interdisciplinary discussion. For this purpose, we have assembled a diverse set of organisers and speakers who are leading experts in areas highly relevant to the workshop topic.
Physical Human-Robot Collaboration, Occupational Ergonomics, Human Modelling, Physical Interaction Control, Adaptation and Learning, Industrial Robots, Exoskeleton Robots, Wearable Sensors, Feedback Devices, Shared Control.
Send your pdf file via email to firstname.lastname@example.org.
- Submission deadline for extended abstracts: 1st of September, 2022
- Notification of acceptance: 15th of September, 2022
- Workshop Date: 23rd of October, 2022
The workshop will take place on Sunday 23rd of October, 2022.
| Time | Session |
| --- | --- |
| 09.00 – 09.30 | Introduction by the organizers |
| 09.30 – 10.00 | Talk 1 by Speaker 1 |
| 10.00 – 10.30 | Talk 2 by Speaker 2 |
| 10.30 – 11.00 | Talk 3 by Speaker 3 |
| 11.00 – 11.30 | Coffee Break |
| 11.30 – 12.00 | Talk 4 by Speaker 4 |
| 12.00 – 12.30 | Talk 5 by Speaker 5 |
| 12.30 – 13.30 | Lunch |
| 13.30 – 14.00 | Talk 6 by Speaker 6 |
| 14.00 – 14.30 | Spotlight presentations |
| 14.30 – 15.30 | Poster session |
| 15.30 – 16.00 | Coffee Break |
| 16.00 – 16.30 | Talk 7 by Speaker 7 |
| 16.30 – 17.00 | Talk 8 by Speaker 8 |
| 17.00 – 19.00 | Round Table Discussions |
Wansoo Kim, Assistant Professor
Hanyang University, Republic of Korea
Wansoo Kim is an assistant professor at Hanyang University ERICA, Republic of Korea. He received the B.S. degree in mechanical engineering from Hanyang University, Korea in 2008 and the Ph.D. degree in mechanical engineering from Hanyang University, Korea in 2015 (integrated M.S./Ph.D. programme). He was with the Human-Robot Interfaces and Physical Interaction Lab, Italian Institute of Technology in Genoa, Italy from 2016 to 2020. He has developed several exoskeleton systems, such as HEXAR (Hanyang Exoskeleton Assistive Robot), and has conducted research on the control of powered exoskeleton robots through physical human-robot interaction (pHRI) forces. He is currently involved in the Horizon 2020 project SOPHIA. He has contributed to several exoskeleton robot projects in Korea (High responsive control technology of a lower-limb exoskeleton under rough terrain-1415144732, Development of Wearable Robot for Industrial Labor Support-1415135223, etc.) and to joint R&D projects with industry (DSME and LIG Nex1). He was the winner of the Solution Award 2019 (Premio Innovazione Robotica at MECSPE2019), the KUKA Innovation Award 2018, the HYU best PhD paper award 2015, and the ICCAS best presentation award 2014. His research interests include physical human-robot interaction (pHRI), human-robot collaboration, shared control, ergonomics, human modelling, feedback devices, and powered exoskeleton robots.
Luka Peternel, Assistant Professor
Delft University of Technology, Netherlands
Luka Peternel received a Ph.D. in robotics from the Faculty of Electrical Engineering, University of Ljubljana, Slovenia in 2015. He conducted his Ph.D. studies at the Department of Automation, Biocybernetics and Robotics, Jožef Stefan Institute in Ljubljana from 2011 to 2015, and at the Department of Brain-Robot Interface, ATR Computational Neuroscience Laboratories in Kyoto, Japan in 2013 and 2014. He was with the Human-Robot Interfaces and Physical Interaction Lab, Advanced Robotics, Italian Institute of Technology in Genoa, Italy from 2015 to 2018. Since 2019, Luka Peternel has been an Assistant Professor at the Department of Cognitive Robotics, Delft University of Technology in the Netherlands.
Arash Ajoudani, Principal Investigator
Italian Institute of Technology, Italy
Arash Ajoudani received his Ph.D. degree in Robotics and Automation from Centro "E. Piaggio", University of Pisa, and the Advanced Robotics Department (ADVR), Italian Institute of Technology (IIT), Italy, in July 2014. His Ph.D. thesis was a finalist for the Georges Giralt PhD award 2015, the best European PhD thesis award in robotics. He is currently a tenure-track scientist and the leader of the Human-Robot Interfaces and Physical Interaction (HRI2) lab at IIT. He was a winner of the Amazon Research Awards 2019, the winner of the WeRob best poster award 2018, the winner of the KUKA Innovation Award 2018, a finalist for the best conference paper award at Humanoids 2018, a finalist for the best interactive paper award at Humanoids 2016, a finalist for the best oral presentation award at Automatica (SIDRA) 2014, the winner of the best student paper award and a finalist for the best conference paper award at ROBIO 2013, and a finalist for the best manipulation paper award at ICRA 2012. He is the author of the book "Transferring Human Impedance Regulation Skills to Robots" in the Springer Tracts in Advanced Robotics (STAR) series, and of several publications in journals, international conferences, and book chapters. He is currently serving as the executive manager of the IEEE-RAS Young Reviewers' Program (YRP), chair and representative of the IEEE-RAS Young Professionals Committee, and co-chair of the IEEE-RAS Member Services Committee. He has served on scientific advisory committees and as an associate editor for several international journals and conferences, such as IEEE RA-L, BioRob, and ICORR. His main research interests are in physical human-robot interaction and cooperation, robotic manipulation, robust and adaptive control, rehabilitation robotics, and tele-robotics.
Eiichi Yoshida, Professor
Department of Applied Electronics, Faculty of Advanced Engineering, Tokyo University of Science, Japan
Eiichi Yoshida received the M.E. and Ph.D. degrees in precision machinery engineering from the Graduate School of Engineering, the University of Tokyo in 1996. He then joined the former Mechanical Engineering Laboratory, which was reorganized in 2001 into the National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Japan. He served as Co-Director of the AIST-CNRS JRL (Joint Robotics Laboratory) at LAAS-CNRS, Toulouse, France from 2004 to 2008, and at AIST, Tsukuba, Japan from 2009 to 2021. He was also Deputy Director of the Industrial Cyber-Physical Systems Research Center and of the TICO-AIST Cooperative Research Laboratory for Advanced Logistics at AIST from 2020 to 2021. Since 2022, he has been a Professor at Tokyo University of Science, in the Department of Applied Electronics, Faculty of Advanced Engineering. He was previously invited as a visiting professor at the Karlsruhe Institute of Technology and the University of Tsukuba. He was awarded the title of Chevalier de l’Ordre National du Mérite by the French Government in 2016 for his long-term contributions to French-Japanese collaboration on robotics. He is an IEEE Fellow and a member of RSJ and JSME. His research interests include robot task and motion planning, human modelling, humanoid robots, and advanced logistics.
“Trust in Collaborative Robotics: A Neuroergonomics Perspective”
Investigations into physiological and neurological correlates of trust have grown in popularity due to the need for a continuous measure of trust, whether for trust-sensitive or adaptive systems, for measuring the trustworthiness or pain points of a technology, or for human-in-the-loop cyber intrusion detection. This presentation will highlight the limitations and the generalizability of trust dynamics across different technology domains, such as collaborative robotics, automated vehicle technologies, and cyber aids, and introduce a neuroergonomics perspective for measuring and supporting human cognition when interacting with collaborative robots. There is much left to unravel about which features in the brain signal trust levels and whether brain activity can capture changes in trust perceptions (i.e., during trust building, breach, and/or repair). Optical brain imaging and graph-theoretical analyses of functional brain networks will shed light on the neural correlates of trust in shared-space human-robot collaboration. These neural processes differ by gender and by operator fatigue state; their implications for modeling effective human-automation trust calibration will be discussed. Finally, miscalibrated levels of trust do not always influence the operator’s behavior. As such, the neural correlates associated with an operator’s identification of a trust influencer and the decision to act upon the trust perception will be presented.
"Agent Transparency for Effective Human-Robot Collaboration"
Among the many challenges identified by various expert groups, agent transparency has been raised repeatedly as a key area of research that is critical to achieve effective Human-Robot Collaboration (HRC) and assured autonomy. This talk reviews theoretical frameworks and recent findings on agent transparency (operator task performance, trust calibration, workload, and individual/cultural differences) in the contexts of HRC. Particularly, this talk highlights the Situation awareness-based Agent Transparency (SAT) framework and its applications in various domains related to HRC. Current challenges and future research directions are also discussed.
"Advances in 3D-simulation and virtual commissioning of human-robot collaboration systems"
"Using expressive movement in human-robot collaboration"
"Human-robot task allocation strategies based on ergonomics"
Allocation of tasks among humans and robots in cooperative assemblies is a challenging problem that encompasses issues of productivity, optimal use of resources, and human safety and well-being. In this talk we will discuss methods to perform such allocation. One such method estimates online the muscle fatigue experienced by the worker during task execution, leveraging an elaborate musculoskeletal model of the human upper body and a 3D vision system that tracks human motion in real time. Based on this estimate, a strategy that dynamically allocates task activities to the human and the robot is implemented. In another method, the agents’ capabilities are tested against a list of predefined criteria (including ergonomic factors for the human), and task allocation is performed statically using a modified version of the Hungarian Algorithm that solves unbalanced assignment problems. Experiments show the feasibility and the results of these methods.
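The static assignment step described in the abstract can be sketched with an off-the-shelf Hungarian-algorithm solver. The example below is an illustrative toy, not the speakers' implementation: the cost values, the ergonomic penalty matrix, and the zero-padding trick for the unbalanced case are all assumptions made for the sketch.

```python
# Toy sketch: static human-robot task allocation with the Hungarian
# algorithm (scipy's linear_sum_assignment). All numbers are made up.
import numpy as np
from scipy.optimize import linear_sum_assignment

def allocate_tasks(cost):
    """cost[i, j]: cost of assigning agent i to task j.
    Unbalanced problems (more tasks than agents) are handled by padding
    with zero-cost dummy agents; tasks assigned to a dummy stay open."""
    n_agents, n_tasks = cost.shape
    if n_tasks > n_agents:
        cost = np.vstack([cost, np.zeros((n_tasks - n_agents, n_tasks))])
    rows, cols = linear_sum_assignment(cost)
    # Keep only assignments involving real agents.
    return [(int(r), int(c)) for r, c in zip(rows, cols) if r < n_agents]

# Two agents (0 = human, 1 = robot), three tasks. The total cost sums a
# hypothetical execution-time cost and an ergonomic penalty that applies
# only to the human.
time_cost = np.array([[2.0, 5.0, 4.0],
                      [3.0, 2.0, 6.0]])
ergo_penalty = np.array([[1.0, 3.0, 0.5],
                         [0.0, 0.0, 0.0]])  # robot: no ergonomic cost
assignment = allocate_tasks(time_cost + ergo_penalty)
print(assignment)  # → [(0, 0), (1, 1)]: task 2 is left for a next round
```

Including the ergonomic penalty steers the human away from task 1, which would otherwise be borderline; this is the kind of criterion-weighted cost matrix the abstract alludes to.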
"Bayesian and Deep Recursive Filters for Ergonomic Motion Prediction and Generation"
The challenge of modern, physical human-robot interaction is such that any decision making must incorporate predictive abilities. Predicting the biomechanical ramifications of the user’s future motion is critical to make decisions that proactively nudge human motion towards an ergonomic and desirable outcome. In prior work, we have shown that nonlinear Bayesian filters can be used to make such predictions in the face of uncertainty and noisy measurements. However, these filters require accurate models of the human-robot system and do not scale to high-dimensional observations, e.g., using images from an on-board camera as input. In this presentation, I will present a novel class of Bayes filter that utilizes a learned inverse observation function to reduce the computational complexity of the inference process while increasing the range of effective models. Our framework uses deep learning models to create nonlinear transformations from the observation space to the state space, along with estimates of the underlying uncertainty. I will show how such "Differentiable Stochastic Filters" can be trained to make accurate predictions of human motion while also degrading gracefully when noise is introduced into the system.
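For readers unfamiliar with the Bayesian-filtering baseline that the abstract builds on, here is a minimal linear Kalman filter tracking a 1-D joint position under a constant-velocity model. This is the textbook starting point, not the Differentiable Stochastic Filter of the talk, and the noise levels and motion model are illustrative assumptions.

```python
# Minimal linear Kalman filter: 1-D position/velocity state, noisy
# position measurements. Illustrative baseline only.
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity transition
H = np.array([[1.0, 0.0]])             # we only observe position
Q = 1e-3 * np.eye(2)                   # process noise covariance (assumed)
R = np.array([[1e-2]])                 # measurement noise covariance (assumed)

def kf_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Simulate a joint moving at a constant 0.5 rad/s with noisy readings.
rng = np.random.default_rng(0)
x, P = np.zeros((2, 1)), np.eye(2)
for k in range(50):
    z = np.array([[0.5 * dt * k + rng.normal(0.0, 0.1)]])
    x, P = kf_step(x, P, z)
print(float(x[1, 0]))  # estimated velocity; should approach 0.5
```

The "accurate model" requirement mentioned in the abstract is visible here: `F`, `H`, `Q`, and `R` must all be specified by hand, which is exactly what a learned inverse observation function aims to relax for high-dimensional inputs such as camera images.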
"A framework for affect-based human-robot interaction"
Over the last decade, robots have been entering our lives more and more, no longer as simple (though reliable) tools but as real collaborators, assistants, and workmates. While safety in robotics has always been a primary concern, this new trend has highlighted that, for robots to permeate human environments, they must be easy to interact with. Much focus has increasingly been put on the design of proper interaction means that bridge the gap between humans and complex robotic systems. Building upon these lines, we have recently proposed a novel approach for letting a human interact with a generic robot in a natural manner. The main idea is to define an interaction system that requires neither specific knowledge of robotics nor any dedicated infrastructure or device, but instead relies on commonly used objects while leaving the user’s hands free. The approach enjoys two main features: natural interaction and affect-based adaptation of the robot’s behaviour. By measuring the user’s affect and cognitive workload, it is possible to provide optimal assistive strategies to relieve them when they are overwhelmed by the task. If necessary, additional teaching support can be provided, e.g., through virtual reality. We call this approach MATE: Measure, Adapt and TEach.
The following IEEE-RAS Technical Committees have given their full support to the proposed workshop: