Browsing by Author "Khan, Fahad"
Now showing 1 - 3 of 3
Item (Open Access): Communication components for Human Intention Prediction – a survey (AHFE International, 2023-07-24)
Authors: Khan, Fahad; Asif, Seemal; Webb, Phil
Abstract: This review addresses the communication components used for human intention prediction in human-robot collaboration (HRC). HRC is an approach in which humans and robots work towards a shared goal, and the interaction can occur at both the physical and the cognitive level. Traditional HRC systems rely on fixed robot programs based on waypoints or gestures, but it is difficult to predefine instructions for every situation in a complex and variable environment. Understanding human intention dynamically is therefore crucial to the success of such systems: the core requirement of human-robot co-existence is interpreting the dynamic scene of human intentions. To understand human intention, the components of intention communication must first be understood. This paper provides a comprehensive overview of intention as a set of communication components and of modelling those components using machine learning in HRC. Intention can be communicated in multiple ways, including speech, action, gesture, haptics, and physiological signals. The article details various approaches to understanding human intention communication, particularly in the HRC setting.

Item (Open Access): Human facial emotion recognition for adaptive human robot collaboration in manufacturing (Springer, 2024-08-31)
Authors: Khan, Fahad; Asif, Seemal; Webb, Phil
Abstract: The integration of robots into various industries, including manufacturing, has introduced new challenges in achieving efficient human-robot collaboration. A crucial aspect of successful collaboration is the ability of robots to understand and respond to human emotions. In the context of human-robot collaboration in manufacturing, accurately predicting human emotions is essential for enhancing efficiency and safety.
This paper presents a setup for human emotion detection, focusing on facial emotion recognition. The proposed model utilises state-of-the-art algorithms such as AlexNet, the Haar cascade classifier (HCC), MTCNN (Multi-Task Cascaded Convolutional Neural Networks), and SVM (Support Vector Machine), applied to datasets such as CK+, JAFFE, and AffectNet. The performance of each facial recognition model is evaluated in real-time scenarios, with accuracy improving from 40% to 78.1%. These results demonstrate the effectiveness of the approach in enabling adaptive robot control based on human emotions and in enhancing collaboration quality. This research uniquely integrates facial emotion recognition with robot control to enable adaptive responses during human-robot collaboration in manufacturing settings. By understanding and responding to human emotions, robots can improve their interactions with humans, leading to increased productivity and improved overall collaboration efficiency.

Item (Open Access): Towards robot software abstraction: ROS 2-based framework for object handling within a robot cell (IEEE, 2024-08-18)
Authors: Viso, Mikel Bueno; Huang, Jingjing; Asif, Seemal; Khan, Fahad; Webb, Phil
Abstract: Recent advancements in industrial automation have led to increasingly adaptable and reconfigurable systems, driven by the need for flexibility and efficiency in manufacturing. This paper introduces a ROS 2-based software framework tailored for object handling within a robot cell, addressing challenges of reconfigurability, modularity, and interoperability. The proposed solution simplifies the deployment of robotic applications and provides a standardized ROS 2-based platform, making it particularly beneficial for small and medium automation enterprises that require frequent reprogramming and adaptation of robot cells to different settings.
By ensuring software and hardware agnosticism, the framework presents a comprehensive pipeline for designing, developing, and deploying object pick-and-place applications, demonstrated through an automated kitting use case in which a robot sorts industrial spare parts from a conveyor belt using a simple camera. The system integrates real-time object detection and classification with ROS 2 to facilitate effective communication between perception and robot action. This research advocates for open-source solutions and collaboration, aiming to enhance the adaptability and efficiency of robotic solutions across diverse industrial applications.
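The hardware-agnostic perception-to-action pipeline that the last abstract describes (camera → detection/classification → robot pick-and-place) can be sketched as plain Python interfaces. This is a minimal illustration only: the class and method names (`Camera`, `Detector`, `Robot`, `KittingCell`) are assumptions for the sketch, not the framework's actual API, and a real ROS 2 implementation would replace the stubs with image-topic subscriptions and action clients.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Detection:
    """A classified object seen by the perception stage."""
    label: str  # part class, e.g. "bolt" (illustrative)
    x: float    # position on the conveyor, in arbitrary units
    y: float


class Camera(ABC):
    """Hardware-agnostic image source; a ROS 2 system would wrap an image topic."""
    @abstractmethod
    def grab(self): ...


class Detector(ABC):
    """Real-time object detection and classification stage."""
    @abstractmethod
    def detect(self, frame) -> list[Detection]: ...


class Robot(ABC):
    """Motion interface; a ROS 2 system would call an action server here."""
    @abstractmethod
    def pick_and_place(self, det: Detection, bin_name: str) -> None: ...


class KittingCell:
    """Wires perception to action: sorts each detected part into a per-class bin."""
    def __init__(self, camera: Camera, detector: Detector, robot: Robot):
        self.camera, self.detector, self.robot = camera, detector, robot

    def run_cycle(self) -> int:
        frame = self.camera.grab()
        detections = self.detector.detect(frame)
        for det in detections:
            self.robot.pick_and_place(det, bin_name=f"bin_{det.label}")
        return len(detections)


# Stubs standing in for real drivers, so the wiring can be exercised directly.
class StubCamera(Camera):
    def grab(self):
        return "frame-0"


class StubDetector(Detector):
    def detect(self, frame):
        return [Detection("bolt", 0.1, 0.2), Detection("nut", 0.4, 0.5)]


class LoggingRobot(Robot):
    def __init__(self):
        self.moves = []

    def pick_and_place(self, det, bin_name):
        self.moves.append((det.label, bin_name))


cell = KittingCell(StubCamera(), StubDetector(), LoggingRobot())
print(cell.run_cycle())  # → 2
```

Because each stage sits behind an abstract interface, swapping a camera, detector model, or robot arm only requires a new subclass, which is the kind of reconfigurability the abstract attributes to the framework.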