Browsing by Author "Xing, Yang"
Now showing 1 - 20 of 40
Item (Open Access): An adaptive energy efficient MAC protocol for RF energy harvesting WBANs (IEEE, 2022-11-17)
Authors: Hu, Juncheng; Xu, Gaochao; Hu, Liang; Li, Shujing; Xing, Yang
Abstract: Continuous and remote health monitoring medical applications with heterogeneous requirements can be realized through wireless body area networks (WBANs). Energy harvesting, which has drawn significant interest recently, is adopted to enable low-power health applications and long-term monitoring without battery replacement. Because energy harvesting WBANs differ markedly from battery-powered ones, network protocols should be designed accordingly to improve network performance. In this article, an efficient cross-layer media access control protocol is proposed for radio frequency powered energy harvesting WBANs. We redesigned the superframe structure, which can be rescheduled by the coordinator dynamically. A time switching (TS) strategy is used when sensors harvest energy from radio frequency signals broadcast by the coordinator, and a transmission power adjustment scheme is proposed for sensors based on the energy harvesting efficiency and the network environment. Energy efficiency is effectively improved, so more packets can be uploaded with limited energy. The length of the energy harvesting period is determined by the coordinator to balance the channel resources and energy requirements of sensors and further improve network performance. Numerical simulation results show that our protocol can provide superior system performance for long-term periodic health monitoring applications.

Item (Open Access): Advances in vision-based lane detection: algorithms, integration, assessment, and perspectives on ACP-based parallel vision (IEEE, 2018-05-01)
Authors: Xing, Yang; Lv, Chen; Chen, Long; Wang, Huaji; Wang, Hong; Cao, Dongpu; Velenis, Efstathios; Wang, Fei-Yue
Abstract: Lane detection is a fundamental aspect of most current advanced driver assistance systems (ADASs). A large number of existing results focus on vision-based lane detection methods due to the extensive knowledge background and the low cost of camera devices. In this paper, previous vision-based lane detection studies are reviewed in terms of three aspects: lane detection algorithms, integration, and evaluation methods. Next, considering the inevitable limitations of camera-based lane detection systems, the system integration methodologies for constructing more robust detection systems are reviewed and analyzed. The integration methods are further divided into three levels, namely, algorithm, system, and sensor. The algorithm level combines different lane detection algorithms, while the system level integrates other object detection systems to comprehensively detect lane positions. The sensor level uses multi-modal sensors to build a robust lane recognition system. In view of the complexity of evaluating the detection system, and the lack of a common evaluation procedure and uniform metrics in past studies, the existing evaluation methods and metrics are analyzed and classified to propose a better evaluation of the lane detection system. Next, a comparison of representative studies is performed. Finally, the limitations of current lane detection systems are discussed, and a future development trend toward an ACP-based (artificial societies, computational experiments, and parallel execution) parallel lane detection framework is proposed.

Item (Open Access): CogEmoNet: A cognitive-feature-augmented driver emotion recognition model for smart cockpit (IEEE, 2021-11-30)
Authors: Li, Wenbo; Zeng, Guanzhong; Zhang, Juncheng; Xu, Yan; Xing, Yang; Zhou, Rui; Guo, Gang; Shen, Yu; Cao, Dongpu; Wang, Fei-Yue
Abstract: Driver emotion recognition is vital to improving driving safety, comfort, and acceptance of intelligent vehicles. This article presents a cognitive-feature-augmented driver emotion detection method based on emotional cognitive process theory and deep networks. Different from traditional methods, both the driver's facial expression and cognitive process characteristics (age, gender, and driving age) are used as inputs to the proposed model. Convolutional techniques were adopted to construct the model for driver emotion detection, simultaneously considering the driver's facial expression and cognitive process characteristics. A driver emotion data collection was carried out to validate the performance of the proposed method. The collected dataset consists of 40 drivers' frontal facial videos, their cognitive process characteristics, and self-reported assessments of driver emotions. Another two deep networks were also used to compare recognition performance. The results show that the proposed method achieves good detection results for different databases on both the discrete emotion model and the dimensional emotion model.
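The abstract above does not give CogEmoNet's exact architecture; as a rough illustration of the idea of augmenting facial features with cognitive features, the following is a minimal two-branch PyTorch sketch in which a small CNN branch for the face image is fused with a dense branch for (age, gender, driving age) before classification. All layer sizes and names are invented for illustration.

```python
import torch
import torch.nn as nn

class CognitiveAugmentedEmotionNet(nn.Module):
    """Toy two-branch model: CNN features from a face image are concatenated
    with cognitive features (age, gender, driving age) before classification."""
    def __init__(self, num_emotions=7, num_cognitive=3):
        super().__init__()
        self.cnn = nn.Sequential(                      # image branch
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.cog = nn.Sequential(nn.Linear(num_cognitive, 16), nn.ReLU())  # tabular branch
        self.head = nn.Linear(32 + 16, num_emotions)   # classifier on the fused features

    def forward(self, face, cognitive):
        return self.head(torch.cat([self.cnn(face), self.cog(cognitive)], dim=1))

model = CognitiveAugmentedEmotionNet()
logits = model(torch.randn(4, 3, 224, 224), torch.randn(4, 3))  # batch of 4 samples
print(logits.shape)  # torch.Size([4, 7])
```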
Item (Open Access): Cooperative driving of connected autonomous vehicles in heterogeneous mixed traffic: a game theoretic approach (IEEE, 2024-05-13)
Authors: Fang, Shiyu; Hang, Peng; Wei, Chongfeng; Xing, Yang; Sun, Jian
Abstract: High-density, unsignalized intersections have always been a bottleneck for efficiency and safety. The emergence of Connected Autonomous Vehicles (CAVs) results in a mixed traffic condition, further increasing the complexity of the transportation system. Against this background, this paper studies the intricate and heterogeneous interaction of vehicles and conflict resolution at high-density, mixed, unsignalized intersections. Theoretical insights about the interaction between CAVs and Human-driven Vehicles (HVs) and the cooperation of CAVs are synthesized, based on which a novel cooperative decision-making framework in heterogeneous mixed traffic is proposed. A Normalized Cooperative game is concatenated with a Level-k game (NCL game) to generate a system-optimal solution. A Lattice planner then generates optimal and collision-free trajectories for the CAVs. To reproduce HVs in mixed traffic, interactions from naturalistic human driving data are extracted as prior knowledge. A non-cooperative game and Inverse Reinforcement Learning (IRL) are integrated to mimic the decision-making of heterogeneous HVs. Finally, three cases are conducted to verify the performance of the proposed algorithm, including a comparative analysis with different methods, a case study under different Rates of Penetration (ROP), and an interaction analysis with heterogeneous HVs. It is found that the proposed cooperative decision-making framework benefits driving conflict resolution and improves the traffic efficiency of the mixed unsignalized intersection. Moreover, because driving heterogeneity is taken into account, better human-machine interaction and cooperation can be realized.

Item (Open Access): Driver activity recognition for intelligent vehicles: a deep learning approach (IEEE, 2019-04-01)
Authors: Xing, Yang; Lv, Chen; Wang, Huaji; Cao, Dongpu; Velenis, Efstathios; Wang, Fei-Yue
Abstract: Driver decisions and behaviors are essential factors that can affect driving safety. To understand driver behaviors, a driver activity recognition system is designed based on deep convolutional neural networks (CNN) in this study. Specifically, seven common driving activities are identified: normal driving, right mirror checking, rear mirror checking, left mirror checking, using an in-vehicle radio device, texting, and answering the mobile phone. Among these activities, the first four are regarded as normal driving tasks, while the remaining three are classified into the distraction group. The experimental images are collected using a low-cost camera, and ten drivers are involved in the naturalistic data collection. The raw images are segmented using a Gaussian mixture model (GMM) to extract the driver's body from the background before training the behavior recognition CNN model. To reduce the training cost, transfer learning is applied to fine-tune pre-trained CNN models. Three different pre-trained CNN models, namely AlexNet, GoogLeNet, and ResNet50, are adopted and evaluated. The detection results for the seven tasks achieved an average accuracy of 81.6% using AlexNet, and 78.6% and 74.9% using GoogLeNet and ResNet50, respectively. Then, the CNN models are trained for the binary classification task of identifying whether the driver is distracted or not. The binary detection rate achieved 91.4% accuracy, which shows the advantage of the proposed deep learning approach. Finally, real-world applications are analysed and discussed.
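The transfer-learning step described above (fine-tuning a pre-trained CNN for the seven driving activities) can be sketched with torchvision; the frozen backbone, optimizer, and learning rate below are illustrative choices, not the paper's settings.

```python
import torch
import torch.nn as nn
from torchvision import models

# Minimal transfer-learning sketch: reuse ImageNet weights and retrain only the
# final layer for seven driving activities, as in the abstract above.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for param in model.parameters():                  # freeze the pre-trained backbone
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 7)     # new classification head: 7 activities

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
# a standard training loop over (image, label) batches would follow
```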
Item (Open Access): Driver anomaly quantification for intelligent vehicles: a contrastive learning approach with representation clustering (IEEE, 2022-03-30)
Authors: Hu, Zhongxu; Xing, Yang; Gu, Weihao; Cao, Dongpu; Lv, Chen
Abstract: Driver anomaly quantification is a fundamental capability to support human-centric driving systems of intelligent vehicles. Existing studies usually treat it as a classification task and obtain discrete levels of abnormality. Moreover, existing data-driven approaches depend on the quality of the dataset and provide limited recognition capability for unknown activities. To overcome these challenges, this paper proposes a contrastive learning approach aimed at building a model that can quantify driver anomalies with a continuous variable. In addition, a novel clustering supervised contrastive loss is proposed to optimize the distribution of the extracted representation vectors and improve the model performance. Compared with the typical contrastive loss, the proposed loss can better cluster normal representations while separating abnormal ones. The abnormality of a driver activity is quantified by calculating the distance to a set of representations of normal activities rather than being produced as the direct output of the model. Experimental results with datasets under different modes demonstrate that the proposed approach is more accurate and robust than existing ones in terms of recognition and quantification of unknown abnormal activities.
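A minimal sketch of the distance-based quantification idea: the anomaly score of a new driving clip is its distance to the closest stored representation of a normal activity. The embedding dimension, metric, and data here are placeholders, assuming embeddings have already been produced by the contrastive encoder.

```python
import numpy as np

def anomaly_score(embedding, normal_embeddings):
    """Continuous anomaly score: distance from a test embedding to the closest
    representation of a normal activity (smaller means more 'normal')."""
    dists = np.linalg.norm(normal_embeddings - embedding, axis=1)
    return dists.min()

rng = np.random.default_rng(0)
normal_bank = rng.normal(size=(200, 128))   # embeddings of known normal activities
test_clip = rng.normal(size=128)            # embedding of a new driving clip
print(f"anomaly score: {anomaly_score(test_clip, normal_bank):.3f}")
```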
Item (Open Access): Driver lane change intention inference for intelligent vehicles: framework, survey, and challenges (IEEE, 2019-03-06)
Authors: Xing, Yang; Lv, Chen; Wang, Huaji; Wang, Hong; Ai, Yunfeng; Cao, Dongpu; Velenis, Efstathios; Wang, Fei-Yue
Abstract: Intelligent vehicles and advanced driver assistance systems (ADAS) need proper awareness of the traffic context as well as the driver status, since ADAS share vehicle control authority with the human driver. This study provides an overview of ego-vehicle driver intention inference (DII), focusing mainly on lane change intention on highways. First, the human intention mechanism is discussed to gain an overall understanding of driver intention. Next, ego-vehicle driver intention is classified into different categories based on various criteria. A complete DII system can be separated into different modules, consisting of traffic context awareness, driver state monitoring, and vehicle dynamics measurement. The relationships between these modules and their corresponding impacts on DII are analyzed. Then, the lane change intention inference (LCII) system is reviewed from the perspective of input signals, algorithms, and evaluation. Finally, future concerns and emerging trends in this area are highlighted.

Item (Open Access): Driver lane change intention inference using machine learning methods (2018-04)
Authors: Xing, Yang; Cao, Dongpu; Velenis, Efstathios
Abstract: The lane change manoeuvre on highways is a highly interactive task for human drivers. Intelligent vehicles and advanced driver assistance systems (ADAS) need proper awareness of the traffic context as well as the driver. ADAS also need to understand the driver's potential intent correctly, since they share control authority with the human driver. This study provides research on driver intention inference, with a particular focus on the lane change manoeuvre on highways. The thesis is organised on a paper basis, with each chapter corresponding to a publication that has been submitted or is to be submitted. Part Ⅰ introduces the motivation and the general methodological framework of the thesis. Part Ⅱ includes the literature survey and the state of the art of driver intention inference. Part Ⅲ covers techniques for traffic context perception, focusing on lane detection: a literature review of lane detection techniques and their integration with the parallel driving framework is presented, and a novel integrated lane detection system is then designed. Part Ⅳ consists of two parts that provide driver behaviour monitoring for normal driving and secondary task detection; the first is based on conventional feature selection methods, while the second introduces an end-to-end deep learning framework. The design and analysis of the driver lane change intention inference system is presented in Part Ⅴ. Finally, discussions and conclusions are given in Part Ⅵ. A major contribution of this project is to propose novel algorithms that accurately model the driver intention inference process. Lane change intention is recognised using machine learning (ML) methods due to their good reasoning and generalisation characteristics. Sensors in the vehicle are used to capture traffic context information, vehicle dynamics, and driver behaviour, while machine learning and image processing techniques are used to recognise human driver behaviour.

Item (Open Access): Driver steering behaviour modelling based on neuromuscular dynamics and multi-task time-series transformer (Springer, 2024-01-11)
Authors: Xing, Yang; Hu, Zhongxu; Mo, Xiaoyu; Hang, Peng; Li, Shujing; Liu, Yahui; Zhao, Yifan; Lv, Chen
Abstract: Driver steering intention prediction provides an augmented solution to the design of an onboard collaboration mechanism between the human driver and the intelligent vehicle. In this study, a multi-task sequential learning framework is developed to predict future steering torques and steering postures based on upper-limb neuromuscular electromyography signals. The joint representation learning of driving postures and steering intention provides an in-depth understanding and accurate modelling of driver steering behaviours. Regarding different testing scenarios, two driving modes, namely both-hand and single-right-hand modes, are studied. For each driving mode, three different driving postures are further evaluated. Next, a multi-task time-series transformer network (MTS-Trans) is developed to predict future steering torques and driving postures based on the multi-variate sequential input and the self-attention mechanism. To evaluate the multi-task learning performance and information-sharing characteristics within the network, four distinct two-branch network architectures are evaluated. Empirical validation is conducted through a driving-simulator experiment encompassing 21 participants. The proposed model achieves accurate prediction of future steering torque as well as driving posture recognition for both-hand and single-hand driving modes. These findings hold significant promise for the advancement of driver steering assistance systems, fostering mutual comprehension and synergy between human drivers and intelligent vehicles.

Item (Open Access): Driver workload estimation using a novel hybrid method of error reduction ratio causality and support vector machine (Elsevier, 2017-10-04)
Authors: Xing, Yang; Lv, Chen; Cao, Dongpu; Wang, Huaji; Zhao, Yifan
Abstract: Measuring driver workload is of great significance for improving the understanding of driver behaviours and supporting the improvement of advanced driver assistance system technologies. In this paper, a novel hybrid method for driver workload estimation from real-world driving data is proposed. Error reduction ratio causality, a new nonlinear causality detection approach, is proposed to assess the correlation of each measured variable with the variation of workload. A full model describing the relationship between the workload and the selected important measurements is then trained via a support vector regression model. Real driving data from 10 participants, comprising 15 measured physiological and vehicle-state variables, are used for validation. Test results show that the developed error reduction ratio causality method can effectively identify the important variables related to the variation of driver workload, and that the support vector regression based model can successfully and robustly estimate workload.
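The regression stage described above can be sketched with scikit-learn's SVR; the synthetic data and the five "selected" variables below stand in for the features chosen by the error reduction ratio causality analysis.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Sketch of the regression stage: once the causality analysis has selected the
# most relevant physiological/vehicle-state variables, a support vector
# regressor maps them to a continuous workload score.
rng = np.random.default_rng(1)
X_selected = rng.normal(size=(300, 5))          # 5 selected variables, 300 samples
workload = X_selected @ np.array([0.5, -0.2, 0.8, 0.1, 0.3]) + rng.normal(0, 0.1, 300)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0, epsilon=0.05))
model.fit(X_selected[:200], workload[:200])
print("R^2 on held-out data:", model.score(X_selected[200:], workload[200:]))
```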
Item (Open Access): End-to-end one-shot path-planning algorithm for an autonomous vehicle based on a convolutional neural network considering traversability cost (MDPI, 2022-12-10)
Authors: Bian, Tongfei; Xing, Yang; Zolotas, Argyrios
Abstract: Path planning plays an important role in navigation and motion planning for robotics and automated driving applications. Most existing methods use iterative frameworks to calculate and plan the optimal path from the starting point to the endpoint, and iterative planning algorithms can be slow on large maps or long paths. This work introduces an end-to-end path-planning algorithm based on a fully convolutional neural network (FCNN) for grid maps with the concept of traversability cost, and trains a general path-planning model for 10 × 10 to 80 × 80 square and rectangular maps. The algorithm outputs both the lowest-cost path, which takes the traversability cost into account, and the shortest path, which ignores it. The FCNN model analyzes the grid map information and outputs two probability maps, which give the probability of each point lying on the lowest-cost path and on the shortest path. Based on the probability maps, the actual optimal path is reconstructed using the highest-probability method. The proposed method has superior speed advantages over traditional algorithms. On test maps of different sizes and shapes, for the lowest-cost path and the shortest path, the average optimal rates were 72.7% and 78.2%, the average success rates were 95.1% and 92.5%, and the average length rates were 1.04 and 1.03, respectively.
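The abstract states that the path is reconstructed from the probability map "using the highest-probability method" without further detail; one plausible reading, sketched below under that assumption, is a greedy walk that repeatedly steps to the unvisited neighbour with the highest probability until the goal is reached. The toy probability map is invented for illustration.

```python
import numpy as np

def reconstruct_path(prob_map, start, goal):
    """Greedy reconstruction: from `start`, repeatedly step to the unvisited
    4-neighbour with the highest probability until `goal` is reached."""
    path, current, visited = [start], start, {start}
    while current != goal:
        r, c = current
        neighbours = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        candidates = [n for n in neighbours
                      if 0 <= n[0] < prob_map.shape[0] and 0 <= n[1] < prob_map.shape[1]
                      and n not in visited]
        if not candidates:          # dead end: reconstruction failed
            return None
        current = max(candidates, key=lambda n: prob_map[n])
        visited.add(current)
        path.append(current)
    return path

# Toy probability map whose values rise toward the goal at (9, 9).
prob = np.add.outer(np.arange(10), np.arange(10)) / 18.0
print(reconstruct_path(prob, (0, 0), (9, 9)))
```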
Item (Embargo): Energy consumption optimisation for unmanned aerial vehicle based on reinforcement learning framework (Inderscience, 2024-04-16)
Authors: Wang, Ziyue; Xing, Yang
Abstract: The average battery life of drones in use today is around 30 minutes, which poses significant limitations on long-range operations such as seamless delivery and security monitoring. Meanwhile, the transportation sector is responsible for 93% of all carbon emissions, making it crucial to control energy usage during UAV operation for future net-zero, massive-scale air traffic. In this study, a reinforcement learning (RL)-based model was implemented for the energy consumption optimisation of drones. The RL-based energy optimisation framework dynamically tunes vehicle control systems to maximise energy economy while considering mission objectives, ambient circumstances, and system performance. RL was used to create a dynamically optimised vehicle control system that selects the most energy-efficient route. Depending on the amount of training, a trained UAV in this study saves between 50.1% and 91.6% more energy than an untrained UAV on the same map.

Item (Open Access): An ensemble deep learning approach for driver lane change intention inference (Elsevier, 2020-04-23)
Authors: Xing, Yang; Lv, Chen; Wang, Huaji; Cao, Dongpu; Velenis, Efstathios
Abstract: With the rapid development of intelligent vehicles, drivers are increasingly likely to share their control authority with the intelligent control unit. To build efficient Advanced Driver Assistance Systems (ADAS) and shared-control systems, the vehicle needs to understand drivers' intent and activities to generate assistive and collaborative control strategies. In this study, a driver intention inference system that focuses on highway lane change maneuvers is proposed. First, a high-level driver intention mechanism and framework are introduced. Then, a vision-based intention inference system is proposed, which captures multi-modal signals using multiple low-cost cameras and the VBOX vehicle data acquisition system. A novel ensemble bi-directional recurrent neural network (RNN) model with Long Short-Term Memory (LSTM) units is proposed to deal with the time-series driving sequences and temporal behavioral patterns. Naturalistic highway driving data consisting of lane-keeping and left and right lane change maneuvers are collected and used for model construction and evaluation. Furthermore, the drivers' pre-maneuver activities are statistically analyzed. It is found that, to maintain situation awareness, drivers usually check the mirrors for more than six seconds before initiating a lane change maneuver, and the time interval between steering the hand wheel and crossing the lane is about 2 s on average. Finally, hypothesis testing is conducted to show the significant improvement of the proposed algorithm over existing ones. With five-fold cross-validation, the EBiLSTM model achieves an average accuracy of 96.1% for intentions inferred 0.5 s before the maneuver starts.
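A minimal sketch of one branch of the kind of bidirectional LSTM classifier described above; the feature count, sequence length, and hidden size are illustrative, and the ensembling of several such branches is omitted.

```python
import torch
import torch.nn as nn

class BiLSTMIntentionClassifier(nn.Module):
    """One branch of an ensemble: a bidirectional LSTM over a driving sequence,
    classifying lane-keeping vs. left vs. right lane change."""
    def __init__(self, n_features=12, hidden=64, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                  # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :])      # classify from the last time step

model = BiLSTMIntentionClassifier()
scores = model(torch.randn(8, 50, 12))     # 8 sequences, 50 time steps, 12 features
print(scores.shape)                        # torch.Size([8, 3])
```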
Item (Open Access): Eyes-out airborne object detector for pilots situational awareness (IEEE, 2024-05-13)
Authors: Benoit, Paul; Xing, Yang; Tsourdos, Antonios
Abstract: With the exponential growth of new flying objects, pilots need to pay even more attention to evaluate their environment, make decisions, and fly safely. Such situation awareness (SA) has multiple codified rules to guarantee pilot safety. This paper analyses the feasibility of a portable perception augmentation module (PAM) to help pilots improve their situational awareness through two key actions on long-distance airborne objects: object detection, and distance and trajectory estimation. The developed object detection pipeline, based on the state-of-the-art (SOTA) YOLOv8 architecture, achieves high accuracy with a mAP50 of 0.835 for objects up to 3000 metres away. The system infers a full 360° scan of the aeroplane's surroundings in 1 second, using four wide-FOV, high-resolution cameras. The data used for the model is generated by AirSim in a fully automated process. The potential implementation of stereo vision and its influence on the PAM are also evaluated. All of these tests are also performed on additional real-life data to evaluate generalisation performance, which also shows satisfactory results. A characteristic analysis of the PAM covering weight, energy consumption, and accuracy is carried out to seek the optimal balance between real-world constraints, and hardware considerations are made to estimate the hardware cost of the PAM based on the simulated results. With further improvement in trajectory estimation and model generalisation, a prototype could be made, deployed, and sold to recreational pilots for safer flights. The code and data are available at: https://github.com/Alcharyx/IRP-Eye-out/
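The authors' code is in the repository linked above; as a generic illustration (not the authors' pipeline) of running a YOLOv8 detector with the ultralytics package, the checkpoint and image file names below are placeholders.

```python
from ultralytics import YOLO

# Generic YOLOv8 inference sketch: load a checkpoint and detect objects in one
# frame from a wide-FOV camera. A purpose-trained airborne-object checkpoint
# would replace the stock "yolov8n.pt" weights used here.
model = YOLO("yolov8n.pt")
results = model("camera_front.jpg", conf=0.25)   # single-image inference

for box in results[0].boxes:
    cls_name = model.names[int(box.cls)]
    print(cls_name, float(box.conf), box.xyxy.tolist())
```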
Item (Open Access): Guest editorial: Decision making and control for connected and automated vehicles (Institution of Engineering and Technology (IET), 2022-10-17)
Authors: Lv, Chen; Hang, Peng; Xing, Yang; Nguyen, Anh-Tu; Jolfaei, Alireza

Item (Open Access): Human-machine collaboration for automated driving using an intelligent two-phase haptic interface (Wiley, 2021-02-12)
Authors: Lv, Chen; Li, Yutong; Xing, Yang; Huang, Chao; Cao, Dongpu; Zhao, Yifan; Liu, Yahui
Abstract: Prior to realizing fully autonomous driving, human intervention is periodically required to guarantee vehicle safety. This poses a new challenge in human-machine interaction, particularly during the control authority transition from the automated functionality to the human driver. Herein, this challenge is addressed by proposing an intelligent haptic interface based on a newly developed two-phase human-machine interaction model. The intelligent haptic torque is applied to the steering wheel and switches its functionality between predictive guidance and haptic assistance according to the varying state and control ability of the human driver, helping drivers gradually resume manual control during takeover. The developed approach is validated through vehicle experiments with 26 participants. The results suggest that, compared with an existing approach, the proposed method effectively enhances the driving state recovery and control performance of human drivers during takeover. Thus, this new method further improves the safety and smoothness of human-machine interaction in automated vehicles.

Item (Open Access): A hybrid motion planning framework for autonomous driving in mixed traffic flow (Elsevier, 2022-11-28)
Authors: Yang, Lei; Lu, Chao; Xiong, Guangming; Xing, Yang; Gong, Jianwei
Abstract: As a core part of an autonomous driving system, motion planning plays an important role in safe driving. However, traditional model- and rule-based methods lack the ability to learn interactively from the environment, and learning-based methods still have problems in terms of reliability. To overcome these problems, a hybrid motion planning framework (HMPF) is proposed to improve the performance of motion planning; it is composed of learning-based behavior planning and optimization-based trajectory planning. The behavior planning module adopts a deep reinforcement learning (DRL) algorithm, which can learn from the interaction between the ego vehicle (EV) and other human-driven vehicles (HDVs) and generate behavior decision commands based on environmental perception information. In particular, the intelligent driver model (IDM), calibrated on real driving data, is used to drive the HDVs so that they imitate human driving behavior and interactive responses, simulating the bidirectional interaction between the EV and HDVs. Meanwhile, the trajectory planning module adopts an optimization method based on road Frenet coordinates, which generates safe and comfortable desired trajectories while reducing the solution dimension of the problem. In addition, trajectory planning also acts as a hard safety constraint on behavior planning to ensure the feasibility of the decision commands. The experimental results demonstrate the effectiveness and feasibility of the proposed HMPF for autonomous driving motion planning in urban mixed traffic flow scenarios.
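The intelligent driver model (IDM) mentioned above is a standard car-following model; the sketch below uses its textbook desired-gap formulation, with illustrative parameter values rather than the calibrated values from the paper.

```python
import math

def idm_acceleration(v, v_lead, gap, v0=15.0, T=1.5, a_max=1.5, b=2.0, s0=2.0, delta=4):
    """Textbook Intelligent Driver Model: acceleration of a following vehicle
    given its speed v, the leader's speed v_lead, and the bumper-to-bumper gap."""
    dv = v - v_lead                                   # approach rate
    s_star = s0 + max(0.0, v * T + v * dv / (2 * math.sqrt(a_max * b)))
    return a_max * (1 - (v / v0) ** delta - (s_star / gap) ** 2)

# A vehicle at 12 m/s closing on a leader at 10 m/s with a 20 m gap decelerates:
print(f"{idm_acceleration(12.0, 10.0, 20.0):.2f} m/s^2")
```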
Item (Open Access): Hybrid-learning-based classification and quantitative inference of driver braking intensity of an electrified vehicle (IEEE, 2018-02-21)
Authors: Lv, Chen; Xing, Yang; Lu, Chao; Liu, Yahui; Guo, Hongyan; Gao, Hongbo; Cao, Dongpu
Abstract: The recognition of the driver's braking intensity is of great importance for advanced control and energy management of electric vehicles. In this paper, braking intensity is classified into three levels based on novel hybrid unsupervised and supervised learning methods. First, instead of manually selecting a threshold for each braking intensity level, an unsupervised Gaussian Mixture Model is used to cluster the braking events automatically from brake pressure. Then, a supervised Random Forest model is trained to classify the correct braking intensity level from the state signals of the vehicle and powertrain. To obtain a more efficient classifier, critical features are analyzed and selected. Moreover, beyond obtaining a discrete braking intensity level, a novel continuous observation method is proposed based on Artificial Neural Networks to quantitatively analyze and recognize the braking intensity using the previously determined features of vehicle states. Experimental data are collected in an electric vehicle under real-world driving scenarios. Finally, the classification and regression results of the proposed methods are evaluated and discussed. The results demonstrate the feasibility and accuracy of the proposed hybrid learning methods for braking intensity classification and quantitative recognition under various deceleration scenarios.
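A scikit-learn sketch of the two-stage idea described above: a Gaussian Mixture Model clusters brake-pressure samples into three intensity levels, and a Random Forest then learns to predict those levels from vehicle-state signals. The synthetic data and feature dimensions are placeholders.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
brake_pressure = np.concatenate([rng.normal(10, 2, 200),    # light braking events
                                 rng.normal(30, 4, 200),    # moderate braking events
                                 rng.normal(60, 6, 200)])   # hard braking events
vehicle_states = rng.normal(size=(600, 6))                  # stand-in state signals

# Stage 1: unsupervised GMM labels each braking event with one of three intensity levels.
gmm = GaussianMixture(n_components=3, random_state=0)
levels = gmm.fit_predict(brake_pressure.reshape(-1, 1))

# Stage 2: supervised Random Forest learns to predict those levels from vehicle states.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(vehicle_states, levels)
print("training accuracy:", rf.score(vehicle_states, levels))
```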
Item (Open Access): Hybrid-learning-based driver steering intention prediction using neuromuscular dynamics (IEEE, 2021-02-23)
Authors: Xing, Yang; Lv, Chen; Liu, Ya-hui; Zhao, Yifan; Cao, Dongpu; Kawahara, Sadahiro
Abstract: The emerging automated driving technology poses a new challenge for driver-automation collaboration. In this study, oriented towards human-machine mutual understanding, a driver steering intention prediction method is proposed to better understand the human driver's expectation during driver-vehicle interaction. The steering intention is predicted with a novel hybrid-learning-based time-series model built on deep learning networks. Two driving modes, namely both-hand and single-right-hand driving, are studied. Different electromyography (EMG) signals from the upper-limb muscles are collected and used for steering intention prediction. The relationship between the neuromuscular dynamics and the steering torque is analyzed first. Then, the hybrid-learning-based model is developed to predict both continuous and discrete steering intentions. The two intention prediction networks share the same temporal pattern extraction layer, which is built with a bi-directional Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) cells. The prediction performance is evaluated with varied history and prediction horizons to further explore the model's capability. The experimental data are collected from 21 participants of varied ages and driving experience. The results show that the proposed method achieves a steering intention prediction accuracy of around 95% under the two driving modes.

Item (Open Access): Identification and analysis of driver postures for in-vehicle driving activities and secondary tasks recognition (IEEE, 2018-12-25)
Authors: Xing, Yang; Lv, Chen; Zhang, Zhaozhong; Wang, Huaji; Na, Xiaoxiang; Cao, Dongpu; Velenis, Efstathios; Wang, Fei-Yue
Abstract: Driver decisions and behaviors regarding the surrounding traffic are critical to traffic safety. It is important for an intelligent vehicle to understand driver behavior and assist in driving tasks according to driver status. In this paper, the consumer range camera Kinect is used to monitor drivers and identify driving tasks in a real vehicle. Specifically, seven common tasks performed by multiple drivers during driving are identified: normal driving, left-, right-, and rear-mirror checking, mobile phone answering, texting using a mobile phone with one or both hands, and the setup of in-vehicle video devices. The first four tasks are considered safe driving tasks, while the other three are regarded as dangerous and distracting tasks. The driver behavior signals collected from the Kinect consist of a color and depth image of the driver inside the vehicle cabin. In addition, 3-D head rotation angles and the upper-body (hand and arm on both sides) joint positions are recorded. The importance of these features for behavior recognition is then evaluated using random forests and the maximal information coefficient method. Next, a feedforward neural network (FFNN) is used to identify the seven tasks. Finally, the model performance for task recognition is evaluated with different feature sets (body only, head only, and combined). The final detection result for the seven driving tasks among five participants achieved an average accuracy of greater than 80%, and the FFNN task detector is shown to be an efficient model that can be implemented for real-time driver distraction and dangerous behavior recognition.
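The feature-importance step described above can be sketched with scikit-learn's random forest importances; the feature names and synthetic data below are illustrative stand-ins for the Kinect head-pose and joint-position features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Sketch of the feature-importance step: rank head-pose and upper-body joint
# features by how much they help a Random Forest separate the seven tasks.
rng = np.random.default_rng(4)
feature_names = ["head_yaw", "head_pitch", "head_roll",
                 "left_hand_x", "left_hand_y", "right_hand_x", "right_hand_y"]
X = rng.normal(size=(700, len(feature_names)))        # stand-in Kinect features
y = rng.integers(0, 7, size=700)                       # stand-in task labels (7 classes)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
for name, score in sorted(zip(feature_names, forest.feature_importances_),
                          key=lambda p: p[1], reverse=True):
    print(f"{name:>12s}: {score:.3f}")
```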