Browsing by Author "Wang, Huaji"
Now showing 1 - 11 of 11
Item Open Access
Advances in vision-based lane detection: algorithms, integration, assessment, and perspectives on ACP-based parallel vision (IEEE, 2018-05-01)
Xing, Yang; Lv, Chen; Chen, Long; Wang, Huaji; Wang, Hong; Cao, Dongpu; Velenis, Efstathios; Wang, Fei-Yue
Lane detection is a fundamental aspect of most current advanced driver assistance systems (ADASs). A large number of existing results focus on vision-based lane detection methods, owing to the extensive knowledge background and the low cost of camera devices. In this paper, previous vision-based lane detection studies are reviewed in terms of three aspects: lane detection algorithms, integration, and evaluation methods. Next, considering the inevitable limitations of camera-based lane detection systems, the system integration methodologies for constructing more robust detection systems are reviewed and analyzed. The integration methods are divided into three levels: algorithm, system, and sensor. The algorithm level combines different lane detection algorithms, the system level integrates other object detection systems to comprehensively detect lane positions, and the sensor level uses multi-modal sensors to build a robust lane recognition system. In view of the complexity of evaluating detection systems, and the lack of a common evaluation procedure and uniform metrics in past studies, the existing evaluation methods and metrics are analyzed and classified to propose a better evaluation of the lane detection system. Next, a comparison of representative studies is performed.
Finally, the limitations of current lane detection systems are discussed, and a future development trend toward an ACP-based (artificial societies, computational experiments, and parallel execution) parallel lane detection framework is proposed.

Item Open Access
Analysis of autopilot disengagements occurring during autonomous vehicle testing (IEEE, 2017-12-20)
Lv, Chen; Cao, Dongpu; Zhao, Yifan; Auger, Daniel J.; Sullman, Mark; Wang, Huaji; Millen Dutka, Laura; Skrypchuk, Lee; Mouzakitis, Alexandros
In present-day highly automated vehicles, there are occasions when the driving system disengages and the human driver is required to take over. This is of great importance to a vehicle's safety and ride comfort. In the U.S. state of California, the Autonomous Vehicle Testing Regulations require every manufacturer testing autonomous vehicles on public roads to submit an annual report summarizing the disengagements of the technology experienced during testing. On 1 January 2016, seven manufacturers submitted their first disengagement reports: Bosch, Delphi, Google, Nissan, Mercedes-Benz, Volkswagen, and Tesla Motors. This work analyses the data from these disengagement reports with the aim of gaining a better understanding of the situations in which a driver is required to take over, as this is potentially useful in improving Society of Automotive Engineers (SAE) Level 2 and Level 3 automation technologies. Disengagement events from testing are classified into groups based on their attributes, and the causes of disengagement are investigated and compared in detail. The mechanisms of, and the time taken for, the take-over transitions that occurred in disengagements are also studied.
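The classification of disengagement events described above can be sketched as a simple grouping over per-event records. The manufacturer names, cause labels, and take-over times below are illustrative placeholders, not values from the actual California reports:

```python
from collections import Counter

# Hypothetical disengagement records; real reports use each manufacturer's
# own wording, so the cause labels here are illustrative only.
disengagements = [
    {"manufacturer": "A", "cause": "software discrepancy", "takeover_time_s": 0.8},
    {"manufacturer": "A", "cause": "perception failure", "takeover_time_s": 1.2},
    {"manufacturer": "B", "cause": "driver discomfort", "takeover_time_s": 2.5},
    {"manufacturer": "B", "cause": "software discrepancy", "takeover_time_s": 0.9},
    {"manufacturer": "C", "cause": "perception failure", "takeover_time_s": 1.1},
]

# Group events by cause, then compute the mean take-over time per cause.
by_cause = Counter(d["cause"] for d in disengagements)
mean_takeover = {
    cause: sum(d["takeover_time_s"] for d in disengagements if d["cause"] == cause)
    / count
    for cause, count in by_cause.items()
}

print(by_cause.most_common())
print(mean_takeover)
```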
Finally, recommendations for OEMs, manufacturers, and government organizations are discussed.

Item Open Access
Characterization of driver neuromuscular dynamics for human-automation collaboration design of automated vehicles (IEEE, 2018-03-05)
Lv, Chen; Wang, Huaji; Cao, Dongpu; Zhao, Yifan; Auger, Daniel J.; Sullman, Mark; Matthias, Rebecca; Skrypchuk, Lee; Mouzakitis, Alexandros
In order to design an advanced human-automation collaboration system for highly automated vehicles, research into the driver's neuromuscular dynamics is needed. In this paper, a dynamic model of the driver's neuromuscular interaction with a steering wheel is first established. The transfer function and the natural frequency of the system are analyzed. To identify the key parameters of the driver-steering-wheel interaction system and investigate its properties under different situations, driver-in-the-loop experiments are carried out. Each test subject is instructed to complete two steering tasks: a passive and an active steering task. Furthermore, during the experiments, subjects manipulate the steering wheel with two distinct postures and three different hand positions. Based on the experimental results, the key parameters of the transfer function model are identified using the Gauss-Newton algorithm. Based on the estimated model with the identified parameters, the system properties are then investigated. The characteristics of the driver's neuromuscular system are discussed and compared with respect to the different steering tasks, hand positions, and driver postures.
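The Gauss-Newton identification step described above can be illustrated on a toy problem. The first-order step-response model and synthetic data below are illustrative stand-ins for the paper's driver-steering-wheel transfer-function model, not its actual structure:

```python
import numpy as np

# Gauss-Newton fit of y = a * (1 - exp(-b * t)) to noisy synthetic data.
t = np.linspace(0.0, 3.0, 60)
a_true, b_true = 2.0, 1.5
rng = np.random.default_rng(0)
y = a_true * (1 - np.exp(-b_true * t)) + 0.01 * rng.standard_normal(t.size)

theta = np.array([1.0, 1.0])  # initial guess for (a, b)
for _ in range(20):
    a, b = theta
    residual = y - a * (1 - np.exp(-b * t))
    # Jacobian of the model output with respect to (a, b).
    J = np.column_stack([1 - np.exp(-b * t), a * t * np.exp(-b * t)])
    # Gauss-Newton update: solve the linearized least-squares problem.
    theta = theta + np.linalg.lstsq(J, residual, rcond=None)[0]

print("identified (a, b):", theta)
```

The same loop applies to the higher-order transfer-function models used for neuromuscular dynamics; only the residual and Jacobian expressions change.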
These experimental results, together with the identified system properties, provide a good foundation for the development of a haptic take-over control system for automated vehicles.

Item Open Access
Data for "An Orientation Sensor based Head Tracking System for Driver Behaviour Monitoring" (Cranfield University, 2017-11-21 13:42)
Zhao, Yifan; Görne, Lorenz; Yuen, Iek-Man; Cao, Dongpu; Sullman, Mark; Auger, Daniel; Lv, Chen; Wang, Huaji; Matthias, Rebecca; Skrypchuk, Lee; Mouzakitis, Alexandros
Data used for this paper - files created in MATLAB.

Item Open Access
Data for the paper "Analysis of Autopilot Disengagements Occurring during Autonomous Vehicle Testing" (Cranfield University, 2017-12-11 08:19)
Lyu, Chen; Cao, Dongpu; Zhao, Yifan; Auger, Daniel; Sullman, Mark; Wang, Huaji
Data used in the paper "Analysis of Autopilot Disengagements Occurring during Autonomous Vehicle Testing".

Item Open Access
Driver activity recognition for intelligent vehicles: a deep learning approach (IEEE, 2019-04-01)
Xing, Yang; Lv, Chen; Wang, Huaji; Cao, Dongpu; Velenis, Efstathios; Wang, Fei-Yue
Driver decisions and behaviors are essential factors that can affect driving safety. To understand driver behaviors, a driver activity recognition system based on deep convolutional neural networks (CNNs) is designed in this study. Specifically, seven common driving activities are identified: normal driving, right mirror checking, rear mirror checking, left mirror checking, using an in-vehicle radio device, texting, and answering a mobile phone. Among these activities, the first four are regarded as normal driving tasks, while the remaining three are classified into the distraction group. The experimental images are collected using a low-cost camera, and ten drivers are involved in the naturalistic data collection. The raw images are segmented using a Gaussian mixture model (GMM) to extract the driver's body from the background before training the behavior recognition CNN model.
To reduce the training cost, a transfer learning method is applied to fine-tune the pre-trained CNN models. Three different pre-trained CNN models, namely AlexNet, GoogLeNet, and ResNet50, are adopted and evaluated. The detection results for the seven tasks achieved an average accuracy of 81.6% using AlexNet, and 78.6% and 74.9% using GoogLeNet and ResNet50, respectively. The CNN models are then trained for the binary classification task of identifying whether or not the driver is distracted. The binary detection achieved 91.4% accuracy, which shows the advantage of the proposed deep learning approach. Finally, real-world applications are analysed and discussed.

Item Open Access
Driver lane change intention inference for intelligent vehicles: framework, survey, and challenges (IEEE, 2019-03-06)
Xing, Yang; Lv, Chen; Wang, Huaji; Wang, Hong; Ai, Yunfeng; Cao, Dongpu; Velenis, Efstathios; Wang, Fei-Yue
Intelligent vehicles and advanced driver assistance systems (ADAS) need proper awareness of the traffic context as well as the driver status, since ADAS share vehicle control authority with the human driver. This study provides an overview of ego-vehicle driver intention inference (DII), focusing on lane change intention on highways. First, the human intention mechanism is discussed to gain an overall understanding of driver intention. Next, ego-vehicle driver intention is classified into different categories based on various criteria. A complete DII system can be separated into different modules, consisting of the traffic context awareness, driver state monitoring, and vehicle dynamics measurement modules. The relationships between these modules and their corresponding impacts on DII are analyzed. Then, the lane change intention inference (LCII) system is reviewed from the perspective of input signals, algorithms, and evaluation.
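The transfer-learning step in the driver activity recognition entry above (freeze the pre-trained convolutional features, retrain only the classifier head) can be sketched in miniature. Here a fixed random projection stands in for the frozen AlexNet/GoogLeNet/ResNet50 backbone, the head is logistic regression trained by gradient descent, and the data are synthetic:

```python
import numpy as np

# Synthetic "images" and binary labels (distracted vs. normal driving).
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Frozen "backbone": a fixed random projection with a nonlinearity.
W_frozen = rng.standard_normal((10, 32))
feats = np.tanh(X @ W_frozen)  # feature extraction, never updated

# Trainable head: logistic regression on the frozen features.
w = np.zeros(32)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w)))    # sigmoid
    w -= 0.1 * feats.T @ (p - y) / len(y)     # logistic-loss gradient step

acc = np.mean((feats @ w > 0) == (y == 1))
print(f"head-only training accuracy: {acc:.2f}")
```

Fine-tuning a real CNN follows the same pattern, except that the last few backbone layers may also be unfrozen at a small learning rate.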
Finally, future concerns and emerging trends in this area are highlighted.

Item Open Access
Driver workload estimation using a novel hybrid method of error reduction ratio causality and support vector machine (Elsevier, 2017-10-04)
Xing, Yang; Lv, Chen; Cao, Dongpu; Wang, Huaji; Zhao, Yifan
Measuring driver workload is of great significance for improving the understanding of driver behaviours and supporting the improvement of advanced driver assistance system technologies. In this paper, a novel hybrid method for estimating driver workload from real-world driving data is proposed. Error reduction ratio causality, a new nonlinear causality detection approach, is proposed to assess the correlation of each measured variable with the variation of workload. A full model describing the relationship between the workload and the selected important measurements is then trained via a support vector regression model. Real driving data from 10 participants, comprising 15 measured physiological and vehicle-state variables, are used for validation. Test results show that the developed error reduction ratio causality method can effectively identify the important variables related to the variation of driver workload, and that the support vector regression based model can successfully and robustly estimate the workload.

Item Open Access
An ensemble deep learning approach for driver lane change intention inference (Elsevier, 2020-04-23)
Xing, Yang; Lv, Chen; Wang, Huaji; Cao, Dongpu; Velenis, Efstathios
With the rapid development of intelligent vehicles, drivers are increasingly likely to share their control authority with the intelligent control unit. To build efficient advanced driver assistance systems (ADAS) and shared-control systems, the vehicle needs to understand the drivers' intent and activities in order to generate assistive and collaborative control strategies.
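The two-stage method in the workload-estimation entry above (rank candidate variables by how much workload variance each explains, then fit a model on the top-ranked ones) can be sketched in simplified form. Squared correlation stands in for the error reduction ratio score, ordinary least squares stands in for the support vector regressor, and the 15 variables and workload signal are synthetic:

```python
import numpy as np

# Synthetic stand-in for 15 measured physiological/vehicle-state variables
# and a workload signal that truly depends on variables 3 and 7 only.
rng = np.random.default_rng(2)
X = rng.standard_normal((300, 15))
workload = 2.0 * X[:, 3] - 1.5 * X[:, 7] + 0.1 * rng.standard_normal(300)

# Stage 1: score each variable by its squared correlation with workload
# (a linear proxy for the nonlinear error-reduction-ratio score).
scores = np.array(
    [np.corrcoef(X[:, i], workload)[0, 1] ** 2 for i in range(15)]
)
top = np.argsort(scores)[::-1][:2]  # keep the two highest-scoring variables

# Stage 2: fit the reduced regression model on the selected variables only.
A = np.column_stack([X[:, top], np.ones(300)])
coef, *_ = np.linalg.lstsq(A, workload, rcond=None)

print("selected variables:", sorted(top.tolist()))
print("coefficients:", coef)
```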
In this study, a driver intention inference system that focuses on highway lane change maneuvers is proposed. First, a high-level driver intention mechanism and framework are introduced. Then, a vision-based intention inference system is proposed, which captures multi-modal signals using multiple low-cost cameras and the VBOX vehicle data acquisition system. A novel ensemble bi-directional recurrent neural network (RNN) model with Long Short-Term Memory (LSTM) units is proposed to deal with the time-series driving sequences and temporal behavioral patterns. Naturalistic highway driving data consisting of lane-keeping and left and right lane change maneuvers are collected and used for model construction and evaluation. Furthermore, the drivers' pre-maneuver activities are statistically analyzed. It is found that, for situation awareness, drivers usually check the mirrors for more than six seconds before initiating a lane change maneuver, and that the time interval between steering the handwheel and crossing the lane is about 2 s on average. Finally, hypothesis testing is conducted to show the significant improvement of the proposed algorithm over existing ones. With five-fold cross-validation, the EBiLSTM model achieves an average accuracy of 96.1% for intentions inferred 0.5 s before the maneuver starts.

Item Open Access
Identification and analysis of driver postures for in-vehicle driving activities and secondary tasks recognition (IEEE, 2018-12-25)
Xing, Yang; Lv, Chen; Zhang, Zhaozhong; Wang, Huaji; Na, Xiaoxiang; Cao, Dongpu; Velenis, Efstathios; Wang, Fei-Yue
Driver decisions and behaviors regarding the surrounding traffic are critical to traffic safety. It is important for an intelligent vehicle to understand driver behavior and assist with driving tasks according to the driver's status. In this paper, the consumer range camera Kinect is used to monitor drivers and identify driving tasks in a real vehicle.
Specifically, seven common tasks performed by multiple drivers during driving are identified: normal driving; left-, right-, and rear-mirror checking; answering a mobile phone; texting on a mobile phone with one or both hands; and setting up in-vehicle video devices. The first four tasks are considered safe driving tasks, while the other three are regarded as dangerous, distracting tasks. The driver behavior signals collected from the Kinect consist of colour and depth images of the driver inside the vehicle cabin. In addition, 3-D head rotation angles and upper-body (hand and arm, both sides) joint positions are recorded. The importance of these features for behavior recognition is then evaluated using random forests and the maximal information coefficient method. Next, a feedforward neural network (FFNN) is used to identify the seven tasks. Finally, the model's recognition performance is evaluated with different feature sets (body only, head only, and combined). The final detection results for the seven driving tasks among five participants achieved an average accuracy of greater than 80%, and the FFNN task detector proves to be an efficient model that can be implemented for real-time driver distraction and dangerous behavior recognition.

Item Open Access
An orientation sensor based head tracking system for driver behaviour monitoring (MDPI, 2017-11-22)
Zhao, Yifan; Görne, Lorenz; Yuen, Iek-Man; Cao, Dongpu; Sullman, Mark; Auger, Daniel J.; Lv, Chen; Wang, Huaji; Matthias, Rebecca; Skrypchuk, Lee; Mouzakitis, Alexandros
Although at present legislation does not allow drivers in a Level 3 autonomous vehicle to engage in a secondary task, there may come a time when it does. Monitoring the behaviour of drivers engaging in various non-driving activities (NDAs) is crucial to deciding how well the driver will be able to take over control of the vehicle.
One limitation of the commonly used camera-based face tracking approach to head tracking is that sufficient features of the face must be visible, which limits the detectable range of head movement, and thereby the measurable NDAs, unless multiple cameras are used. This paper proposes a novel orientation sensor based head tracking system comprising twin devices: one measures the movement of the vehicle while the other measures the absolute movement of the head. Measurement error in the shaking and nodding axes was less than 0.4°, while error in the rolling axis was less than 2°. Comparison with a camera-based system, through in-house tests and on-road tests, showed that the main advantage of the proposed system is its ability to detect angles larger than 20° in the shaking and nodding axes. Finally, a case study demonstrated that the shaking and nodding angles produced by the proposed system can effectively characterise drivers' behaviour while engaged in the NDAs of chatting to a passenger and playing on a smartphone.
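The twin-device idea above reduces, per axis, to subtracting the vehicle-mounted sensor's orientation from the head-mounted sensor's absolute orientation to recover head movement relative to the cabin. A minimal sketch, with illustrative Euler-angle readings in degrees (the axis names follow the paper: shaking = yaw, nodding = pitch, rolling = roll):

```python
def relative_head_angles(head, vehicle):
    """Per-axis difference of two orientation readings, wrapped to [-180, 180) degrees."""
    return {
        axis: ((head[axis] - vehicle[axis] + 180.0) % 360.0) - 180.0
        for axis in ("shaking", "nodding", "rolling")
    }

# Illustrative readings, not measured data.
head_abs = {"shaking": 35.0, "nodding": -12.0, "rolling": 2.0}  # head sensor
vehicle = {"shaking": 10.0, "nodding": -2.0, "rolling": 0.5}    # vehicle sensor

print(relative_head_angles(head_abs, vehicle))
```

The wrapping keeps differences continuous when either sensor crosses the ±180° boundary, e.g. during a vehicle turn while the head stays still.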