Recognition of visual-related non-driving activities using a dual-camera monitoring system
dc.contributor.author | Yang, Lichao | |
dc.contributor.author | Dong, Kuo | |
dc.contributor.author | Ding, Yan | |
dc.contributor.author | Brighton, James | |
dc.contributor.author | Zhan, Zhenfei | |
dc.contributor.author | Zhao, Yifan | |
dc.date.accessioned | 2021-03-29T09:52:17Z | |
dc.date.available | 2021-03-29T09:52:17Z | |
dc.date.issued | 2021-03-25 | |
dc.description.abstract | For a Level 3 automated vehicle, as defined by the SAE International automation levels (J3016), identifying the non-driving activities (NDAs) the driver is engaged in is of great importance for the design of an intelligent take-over interface. Much of the existing literature focuses on the driver take-over strategy and the associated Human-Machine Interaction design. This paper proposes a dual-camera based framework to identify and track NDAs that require visual attention. This is achieved by mapping the driver's gaze, using a nonlinear system identification approach, onto the object scene recognised by a deep learning algorithm. A novel gaze-based region of interest (ROI) selection module is introduced and contributes about a 30% improvement in average success rate and about a 60% reduction in average processing time compared with the results without this module. The framework has been successfully demonstrated to identify five types of NDAs requiring visual attention with an average success rate of 86.18%. The outcome of this research could be applicable to the identification of other NDAs, and tracking NDAs within a certain time window could potentially be used to evaluate the driver's attention level for both automated and human-driven vehicles. | en_UK |
dc.identifier.citation | Yang L, Dong K, Ding Y, et al. (2021) Recognition of visual-related non-driving activities using a dual-camera monitoring system. Pattern Recognition, Volume 116, August 2021, Article number 107955 | en_UK |
dc.identifier.issn | 0031-3203 | |
dc.identifier.uri | https://doi.org/10.1016/j.patcog.2021.107955 | |
dc.identifier.uri | https://dspace.lib.cranfield.ac.uk/handle/1826/16514 | |
dc.language.iso | en | en_UK |
dc.publisher | Elsevier | en_UK |
dc.rights | Attribution-NonCommercial-NoDerivatives 4.0 International | * |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/4.0/ | * |
dc.subject | activities identification | en_UK |
dc.subject | deep learning | en_UK |
dc.subject | computer vision | en_UK |
dc.subject | level 3 automation | en_UK |
dc.subject | Driver behaviour | en_UK |
dc.title | Recognition of visual-related non-driving activities using a dual-camera monitoring system | en_UK |
dc.type | Article | en_UK |
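Note on the method described in the abstract: the paper details the gaze-mapping and recognition models themselves; the fragment below is only a minimal, hypothetical sketch of how a gaze-based ROI selection step could work, assuming the driver's gaze has already been mapped to scene-camera pixel coordinates and that object detections are available as labelled bounding boxes. It is not the authors' implementation, and all names, parameters and thresholds are illustrative.

# Hypothetical sketch of gaze-based ROI selection: pick the detected object
# whose bounding box contains the mapped gaze point, falling back to the box
# whose centre is closest to the gaze. Not the authors' implementation.
from dataclasses import dataclass
from typing import List, Optional, Tuple
import math

@dataclass
class Detection:
    label: str                               # e.g. "phone", "book"
    box: Tuple[float, float, float, float]   # (x1, y1, x2, y2) in scene-image pixels

def select_roi(gaze_xy: Tuple[float, float],
               detections: List[Detection],
               max_dist: float = 150.0) -> Optional[Detection]:
    """Return the detection the gaze most plausibly falls on, or None."""
    gx, gy = gaze_xy
    best, best_dist = None, float("inf")
    for det in detections:
        x1, y1, x2, y2 = det.box
        if x1 <= gx <= x2 and y1 <= gy <= y2:
            return det  # gaze lies inside this object's box
        cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        dist = math.hypot(gx - cx, gy - cy)
        if dist < best_dist:
            best, best_dist = det, dist
    # Only accept a nearby box; otherwise report no visual NDA target.
    return best if best_dist <= max_dist else None

# Example: gaze mapped to (420, 310) with two detected objects in the scene.
dets = [Detection("phone", (400, 280, 480, 360)),
        Detection("book", (100, 100, 220, 240))]
print(select_roi((420, 310), dets))  # -> the "phone" detection

Restricting recognition to the gaze-selected ROI, rather than the full scene image, is one plausible reading of how the module could cut processing time while improving success rate, as reported in the abstract.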
Files
Original bundle
- Name: visual-related_non-driving_activities-2021.pdf
- Size: 1.37 MB
- Format: Adobe Portable Document Format
License bundle
- Name: license.txt
- Size: 1.63 KB
- Description: Item-specific license agreed upon to submission