The main objective of this project is to detect driver distraction using multimodal data. The main contributions are (1) a statistical analysis identifying the best features and modalities for detecting each of four types of distraction, namely cognitive, emotional, sensorimotor and mixed distraction, and (2) a comparison of classical machine learning (ML) and end-to-end deep learning (DL) models for driver distraction detection, including an analysis with respect to the size of the input window and the type of the input modality.
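To make the statistical analysis concrete, the sketch below ranks candidate features by how well they separate the four distraction classes using a one-way ANOVA F-test. The feature names, synthetic data and class encoding are illustrative assumptions, not the study's actual dataset or chosen test.

```python
# Minimal, hypothetical sketch of per-feature statistical analysis.
# Feature names and the 4-class labels are assumptions, not the study's data.
import numpy as np
from sklearn.feature_selection import f_classif

rng = np.random.default_rng(0)
feature_names = ["pEDA", "heart_rate", "breathing_rate", "pupil_diameter", "nEDA"]
X = rng.normal(size=(200, len(feature_names)))  # placeholder sensor features
y = rng.integers(0, 4, size=200)                # 0=cognitive, 1=emotional, 2=sensorimotor, 3=mixed

# One-way ANOVA F-test per feature: a higher F score means the feature
# separates the four distraction classes better.
f_scores, p_values = f_classif(X, y)
for name, f, p in sorted(zip(feature_names, f_scores, p_values), key=lambda t: -t[1]):
    print(f"{name:15s} F={f:6.2f} p={p:.3f}")
```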
It is only a matter of time until autonomous vehicles become ubiquitous; however, human driving supervision will remain a necessity for decades. To assess the driver’s ability to take control of the vehicle in critical scenarios, driver distraction can be monitored using wearable sensors or sensors embedded in the vehicle, such as video cameras. Which types of driving distraction can be sensed with which sensors is an open research question that this study attempts to answer. The study compares data from physiological sensors (palm electrodermal activity (pEDA), heart rate and breathing rate) and visual sensors (eye tracking, pupil diameter, nasal EDA (nEDA), emotional activation and facial action units (AUs)) for the detection of the four types of distraction.
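As a rough illustration of the ML-versus-DL comparison and the input-window analysis, the sketch below segments a synthetic multimodal signal into fixed-size windows and cross-validates a random forest against a small neural network at several window sizes. The sampling rate, channel set, window sizes and models are assumptions standing in for the study's actual pipeline.

```python
# Hedged sketch: sliding-window segmentation of a multimodal signal and a
# classical-ML vs neural-network comparison across window sizes. All data
# here is synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
signal = rng.normal(size=(6000, 3))    # e.g. pEDA, heart rate, breathing rate (assumed 10 Hz)
labels = rng.integers(0, 4, size=6000) # per-sample distraction label (4 classes)

def make_windows(sig, lab, win, step):
    """Cut the signal into fixed-size windows; each window takes its majority label."""
    Xs, ys = [], []
    for start in range(0, len(sig) - win + 1, step):
        Xs.append(sig[start:start + win].reshape(-1))  # flatten time x channels
        ys.append(np.bincount(lab[start:start + win]).argmax())
    return np.array(Xs), np.array(ys)

for win in (50, 100, 200):             # compare input-window sizes
    X, y = make_windows(signal, labels, win, step=win)
    rf = cross_val_score(RandomForestClassifier(n_estimators=100, random_state=0),
                         X, y, cv=3).mean()
    nn = cross_val_score(MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0),
                         X, y, cv=3).mean()
    print(f"window={win:4d}  RF acc={rf:.2f}  MLP acc={nn:.2f}")
```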
Keywords: Machine Learning, Deep Learning, Driver Distraction, Sensors, Facial Expressions.

HARDWARE SPECIFICATIONS:
SOFTWARE SPECIFICATIONS: