Finding What the Driver Does
Harini Veeraraghavan, Stefan Atev, Nathaniel Bird, Paul Schrater, Nikolaos P Papanikolopoulos
Report no. CTS 05-03
Most research on driver monitoring depends on detecting driver alertness by tracking the eyes, face, head, or facial expression. This research instead presents methods for recognizing and summarizing driver activities using the driver's appearance, specifically body position and changes in position, as the fundamental cue, based on the assumption that periods of safe driving are periods of limited motion in the driver's body. The system uses a side-mounted camera and detects activities from silhouettes obtained by skin color segmentation. An unsupervised method uses agglomerative clustering to summarize driver activity over a sequence, while a supervised method uses a Bayesian eigen-image classifier to distinguish between activities. The results validate the advantages of using driver appearance obtained from skin color segmentation for classification and clustering: increased robustness to illumination variations and elimination of the need for tracking and pose determination.
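The silhouette-extraction step can be sketched as a fixed-threshold skin classifier in YCrCb space. This is a minimal illustration using common textbook threshold values and the BT.601 color conversion; the report's actual color space and thresholds may differ.

```python
import numpy as np

def skin_mask(rgb):
    """Return a boolean skin mask for an H x W x 3 uint8 RGB image.

    Converts to YCrCb (BT.601) and thresholds the chroma channels.
    The Cr/Cb ranges below are widely used defaults for skin detection,
    not parameters taken from the report.
    """
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma
    cr = (r - y) * 0.713 + 128.0            # red-difference chroma
    cb = (b - y) * 0.564 + 128.0            # blue-difference chroma
    return (cr >= 133) & (cr <= 173) & (cb >= 77) & (cb <= 127)
```

Because the classifier operates only on chroma, uniform brightness changes (which mostly affect the luma channel) perturb the mask far less than they would a direct intensity threshold, which is consistent with the robustness to illumination variation claimed above.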