This project addresses the problem of extracting data from traffic video sequences. The system under development automatically learns the layout of a traffic site (for example, an intersection) from vehicle trajectories produced by a vision-based tracking system, enabling the automatic extraction of sophisticated and complex data such as unusual events, near misses, and vehicle trajectory clusters. The contributions of this work center on a novel adaptive technique for detecting moving shadows and distinguishing them from moving objects in video sequences. Most existing shadow-detection methods assume a static scene and require significant human input; a more general semi-supervised learning technique is used here instead. First, characteristic differences in color and edges in the video frames are used to produce a set of features suitable for classification. Second, a learning technique combining support vector machines with the co-training algorithm is employed, relying on only a small set of human-labeled data. Observations indicate that co-training can counter the effects of changing underlying probability distributions in the feature space. Once deployed, the new shadow-detection method adapts dynamically to varying conditions without any manual intervention and classifies more accurately than previous methods in both static and dynamic environments. The strengths of the technique are that it requires only a small quantity of human-labeled data and that it adapts automatically to changing scene conditions.
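The co-training step described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the project's implementation: the color and edge features are replaced with synthetic stand-ins, the feature dimensions, round count, and selection size `k` are arbitrary, and scikit-learn's `SVC` stands in for the SVM classifiers. One classifier is trained per feature view, each nominates the unlabeled examples it is most confident about, and those examples join the labeled set with their predicted labels.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def make_views(n):
    """Synthetic stand-ins for the two feature views ('color' and 'edge')."""
    y = rng.integers(0, 2, n)                          # 1 = shadow, 0 = object (illustrative)
    color = y[:, None] + rng.normal(0.0, 0.5, (n, 3))  # hypothetical color features
    edge = y[:, None] + rng.normal(0.0, 0.5, (n, 2))   # hypothetical edge features
    return color, edge, y

# Small human-labeled seed set and a larger unlabeled pool.
A_l, B_l, y_lab = make_views(20)
A_u, B_u, _ = make_views(500)

def co_train(A_l, B_l, y_lab, A_u, B_u, rounds=5, k=10):
    """Grow the labeled set by letting each view's SVM label confident examples."""
    for _ in range(rounds):
        if len(A_u) == 0:
            break
        clf_a = SVC(probability=True).fit(A_l, y_lab)  # color-view classifier
        clf_b = SVC(probability=True).fit(B_l, y_lab)  # edge-view classifier
        pa = clf_a.predict_proba(A_u)
        pb = clf_b.predict_proba(B_u)
        # Each classifier nominates its k most confident unlabeled examples.
        pick = np.unique(np.concatenate([
            np.argsort(pa.max(1))[-k:], np.argsort(pb.max(1))[-k:]]))
        # Label each picked example with the more confident view's prediction
        # (classes are {0, 1}, so the argmax column index equals the label).
        new_y = np.where(pa.max(1)[pick] >= pb.max(1)[pick],
                         pa.argmax(1)[pick], pb.argmax(1)[pick])
        A_l = np.vstack([A_l, A_u[pick]])
        B_l = np.vstack([B_l, B_u[pick]])
        y_lab = np.concatenate([y_lab, new_y])
        keep = np.ones(len(A_u), bool)
        keep[pick] = False
        A_u, B_u = A_u[keep], B_u[keep]
    return clf_a, clf_b, len(y_lab)

clf_a, clf_b, n_labeled = co_train(A_l, B_l, y_lab, A_u, B_u)
```

In a deployment, the unlabeled pool would be continually refilled from incoming frames; retraining on the self-labeled examples is what lets the classifiers track gradual changes in scene conditions without further human labeling.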