Objective
In this work, early prediction of vehicle trajectories and turning movements is investigated using traffic cameras. First, a vision-based tracking system is developed to monitor intersection videos and collect turning trajectories together with their labels, known as turning movements. The collected trajectories are used to train deep neural networks and obtain the turning models for the prediction task. Evaluation of trajectories predicted two seconds into the future shows that long short-term memory networks can predict the turning movements early with more than 92% accuracy.
Turning Trajectory Collection System
Regions of the intersection image, called zones, are defined manually to specify the four cardinal directions {north, south, east, west} and the central intersection. The zones are drawn from the set of lines placed on the stop bars of each intersection approach. Examples of the zones for two intersections are displayed below.
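As a rough illustration of how a tracked point could be assigned to one of these manually drawn zones, the following sketch uses OpenCV's point-in-polygon test; the polygon coordinates, the zone dictionary, and the Python binding are assumptions made for illustration (the actual system was implemented in C++, as noted in the Experiments section).

# Sketch of assigning a tracked point to a manually drawn zone.
# The polygon coordinates below are placeholders, not a real intersection layout.
import numpy as np
import cv2

# One polygon per zone (1 = West, 2 = North, 3 = East, 4 = South, 0 = central intersection).
zones = {
    1: np.array([[100, 300], [250, 300], [250, 420], [100, 420]], dtype=np.int32).reshape(-1, 1, 2),
    # ... polygons for the remaining zones
}

def zone_of(point):
    """Return the zone number containing `point`, or None if it lies outside all zones."""
    for zone_id, polygon in zones.items():
        if cv2.pointPolygonTest(polygon, point, False) >= 0:
            return zone_id
    return None

print(zone_of((180.0, 350.0)))   # -> 1 (West) for the placeholder polygon above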

The zones are used to define a regular sequence (RS) set, that is, the set of acceptable zone traversals. The tracker only keeps a record of transitions between zones (when the tracked vehicle's current zone changes) to build the track's zone sequence. If the resulting zone sequence exists in the regular sequence set for the intersection, the counter for the associated turning movement (TM) is incremented. A zone flag is set to indicate whether zone-based counting was successful, so that the trajectory comparison module can take over when it was not.
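The zone-sequence bookkeeping described above can be sketched as follows; the zone IDs, the contents of the RS table, and all function names are illustrative assumptions rather than the project's actual code.

# Minimal sketch of counting turning movements from zone sequences.
from collections import defaultdict
from dataclasses import dataclass, field

# Zone numbering used by the released CSV files: 1 = West, 2 = North, 3 = East, 4 = South;
# 0 stands in for the central intersection zone in this sketch.
REGULAR_SEQUENCES = {            # acceptable zone traversals -> turning movement
    (1, 0, 2): "west_to_north",
    (3, 0, 4): "east_to_south",
    # ... one entry per acceptable traversal of the intersection
}

tm_counts = defaultdict(int)     # turning-movement counters

@dataclass
class Track:
    zone_sequence: list = field(default_factory=list)
    zone_flag: bool = False      # True when counting by zones succeeded

def update_zone_sequence(track: Track, current_zone: int) -> None:
    """Record a transition only when the tracked vehicle's zone changes."""
    if not track.zone_sequence or track.zone_sequence[-1] != current_zone:
        track.zone_sequence.append(current_zone)

def count_turning_movement(track: Track) -> None:
    """Increment the TM counter if the zone sequence is in the RS set."""
    key = tuple(track.zone_sequence)
    if key in REGULAR_SEQUENCES:
        tm_counts[REGULAR_SEQUENCES[key]] += 1
        track.zone_flag = True
    else:
        track.zone_flag = False  # hand the track over to the trajectory comparison module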
Trajectory Comparison Module
Although counting by zone comparison is simple, it fails when the obtained sequence of zone numbers is not a member of the RS set. This happens in undesired situations such as occlusion. In that case, trajectories are compared against typical paths using the longest common subsequence (LCSS) similarity because of its robustness to noise and outliers. The typical paths are shown for INT 2.
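A minimal sketch of LCSS-based matching against typical paths is given below; the distance threshold and the typical-path dictionary are assumptions for illustration, and the papers should be consulted for the parameters actually used.

# Sketch of matching a tracked trajectory against typical paths with LCSS.
import math

def lcss_length(traj_a, traj_b, eps):
    """Length of the longest common subsequence of two (x, y) point lists,
    where two points match if they are within `eps` pixels of each other."""
    n, m = len(traj_a), len(traj_b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if math.dist(traj_a[i - 1], traj_b[j - 1]) <= eps:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]

def best_turning_movement(trajectory, typical_paths, eps=20.0):
    """Assign the TM whose typical path has the highest normalized LCSS score."""
    scores = {
        tm: lcss_length(trajectory, path, eps) / min(len(trajectory), len(path))
        for tm, path in typical_paths.items()
    }
    return max(scores, key=scores.get), scores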

Experiments
The vision-based tracking system was implemented in C++ using OpenCV 2.3 and was run on a quad-core Intel i7 processor with 6 GB of RAM. The initial steps of the experiments included scene preparation and arranging the collected trajectories into training and test datasets.
For the trajectory training and prediction experiments, the Theano framework with Keras wrappers was used in a Python environment. Training and testing of trajectories and turnings were performed on a dedicated, high-end PC with a quad-core Intel i7-7700 3.6 GHz processor, 16 GB of RAM, and an Nvidia GeForce GTX 1070 GPU. The GTX 1070 has 8 GB of memory and 2048 CUDA cores, allowing fast parallelization during training and testing of the deep learning architectures.

The prediction is performed for the next two seconds, which corresponds to t = 2 × FPS future points (e.g., 28 for INT 1 and INT 2 and 10 for INT 3), since the intersection videos were recorded at a rate of 14 frames per second. The prediction process continues for the same trajectory as long as 3 + t × i is less than 30% of the trajectory length, in order to meet the early prediction requirement, where i is a loop counter initialized to zero.

A maximum of 500 and 1000 epochs was used to find the models with the least mean squared error for each turning trajectory in the DNN and LSTM networks, respectively. The main optimizer was Adam, and rectified linear units were used as activation functions for the DNN. Since we address a regression problem over (x, y) coordinates, the activation function of the last layer is linear for both the DNN and LSTM. A batch size of 10 and a verbose level of 2 were selected. Please refer to the paper for more details about the experiments.
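For illustration, a Keras model consistent with the training setup described above (Adam optimizer, mean squared error loss, linear output, batch size 10, verbose 2) might look like the sketch below; the number of hidden units, the number of observed input points, the output dimension, and the placeholder data are assumptions, not the paper's exact architecture.

# Sketch of an LSTM turning-trajectory regressor consistent with the setup above.
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

n_in, t = 3, 28          # 3 observed points (see the 3 + t * i schedule) and a 2-second horizon for INT 1, 2

model = Sequential()
model.add(LSTM(64, input_shape=(n_in, 2)))      # 64 hidden units: an assumption
model.add(Dense(2 * t, activation='linear'))    # regress the next t (x, y) points
model.compile(optimizer='adam', loss='mean_squared_error')

# X_train: (samples, n_in, 2) observed points; y_train: (samples, 2 * t) future points.
X_train = np.random.rand(100, n_in, 2)          # placeholder data for illustration
y_train = np.random.rand(100, 2 * t)
model.fit(X_train, y_train, epochs=1000, batch_size=10, verbose=2)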
Dataset
The collected trajectories used to train the DNN and LSTM networks can be downloaded here.
Each intersection folder contains CSV files, one per turning movement. Each zone has a specific number: 1: West, 2: North, 3: East, and 4: South. For example, 12_train.csv is the west-to-north direction used for training and obtaining the model, 12_test.csv was used for predicting the future trajectory, and 34_train.csv is the east-to-south direction. Each row of a CSV file holds one trajectory; skip the first two columns, since they are the tracking and turning path IDs. In the first row, (third column, fourth column) is (x1, y1), (fifth column, sixth column) is (x2, y2), and so forth.
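A short sketch of parsing one of these CSV files is shown below, following the column layout described above (two ID columns followed by alternating x and y coordinates); the file name is taken from the example above.

# Sketch of loading one turning-movement CSV file into a list of (x, y) trajectories.
import csv

trajectories = []
with open('12_train.csv', newline='') as f:               # west-to-north training file
    for row in csv.reader(f):
        coords = [float(v) for v in row[2:] if v != '']   # skip track and turning path IDs
        points = list(zip(coords[0::2], coords[1::2]))    # [(x1, y1), (x2, y2), ...]
        trajectories.append(points)

print(len(trajectories), 'trajectories;', len(trajectories[0]), 'points in the first one')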
Reference
Please cite the following papers if you use the datasets in your study/research:
[1] M. S. Shirazi, B. Morris, “Trajectory Prediction of Vehicles Turning at Intersections Using Deep Neural Networks,” revised for Machine Vision and Applications, 2018.
[2] M. S. Shirazi, B. Morris, “Vision-Based Turning Movement Monitoring: Count, Speed & Waiting Time Estimation,” IEEE Intelligent Transportation Systems Magazine, 8(1), pp. 23-34, January 2016.