
cidnn's Introduction

CIDNN

CIDNN: Encoding Crowd Interaction with Deep Neural Network

This repo is the official open-source implementation of CIDNN (CVPR 2018) by Yanyu Xu, Zhixin Piao, and Shenghua Gao.

[Figure: CIDNN architecture]

It is implemented in PyTorch 0.4 and Python 3.x.

If you find this useful, please cite our work as follows:

@INPROCEEDINGS{xu2018cidnn, 
	author={Yanyu Xu and Zhixin Piao and Shenghua Gao}, 
	booktitle={2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, 
	title={Encoding Crowd Interaction with Deep Neural Network for Pedestrian Trajectory Prediction}, 
	year={2018}
}

DataSet

DataSet              Link
GC [1]               BaiduYun or DropBox
ETH [2]              website
UCY [3]              website
CUHK Crowd [4]       website or BaiduYun
subway station [5]   website

Update 2019.07.17: Because the website of the GC Dataset has been taken down, we provide the download link and a description of it here.

GC Dataset Description:

  1. This dataset contains two folders, named ‘Annotation’ and ‘Frame’.

  2. The ‘Annotation’ folder contains the manually labeled walking paths of 12,684 pedestrians. Annotations are named ‘XXXXXX.txt’, where ‘XXXXXX’ is the pedestrian index.

  3. Each annotation txt file contains multiple integers, corresponding to the (x, y, t) triples of that pedestrian: ‘x’ and ‘y’ are point coordinates and ‘t’ is the frame index. There are 3N integers if the pedestrian appears in N frames. All pedestrians within Frame 000000 to 100000 are labeled from the time they arrive until the time they leave (see the parsing sketch after this list).

  4. The ‘Frame’ folder contains 6001 frames sampled from a surveillance video captured at the Grand Central Train Station in New York. The frames are named ‘XXXXXX.jpg’, where ‘XXXXXX’ is the frame index, starting at ‘000000’ and ending at ‘120000’. One frame is sampled every 20 frames from the surveillance video clip.
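Based on the format above, a minimal Python sketch for parsing a single annotation file (the file name ‘000001.txt’ is only an illustrative example):

# Sketch: parse one GC annotation file into (x, y, t) triples,
# assuming the whitespace-separated-integers layout described above.
def load_gc_annotation(path):
    with open(path) as f:
        values = [int(v) for v in f.read().split()]
    assert len(values) % 3 == 0, 'expected 3N integers (x, y, t per frame)'
    # Group consecutive integers into (x, y, t) triples.
    return [tuple(values[i:i + 3]) for i in range(0, len(values), 3)]

# Example usage (hypothetical file name):
# points = load_gc_annotation('data/GC/Annotation/000001.txt')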

Pipeline

  1. Download the dataset and place it under the CIDNN/data directory:
CIDNN/data/GC # for example, we use the GC dataset
  2. Open tools/create_dataset.py and set the data paths:
GC_raw_data_path = 'data/GC/Annotation'
GC_meta_data_path = 'data/GC_meta_data.json'
GC_train_test_data_path = 'data/GC.npz'

where GC_raw_data_path points to the GC dataset we downloaded, GC_meta_data_path is an intermediate file used to create GC.npz, and GC.npz is the final data file used by the network.

  3. Build GC.npz:
cd CIDNN
python tools/create_dataset.py

You should see output like this:

pedestrian_data_list size:  12685
frame_data_list size:  6001
create data/GC_meta_data.json successfully!
float32
train_X: (11630, 20, 5, 2), train_Y: (11630, 20, 5, 2)
test_X: (1306, 20, 5, 2), test_Y: (1306, 20, 5, 2)
create data/GC.npz successfully!

This means that GC.npz has four parts: train_X, train_Y, test_X, test_Y. X is the observed trajectory and Y is the trajectory to predict. Each array has a shape of the form (batch_num, pedestrian_num, obv_frame / pred_frame, dim).

For example, train_X with shape (11630, 20, 5, 2) means there are 11630 samples; each sample contains 20 pedestrians in the same 2D scene, observed over 5 frames with 2D coordinates.
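To sanity-check the generated file, here is a minimal sketch (assuming the array names train_X, train_Y, test_X, test_Y shown in the output above):

import numpy as np

# Load the dataset produced by tools/create_dataset.py.
data = np.load('data/GC.npz')

# Each array has shape (batch_num, pedestrian_num, frame_num, 2).
for key in ('train_X', 'train_Y', 'test_X', 'test_Y'):
    print(key, data[key].shape, data[key].dtype)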

  4. Train CIDNN (all hyper-parameters are in the Model class) and experiment as you like:
python train.py

Reference

  1. Understanding Pedestrian Behaviors from Stationary Crowd Groups

    Shuai Yi, Hongsheng Li, and Xiaogang Wang. In CVPR, 2015.

  2. You’ll never walk alone: Modeling social behavior for multi-target tracking

    Stefano Pellegrini, Andreas Ess, Konrad Schindler, and Luc Van Gool. In ICCV, 2009.

  3. Crowds by Example

    Alon Lerner, Yiorgos Chrysanthou, and Dani Lischinski. In EUROGRAPHICS, 2007.

  4. Scene-Independent Group Profiling in Crowd

    Jing Shao, Chen Change Loy, and Xiaogang Wang. In CVPR, 2014.

  5. Understanding Collective Crowd Behaviors: Learning a Mixture Model of Dynamic Pedestrian-Agents

    Bolei Zhou, Xiaogang Wang, and Xiaoou Tang. In CVPR, 2012.


cidnn's Issues

The Baiduyun link of the CUHK crowd dataset is invalid.

Thanks a lot for sharing the CUHK crowd dataset; however, the BaiduYun link for the first part file is invalid. Could you refresh the link for that part? The other files cannot be decompressed without the first part file.

Dataset split

Hi! It seems that shuffling frames before the train/test split could lead to a lower prediction error.
random_idx = random.sample(range(frame_num), frame_num)
train_idx = random_idx[:int(frame_num * train_rate)]
test_idx = random_idx[int(frame_num * train_rate):]
Is this exactly the setting used in the experiments that produced the results in the original paper? I'm just looking for a fair experimental setting for our comparison.
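For reference, a chronological (non-shuffled) split would keep neighbouring frames from leaking between the two sets; a minimal sketch reusing frame_num and train_rate from the snippet above, and not the setting used in this repository:

# Sketch: split frame indices chronologically instead of randomly.
split = int(frame_num * train_rate)
train_idx = list(range(split))
test_idx = list(range(split, frame_num))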

The CUHK Crowd BaiduYun link has expired

Hello, could you please share the first link for the CUHK Crowd dataset again? It shows as expired. Thank you.

Results not as reported

I ran the network with all default settings on the GC dataset until convergence (up to about 1000 epochs), several times with different random seeds, and never reached the reported error of 0.0125.
The best I was able to achieve was 0.0138 after making some small changes.
However, using only a simple LSTM (no social component) I already got an error of 0.0136.

Is there some bug in the repository?

Could you upload working network weights?

Edit: By using 3 layers in the regression net I got 0.0125 with a simple LSTM, which is exactly what the paper reports.

Unable to get the datasets

Thanks for making your code available, but I can't get the CUHK Crowd dataset and the subway station dataset. Could you please make them available like the GC dataset?

Possible overlap between the test set and the train set

The train and test sets overlap almost completely.

In create_dataset.py, lines 118-120, the frame indices are shuffled and then split into train_idx and test_idx.
Suppose frame 10 is in test_idx and frame 9 is in train_idx.
Then when get_samples_from_frame(f_i=9) is called, it uses the information of frame 10 to generate the samples (lines 81-82).
So the test data almost perfectly exists in the train set as well!
Is there something I misunderstood?

Visualization

In your paper you show the predicted trajectories together with the ground truth and the past trajectories. Can you upload the scripts you were using for this?
