
tsinghua-mars-lab / m2i


M2I is a simple but effective joint motion prediction framework that combines marginal and conditional predictions by exploiting the factorized relations between interacting agents.

Home Page: https://tsinghua-mars-lab.github.io/M2I/

License: MIT License

Languages: Python 85.24%, Cython 14.76%
Topic: motion-prediction

m2i's People

Contributors: hangzhaomit, larksq


m2i's Issues

About interpolating missing trajectories

@larksq
Hi, thank you for your nice work!
By the way, in WOMD some trajectories are incomplete (they have missing points).

How did you handle this? Did you interpolate the missing points, or just drop those trajectories entirely?

Best,
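For anyone hitting the same question: one common workaround (not the authors' code, just a standard approach) is to linearly interpolate interior gaps using the per-timestep valid mask that WOMD provides. A minimal numpy sketch:

```python
import numpy as np

def interpolate_missing(traj, valid):
    """Linearly interpolate missing (x, y) points in one trajectory.

    traj:  (T, 2) array of positions; invalid rows may hold garbage.
    valid: (T,) boolean mask marking observed timesteps.
    np.interp fills interior gaps linearly and clamps leading/trailing
    missing steps to the nearest observed value.
    """
    t = np.arange(len(traj))
    out = traj.astype(float).copy()
    for d in range(traj.shape[1]):
        out[:, d] = np.interp(t, t[valid], traj[valid, d])
    return out
```

Trajectories with very few observed points are usually dropped instead, since interpolation from one or two points is unreliable.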

Questions about the function utils_cython.get_normalized

Thanks for your contributions to the field of motion prediction. While reading the code, I had questions about the function utils_cython.get_normalized. I believe it rotates the coordinates of all agents based on the coordinates and angular velocity of the influencer at frame 11, but even after visualizing several scenes I still cannot find the rotation pattern, and I do not understand what role the rotation plays.
[Three images: the left side shows the original trajectories, the right side the trajectories after rotation.]
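For context, scene normalization in trajectory predictors typically translates all coordinates so that the target agent's position at the last observed frame (frame 11 in WOMD) becomes the origin, then rotates everything so its heading points along a fixed axis. This makes the prediction task invariant to global position and orientation. A minimal sketch of that idea (this is not the repository's Cython code; `heading` is assumed to be the agent's yaw at that frame):

```python
import numpy as np

def normalize_scene(points, center, heading):
    """Rotate/translate all agent points into the target agent's frame.

    points:  (N, 2) world-frame xy coordinates.
    center:  (2,) target agent position at the last observed frame.
    heading: target agent yaw (radians) at that frame.
    Afterwards the target agent sits at the origin facing +x, so the
    network never has to learn absolute position or orientation.
    """
    c, s = np.cos(-heading), np.sin(-heading)
    rot = np.array([[c, -s], [s, c]])
    return (points - center) @ rot.T
```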

Evaluating the conditional prediction result

I am not sure which 6 groups I should merge. When running the evaluation command, it generates 6 predicted trajectories for each scenario. However, in result() in class MotionMetrics in src/waymo_tutorial.py, only 1 trajectory is kept per scenario: the shape of prediction_trajectory there is [num_scenarios, 80, 2].
[screenshot]

At the same time, in README.md, --eval_rst_saving_number can take 6 different values (from 0 to 5).

Should I merge the 6 predicted trajectories for a fixed eval_rst_saving_number, or merge the predicted trajectories generated with the 6 different eval_rst_saving_number values?

Additionally, when merging the influencer's trajectory with the reactor's trajectory, how should I arrange them?
The shape of the merged trajectories is (num_scenarios, 6, n=2, 80, 2). Is [:, :, 0, :, :] for influencers and [:, :, 1, :, :] for reactors?
Or is there a different way to arrange the influencer and reactor trajectories?

Finally, how should I combine the influencer's scores with the reactor's scores? Should I multiply them or stack them?
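For what it's worth, M2I's factorization p(influencer, reactor) = p(influencer) × p(reactor | influencer) suggests multiplying the two scores to get a joint score, with each reactor sample paired to the influencer sample it was conditioned on. A hedged sketch of such a merge (function and argument names are mine, not the repository's):

```python
import numpy as np

def merge_joint(inf_traj, inf_score, rea_traj, rea_score):
    """Pair influencer and reactor predictions into joint samples.

    inf_traj: (S, K, 80, 2) K marginal influencer trajectories per scenario.
    rea_traj: (S, K, 80, 2) reactor trajectories, sample k conditioned on
              influencer sample k.
    Under p(I, R) = p(I) * p(R | I) the joint score is the product of the
    per-sample scores. Returns (S, K, 2, 80, 2) trajectories, with index 0
    on axis 2 the influencer and index 1 the reactor, plus (S, K) scores.
    """
    joint = np.stack([inf_traj, rea_traj], axis=2)
    score = inf_score * rea_score
    return joint, score
```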

relation.yaml

There is no 'src/configs/relation.yaml'. Could you release the file? Thanks a lot!

Missing Files in Google Drive

I cannot find some of the files when running the validation scripts for marginal trajectory prediction and conditional trajectory prediction. Specifically, checkpoint paths such as densetnt.raster.vehicle.1 and densetnt.reactor.Tgt-Rgt.raster_inf.v2v are in the scripts but are not on the provided Google Drive page.

I tried to use the checkpoints whose names are most similar to the ones in the scripts, but using them results in missing keys and size mismatches. Additionally, on the Google Drive page I also cannot find the specified RELATION_PRED_DIR and INFLUENCER_PRED_DIR whose names match those mentioned in the scripts.

Could you please provide the missing files and more detailed instructions for running the validation scripts?

Difference between 'influencer' and 'dominant'

In src/dataset_waymo.py, line 290, there are some comments as follows:
' # [id1, id2, label, relation_type].'
' # id1 always influencer, id2 always reactor'
' # interaction_label: 1 - bigger id agent dominant, 0 - smaller id agent dominant, 2 - no relation'
' # agent_pair_label: 1 - v2v, 2 - v2p, 3 - v2c, 4 - others'

I am confused about 'influencer' and 'dominant'. In my understanding, the 'influencer' dominates the interaction, so why are there three cases in 'interaction_label'?
Also, when will the code used for generating the ground-truth labels be released?
Looking forward to your reply.
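Reading the quoted comments at face value, and assuming 'dominant' is a synonym for 'influencer', the label seems to encode only which of the pair is the influencer relative to id ordering, plus a no-relation case. A small sketch of that interpretation (this is my reading, not confirmed by the authors):

```python
def interaction_label(influencer_id, reactor_id, has_relation=True):
    """Encode the relation per the comments in src/dataset_waymo.py.

    Assumption: 'dominant' == 'influencer'. The label then records
    whether the influencer has the bigger id (1) or the smaller id (0),
    or 2 when the pair has no relation, which explains the three cases.
    """
    if not has_relation:
        return 2
    return 1 if influencer_id > reactor_id else 0
```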

CUDA out of memory error

Hi,

Thanks for making the code available for such an interesting work.

I tried to train the Relation Prediction model on GPUs with 32 GB of memory, but it led to a CUDA out-of-memory error. I also tried training with vgg16(pretrain=True) but still ran into the same problem. So I wonder what kind of GPU you used for your experiments, and how you managed memory during training.
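Not an authoritative answer, but gradient accumulation is a standard way to fit the same effective batch size into less memory: process several small micro-batches, accumulate gradients, and step once. A PyTorch sketch (function and argument names are illustrative, not from this repo):

```python
import torch

def train_accumulated(model, optimizer, loss_fn, micro_batches):
    """One optimizer step over several micro-batches.

    Peak activation memory scales with the micro-batch size, while the
    effective batch size is the sum of all micro-batches, so this often
    resolves CUDA OOM without changing the training dynamics much.
    """
    optimizer.zero_grad()
    n = len(micro_batches)
    for x, y in micro_batches:
        loss = loss_fn(model(x), y) / n   # average over micro-batches
        loss.backward()                   # grads accumulate across calls
    optimizer.step()
```

Mixed-precision training (torch.cuda.amp) is another common lever for the same problem.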

What is the input for relation predictor?

Hi, I am really looking forward to the code!

Before that, may I ask what the input to the relation predictor is? Is it the whole graph, the car's trajectory, or something else?

Looking forward to your kind answer!

Assertion Error when Training Relation Predictor

I filtered the interactive training data using the command in the readme. That command generated 1000 files in the output directory. Their total size is 242G.

However, when running the command for "Training Relation Predictor", an assertion error was thrown: len(self.ex_list) is 0, which is smaller than the required 200.

[screenshot of the assertion error]

Marginal trajectory prediction error.

When running evaluation at step 2, marginal trajectory prediction, I get a tensor-related error. The script I ran can be seen at the top of the screenshot.
[screenshot of the command and the error]

Changing hidden size leads to runtime error

When training the conditional predictor, changing hidden_size from 128 to 64 results in a runtime error: RuntimeError: The expanded size of the tensor (64) must match the existing size (128) at non-singleton dimension 1. Target sizes: [10, 64]. Tensor sizes: [10, 128]

Similarly, when training the relation predictor, hidden_size == 64 does not work.

What other hyperparameters do I need to change simultaneously when changing hidden_size?
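One pattern that produces exactly this message (an assumption on my part, since the full trace is not shown): broadcasting a tensor whose non-singleton dimension is still 128 against a target built for 64, e.g. a weight, buffer, or loaded checkpoint tensor that did not pick up the new hidden_size. A toy PyTorch reproduction:

```python
import torch

# Minimal reproduction of the quoted error: expand() refuses to shrink
# a non-singleton dimension, so a leftover 128-wide tensor broadcast
# against a 64-wide target fails with the same wording.
old = torch.zeros(10, 128)          # built with hidden_size = 128
try:
    old.expand(10, 64)              # code now expects hidden_size = 64
except RuntimeError as err:
    print(err)
```

If this is the cause, searching the model code for a hard-coded 128 (or restoring only checkpoints trained at the new width) would be the first thing to try.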
