epic-kitchens / epic-kitchens-55-action-models

EPIC-KITCHENS-55 baselines for Action Recognition

License: Other

Language: Python 100.00%

Topics: epic-kitchens, action-recognition, pytorch, pytorch-hub, tsn, trn

epic-kitchens-55-action-models's Introduction


EPIC-KITCHENS-55 is the largest dataset in first-person (egocentric) vision: 55 hours of multi-faceted, non-scripted recordings in native environments, i.e. the wearers' homes, capturing all daily activities in the kitchen over multiple days. Annotations are collected using a novel 'live' audio commentary approach.

Authors

Dima Damen (1), Hazel Doughty (1), Giovanni Maria Farinella (3), Sanja Fidler (2), Antonino Furnari (3), Evangelos Kazakos (1), Davide Moltisanti (1), Jonathan Munro (1), Toby Perrett (1), Will Price (1), Michael Wray (1)

  • (1) University of Bristol
  • (2) University of Toronto
  • (3) University of Catania

Contact: [email protected]

Citing

When using the dataset, kindly reference:

@INPROCEEDINGS{Damen2018EPICKITCHENS,
   title={Scaling Egocentric Vision: The EPIC-KITCHENS Dataset},
   author={Damen, Dima and Doughty, Hazel and Farinella, Giovanni Maria and Fidler, Sanja and
           Furnari, Antonino and Kazakos, Evangelos and Moltisanti, Davide and Munro, Jonathan
           and Perrett, Toby and Price, Will and Wray, Michael},
   booktitle={European Conference on Computer Vision (ECCV)},
   year={2018}
} 

(Check publication here)

Dataset Details

Ground Truth

We provide ground truth for action segments and object bounding boxes.

  • Objects: Full bounding boxes of narrated objects for every annotated frame.
  • Actions: Split into narrations and action labels:
    • Narrations containing the narrated sentence with the timestamp.
    • Action labels containing the verb and noun labels along with the start and end times of the segment.

Dataset Splits

The dataset comprises three splits with the corresponding ground truth:

  • Training set - Full ground truth.
  • Seen Kitchens (S1) Test set - Start/end times only.
  • Unseen Kitchens (S2) Test set - Start/end times only.

Initially, we are releasing the full ground truth for the training set only, in order to run the action and object challenges.

Important Files

Additional Files

We direct the reader to RDSF for the videos and rgb/flow frames.

We provide html and pdf alternatives to this README which are auto-generated.

Files Structure

EPIC_train_action_labels.csv

CSV file containing 14 columns:

| Column Name | Type | Example | Description |
| --- | --- | --- | --- |
| uid | int | 6374 | Unique ID of the segment. |
| video_id | string | P03_01 | Video the segment is in. |
| narration | string | close fridge | English description of the action provided by the participant. |
| start_timestamp | string | 00:23:43.847 | Start time in HH:mm:ss.SSS of the action. |
| stop_timestamp | string | 00:23:47.212 | End time in HH:mm:ss.SSS of the action. |
| start_frame | int | 85430 | Start frame of the action (WARNING: only for frames extracted as detailed in Video Information). |
| stop_frame | int | 85643 | End frame of the action (WARNING: only for frames extracted as detailed in Video Information). |
| participant_id | string | P03 | ID of the participant. |
| verb | string | close | Parsed verb from the narration. |
| noun | string | fridge | First parsed noun from the narration. |
| verb_class | int | 3 | Numeric ID of the parsed verb's class. |
| noun_class | int | 10 | Numeric ID of the parsed noun's class. |
| all_nouns | list of string (1 or more) | ['fridge'] | List of all parsed nouns from the narration. |
| all_noun_classes | list of int (1 or more) | [10] | List of numeric IDs corresponding to all of the parsed nouns' classes from the narration. |

Please note we have included a Python pickle file for ease of use. It contains a pandas DataFrame with the same layout as above and was created with pickle protocol 2 on pandas version 0.22.0.
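
For example, the annotations can be loaded as follows (a minimal sketch; the `.pkl` filename is assumed to mirror the CSV name, and old protocol-2 pickles may need a reasonably compatible pandas version to load):

```python
import pandas as pd

# Load the pickled DataFrame (filename assumed to mirror the CSV's).
labels = pd.read_pickle('EPIC_train_action_labels.pkl')
print(labels.head())

# The CSV loads too, but list-valued columns (all_nouns, all_noun_classes)
# come back as plain strings and need parsing (see the bounding-box
# example under EPIC_train_object_labels.csv below).
labels_csv = pd.read_csv('EPIC_train_action_labels.csv')
```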

EPIC_train_invalid_labels.csv

CSV file containing 14 columns:

| Column Name | Type | Example | Description |
| --- | --- | --- | --- |
| uid | int | 6374 | Unique ID of the segment. |
| video_id | string | P03_01 | Video the segment is in. |
| narration | string | close fridge | English description of the action provided by the participant. |
| start_timestamp | string | 00:23:43.847 | Start time in HH:mm:ss.SSS of the action. |
| stop_timestamp | string | 00:23:47.212 | End time in HH:mm:ss.SSS of the action. |
| start_frame | int | 85430 | Start frame of the action (WARNING: only for frames extracted as detailed in Video Information). |
| stop_frame | int | 85643 | End frame of the action (WARNING: only for frames extracted as detailed in Video Information). |
| participant_id | string | P03 | ID of the participant. |
| verb | string | close | Parsed verb from the narration. |
| noun | string | fridge | First parsed noun from the narration. |
| verb_class | int | 3 | Numeric ID of the parsed verb's class. |
| noun_class | int | 10 | Numeric ID of the parsed noun's class. |
| all_nouns | list of string (1 or more) | ['fridge'] | List of all parsed nouns from the narration. |
| all_noun_classes | list of int (1 or more) | [10] | List of numeric IDs corresponding to all of the parsed nouns' classes from the narration. |

Please note we have included a Python pickle file for ease of use. It contains a pandas DataFrame with the same layout as above and was created with pickle protocol 2 on pandas version 0.22.0.

EPIC_train_action_narrations.csv

CSV file containing 5 columns:

Note: The start/end timestamp refers to the start/end time of the narration, not the action itself.

| Column Name | Type | Example | Description |
| --- | --- | --- | --- |
| participant_id | string | P03 | ID of the participant. |
| video_id | string | P03_01 | Video the segment is in. |
| start_timestamp | string | 00:23:43.847 | Start time in HH:mm:ss.SSS of the narration. |
| stop_timestamp | string | 00:23:47.212 | End time in HH:mm:ss.SSS of the narration. |
| narration | string | close fridge | English description of the action provided by the participant. |

EPIC_train_object_labels.csv

CSV file containing 6 columns:

| Column Name | Type | Example | Description |
| --- | --- | --- | --- |
| noun_class | int | 20 | Integer value representing the class in EPIC_noun_classes.csv. |
| noun | string | bag | Original string name for the object. |
| participant_id | string | P01 | ID of participant. |
| video_id | string | P01_01 | Video the object was annotated in. |
| frame | int | 056581 | Frame number of the annotated object. |
| bounding_boxes | list of 4-tuple (0 or more) | [(76, 1260, 462, 186)] | Annotated boxes with format (<top:int>, <left:int>, <height:int>, <width:int>). |
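
Since CSV cells are plain text, the bounding_boxes column arrives as a stringified list; a minimal parsing sketch using the standard library (file path assumed):

```python
import ast

import pandas as pd

objects = pd.read_csv('EPIC_train_object_labels.csv')

# bounding_boxes holds a stringified list of (top, left, height, width)
# tuples, possibly empty; ast.literal_eval restores the Python objects.
objects['bounding_boxes'] = objects['bounding_boxes'].apply(ast.literal_eval)

for top, left, height, width in objects.iloc[0]['bounding_boxes']:
    print(top, left, height, width)
```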

EPIC_train_object_action_correspondence.csv

CSV file containing 5 columns:

| Column Name | Type | Example | Description |
| --- | --- | --- | --- |
| participant_id | string | P01 | ID of participant. |
| video_id | string | P01_01 | Video the frames are part of. |
| object_frame | int | 56581 | Frame number of the object detection image from object_detection_images. |
| action_frame | int | 56638 | Frame number of the corresponding image in the released frames for action recognition in frames_rgb_flow. |
| timestamp | string | 00:00:00.00 | Timestamp in HH:mm:ss.SS corresponding to the frame. |

Please note we have included a Python pickle file for ease of use. It contains a pandas DataFrame with the same layout as above and was created with pickle protocol 2 on pandas version 0.22.0.
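
A sketch of how this correspondence might be used to translate object-annotation frame numbers into the action-recognition frame numbering (filenames assumed; the merge keys follow the column tables above):

```python
import pandas as pd

objects = pd.read_csv('EPIC_train_object_labels.csv')
corr = pd.read_pickle('EPIC_train_object_action_correspondence.pkl')

# Join on video and object frame number to pull in the corresponding
# frame number used in frames_rgb_flow.
merged = objects.merge(
    corr[['video_id', 'object_frame', 'action_frame']],
    left_on=['video_id', 'frame'],
    right_on=['video_id', 'object_frame'],
    how='left',
)
print(merged[['video_id', 'frame', 'action_frame']].head())
```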

EPIC_test_s1_object_action_correspondence.csv

CSV file containing 5 columns:

| Column Name | Type | Example | Description |
| --- | --- | --- | --- |
| participant_id | string | P01 | ID of participant. |
| video_id | string | P01_11 | Video containing the object s1 test frames. |
| object_frame | int | 33601 | Frame number of the object detection image from object_detection_images. |
| action_frame | int | 33635 | Frame number of the corresponding image in the released frames for action recognition in frames_rgb_flow. |
| timestamp | string | 00:09:20.58 | Timestamp in HH:mm:ss.SS corresponding to the frames. |

Please note we have included a Python pickle file for ease of use. It contains a pandas DataFrame with the same layout as above and was created with pickle protocol 2 on pandas version 0.22.0.

EPIC_test_s2_object_action_correspondence.csv

CSV file containing 5 columns:

| Column Name | Type | Example | Description |
| --- | --- | --- | --- |
| participant_id | string | P09 | ID of participant. |
| video_id | string | P09_05 | Video containing the object s2 test frames. |
| object_frame | int | 15991 | Frame number of the object detection image from object_detection_images. |
| action_frame | int | 16007 | Frame number of the corresponding image in the released frames for action recognition in frames_rgb_flow. |
| timestamp | string | 00:04:26.78 | Timestamp in HH:mm:ss.SS corresponding to the frames. |

Please note we have included a Python pickle file for ease of use. It contains a pandas DataFrame with the same layout as above and was created with pickle protocol 2 on pandas version 0.22.0.

EPIC_test_s1_object_video_list.csv

CSV file listing the videos used to obtain the object s1 test frames. The frames can be obtained from RDSF under object_detection_images/test. Please test all frames from this folder for the videos listed in this CSV.

| Column Name | Type | Example | Description |
| --- | --- | --- | --- |
| video_id | string | P01_11 | Video containing the object s1 test frames. |
| participant_id | string | P01 | ID of the participant. |

EPIC_test_s2_object_video_list.csv

CSV file listing the videos used to obtain the object s2 test frames. The frames can be obtained from RDSF under object_detection_images/test. Please test all frames from this folder for the videos listed in this CSV.

| Column Name | Type | Example | Description |
| --- | --- | --- | --- |
| video_id | string | P01_11 | Video containing the object s2 test frames. |
| participant_id | string | P01 | ID of the participant. |

EPIC_test_s1_timestamps.csv

CSV file containing 7 columns:

| Column Name | Type | Example | Description |
| --- | --- | --- | --- |
| uid | int | 1924 | Unique ID of the segment. |
| participant_id | string | P01 | ID of the participant. |
| video_id | string | P01_11 | Video the segment is in. |
| start_timestamp | string | 00:00:00.000 | Start time in HH:mm:ss.SSS of the action. |
| stop_timestamp | string | 00:00:01.890 | End time in HH:mm:ss.SSS of the action. |
| start_frame | int | 1 | Start frame of the action (WARNING: only for frames extracted as detailed in Video Information). |
| stop_frame | int | 93 | End frame of the action (WARNING: only for frames extracted as detailed in Video Information). |

Please note we have included a Python pickle file for ease of use. It contains a pandas DataFrame with the same layout as above and was created with pickle protocol 2 on pandas version 0.22.0.

EPIC_test_s2_timestamps.csv

CSV file containing 7 columns:

| Column Name | Type | Example | Description |
| --- | --- | --- | --- |
| uid | int | 15582 | Unique ID of the segment. |
| participant_id | string | P09 | ID of the participant. |
| video_id | string | P09_01 | Video the segment is in. |
| start_timestamp | string | 00:00:01.970 | Start time in HH:mm:ss.SSS of the action. |
| stop_timestamp | string | 00:00:03.090 | End time in HH:mm:ss.SSS of the action. |
| start_frame | int | 118 | Start frame of the action (WARNING: only for frames extracted as detailed in Video Information). |
| stop_frame | int | 185 | End frame of the action (WARNING: only for frames extracted as detailed in Video Information). |

Please note we have included a Python pickle file for ease of use. It contains a pandas DataFrame with the same layout as above and was created with pickle protocol 2 on pandas version 0.22.0.

EPIC_noun_classes.csv

CSV file containing 3 columns:

Note: a colon represents a compound noun with the more generic noun first. So pan:dust should be read as dust pan.

| Column Name | Type | Example | Description |
| --- | --- | --- | --- |
| noun_id | int | 2 | ID of the noun class. |
| class_key | string | pan:dust | Key of the noun class. |
| nouns | list of string (1 or more) | ['pan:dust', 'dustpan'] | All nouns within the class (includes the key). |
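
A small sketch of working with these keys (the reversal rule comes from the note above; parsing of the stringified nouns list is as for the other files):

```python
import ast

def display_name(class_key: str) -> str:
    # The more generic noun comes first in the key, so reverse the
    # colon-separated parts for a natural reading: 'pan:dust' -> 'dust pan'.
    return ' '.join(reversed(class_key.split(':')))

assert display_name('pan:dust') == 'dust pan'

# The nouns column is a stringified list, e.g.:
nouns = ast.literal_eval("['pan:dust', 'dustpan']")
```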

EPIC_verb_classes.csv

CSV file containing 3 columns:

| Column Name | Type | Example | Description |
| --- | --- | --- | --- |
| verb_id | int | 3 | ID of the verb class. |
| class_key | string | close | Key of the verb class. |
| verbs | list of string (1 or more) | ['close', 'close-off', 'shut'] | All verbs within the class (includes the key). |

EPIC_descriptions.csv

CSV file containing 4 columns:

| Column Name | Type | Example | Description |
| --- | --- | --- | --- |
| video_id | string | P01_01 | ID of the video. |
| date | string | 30/04/2017 | Date on which the video was shot. |
| time | string | 13:49:00 | Local recording time of the video. |
| description | string | prepared breakfast with soy milk and cereals | Description of the activities contained in the video. |

EPIC_many_shot_verbs.csv

CSV file containing the many shot verbs. A verb class is considered many shot if it appears more than 100 times in training. (NOTE: this file is derived from EPIC_train_action_labels.csv; check out the accompanying notebook demonstrating how we compute these classes. A minimal sketch follows the table below.)

| Column Name | Type | Example | Description |
| --- | --- | --- | --- |
| verb_class | int | 1 | Numeric ID of the verb class. |
| verb | string | put | Verb corresponding to the verb class. |
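
A minimal sketch of the computation from the definition above (the official notebook is authoritative; this just applies the >100 threshold to the training labels):

```python
import pandas as pd

labels = pd.read_csv('EPIC_train_action_labels.csv')

# A verb class is many shot if it appears more than 100 times in training.
counts = labels['verb_class'].value_counts()
many_shot_verb_classes = sorted(counts[counts > 100].index)
print(many_shot_verb_classes)
```

The same threshold on noun_class gives the many shot nouns below.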

EPIC_many_shot_nouns.csv

CSV file containing the many shot nouns. A noun class is considered many shot if it appears more than 100 times in training. (NOTE: this file is derived from EPIC_train_action_labels.csv; check out the accompanying notebook demonstrating how we compute these classes.)

| Column Name | Type | Example | Description |
| --- | --- | --- | --- |
| noun_class | int | 3 | Numeric ID of the noun class. |
| noun | string | tap | Noun corresponding to the noun class. |

EPIC_many_shot_actions.csv

CSV file containing the many shot actions. An action class (composed of a verb class and noun class) is considered many shot if BOTH the verb class and noun class are many shot AND the action class appears in training at least once. (NOTE: this file is derived from EPIC_train_action_labels.csv; check out the accompanying notebook demonstrating how we compute these classes.)

| Column Name | Type | Example | Description |
| --- | --- | --- | --- |
| action_class | (int, int) | (9, 84) | Pair of numeric IDs: first the verb class, then the noun class. |
| verb_class | int | 9 | Numeric ID of the verb class. |
| verb | string | move | Verb corresponding to the verb class. |
| noun_class | int | 84 | Numeric ID of the noun class. |
| noun | string | sausage | Noun corresponding to the noun class. |

EPIC_video_info.csv

CSV file containing information for each video:

| Column Name | Type | Example | Description |
| --- | --- | --- | --- |
| video | string | P01_01 | Video ID. |
| resolution | string | 1920x1080 | Resolution of the video, in WIDTHxHEIGHT format. |
| duration | float | 1652.152817 | Duration of the video, in seconds. |
| fps | float | 59.9400599400599 | Frame rate of the video. |
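
A quick sanity check against this file, flagging the non-default videos listed under Video Information below (a sketch; filename as above):

```python
import pandas as pd

info = pd.read_csv('EPIC_video_info.csv')

# Flag videos deviating from the default 1920x1080 @ 59.94 FPS capture.
default = (info['resolution'] == '1920x1080') & (info['fps'].round(2) == 59.94)
print(info.loc[~default, ['video', 'resolution', 'fps']])
```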

File Downloads

Due to the size of the dataset, we provide scripts for downloading parts of the dataset.

Note: These scripts will work for Linux and Mac. For Windows users a bash installation should work.

These scripts replicate the folder structure of the dataset release, found here.

If you wish to download part of the dataset instructions can be found here.

Video Information

Videos are recorded in 1080p at 59.94 FPS on a GoPro Hero 5 with a linear field of view. A minority of videos were shot at different resolutions, fields of view, or frame rates due to participant error or camera issues. These videos, identified using ffprobe, are:

  • 1280x720: P12_01, P12_02, P12_03, P12_04.
  • 2560x1440: P12_05, P12_06
  • 29.97 FPS: P09_07, P09_08, P10_01, P10_04, P11_01, P18_02, P18_03
  • 48 FPS: P17_01, P17_02, P17_03, P17_04
  • 90 FPS: P18_09

The GoPro Hero 5 was also set to drop the frame rate in low-light conditions to preserve exposure, leading to variable FPS in some videos. If you wish to extract frames, we suggest resampling at 60 FPS to mitigate issues with variable FPS; this can be achieved in a single step with FFmpeg:

ffmpeg -i "P##_**.MP4" -vf "scale=-2:256" -q:v 4 -r 60 "P##_**/frame_%010d.jpg"

where ## is the Participant ID and ** is the video ID.
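
Under that 60 FPS extraction, timestamps convert to frame numbers as in this sketch (the exact rounding convention of the released start/stop frames is not documented here, so expect off-by-one differences):

```python
from datetime import datetime

def timestamp_to_frame(timestamp: str, fps: float = 60.0) -> int:
    # Parse HH:mm:ss.SSS and convert to a 1-indexed frame number,
    # matching FFmpeg's %010d numbering, which starts at 1.
    t = datetime.strptime(timestamp, '%H:%M:%S.%f')
    seconds = t.hour * 3600 + t.minute * 60 + t.second + t.microsecond / 1e6
    return int(seconds * fps) + 1

print(timestamp_to_frame('00:23:43.847'))  # 85431, one off the released 85430
```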

Optical flow was extracted using a fork of gpu_flow made available on GitHub. We set the parameters: stride = 2, dilation = 3, bound = 25, and size = 256.

License

All files in this dataset are copyright by us and published under the Creative Commons Attribution-NonCommercial 4.0 International License, found here. This means that you must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use. You may not use the material for commercial purposes.

Disclaimer

EPIC-KITCHENS-55 and EPIC-KITCHENS-100 were collected as tools for research in computer vision; however, it is worth noting that the datasets may have unintended biases (including those of a societal, gender, or racial nature).

Changelog

See release history for changelog.


epic-kitchens-55-action-models's Issues

HTTP 404 Error

I am not able to download the TSM or MTRN models from PyTorch Hub; I get an HTTP 404 error.

Exception while importing pretrainedmodels.

Hey,

I am trying to load the pretrained models using PyTorch Hub. However, there is an exception in tsn.py at line 35 (import pretrainedmodels). Can you please help me figure out how to fix this?

Hyperparameter for reproducing TSM RGB 8 frame

Hi,

Could you tell me the hyperparameters for reproducing the TSM result? I cannot find them in your workshop report or in this repo.

Are they the same as those reported in the TSM paper for other datasets, i.e. ImageNet pre-trained, lr 0.01, lr decay at epochs 20 and 40, training for 50 epochs, weight decay 1e-5?

Thank you.

Test Clips

Hello,

maybe the question is quite stupid, but I am trying to get the following test clips:

{"v_id": "P01_18", "start": 27355, "stop": 27463, "verb": 1, "noun": 18}
{"v_id": "P22_15", "start": 12976, "stop": 13066, "verb": 0, "noun": 2}
{"v_id": "P01_18", "start": 113331, "stop": 113497, "verb": 1, "noun": 21}
{"v_id": "P02_09", "start": 36194, "stop": 36274, "verb": 3, "noun": 10}
{"v_id": "P06_09", "start": 264, "stop": 341, "verb": 2, "noun": 10}
{"v_id": "P01_18", "start": 175489, "stop": 175556, "verb": 9, "noun": 6}
{"v_id": "P03_20", "start": 655, "stop": 724, "verb": 0, "noun": 27}
{"v_id": "P22_16", "start": 78997, "stop": 79095, "verb": 2, "noun": 8}
{"v_id": "P30_11", "start": 6381, "stop": 6487, "verb": 1, "noun": 7}
{"v_id": "P01_19", "start": 9240, "stop": 9302, "verb": 0, "noun": 21}
{"v_id": "P22_16", "start": 36970, "stop": 37023, "verb": 12, "noun": 28}
{"v_id": "P24_07", "start": 16243, "stop": 16327, "verb": 2, "noun": 12}
{"v_id": "P06_09", "start": 78742, "stop": 78830, "verb": 0, "noun": 18}

I have downloaded the dataset, so I have: frames_rgb_flow object_detection_images, with train and test splits.
But I cannot find the video IDs mentioned above; could you please help me?
Thanks in advance!

Training code?

Hi @willprice

thanks for providing this. I can't find the training code (loop). Just to confirm, is it out of the scope of this repo?

I guess you used the original authors' code. Do you happen to have it public?
I found your tsm-fork, but not the code for EPIC-Kitchens 🙂

Thanks!
Victor

Question about Pytorch Hub

Hi. I am currently trying to use the models on PyTorch Hub. I can see how your code extracts the features and logits; however, I really have no idea how to use those extracted logits for classification.

Is there any way to get the name of the label by using the extracted noun and verb logits?
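
One possible approach, sketched under the assumption that the repo exposes its models through torch.hub (see hubconf.py for the real entry-point names and signatures) and that each head's argmax indexes into the class CSVs above:

```python
import pandas as pd
import torch

# Entry-point name, class counts (125 verbs / 352 nouns), and argument
# order are assumptions here; consult the repo's hubconf.py.
model = torch.hub.load('epic-kitchens/epic-kitchens-55-action-models',
                       'TSN', (125, 352), 8, 'RGB', pretrained='epic-kitchens')
model.eval()

verbs = pd.read_csv('EPIC_verb_classes.csv', index_col='verb_id')
nouns = pd.read_csv('EPIC_noun_classes.csv', index_col='noun_id')

# Dummy clip: batch of 1, with 8 segments x 3 RGB channels stacked on dim 1.
verb_logits, noun_logits = model(torch.randn(1, 8 * 3, 224, 224))
verb_id = verb_logits.argmax(dim=-1).item()
noun_id = noun_logits.argmax(dim=-1).item()
print(verbs.loc[verb_id, 'class_key'], nouns.loc[noun_id, 'class_key'])
```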

How to convert a stack of RGB images into correct input format?

(60, 256, 456, 3) is the shape of the video segment, where the video consists of 60 frames, 256 is the width, 456 is the height, and 3 is the number of channels.

I convert this into a list of PIL images and use the transform code provided, and I get an output of shape (180, 224, 224), which is not the input shape the model expects.

How do I fix this?
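
A hedged sketch of one way to bridge the shapes, assuming the transform stacks frames along the channel axis (so 180 = 60 frames x 3 channels) and an 8-segment RGB model wants (batch, segments x 3, height, width); note TSN-style models normally sample one snippet per segment rather than using all frames:

```python
import torch

clip = torch.randn(180, 224, 224)    # stand-in for the transformed clip
frames = clip.view(60, 3, 224, 224)  # un-stack back to (T, C, H, W)

# Pick 8 evenly spaced frames (a simplification of TSN's segment sampling).
segment_count = 8
idx = torch.linspace(0, frames.size(0) - 1, segment_count).round().long()
inputs = frames[idx].reshape(1, segment_count * 3, 224, 224)
print(inputs.shape)  # torch.Size([1, 24, 224, 224])
```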

Annotation problem

Hi,
It's very nice of you to provide the weights for us.
I have a naive question: is the annotation you use while training the class key or the exact verb?
I want to use the model as a plug-in, so I am not so familiar with the background knowledge. Can you help me?

Test with very low accuracy.

Sorry to disturb you again; it was very kind of you to tell me how to load the checkpoints.
I use the weights with the code you suggested, discarding the weights for predicting nouns. Unfortunately, I get very low accuracy when testing with RGB frames to predict verbs only. I didn't test my whole dataset: even when testing with frames I took manually from EPIC training video P01 (the frames in each test contain only one action, like cut; I take about 200 frames per test and manually make sure there are no frames of other actions), I can only achieve very low accuracy. For cut, I get a correct top-5 answer only 1 time out of 20 tests, and I have no idea why. (For the annotation I use the 125 verbs, i.e. the class keys, and I strictly follow their order.)
I see you said you achieved about 58% accuracy for verbs using RGB only; I want to know how to reproduce this result. Thank you very much!

Result format is different with the dataset

Hi,
Thank you so much for your pre-trained models. I was trying to use them to replicate the baseline.
Since the input is 8 frames, our output looks like this (it produces predictions for every 8 frames):
[image]
However, the reference annotations and test CSV are in a different format: action segments are longer than 8 frames.
Could you therefore help me understand how to generate correct outputs for each action ID?
Thank you so much!
[image]

Test problem, cannot load the module.

Sorry to bother again,
I want to use the pretrained weights directly in the TRN-pytorch project by Bolei Zhou, but it fails.

The command I run is:

$ python test_video.py --arch BNInception --dataset EPIC --modality RGB --weights pretrain/TRN_arch=BNInception_modality=RGB_segments=8-a770bfbd.pth.tar --frame_folder sample_data/frames_cut --consensus_type TRN

To load the module, according to your code, I use:

net = TSN(num_class,
          args.test_segments,
          args.modality,
          base_model=args.arch,
          consensus_type=args.consensus_type,
          img_feature_dim=args.img_feature_dim, print_spec=False)

checkpoint = torch.load(args.weights)

net.load_state_dict(checkpoint['state_dict'])
net.cuda().eval()

And the traceback is:

Missing key(s) in state_dict: "consensus.classifier.1.weight", "consensus.classifier.1.bias", "consensus.classifier.3.weight", "consensus.classifier.3.bias". 
	Unexpected key(s) in state_dict: "consensus.pre_classifier.1.weight", "consensus.pre_classifier.1.bias", "consensus.classifiers.0.weight", "consensus.classifiers.0.bias", "consensus.classifiers.1.weight", "consensus.classifiers.1.bias".

Can you help me out? I want to use this to predict actions in the kitchen, which will be used to build an ontology for a robot.

PyTorch 1.5 compatibility

Running tests/test_checkpoints.py produces the following stack trace under PyTorch 1.5:

___________________________________________________________ test_demo[args16] ___________________________________________________________

args = ['tsn', '--tsn-consensus-type=max']

@pytest.mark.parametrize("args", example_cli_args)
def test_demo(args: List[str]):
>       demo.main(demo.parser.parse_args(args))

tests/test_checkpoints.py:93:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
demo.py:135: in main
output = model(input)
../.miniconda3/envs/epic-action-models/lib/python3.7/site-packages/torch/nn/modules/module.py:550: in __call__
result = self.forward(*input, **kwargs)
tsn.py:442: in forward
return self.logits(features)
tsn.py:416: in logits
output_verb = self.consensus(logits_verb)
../.miniconda3/envs/epic-action-models/lib/python3.7/site-packages/torch/nn/modules/module.py:550: in __call__
result = self.forward(*input, **kwargs)
ops/basic_ops.py:46: in forward
return SegmentConsensus(self.consensus_type, self.dim)(input)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <ops.basic_ops.SegmentConsensus object at 0x7f90019447d0>
args = (tensor([[[-0.0125,  0.0014,  0.1074, -0.0365, -0.0816,  0.0358,  0.0590,
0.0926, -0.0277,  0.0524,  0.0169..., -0.0461,  0.0107,
0.0453,  0.0764, -0.0235,  0.0126, -0.0139, -0.0625]]],
grad_fn=<ViewBackward>),)
kwargs = {}

def __call__(self, *args, **kwargs):
raise RuntimeError(
>           "Legacy autograd function with non-static forward method is deprecated. "
"Please use new-style autograd function with static forward method. "
"(Example: https://pytorch.org/docs/stable/autograd.html#torch.autograd.Function)")
E       RuntimeError: Legacy autograd function with non-static forward method is deprecated. Please use new-style autograd function with static forward method. (Example: https://pytorch.org/docs/stable/autograd.html#torch.autograd.Function)

../.miniconda3/envs/epic-action-models/lib/python3.7/site-packages/torch/autograd/function.py:145: RuntimeError
--------------------------------------------------------- Captured stdo

Basically, we need to update the consensus function (or strip it out) to work with the new-style autograd functions in PyTorch 1.5:

In previous versions of PyTorch, there were two ways to write autograd Functions. We deprecated one of them in 1.3.0 and dropped support for it entirely in 1.5.0. Old-style autograd Functions will no longer work in user code.

These Functions can be identified by not having staticmethod forward and backward functions (see the example below). Please see the current documentation for how to write new-style Functions.
-- https://github.com/pytorch/pytorch/releases/tag/v1.5.0
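
For reference, a new-style Function keeps its state on ctx and uses static forward/backward methods. A minimal sketch for an averaging consensus (not necessarily the repo's actual fix, which could also just replace the Function with plain tensor ops):

```python
import torch

class AvgSegmentConsensus(torch.autograd.Function):
    # New-style autograd Function: static methods, state stored on ctx.

    @staticmethod
    def forward(ctx, input, dim):
        ctx.dim = dim
        ctx.segment_count = input.size(dim)
        return input.mean(dim=dim, keepdim=True)

    @staticmethod
    def backward(ctx, grad_output):
        # Spread the incoming gradient evenly over the averaged segments.
        shape = list(grad_output.shape)
        shape[ctx.dim] = ctx.segment_count
        return grad_output.expand(*shape) / ctx.segment_count, None

# Called via .apply() instead of instantiating the Function:
x = torch.randn(2, 8, 125, requires_grad=True)
out = AvgSegmentConsensus.apply(x, 1)
```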

Where is the dataset processed?

Hi, dear Prof,

Thank you first for this excellent repo~
I am currently running TSN and TRN on the EPIC-Kitchens dataset. However, I couldn't find the exact place where you process the dataset. Maybe I was just too careless to find it; would you mind pointing it out?
Thank you very much~~

Faithfully
Gao

Support torch hub

To make it even easier to leverage model definitions, we should support torch hub.
