aichallenger / ai_challenger_2017
AI Challenger, a platform providing open datasets and programming competitions for artificial intelligence (AI) talents around the world.
Home Page: https://challenger.ai/
Any example? Thanks!
Got this error when executing run.sh after running prepare.sh successfully:
n_baseline/train$ ./run.sh
python: can't open file '../tensor2tensor/bin/t2t-trainer': [Errno 2] No such file or directory
python: can't open file '../tensor2tensor/bin/t2t-datagen': [Errno 2] No such file or directory
python: can't open file '../tensor2tensor/bin/t2t-trainer': [Errno 2] No such file or directory
System: Ubuntu 16.04
Is there something I missed?
These are the datasets for the three tracks of the first AI Challenger competition: Caption, Keypoint, and Scene.
The datasets belong to Sinovation Ventures (创新工场) and may be used for research purposes only; commercial use is not permitted.
All rights of interpretation regarding the datasets belong to Sinovation Ventures. Download links are kindly provided below, strictly for academic research only.
Caption train: Link: https://pan.baidu.com/s/1YziBPLiU2WmE0j35oaXeKw password: asix
Caption validation: Link: https://pan.baidu.com/s/1p_0V89d4wfxk-7f7QsU9rg password: dcnn
Keypoint train: Link: https://pan.baidu.com/s/1soAkYImmQrXnSsxcF-YjxA password: 43om
Keypoint validation: Link: https://pan.baidu.com/s/16pnIBBRqU16noVlZh-ksYA password: ti41
Scene train: Link: https://pan.baidu.com/s/1ZOJosoulaW2U_E9nHM8NeA password: vou3
Scene validation: Link: https://pan.baidu.com/s/1qHVnZ8T59ioetzVv14-grQ password: 5ogk
Originally posted by @zhhezhhe in #42 (comment)
Has anyone downloaded the ai_challenger_translation dataset? Could anybody share it with me? Thanks!
I can't download the AI Challenger dataset from the official dataset website. Is there any other link I can use to download it?
Thanks in advance...
@kingulight @hitvoice @zhhezhhe @leonlulu @bmyan
Values of the "caption" key appear to be UTF-8 Unicode characters, but they are decoded and saved as ASCII escapes.
I'm not sure whether this is the required format or just an oversight in the code, although it may have no effect.
Even when saving the results as JSON following the sample code, I still get a format error:

```python
with io.open('result6.json', 'w', encoding='utf-8') as fd:
    fd.write(unicode(json.dumps(data, ensure_ascii=False, sort_keys=True,
                                indent=2, separators=(',', ': '))))
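Under Python 3 the `unicode()` wrapper in the snippet above no longer exists, and `json.dump` can write non-ASCII characters straight to a UTF-8 text file. A minimal sketch, assuming Python 3; the file name and sample data are illustrative, not from the evaluation code:

```python
# A minimal sketch of writing non-ASCII captions to JSON under Python 3.
# Sample data below is made up for illustration.
import json

data = [{"image_id": "3cd32bef", "caption": "一个女人坐在海边"}]

with open("result6.json", "w", encoding="utf-8") as fd:
    json.dump(data, fd, ensure_ascii=False, sort_keys=True,
              indent=2, separators=(",", ": "))
```

With `ensure_ascii=False` the captions are stored as readable UTF-8 characters rather than `\uXXXX` escapes, which matches what the annotation files look like.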
Hey guys, could you please share the data on Google Drive, Dropbox, S3, or somewhere else where datasets can be hosted easily and cheaply?
Baidu Cloud (百度云盘) is really inconvenient, and the links keep expiring after a few days.
Hello, all of the dataset links have expired. Could you please provide a new set?
In the directory AI_Challenger/AI_Challenger_eval_public/caption_eval, running
python run_evaluations.py -submit ./data/id_to_test_caption.json -ref ./data/id_to_words.json
produces the following error:
loading annotations into memory...
0:00:00
creating index...
index created!
Loading and preparing results...
Building prefix dict from the default dictionary ...
Loading model from cache c:\users\lc\appdata\local\temp\jieba.cache
Loading model cost 0.420 seconds.
Prefix dict has been built succesfully.
{'error': 1}
OS: Windows 10
Python version: 2.7.13
Looking at the Chinese caption evaluation code, I found that the generated sentences are first segmented with jieba, and then segmented again with PTBTokenizer before the four metrics are computed. Is there a reason for doing it this way? Why not do just a single segmentation step?
NotFoundError: /home/sk/ai_challenger_caption_train_20170902/caption_train_images_20170902/e55ba6db106e61f50802ed3547b325ced2e32a3a.jpg
python run_evaluations_test.py
Error message:
loading annotations into memory...
0:00:00.000170
creating index...
index created!
list indices must be integers, not str
.loading annotations into memory...
0:00:00.000142
creating index...
index created!
list indices must be integers, not str
Hi, why did I get no feedback at all after submitting my image captioning results? I organized the JSON exactly as you require. By rights it should return feedback like the MSCOCO online server does.
Hello, I looked at your keypoint evaluation file, 'keypoint_eval.py', and I think the method has a problem.
With your method, if an image contains 2 people but the ground truth annotates only 1, and my model correctly predicts 2, the result is penalized, because of this code:
oks_all = np.concatenate((oks_all, np.max(oks, axis=0)), axis=0)
oks_num += np.max(oks.shape)
I think this should be changed to:
oks_all = np.concatenate((oks_all, np.max(oks, axis=1)), axis=0)
oks_num += np.min(oks.shape)
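A small sketch of the axis question raised above. The shape and orientation of the OKS matrix here are assumptions for illustration, not taken from keypoint_eval.py:

```python
# Suppose 2 predictions but only 1 annotated person: a 2x1 OKS matrix
# (rows = predictions, columns = ground-truth annotations). Values are
# made up for illustration.
import numpy as np

oks = np.array([[0.9],
                [0.2]])

per_gt = np.max(oks, axis=0)    # best score per ground truth  -> [0.9]
per_pred = np.max(oks, axis=1)  # best score per prediction    -> [0.9, 0.2]

# np.max(oks.shape) counts 2 entries while np.min(oks.shape) counts 1,
# so the choice changes the denominator of the final average.
print(per_gt, per_pred, np.max(oks.shape), np.min(oks.shape))
```

Which aggregation is "right" depends on whether the metric should score each annotated person or each predicted person, which is exactly the disagreement in the issue.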
Hangs while computing METEOR.
Error: Could not find or load main class edu.stanford.nlp.process.PTBTokenizer
Error: Could not find or load main class edu.stanford.nlp.process.PTBTokenizer
setting up scorers...
computing Bleu score...
{'reflen': 0, 'guess': [0, 0, 0, 0], 'testlen': 0, 'correct': [0, 0, 0, 0]}
ratio: 1e-06
Bleu_1: 0.000
Bleu_2: 0.000
Bleu_3: 0.000
Bleu_4: 0.000
computing METEOR score...
Hi, what is the license for this data? Can it be used for commercial purposes? Thanks.
Link: https://pan.baidu.com/s/1THPLarF9A2njs7FsCA_QoQ password: 9t16
The previous link is no longer valid
```python
train_step, cross_entropy, logits, keep_prob = network.inference(features, one_hot_labels)
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_labels, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
```
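The accuracy computation in that snippet can be mirrored in plain NumPy. A sketch with made-up logits and labels (not the repo's actual tensors):

```python
import numpy as np

# Made-up stand-ins for the tensors in the snippet above.
logits = np.array([[2.0, 0.5, 0.1],
                   [0.3, 0.2, 1.5]])
one_hot_labels = np.array([[1, 0, 0],
                           [0, 1, 0]])

# Same idea as tf.equal(tf.argmax(...), tf.argmax(...)) + reduce_mean:
# compare predicted class indices to label class indices, then average.
correct_prediction = np.argmax(logits, axis=1) == np.argmax(one_hot_labels, axis=1)
accuracy = correct_prediction.astype(np.float32).mean()
print(accuracy)  # row 0 predicts class 0 (correct), row 1 class 2 (wrong) -> 0.5
```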
I submitted the authentication, but it has been stuck in audit status. Can you help me resolve this? I want to download the data for research.
The downloaded keypoint validation set ai_challenger_keypoint_validation_20170911 contains no JSON annotation file. Did I download the wrong package, or was it simply not provided?
Key Variable/Adam_1 not found in checkpoint
Has anyone downloaded the dataset? Could anybody share it with me?
https://github.com/AIChallenger/AI_Challenger/blob/a41de91a6544fea6cd3a68879a6f3150b14d8fd9/AI_Challenger_eval_public/scene_classification_eval/scene_eval.py#L66
warnning -> warning
https://github.com/AIChallenger/AI_Challenger/blob/a41de91a6544fea6cd3a68879a6f3150b14d8fd9/AI_Challenger_eval_public/scene_classification_eval/scene_eval.py#L114
Evalation -> Evaluation
Forgive me for being a bit obsessive about these things :)
For both validation and test, captions are generated one image at a time; on a 1080 GPU a single evaluation run takes several hours. Has no one else run into this?
What's the meaning of elements of delta?
Could you provide the script that generates id_to_word.json? Or the Chinese evaluation dataset, annotations, and so on? Thanks!
For the translation task, running the provided run.sh directly produces the error below. Does anyone know why?
Traceback (most recent call last):
File "../tensor2tensor/tensor2tensor/bin/t2t-datagen", line 213, in
tf.app.run()
File "/home/ninghongke/tensorflow/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "../tensor2tensor/tensor2tensor/bin/t2t-datagen", line 160, in main
raise ValueError(error_msg)
ValueError: You must specify one of the supported problems to generate data for:
Traceback (most recent call last):
File "../tensor2tensor/tensor2tensor/bin/t2t-trainer", line 96, in
tf.app.run()
File "/home/ninghongke/tensorflow/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "../tensor2tensor/tensor2tensor/bin/t2t-trainer", line 92, in main
schedule=FLAGS.schedule)
File "/home/ninghongke/tensorflow/lib/python2.7/site-packages/tensor2tensor/utils/trainer_utils.py", line 352, in run
hparams=hparams)
File "/home/ninghongke/tensorflow/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/learn_runner.py", line 193, in run
experiment = wrapped_experiment_fn(run_config=run_config, hparams=hparams)
File "/home/ninghongke/tensorflow/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/learn_runner.py", line 79, in wrapped_experiment_fn
experiment = experiment_fn(run_config, hparams)
File "/home/ninghongke/tensorflow/lib/python2.7/site-packages/tensor2tensor/utils/trainer_utils.py", line 123, in experiment_fn
run_config=run_config)
File "/home/ninghongke/tensorflow/lib/python2.7/site-packages/tensor2tensor/utils/trainer_utils.py", line 135, in create_experiment
run_config=run_config)
File "/home/ninghongke/tensorflow/lib/python2.7/site-packages/tensor2tensor/utils/trainer_utils.py", line 183, in create_experiment_components
add_problem_hparams(hparams, FLAGS.problems)
File "/home/ninghongke/tensorflow/lib/python2.7/site-packages/tensor2tensor/utils/trainer_utils.py", line 245, in add_problem_hparams
raise LookupError(error_msg)
LookupError: translate_enzh not in the set of supported problems:
iebsn@iebsn-HP-Z440-Workstation:~/project/scene-classification/AI_Challenger_2017-master/Evaluation/scene_classification_eval$ python scene_eval.py --submit ./submit.json --ref ./ref.json
warnning: lacking image 7df98fcd7a85281f845910af403ba65ca1494b60.jpg in your submission file
Evaluation time of your result: 0.014003 s
{'warning': ['Inconsistent number of images between submission and reference data \n', u'lacking image 7df98fcd7a85281f845910af403ba65ca1494b60.jpg in your submission file \n'], 'score': '0.8', 'error': []}
When I run the eval file, I get this error. How do I solve it? Thanks.
in caption_validation_annotations_20170910.json
"image_id": "3cd32bef87ed98572bac868418521852ac3f6a70.jpg"
So should the predicted JSON file for submission just look like the following?
[
{
"caption": "一个面对着蓝天大海的女人坐在海边的沙滩椅子",
"image_id": "3cd32bef87ed98572bac868418521852ac3f6a70"
}
]
i.e., remove '.jpg' from each image_id?
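One way to strip the extension when building the submission. This is only a sketch; whether the server actually requires IDs without '.jpg' is the open question above, and the sample entry is taken from the annotation quoted earlier:

```python
import json
import os

# Hypothetical annotation entries, following the format quoted above.
annotations = [
    {"caption": "一个面对着蓝天大海的女人坐在海边的沙滩椅子",
     "image_id": "3cd32bef87ed98572bac868418521852ac3f6a70.jpg"},
]

submission = [
    {"caption": a["caption"],
     # os.path.splitext drops the trailing extension: 'abc.jpg' -> 'abc'
     "image_id": os.path.splitext(a["image_id"])[0]}
    for a in annotations
]

print(json.dumps(submission, ensure_ascii=False, indent=2))
```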
As part of my research on the MMPose library, I require access to a dataset for testing and validation purposes. I am particularly interested in datasets with various human poses in different environments. This data will be used exclusively for academic purposes and handled confidentially. Your support would greatly contribute to the success of my thesis. Thank you for considering my request.
Hello!
First of all, thank you for providing such high-quality data.
May these data be used for commercial purposes, or are they licensed for research use only?
Thanks 🙏
Can you tell me why only one person is labeled here? Why isn't the little boy labeled? What is the standard for labeling a person?
Why does this image have 3 person labels,
but this one has 3 as well????
I really cannot understand your standard for judging what counts as a person, and the images above were chosen randomly from 50 images. For the following images, I don't know how many persons you labeled.
I hope you can give me a single, clear standard for judging a person. Thanks.
Our BLEU_4 score (0.73298) on the LB is approximately equal to the BLEU_1 score (0.73563) we measured offline on eval dataset, and much higher than our offline BLEU_4 score (0.41735).
I wonder if the score displayed on the LB is wrong.
Hi,
Can anyone provide a link to download the chinese2english dataset?
I have been struggling to download it forever.
Thanks.