zichaohuang / transe
A TensorFlow implementation of the TransE model
In the evaluation, you compute distance_tail_prediction and distance_head_prediction. The idea of TransE is to make the distance of positive samples as small as possible, yet when computing Hits@10 you use tf.nn.top_k to find the 10 indices with the largest distance. Shouldn't it be the 10 smallest?
Thanks for sharing! Could you please provide a link to download the data used by your code? Thanks.
During the training stage, why does the L2 norm of the entity and relation embeddings remain unchanged (always 1)? I'm really confused about this. Could you please explain?
What would be the input format for the files "train.txt" (assuming "test.txt" and "valid.txt" follow the same format as this one) , "entity2id.txt", "relation2id.txt"?
Is there any restriction for the kind of values accepted as entity labels?
-----Start evaluation-----
Traceback (most recent call last):
File "main.py", line 45, in
main()
File "main.py", line 40, in main
kge_model.launch_evaluation(session=sess)
File "D:\Python\TransE\src\model.py", line 153, in launch_evaluation
'out_queue': rank_result_queue}).start()
File "D:\Python\Anaconda3\envs\tensorflow\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "D:\Python\Anaconda3\envs\tensorflow\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "D:\Python\Anaconda3\envs\tensorflow\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "D:\Python\Anaconda3\envs\tensorflow\lib\multiprocessing\popen_spawn_win32.py", line 65, in init
reduction.dump(process_obj, to_child)
File "D:\Python\Anaconda3\envs\tensorflow\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
TypeError: can't pickle _thread.lock objects
(tensorflow) D:\Python\TransE\src>Traceback (most recent call last):
File "", line 1, in
File "D:\Python\Anaconda3\envs\tensorflow\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "D:\Python\Anaconda3\envs\tensorflow\lib\multiprocessing\spawn.py", line 115, in _main
self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
Sorry, I'm a beginner. When using your code, I ran into the problem listed above.
When training with the original train_wn18.sh, memory grows by roughly 800 MB at every evaluation. Is this a memory leak?
If eval_freq is set smaller, the growth should be clearly observable.
Environment: Ubuntu 18.04 + TensorFlow 1.13.0-rc0.
I watched the memory with tracemalloc but found nothing abnormal.
In the evaluate method of model.py, the rank calculation seems wrong to me.
If I understand correctly, tf.nn.top_k(tf.reduce_sum(tf.abs(distance_head_prediction), axis=1), k=self.kg.n_entity)
will return the entities that are furthest from the predicted head, instead of the closest. So shouldn't this line be
tf.nn.top_k(-tf.reduce_sum(tf.abs(distance_head_prediction), axis=1), k=self.kg.n_entity)
As the title says.
Dear author:
Thank you for your code, it helps me a lot!
Running sh train_fb15k.sh hangs at line 168 of model.py, at eval_result_queue.join(), but running sh train_wn18.sh works without any problem. Killing it with Ctrl+C shows:
Traceback (most recent call last):
File "main.py", line 44, in
main()
File "main.py", line 40, in main
kge_model.launch_evaluation(session=sess)
File "/Users/shishang.zy/Desktop/Table/TransE-master/src/model.py", line 168, in launch_evaluation
eval_result_queue.join()
File "/anaconda3/lib/python3.6/multiprocessing/queues.py", line 305, in join
self._cond.wait()
File "/anaconda3/lib/python3.6/multiprocessing/synchronize.py", line 262, in wait
return self._wait_semaphore.acquire(True, timeout)
KeyboardInterrupt
Hi there, thank you so much for your work, it is very helpful. I just have a small question about your code: it seems you didn't apply the filtered setting (removing all other correct entities from the ranking) in the evaluation part. Is that correct?
This may have a large impact on the final result.
Hongming
How can the learned embedding vectors be exported from the model? Is there a corresponding module in the code?
Taking a specific tail entity prediction task (s, p, ?) as an example, is the head entity s always ranked 0 in idx_head_prediction[::-1]? I.e., is the most promising tail entity ranked 1 in idx_head_prediction[::-1]?
I want to check the head and tail prediction Hits@10 on the different relation types (1-to-1, 1-to-N, N-to-1 and N-to-N) of the TransE model, like the results mentioned in the paper. How can I perform head and tail prediction per relation category on the FB15k dataset?
I ran your code and printed Hits@1, Hits@3 and Hits@10. It is strange that the Hits@1 is lower than in other TransE implementations: in KB2E, the raw Hits@1 is 0.168069 and the filtered Hits@1 is 0.364274, while the results from this code are 0.04 and 0.12. (Changing the hyperparameters does not change this range.)
Thanks for your code and patience~ :)
Hello! In model.py of your TransE code, what do n_generator, n_rank_calculator and ckpt_dir mean, respectively?
Hello. If I print out the learned embeddings, how do they correspond to the entities and relations?
Hi, thanks again for your code. There is one small thing I don't understand: in the launch_training function of model.py there is
for _ in range(self.n_generator):
raw_batch_queue.put(None)
Why is this needed? What does put(None) mean?
thank you
What are the mean rank and Hits@10 results in your experiment?