
An Attention-Based User Behavior Modeling Framework for Recommendation

License: Apache License 2.0

Languages: Python 99.75%, Shell 0.25%
Topics: recommendation-system

atrank's Introduction

ATRank

An Attention-Based User Behavior Modeling Framework for Recommendation

Introduction

This is an implementation of the paper ATRank: An Attention-Based User Behavior Modeling Framework for Recommendation. Chang Zhou, Jinze Bai, Junshuai Song, Xiaofei Liu, Zhengchao Zhao, Xiusi Chen, Jun Gao. AAAI 2018.

Bibtex:

@inproceedings{zhou2018atrank,
  author = {Chang Zhou and Jinze Bai and Junshuai Song and Xiaofei Liu and Zhengchao Zhao and Xiusi Chen and Jun Gao},
  title = {ATRank: An Attention-Based User Behavior Modeling Framework for Recommendation},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year = {2018}
}

This repository also contains all the competitors' methods mentioned in the paper. Some implementations draw on the Transformer and Text-CNN.

Note that the heterogeneous behavior dataset used in the paper is private, so you cannot run the multi-behavior code directly. However, you can run the code on the Amazon dataset directly and review the heterogeneous-behavior code.

Requirements

  • Python >= 3.6.1
  • NumPy >= 1.12.1
  • Pandas >= 0.20.1
  • TensorFlow >= 1.4.0 (earlier versions may also work, but have not been tested)
  • GPU with >= 10 GB of memory

Download dataset and preprocess

  • Step 1: Download the Amazon product dataset (Electronics category), which has 498,196 products and 7,824,482 records, and extract it to the raw_data/ folder.
mkdir raw_data/;
cd utils;
bash 0_download_raw.sh;
  • Step 2: Convert the raw data to a pandas DataFrame and remap the categorical IDs (a sketch of the remapping idea follows after the commands).
python 1_convert_pd.py;
python 2_remap_id.py
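
The remapping step replaces raw string IDs with contiguous integer indices so they can serve as embedding-table rows. A minimal sketch of the idea (the column names and the build_map helper are illustrative, not the exact contents of 2_remap_id.py):

    import pandas as pd

    # Illustrative: remap raw string user/item IDs to dense integer indices.
    reviews = pd.DataFrame({'reviewerID': ['A1', 'A2', 'A1'],
                            'asin': ['B001', 'B002', 'B001']})

    def build_map(df, col):
        keys = sorted(df[col].unique())
        idx = dict(zip(keys, range(len(keys))))  # raw ID -> dense integer ID
        df[col] = df[col].map(idx)
        return idx

    asin_map = build_map(reviews, 'asin')        # items remapped to 0..item_count-1
    user_map = build_map(reviews, 'reviewerID')  # users remapped to 0..user_count-1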

Training and Evaluation

This implementation contains not only the ATRank method but also all the competitors' methods, including BPR, CNN, RNN and RNN+Attention. The training procedure for every method is as follows:

  • Step 1: Choose a method and enter the folder.
cd atrank;

Alternatively, you can run the other competitors' methods by entering the corresponding folder (cd bpr, cd cnn, cd rnn, or cd rnn_att) and following the same instructions below.

Note that the heterogeneous behavior dataset used in the paper is private, so you cannot run this part of the code directly. However, you can review the neural network code we use in the paper with cd multi.

  • Step 2: Build the dataset adapted to the current method.
python build_dataset.py
  • Step 3: Start training and evaluation with the default arguments in background mode.
python train.py >log.txt 2>&1 &
  • Step 4: Check the training and evaluation progress.
tail -f log.txt
tensorboard --logdir=save_path

Note that the evaluation procedure alternates with the training procedure, so running the command above may take five to ten hours to converge completely, depending on the method. If you need to kill the job immediately:

nvidia-smi  # Fetch the PID of current training process.
kill -9 PID # Kill the target process.

You can change the training and network hyperparameters via command-line arguments, e.g. python train.py --learning_rate=0.1. To see all command-line arguments, use python train.py --help.
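
Such arguments are typically declared with TensorFlow's flag API; a minimal sketch of how flags like these are defined (the flag names and default values here are illustrative, not the repository's full list):

    import tensorflow as tf

    # Declare command-line flags with defaults and help strings.
    tf.app.flags.DEFINE_float('learning_rate', 0.1, 'learning rate')             # illustrative default
    tf.app.flags.DEFINE_integer('train_batch_size', 32, 'training batch size')   # illustrative flag
    FLAGS = tf.app.flags.FLAGS

    def main(_):
        print('learning_rate =', FLAGS.learning_rate)

    if __name__ == '__main__':
        tf.app.run()  # parses the flags, then calls main()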

Results

You can always use tensorboard --logdir=save_path to see the AUC curve and inspect the various embedding histograms. The collected AUC curve on the test set is as follows:

[Figure: AUC curve on the test set]

atrank's People

Contributors

changzhou-pku, jinze1994


atrank's Issues

ValueError: setting an array element with a sequence.

When running the rnn, rnn_att, and bpr models, I hit this error in the following train functions:

    def train(self, sess, uij, l):
        loss, _ = sess.run([self.loss, self.train_op], feed_dict={
            self.u: uij[0],
            self.i: uij[1],
            self.y: uij[2],
            self.hist_i: uij[3],
            self.sl: uij[4],
            self.lr: l,
            })
        return loss

    def train(self, sess, uij, l):
        loss, _ = sess.run([self.loss, self.train_op], feed_dict={
            self.u: uij[0],
            self.i: uij[1],
            self.y: uij[2],
            self.hist_i: uij[2],
            self.sl: uij[4],
            self.lr: l,
            })
        return loss

I hope the author can update the code to fix this problem.
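
A common cause of this NumPy error is feeding ragged, variable-length histories straight into a feed_dict; they normally have to be padded into a rectangular array first. A sketch of such padding, offered only as an assumption about the cause and not as the repository's actual input pipeline:

    import numpy as np

    # Pad variable-length histories to a rectangular [batch, max_len] array.
    hists = [[3, 7, 9], [1], [4, 4, 2, 8]]              # ragged per-user histories
    sl = np.array([len(h) for h in hists], dtype=np.int32)
    hist_i = np.zeros((len(hists), sl.max()), dtype=np.int64)
    for k, h in enumerate(hists):
        hist_i[k, :len(h)] = h                          # left-align, pad the rest with 0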

dataset

Could you offer a public dataset, or at least a sample of the data format, so that I can follow your approach in my own experiments? I don't know what the data should look like. Thank you.

TypeError: Object of type 'HelpFlag' is not JSON serializable

Traceback (most recent call last):
  File "train.py", line 184, in <module>
    tf.app.run()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "train.py", line 181, in main
    train()
  File "train.py", line 132, in train
    model = create_model(sess, config, cate_list)
  File "train.py", line 54, in create_model
    print(json.dumps(config, indent=4), flush=True)
  File "/usr/lib/python3.6/json/__init__.py", line 238, in dumps
    **kw).encode(obj)
  File "/usr/lib/python3.6/json/encoder.py", line 201, in encode
    chunks = list(chunks)
  File "/usr/lib/python3.6/json/encoder.py", line 430, in _iterencode
    yield from _iterencode_dict(o, _current_indent_level)
  File "/usr/lib/python3.6/json/encoder.py", line 404, in _iterencode_dict
    yield from chunks
  File "/usr/lib/python3.6/json/encoder.py", line 437, in _iterencode
    o = _default(o)
  File "/usr/lib/python3.6/json/encoder.py", line 180, in default
    o.__class__.__name__)
TypeError: Object of type 'HelpFlag' is not JSON serializable

Some question about ATRank

It seems that your ATRank implementation is different from the one described in the paper. For example, the paper uses bilinear attention, but the code uses scaled dot-product attention; the paper uses vanilla attention, but the code uses multi-head attention.
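
For reference, scaled dot-product attention (the variant the issue says the code uses) can be sketched as follows; this is a generic illustration, not a quote from the repository:

    import tensorflow as tf

    def scaled_dot_attention(queries, keys, values, mask=None):
        # queries: [B, Tq, d], keys/values: [B, Tk, d], mask: [B, Tq, Tk] of 0/1
        d = queries.get_shape().as_list()[-1]
        scores = tf.matmul(queries, keys, transpose_b=True) / (d ** 0.5)
        if mask is not None:
            scores += (1.0 - tf.cast(mask, tf.float32)) * (-1e9)  # hide padded keys
        weights = tf.nn.softmax(scores)            # softmax over the last (key) axis
        return tf.matmul(weights, values)          # [B, Tq, d]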

High cpu usage when running the code

I ran the ATRank network on my own dataset. Using the top command, I found that the code consumes a lot of CPU (about 1500%). I am sure the network causes this, because if I remove the network the usage is normal, and I am sure the network is running on the GPU. If I use session_conf = tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1) to limit the number of CPU threads TensorFlow can use, CPU usage is no longer high, but training becomes very slow. Are there some ops in ATRank that run on the CPU? Do you also see high CPU usage when you run the code?
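
For context, limiting TensorFlow's CPU thread pools as described in the issue looks roughly like this (a sketch, not the repository's training script):

    import tensorflow as tf

    # Cap both thread pools so TensorFlow does not spawn one worker per core.
    session_conf = tf.ConfigProto(
        intra_op_parallelism_threads=1,   # threads used inside a single op
        inter_op_parallelism_threads=1)   # threads used to run independent ops
    with tf.Session(config=session_conf) as sess:
        pass  # build the graph and call sess.run(...) here as usual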

why is hist_i padded with 0?

In /ATRank/utils/2_remap_id_raw.py, the asin (item) IDs are remapped to 0 ~ item_count.
In /ATRank/atrank/input.py, hist_i is padded with the value 0. So I guess there are two kinds of 0 in the hist_i matrix: one is the real asin ID 0 and the other is the padding value 0.
When tf.nn.embedding_lookup is applied to hist_i to generate h_emb in /ATRank/atrank/model.py, won't h_emb be filled with many copies of real item 0's embedding vector that do not actually appear in the purchase history?

Am I missing anything in your code, or misunderstanding some part of it?
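
For reference, the usual way such padding is made harmless is to mask the embedded history by the true sequence length (sl) before any pooling or attention. A minimal sketch of that masking, written as an assumption about the intended behavior rather than a quote from model.py:

    import tensorflow as tf

    item_count, d = 1000, 64                              # illustrative sizes
    item_emb_w = tf.get_variable('item_emb_w', [item_count, d])
    hist_i = tf.placeholder(tf.int32, [None, None])       # [B, T] padded item IDs
    sl = tf.placeholder(tf.int32, [None])                 # [B] true history lengths

    h_emb = tf.nn.embedding_lookup(item_emb_w, hist_i)    # [B, T, d]
    mask = tf.sequence_mask(sl, tf.shape(hist_i)[1], dtype=tf.float32)  # 1.0 for real items
    h_emb = h_emb * tf.expand_dims(mask, -1)              # zero out padded positions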

atrank

It seems that your ATRank implementation is different from the one described in the paper.

How to train ATRank with different types of behaviours?

In multi/model.py, it seems we can only train the model with one type of behavior. But in your paper, you concatenate features from different behavior groups. How can I use different behavior types at the same time?
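
A rough sketch of the idea described in the paper (project each behavior group into a shared latent space, then concatenate along the time axis); the names and sizes are illustrative, not taken from multi/model.py:

    import tensorflow as tf

    d_model = 64                                          # shared latent size (illustrative)
    # One [B, T_k, d_k] embedded sequence per behavior type, e.g. clicks and purchases.
    behavior_seqs = {
        'click':    tf.placeholder(tf.float32, [None, None, 32]),
        'purchase': tf.placeholder(tf.float32, [None, None, 16]),
    }

    projected = []
    for name, seq in behavior_seqs.items():
        # Project each behavior group into the same latent space.
        projected.append(tf.layers.dense(seq, d_model, name='proj_' + name))
    all_behaviors = tf.concat(projected, axis=1)          # [B, sum(T_k), d_model]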
