
dien's People

Contributors

mouna99, zhougr1993

dien's Issues

embedding size

What is the maximum size of the embedding tables for the datasets used in the paper? The embedding dimension (number of columns) is 18, but how many rows does each table have?
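For reference, the tables follow the usual convention of rows = vocabulary size and columns = embedding dimension. A sketch in this repo's terms; the variable names and the vocabulary sizes below are assumptions for illustration only:

    import tensorflow as tf

    # hypothetical vocabulary sizes; the real values come from the uid/mid/cat
    # vocabulary pickles built during preprocessing
    n_uid, n_mid, n_cat, EMBEDDING_DIM = 100000, 50000, 1000, 18

    # rows = vocabulary size, columns = embedding dimension (18 in the paper's setting)
    uid_embeddings = tf.get_variable("uid_embedding_var", [n_uid, EMBEDDING_DIM])
    mid_embeddings = tf.get_variable("mid_embedding_var", [n_mid, EMBEDDING_DIM])
    cat_embeddings = tf.get_variable("cat_embedding_var", [n_cat, EMBEDDING_DIM])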

Can anyone explain how to get this program running?

When I run train.py in PyCharm, I get this error:
Traceback (most recent call last):
File "E:/CTR/dien-master/script/train.py", line 233, in
if sys.argv[1] == 'train':
IndexError: list index out of range

Then, running python train.py train DIEN from the command line also fails:
Traceback (most recent call last):
File "train.py", line 234, in
train(model_type=sys.argv[2], seed=SEED)
File "train.py", line 145, in train
model = Model_DIN_V2_Gru_Vec_attGru_Neg(n_uid, n_mid, n_cat, EMBEDDING_DIM, HIDDEN_SIZE, ATTENTION_SIZE)
File "e:\CTR\dien-master\script\model.py", line 387, in init
self.build_fcn_net(inp, use_dice=True)
File "e:\CTR\dien-master\script\model.py", line 102, in build_fcn_net
self.optimizer = tf.train.AdamOptimizer(learning_rate=self.lr).minimize(self.loss)
AttributeError: 'Model_DIN_V2_Gru_Vec_attGru_Neg' object has no attribute 'lr'
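A note on the first error: it occurs whenever train.py is launched without command-line arguments, which is PyCharm's default run configuration. A minimal guard, as a sketch (the usage message is an assumption, not the repo's actual behavior; train() and SEED are defined in train.py):

    import sys

    if __name__ == '__main__':
        if len(sys.argv) < 3:
            sys.exit("usage: python train.py {train|test} MODEL_TYPE")
        if sys.argv[1] == 'train':
            train(model_type=sys.argv[2], seed=SEED)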

DIN has a different AUC on public datasets?

The original DIN paper reports 0.882 AUC on the Amazon Electronics dataset, but in your paper DIN only achieves 0.760. What are the differences between the settings of these two experiments?

QUESTION

[self.noclk_mid_his_batch_embedded[:, :, 0, :], self.noclk_cat_his_batch_embedded[:, :, 0, :]], -1)  # 0 means only using the first negative item ID; 3 item IDs are input in line 24.

I cannot understand this. Could you explain it in detail?
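For context, a shape-level sketch of what that slice does, assuming the negative-sample embeddings are laid out as [batch, time, num_negatives, emb], which is what the quoted comment implies:

    import tensorflow as tf

    # hypothetical shapes: batch B, history length T, 3 sampled negatives, emb dim 18
    noclk_mid_his = tf.placeholder(tf.float32, [None, None, 3, 18])   # [B, T, 3, E]
    noclk_cat_his = tf.placeholder(tf.float32, [None, None, 3, 18])   # [B, T, 3, E]

    # [:, :, 0, :] keeps only the first of the 3 sampled negatives per position,
    # then item and category embeddings are concatenated along the last axis
    first_neg = tf.concat(
        [noclk_mid_his[:, :, 0, :], noclk_cat_his[:, :, 0, :]], -1)  # [B, T, 2*E]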

Overfitting problem

With the DIEN model, the test set starts to overfit after one epoch of training. The AUC curves are visualized below:
[image: train/test AUC curves]
Is this because the code is missing a regularization term? The paper doesn't seem to mention overfitting.

It seems that global step (tf.train.get_global_step) has not been increased. Current value (could be stable): 0 vs previous value: 0. You could increase the global step by passing tf.train.get_global_step() to Optimizer.apply_gradients or Optimizer.minimize.

I hit this error when porting the model into my own framework. I also noticed that the DIEN source code does not use the optimizer's automatic global_step, and instead maintains its own itr variable. Did the author run into this problem too and handle it this way?

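For reference, the warning goes away once a global step tensor is threaded through minimize(); a minimal self-contained sketch, with stand-in parameter and loss:

    import tensorflow as tf

    w = tf.Variable(1.0)          # stand-in parameter so the sketch runs on its own
    loss = tf.square(w)           # stand-in loss; use your model's loss here
    global_step = tf.train.get_or_create_global_step()
    train_op = tf.train.AdamOptimizer(learning_rate=0.001).minimize(
        loss, global_step=global_step)  # global step now increments on every update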

How does din_attention() handle user behavior histories of different lengths?

queries = tf.tile(query, [1, tf.shape(facts)[1]])
queries = tf.reshape(queries, tf.shape(facts))
din_all = tf.concat([queries, facts, queries-facts, queries*facts], axis=-1)
d_layer_1_all = tf.layers.dense(din_all, 80, activation=tf.nn.sigmoid, name='f1_att' + stag)
d_layer_2_all = tf.layers.dense(d_layer_1_all, 40, activation=tf.nn.sigmoid, name='f2_att' + stag)
d_layer_3_all = tf.layers.dense(d_layer_2_all, 1, activation=None, name='f3_att' + stag)
d_layer_3_all = tf.reshape(d_layer_3_all, [-1, 1, tf.shape(facts)[1]])
scores = d_layer_3_all

Looking at the code above, it doesn't seem to handle histories of different lengths at all. Is that OK?
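For comparison, the usual DIN-style masking pattern replaces the scores of positions beyond each user's real history length with a very large negative number before softmax; a sketch, assuming a [batch, max_len] 0/1 mask is available:

    import tensorflow as tf

    scores = tf.placeholder(tf.float32, [None, 1, None])    # [B, 1, T] attention logits
    mask = tf.placeholder(tf.float32, [None, None])         # [B, T] 0/1 history mask
    key_masks = tf.expand_dims(tf.cast(mask, tf.bool), 1)   # [B, 1, T]
    paddings = tf.ones_like(scores) * (-2 ** 32 + 1)        # effectively -inf
    scores_masked = tf.where(key_masks, scores, paddings)   # padded slots get -inf
    weights = tf.nn.softmax(scores_masked)                  # padded slots get ~0 weight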

split_by_user with file local_test???

Reading local_aggretor.py, it generates these files:
ftrain = open("local_train", "w")
ftest = open("local_test", "w")

But in split_by_user.py, the local_test file is used to generate both the training and test files:
ftrain = open("local_train_splitByUser", "w")
ftest = open("local_test_splitByUser", "w")
What kind of magic is this???

Two questions

  1. In build_fcn_net, why does ctr_loss only consider the probability of the positive sample? The standard cross-entropy is y*log(y') + (1-y)*log(1-y'), but the code seems to be missing the (1-y) part?
  2. None of the tf.layers.batch_normalization calls pass the training argument; won't that cause problems at prediction time?
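On point 1, the standard two-sided cross-entropy the poster describes would look like this sketch (y and y_hat are assumed placeholders for the labels and the predicted click probability):

    import tensorflow as tf

    y = tf.placeholder(tf.float32, [None])       # labels, assumed 0/1
    y_hat = tf.placeholder(tf.float32, [None])   # predicted click probability
    eps = 1e-8                                   # numerical-stability epsilon
    ctr_loss = -tf.reduce_mean(
        y * tf.log(y_hat + eps) + (1.0 - y) * tf.log(1.0 - y_hat + eps))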

some problems

Nice work! But there are some problems that really confuse me.
1. Negative sampling: the negative instance is only required to differ from the current candidate item. What if it is contained in the historical items?

    while True:
        asin_neg_index = random.randint(0, len(item_list) - 1)
        asin_neg = item_list[asin_neg_index]
        if asin_neg == asin:
            continue
        items[1] = asin_neg
        print>>fo, "0" + "\t" + "\t".join(items) + "\t" + meta_map[asin_neg]

2. In your model, the user_id embedding is fed into the prediction layer, but your dataset is split by user_id, which means user_ids in the test set never appear in the training set.

    while True:
        rand_int = random.randint(1, 10)
        noclk_line = fi.readline().strip()
        clk_line = fi.readline().strip()
        if noclk_line == "" or clk_line == "":
            break
        if rand_int == 2:
            print >> ftest, noclk_line
            print >> ftest, clk_line
        else:
            print >> ftrain, noclk_line
            print >> ftrain, clk_line

About the input: how to combine ID1 with ID1's tag1, tag2, tag3

This question may have nothing to do with this model; it just puzzles me.
At input time we have the user's historical click ID1, together with ID1's tag1, tag2, tag3. How should they be combined? ID1 is straightforward, just embed it; but there are three tags, so how should they be assembled? Embed them as well and then take the sum (or average)?
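One common answer, as a sketch: embed each tag and pool (sum or average) over the tag axis before concatenating with the item embedding. All names and sizes below are hypothetical:

    import tensorflow as tf

    # hypothetical tables and inputs, for illustration only
    tag_table = tf.get_variable("tag_embedding_var", [10000, 18])     # [n_tags, E]
    item_table = tf.get_variable("item_embedding_var", [50000, 18])   # [n_items, E]
    tag_ids = tf.placeholder(tf.int32, [None, None, 3])   # [B, T, 3 tags per item]
    item_ids = tf.placeholder(tf.int32, [None, None])     # [B, T]

    tag_emb = tf.nn.embedding_lookup(tag_table, tag_ids)      # [B, T, 3, E]
    tag_pooled = tf.reduce_mean(tag_emb, axis=2)              # [B, T, E] (avg pooling)
    item_emb = tf.nn.embedding_lookup(item_table, item_ids)   # [B, T, E]
    his_emb = tf.concat([item_emb, tag_pooled], axis=-1)      # [B, T, 2*E]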

Cannot reproduce the 0.9281 AUC on the Books dataset

I downloaded the source code and ran it on our lab's server, hoping to reproduce the experimental results, but I could not reach the Books AUC of 0.9281 for DIEN reported in Table 2: Results (AUC) on public datasets.

itr: 0
iter: 1000 ----> train_loss: 1.6092 ---- train_accuracy: 0.5897 ---- tran_aux_loss: 1.2785
test_auc: 0.6695 ----test_loss: 1.5685 ---- test_accuracy: 0.6192 ---- test_aux_loss: 1.3612
iter: 2000 ----> train_loss: 1.5228 ---- train_accuracy: 0.6465 ---- tran_aux_loss: 1.2142
test_auc: 0.7304 ----test_loss: 1.4456 ---- test_accuracy: 0.6646 ---- test_aux_loss: 1.3199
iter: 3000 ----> train_loss: 1.4194 ---- train_accuracy: 0.6822 ---- tran_aux_loss: 1.1306
test_auc: 0.7627 ----test_loss: 1.3634 ---- test_accuracy: 0.6907 ---- test_aux_loss: 1.2456
iter: 4000 ----> train_loss: 1.3485 ---- train_accuracy: 0.7085 ---- tran_aux_loss: 1.0726
test_auc: 0.7847 ----test_loss: 1.3135 ---- test_accuracy: 0.7130 ---- test_aux_loss: 1.2510
iter: 5000 ----> train_loss: 1.3035 ---- train_accuracy: 0.7262 ---- tran_aux_loss: 1.0380
test_auc: 0.8018 ----test_loss: 1.2749 ---- test_accuracy: 0.7249 ---- test_aux_loss: 1.1818
save model iter: 5000
iter: 6000 ----> train_loss: 1.2689 ---- train_accuracy: 0.7381 ---- tran_aux_loss: 1.0113
test_auc: 0.8163 ----test_loss: 1.2417 ---- test_accuracy: 0.7382 ---- test_aux_loss: 1.1733
iter: 7000 ----> train_loss: 1.2372 ---- train_accuracy: 0.7471 ---- tran_aux_loss: 0.9857
test_auc: 0.8218 ----test_loss: 1.2169 ---- test_accuracy: 0.7469 ---- test_aux_loss: 1.1991
iter: 8000 ----> train_loss: 1.2099 ---- train_accuracy: 0.7556 ---- tran_aux_loss: 0.9644
test_auc: 0.8337 ----test_loss: 1.1918 ---- test_accuracy: 0.7550 ---- test_aux_loss: 1.1223
itr: 1
iter: 9000 ----> train_loss: 0.5829 ---- train_accuracy: 0.4106 ---- tran_aux_loss: 0.4726
test_auc: 0.8269 ----test_loss: 1.2050 ---- test_accuracy: 0.7483 ---- test_aux_loss: 1.1159
iter: 10000 ----> train_loss: 1.0740 ---- train_accuracy: 0.8471 ---- tran_aux_loss: 0.9013
test_auc: 0.8137 ----test_loss: 1.2136 ---- test_accuracy: 0.7356 ---- test_aux_loss: 1.1241
save model iter: 10000
iter: 11000 ----> train_loss: 1.0228 ---- train_accuracy: 0.8775 ---- tran_aux_loss: 0.8820
test_auc: 0.8243 ----test_loss: 1.2259 ---- test_accuracy: 0.7446 ---- test_aux_loss: 1.0999
iter: 12000 ----> train_loss: 0.9986 ---- train_accuracy: 0.8885 ---- tran_aux_loss: 0.8711
test_auc: 0.8330 ----test_loss: 1.2076 ---- test_accuracy: 0.7512 ---- test_aux_loss: 1.0986
iter: 13000 ----> train_loss: 0.9614 ---- train_accuracy: 0.9066 ---- tran_aux_loss: 0.8539
test_auc: 0.8410 ----test_loss: 1.2085 ---- test_accuracy: 0.7606 ---- test_aux_loss: 1.1340
iter: 14000 ----> train_loss: 0.9401 ---- train_accuracy: 0.9136 ---- tran_aux_loss: 0.8409
test_auc: 0.8434 ----test_loss: 1.2056 ---- test_accuracy: 0.7654 ---- test_aux_loss: 1.1038
iter: 15000 ----> train_loss: 0.9175 ---- train_accuracy: 0.9261 ---- tran_aux_loss: 0.8309
test_auc: 0.8431 ----test_loss: 1.2477 ---- test_accuracy: 0.7663 ---- test_aux_loss: 1.1385
save model iter: 15000
iter: 16000 ----> train_loss: 0.8970 ---- train_accuracy: 0.9295 ---- tran_aux_loss: 0.8152
test_auc: 0.8467 ----test_loss: 1.2512 ---- test_accuracy: 0.7694 ---- test_aux_loss: 1.1397
itr: 2
iter: 17000 ----> train_loss: 0.0262 ---- train_accuracy: 0.0251 ---- tran_aux_loss: 0.0228
test_auc: 0.8428 ----test_loss: 1.2087 ---- test_accuracy: 0.7651 ---- test_aux_loss: 1.1152
iter: 18000 ----> train_loss: 0.9043 ---- train_accuracy: 0.9177 ---- tran_aux_loss: 0.8047
test_auc: 0.8289 ----test_loss: 1.3211 ---- test_accuracy: 0.7490 ---- test_aux_loss: 1.0893
iter: 19000 ----> train_loss: 0.8561 ---- train_accuracy: 0.9534 ---- tran_aux_loss: 0.7959
test_auc: 0.8239 ----test_loss: 1.3750 ---- test_accuracy: 0.7371 ---- test_aux_loss: 1.0886
iter: 20000 ----> train_loss: 0.8327 ---- train_accuracy: 0.9670 ---- tran_aux_loss: 0.7909
test_auc: 0.8389 ----test_loss: 1.3757 ---- test_accuracy: 0.7550 ---- test_aux_loss: 1.1187
save model iter: 20000
iter: 21000 ----> train_loss: 0.8127 ---- train_accuracy: 0.9748 ---- tran_aux_loss: 0.7810
test_auc: 0.8461 ----test_loss: 1.3983 ---- test_accuracy: 0.7636 ---- test_aux_loss: 1.0857
iter: 22000 ----> train_loss: 0.7974 ---- train_accuracy: 0.9779 ---- tran_aux_loss: 0.7694
test_auc: 0.8453 ----test_loss: 1.4055 ---- test_accuracy: 0.7637 ---- test_aux_loss: 1.1258
iter: 23000 ----> train_loss: 0.7831 ---- train_accuracy: 0.9799 ---- tran_aux_loss: 0.7575
test_auc: 0.8501 ----test_loss: 1.3910 ---- test_accuracy: 0.7702 ---- test_aux_loss: 1.1075
iter: 24000 ----> train_loss: 0.7664 ---- train_accuracy: 0.9822 ---- tran_aux_loss: 0.7434
test_auc: 0.8459 ----test_loss: 1.4197 ---- test_accuracy: 0.7682 ---- test_aux_loss: 1.1221
iter: 25000 ----> train_loss: 0.7561 ---- train_accuracy: 0.9848 ---- tran_aux_loss: 0.7362
test_auc: 0.8432 ----test_loss: 1.4291 ---- test_accuracy: 0.7650 ---- test_aux_loss: 1.1176
save model iter: 25000

The highest test_auc observed during training is 0.8501. The model also clearly overfits: train_accuracy keeps rising while test_accuracy plateaus around 0.77. I also trained and tested the DIN model in dien-master; its best test_auc was only 0.7336, with test_accuracy plateauing around 0.66.
This puzzled me, so I read the code and noticed that in one step of the data-generation pipeline, split_by_user.py uses "local_test" rather than "local_train" to generate "local_train_splitByUser" and "local_test_splitByUser". I suspected there might be too few training samples, so I regenerated both files from "local_train" instead, but the results did not improve and even degraded slightly.
I followed the README at https://github.com/mouna99/dien and used method 1 to generate the data. Apart from adding comments and logging and specifying the GPU, I did not change any core code. Why can't I reach the 0.9281 AUC reported in the paper? I would be very grateful for any advice!

Sample organization

Why do the samples need to be organized this way? It seems this could strongly affect the results.

script/data_iterator.py

sort by history behavior length

Data preparation gets stuck in the while True loop; noclk_index stays below 5

    # samples 5 negative (non-clicked) item IDs per position, used by the auxiliary loss
    while True:
        noclk_mid_indx = random.randint(0, len(self.mid_list_for_random)-1)
        noclk_mid = self.mid_list_for_random[noclk_mid_indx]
        if noclk_mid == pos_mid:
            continue  # resample if we happened to draw the positive item
        noclk_tmp_mid.append(noclk_mid)
        noclk_tmp_cat.append(self.meta_id_map[noclk_mid])
        noclk_index += 1
        print(noclk_index)
        if noclk_index >= 5:
            break

Has anyone gotten this to run? Can someone explain what this block is doing?

No module named 'cPickle'

Hello,

I notice the code uses print(), which suggests Python 3.
But there is no cPickle package for Python 3.

How can I solve this issue? I'm confused.
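A common fix, as a sketch: in Python 3, cPickle was folded into the standard pickle module, so a guarded import keeps the code working under both versions (then use pickle everywhere cPickle was used):

    try:
        import cPickle as pickle   # Python 2: the fast C implementation
    except ImportError:
        import pickle              # Python 3: pickle already uses the C implementation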

run issue?

What's the correct TensorFlow version? I used TensorFlow 1.10 and got many import errors ("No module" or "No attribute"), such as:
[image: error screenshot]

Some problems w.r.t the files and code

Hi, first of all, thanks for releasing the code. However, I found some problems that might need to be resolved:

  1. Two files (reviews-info and item-info) required for training the models are missing.
  2. Line 131 of train.py calls deepFM, but it isn't implemented.
  3. The code is incompatible with TensorFlow > v1.4, as rnn_cell_impl.py changed (after v1.4, TF abandoned the use of _Linear() when implementing RNNCells).

In the DIN model, the outer product confuses me

In your DIN model, when you compute a different user representation for each candidate item, you use an outer product. This attention operation confuses me. Have you tried other operations, such as directly feeding the concatenated vector into the feed-forward network?

some problems about rnn.py

UnimplementedError (see above for traceback): TensorArray has size zero, but element shape [?,128] is not fully defined. Currently only static shapes are supported when packing zero-size TensorArrays.
[[node rnn_1/gru1/TensorArrayStack/TensorArrayGatherV3 (defined at /home//rnn.py:787) = TensorArrayGatherV3[dtype=DT_FLOAT, element_shape=[?,128], _device="/job:localhost/replica:0/task:0/device:CPU:0"](rnn_1/gru1/TensorArray, rnn_1/gru1/TensorArrayStack/range, rnn_1/gru1/while/Exit_1)]]

TensorArray has size zero, but element shape [?,128] is not fully defined. Currently only static shapes are supported when packing zero-size TensorArrays.
[[node rnn_1/gru1/TensorArrayStack/TensorArrayGatherV3 (defined at /home//rnn.py:787) = TensorArrayGatherV3[dtype=DT_FLOAT, element_shape=[?,128], _device="/job:localhost/replica:0/task:0/device:CPU:0"](rnn_1/gru1/TensorArray, rnn_1/gru1/TensorArrayStack/range, rnn_1/gru1/while/Exit_1)]]

Caused by op 'rnn_1/gru1/TensorArrayStack/TensorArrayGatherV3', defined at:
File "update_model_2.gru_dien.py", line 835, in
os.makedirs(paths)
File "update_model_2.gru_dien.py", line 376, in init
rnn_outputs, _ = dynamic_rnn(GRUCell(HIDDEN_SIZE), inputs=self.item_his_eb,
File "/home//rnn.py", line 606, in dynamic_rnn
dtype=dtype)
File "/home//rnn.py", line 787, in _dynamic_rnn_loop
final_outputs = tuple(ta.stack() for ta in output_final_ta)

No such file or directory: 'item-info'

I am trying to run python train.py train DNN:

File "dien/script/data_iterator.py", line 49, in __init__ 
    f_meta = open("item-info", "r")`
IOError: [Errno 2] No such file or directory: 'item-info'

How should I get the data file?

Cannot reproduce the paper's results

Following the README, I ran the code on the Amazon Electronics dataset. The source code trains for 3 epochs, but the final best_auc only reaches about 0.76, far from the 0.9 reported in the paper. Has anyone managed to reach the AUC reported in the paper's experiments?

PNN model

Hello mouna99, I was reading your code and am confused by the PNN implementation. You do this:
inp = tf.concat([self.uid_batch_embedded, self.item_eb, self.item_his_eb_sum, self.item_eb * self.item_his_eb_sum], 1)
It looks like you just use the fields and some field products as the FCN input, but compared with PNN it seems one layer is missing, namely l = [lz, lp]. Is my understanding correct, or are you using some trick?
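For comparison, a sketch of PNN's product layer (inner-product variant) as described in the PNN paper; the field tensor and its sizes below are hypothetical:

    import tensorflow as tf

    # hypothetical field-embedding tensor: batch B, F fields, embedding dim E
    F, E = 4, 18
    fields = tf.placeholder(tf.float32, [None, F, E])

    lz = tf.reshape(fields, [-1, F * E])               # linear signals, flattened
    lp = tf.matmul(fields, fields, transpose_b=True)   # [B, F, F] pairwise inner products
    lp = tf.reshape(lp, [-1, F * F])
    l1 = tf.concat([lz, lp], axis=-1)                  # l = [lz, lp] as in the PNN paper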

About the auxiliary loss function

Why, when computing the auxiliary loss, do click_prob and noclick_prob both take the probability that the auxiliary network outputs class 0?
Shouldn't click_prob take the probability of class 1 and noclick_prob the probability of class 0, and then minimize the sum of the two losses?
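For reference, the two-sided form the poster expects would look like this sketch, assuming auxiliary_net ends in a 2-way softmax with index 0 = click and index 1 = no click (this layout is an assumption for illustration, not a statement about the repo's code):

    import tensorflow as tf

    # hypothetical softmax outputs of auxiliary_net and the sequence mask
    click_prob = tf.placeholder(tf.float32, [None, None, 2])    # [B, T, 2]
    noclick_prob = tf.placeholder(tf.float32, [None, None, 2])  # [B, T, 2]
    mask = tf.placeholder(tf.float32, [None, None])             # [B, T]

    click_loss = -tf.log(click_prob[:, :, 0] + 1e-8) * mask      # p("click") for clicked items
    noclick_loss = -tf.log(noclick_prob[:, :, 1] + 1e-8) * mask  # p("no click") for negatives
    aux_loss = tf.reduce_mean(click_loss + noclick_loss)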

The historical behavior length problem

  • Problem description:
    When training DIEN on another dataset, training fails whenever a sample's behavior history length is <= 1.
  • Analysis:
    In the auxiliary_loss function, the GRU outputs h_states (h(t)) are concatenated with the embedding sequences click_seq (e(t+1)) and noclick_seq (e'(t+1)) to form the input of auxiliary_net. The call looks like this:
    aux_loss_1 = self.auxiliary_loss(rnn_outputs[:, :-1, :], self.item_his_eb[:, 1:, :], self.noclk_item_his_eb[:, 1:, :], self.mask[:, 1:], stag="gru")
    Slicing rnn_outputs with [:-1] and item_his_eb / noclk_item_his_eb with [1:] along the second dimension aligns h(t) with e(t+1) and e'(t+1) for concatenation, but it assumes every sample's history length is >= 2. If the history length is < 2, the [1:] slice is empty and the subsequent reshape fails.
  • Solution:
    DataIterator's constructor provides a minlen=None parameter; setting it to 1 fixes the problem (see the usage sketch after this list):
    if self.minlen != None:
        if len(mid_list) <= self.minlen:
            continue
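A usage sketch; apart from minlen=1, the argument names and order below are assumptions, not the verified constructor signature:

    # hypothetical call: only minlen=1 is the actual fix described above
    train_data = DataIterator("local_train_splitByUser", uid_voc, mid_voc, cat_voc,
                              batch_size=128, maxlen=100, minlen=1)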

About "queries-facts" and softmax in din attention part

Two questions:

  1. Why do we need "queries-facts" in the din_attention input? In the paper, the input only includes the embeddings from the user and the ad, together with their outer product.
  2. Softmax is applied to the attention scores, whereas the paper says this constraint is relaxed in order to preserve the intensity of user interests.

I'd appreciate your reply ~

About constructing sequences in real-world problems

Has anyone tried purchase and favorite sequences, or splitting one sequence by time to produce several sequences? In that case there would be many sequences; should they all use the same DIEN structure? Or is the longest click sequence alone enough, with the other sequences adding no lift? Also, the paper seems to use the user's behavior sequence from the most recent two weeks; what is the maximum truncation length?

Problem at the FCN input layer: concatenating the user/history embedding sum pooling with the AUGRU state

hi, @mouna99 @zhougr1993

  1. The DIEN TF implementation builds the FCN input layer as follows:
    inp = tf.concat([self.uid_batch_embedded, self.item_eb, self.item_his_eb_sum, self.item_eb * self.item_his_eb_sum, final_state2], 1)
    self.build_fcn_net(inp, use_dice=True)
    DIN has the same implementation. I have some questions; could you help me? Thanks.
  2. Questions:
    1) What is the motivation for this design?
    2) What AUC lift do sum pooling and final_state2 contribute separately?
    3) Can final_state2 / an attention layer replace sum pooling?

How are positive and negative samples labeled?

Hi. How are the positive and negative samples labeled?
    for key in user_map:
        sorted_user_bh = sorted(user_map[key], key=lambda x: x[1])
        for line, t in sorted_user_bh:
            items = line.split("\t")
            asin = items[1]
            j = 0
            while True:
                asin_neg_index = random.randint(0, len(item_list) - 1)
                asin_neg = item_list[asin_neg_index]
                if asin_neg == asin:
                    continue
                items[1] = asin_neg
                print>>fo, "0" + "\t" + "\t".join(items) + "\t" + meta_map[asin_neg]
                j += 1
                if j == 1:             # negative sampling frequency
                    break
            if asin in meta_map:
                print>>fo, "1" + "\t" + line + "\t" + meta_map[asin]
            else:
                print>>fo, "1" + "\t" + line + "\t" + "default_cat"

Looking at this code, are the negative samples generated randomly?

I got a severe overfitting problem when training DIN

I got a severe overfitting problem when training DIN, but first of all I'd like to point out some problems:

  1. Should we use .split(" ") instead of .split("") when splitting the history behavior part of the dataset?
  2. Related to point 1, should we use + " " instead of + "" when building the history behavior part?
  3. After changing the above two points, I still get a heavily overfit result: training accuracy of almost 1.0 but test accuracy of only about 0.5-0.6.
    Please give me some guidance if you can, thanks.

Why isn't the training argument passed to batch_normalization?

tf.layers.batch_normalization(inputs=in_, name='bn1' + stag, reuse=tf.AUTO_REUSE)

Why isn't the training flag passed here? Everywhere else I've seen batch_normalization, it receives the training flag: True during training and False at inference.

training: Either a Python boolean, or a TensorFlow boolean scalar tensor
  (e.g. a placeholder). Whether to return the output in training mode
  (normalized with statistics of the current batch) or in inference mode
  (normalized with moving statistics). **NOTE**: make sure to set this
  parameter correctly, or else your training/inference will not work
  properly.
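For reference, the usual pattern, as a self-contained sketch with stand-in input and loss: feed a boolean training flag and make the train op depend on the moving-statistics update ops.

    import tensorflow as tf

    in_ = tf.placeholder(tf.float32, [None, 162])   # stand-in FCN input
    stag = ''                                       # stand-in name suffix
    is_training = tf.placeholder(tf.bool, name='is_training')
    bn1 = tf.layers.batch_normalization(inputs=in_, name='bn1' + stag,
                                        training=is_training, reuse=tf.AUTO_REUSE)
    loss = tf.reduce_mean(tf.square(bn1))           # stand-in loss
    # batch_normalization registers its moving-average updates in UPDATE_OPS;
    # they must run together with the optimizer step
    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    with tf.control_dependencies(update_ops):
        train_op = tf.train.AdamOptimizer(0.001).minimize(loss)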

On the difference between the experiment settings of DIN and DIEN

Hi,

As mentioned in #6, there is a huge difference between the DIN and DIEN papers in their experiment settings.

I still don't get the point. Why did the same research group use different data-processing strategies on the same dataset and the same task? I read through the experiment section of DIEN but did not find any explanation for this change. Why bother changing the processing strategy if a better result could be obtained without changing it?

Edit: the Amazon dataset has been updated to a 2018 version; did you switch from the 2014 to the 2018 Amazon dataset?
