
beader / tianchi_nl2sql


3rd-place solution and code from the finals of Zhuiyi Technology's first Chinese NL2SQL Challenge

Languages: Python 19.43%, Jupyter Notebook 80.57%
Topics: nl2sql, nlp, tianchi

tianchi_nl2sql's People

Contributors: beader, hyiiego


tianchi_nl2sql's Issues

Random output from DataSequence in task 1

```python
train_seq = DataSequence(train_data, query_tokenizer, label_encoder, shuffle=False, max_len=160, batch_size=2)
sample_batch_inputs, sample_batch_outputs = train_seq[0]  # take the batch at index 0
for name, data in sample_batch_outputs.items():
    print('{} : shape {}'.format(name, data.shape))
    print(data)
```
Printing the data in sample_batch_outputs shows different values on every run, even with shuffle=False. Why is that?
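A hypothetical check, assuming DataSequence draws from Python's random module or NumPy when it assembles a batch: fixing the seeds right before indexing should reveal whether the variation comes from random sampling or augmentation inside the sequence rather than from the shuffle flag.

```python
import random
import numpy as np

# Fix the global seeds, then fetch the same batch twice; if the two results
# match only under a fixed seed, the randomness is internal to DataSequence.
random.seed(42)
np.random.seed(42)
batch_a = train_seq[0]

random.seed(42)
np.random.seed(42)
batch_b = train_seq[0]
```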

Error when saving and loading model2

```python
model.save('model2.h5')
model = load_model('model2.h5')
```
I saved the model this way, but loading it fails with ValueError: Unknown layer: TokenEmbedding. I could not find a custom TokenEmbedding definition in the project, so how do I make load_model aware of it?
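A minimal sketch of the usual fix, assuming TokenEmbedding comes from the keras_bert package (which exposes its custom layers via get_custom_objects); the dict would need extending if the repo defines additional custom layers or losses:

```python
from keras.models import load_model
from keras_bert import get_custom_objects  # maps 'TokenEmbedding' and the other BERT layers to their classes

# Pass the custom-layer registry so Keras can deserialize the BERT layers in the .h5 file.
model = load_model('model2.h5', custom_objects=get_custom_objects())
```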

model1.py raises an error

Running model1.py produces the following error:
```
Traceback (most recent call last):
  File "/home/liwei/ss/NLP2SQL/tianchi_nl2sql-master/code_nlp/model1.py", line 378, in <module>
    bert_model = load_trained_model_from_checkpoint(paths.config, paths.checkpoint, seq_len=None)
  File "/home/liwei/ss/NLP2SQL/tianchi_nl2sql-master/keras_bert/loader.py", line 169, in load_trained_model_from_checkpoint
    **kwargs)
  File "/home/liwei/ss/NLP2SQL/tianchi_nl2sql-master/keras_bert/loader.py", line 58, in build_model_from_config
    **kwargs)
  File "/home/liwei/ss/NLP2SQL/tianchi_nl2sql-master/keras_bert/bert.py", line 84, in get_model
    dropout_rate=dropout_rate,
  File "/home/liwei/ss/NLP2SQL/tianchi_nl2sql-master/keras_bert/layers/embedding.py", line 37, in get_embedding
    )(inputs[0]),
  File "/home/liwei/anaconda3/envs/python360-gpu/lib/python3.6/site-packages/keras/engine/base_layer.py", line 497, in __call__
    arguments=user_kwargs)
  File "/home/liwei/anaconda3/envs/python360-gpu/lib/python3.6/site-packages/keras/engine/base_layer.py", line 565, in _add_inbound_node
    output_tensors[i]._keras_shape = output_shapes[i]
IndexError: list index out of range
```

Current environment:
python==3.6.0
keras==2.2.4
tensorflow==1.14

Dataset

Hello, I read your write-up and found it very inspiring. Could you share some data for learning purposes? Thank you!

model2 fails to find numbers

I ran the original code unchanged and got a successful task1_output. However, in the final output produced by model2, although cond_value is mostly correct when the value is looked up from the text, almost every place where a number should appear comes back empty. Another puzzle: none of the outputs contain an id.

A failing case looks like this:

test: {"question": "PE2011大于11或者EPS2011大于11的公司有哪些", "table_id": "69d4941c334311e9aefd542696d6e445"}

task1_output: {"sel": [1], "agg": [0], "cond_conn_op": 2, "conds": [[3, 0], [6, 0]]}

standard: {"agg": [0], "cond_conn_op": 2, "sel": [1], "table_id": "69d4941c334311e9aefd542696d6e445", "conds": [[6, 0, "11"], [3, 0, "11"]]}

final_output: {"sel": [1], "agg": [0], "cond_conn_op": 2, "conds": []}

Error when running on CPU

Hi, my machine has no GPU, so I skipped the model = multi_gpu_model(model, gpus=NUM_GPUS) step while running your code. During training I then get the NotImplementedError below. Is there any workaround?

NotImplementedError Traceback (most recent call last)
in ()
----> 1 history = model.fit(train_dataseq, epochs=num_epochs, callbacks=callbacks)

/data/software/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in _method_wrapper(self, *args, **kwargs)
64 def _method_wrapper(self, *args, **kwargs):
65 if not self._in_multi_worker_mode(): # pylint: disable=protected-access
---> 66 return method(self, *args, **kwargs)
67
68 # Running inside run_distribute_coordinator already.

/data/software/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
846 batch_size=batch_size):
847 callbacks.on_train_batch_begin(step)
--> 848 tmp_logs = train_function(iterator)
849 # Catch OutOfRangeError for Datasets of unknown size.
850 # This blocks until the batch has finished executing.

/data/software/anaconda3/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in call(self, *args, **kwds)
578 xla_context.Exit()
579 else:
--> 580 result = self._call(*args, **kwds)
581
582 if tracing_count == self._get_tracing_count():

/data/software/anaconda3/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
625 # This is the first call of call, so we have to initialize.
626 initializers = []
--> 627 self._initialize(args, kwds, add_initializers_to=initializers)
628 finally:
629 # At this point we know that the initialization is complete (or less

/data/software/anaconda3/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)
504 self._concrete_stateful_fn = (
505 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
--> 506 *args, **kwds))
507
508 def invalid_creator_scope(*unused_args, **unused_kwds):

/data/software/anaconda3/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
2444 args, kwargs = None, None
2445 with self._lock:
-> 2446 graph_function, _, _ = self._maybe_define_function(args, kwargs)
2447 return graph_function
2448

/data/software/anaconda3/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
2775
2776 self._function_cache.missed.add(call_context_key)
-> 2777 graph_function = self._create_graph_function(args, kwargs)
2778 self._function_cache.primary[cache_key] = graph_function
2779 return graph_function, args, kwargs

/data/software/anaconda3/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
2665 arg_names=arg_names,
2666 override_flat_arg_shapes=override_flat_arg_shapes,
-> 2667 capture_by_value=self._capture_by_value),
2668 self._function_attributes,
2669 # Tell the ConcreteFunction to clean up its graph once it goes out of

/data/software/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
979 _, original_func = tf_decorator.unwrap(python_func)
980
--> 981 func_outputs = python_func(*func_args, **func_kwargs)
982
983 # invariant: func_outputs contains only Tensors, CompositeTensors,

/data/software/anaconda3/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)
439 # wrapped allows AutoGraph to swap in a converted function. We give
440 # the function a weak reference to itself to avoid a reference cycle.
--> 441 return weak_wrapped_fn().wrapped(*args, **kwds)
442 weak_wrapped_fn = weakref.ref(wrapped_fn)
443

/data/software/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
966 except Exception as e: # pylint:disable=broad-except
967 if hasattr(e, "ag_error_metadata"):
--> 968 raise e.ag_error_metadata.to_exception(e)
969 else:
970 raise

NotImplementedError: in user code:

/data/software/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:571 train_function  *
    outputs = self.distribute_strategy.run(
/data/software/anaconda3/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:951 run  **
    return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/data/software/anaconda3/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:2290 call_for_each_replica
    return self._call_for_each_replica(fn, args, kwargs)
/data/software/anaconda3/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:2649 _call_for_each_replica
    return fn(*args, **kwargs)
/data/software/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:541 train_step  **
    self.trainable_variables)
/data/software/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:1814 _minimize
    optimizer.apply_gradients(zip(gradients, trainable_variables))
/data/software/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:508 apply_gradients
    "name": name,
/data/software/anaconda3/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:2420 merge_call
    return self._merge_call(merge_fn, args, kwargs)
/data/software/anaconda3/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:2427 _merge_call
    return merge_fn(self._strategy, *args, **kwargs)
/data/software/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:592 _distributed_apply  **
    var, apply_grad_to_update_var, args=(grad,), group=False))
/data/software/anaconda3/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:2013 update
    return self._update(var, fn, args, kwargs, group)
/data/software/anaconda3/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:2659 _update
    return self._update_non_slot(var, fn, (var,) + tuple(args), kwargs, group)
/data/software/anaconda3/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:2665 _update_non_slot
    result = fn(*args, **kwargs)
/data/software/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:563 apply_grad_to_update_var  **
    grad.values, var, grad.indices, **apply_kwargs)
/data/software/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:1083 _resource_apply_sparse_duplicate_indices
    **kwargs)
/data/software/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:1104 _resource_apply_sparse
    raise NotImplementedError()

NotImplementedError: 

Afterwards I switched to the default optimizer, model.compile(loss='sparse_categorical_crossentropy', optimizer='adam'), and got the error below instead. Could you point out what in the input might be causing it? Much appreciated.
InvalidArgumentError: indices[27,3] = 33 is not in [0, 32)
[[node model_5/header_seq_gather/GatherV2 (defined at :4) ]] [Op:__inference_train_function_122352]

Errors may have originated from an input operation.
Input Source operations connected to node model_5/header_seq_gather/GatherV2:
IteratorGetNext (defined at :1)
model_5/model_4/Encoder-12-FeedForward-Norm/add_1 (defined at /data/software/anaconda3/lib/python3.7/site-packages/keras_layer_normalization/layer_normalization.py:99)

Function call stack:
train_function

Question about reproducing the model results

Hi, after reproducing the open-sourced code I initially get logic acc = 0.796 and execution acc ≈ 0.846. With some modifications of my own (e.g. keeping only the highest-scoring value when the predicted cond_conn_op is 0, or taking the top two values by score), the metrics improve to logic acc = 0.8323 and execution acc = 0.8781, but that is still well below the logic acc = 0.87 reported in your slides. Do you apply any other post-processing, or could you offer some hints? Many thanks.

docker pull error

Error response from daemon: received unexpected HTTP status: 500 Internal Server Error

For real industrial application, what strategy to locate the exact table?

In the dataset, the table corresponding to each question is given.

But in a real industrial application, we may have 100+ candidate tables for one new question.

Thank you!

In a real deployment, a single question can have more than 100 candidate tables; the right table has to be located before SQL can be generated. What methods can be used to locate the table?

Code environment

Please update the code's dependency list; with the provided requirements the code cannot be run at all.


Question about pred_sqls in model1

Hi, the repo comes with a full explanation and code that I'd like to learn from. One question: when predicting SQL in model1, why is the validation-set sequence used for decoding?
sqls = outputs_to_sqls(preds_cond_conn_op, preds_sel_agg, preds_cond_op, header_lens, val_dataseq.label_encoder)

Looking forward to your reply, thanks!

Error while setting up the reproduction environment

When reproducing the code, whether inside the docker container or in a freshly created environment, the step bert_model = load_trained_model_from_checkpoint(paths.config, paths.checkpoint, seq_len=None) fails with: AttributeError: 'tuple' object has no attribute 'layer'.
All dependencies match requirements.txt. I tried installing tensorflow_gpu 1.14 and 1.13 and get the same error both times. What could be going on?
Error output:
```
>>> from keras_bert import load_vocabulary, load_trained_model_from_checkpoint, Tokenizer, get_checkpoint_paths
Using TensorFlow backend.
>>> bert_model_path = '/opt/zhuiyi/tianchi_nl2sql-master/model/chinese_wwm_L-12_H-768_A-12'
>>> paths = get_checkpoint_paths(bert_model_path)
>>> bert_model = load_trained_model_from_checkpoint(paths.config, paths.checkpoint, seq_len=None)
WARNING: Logging before flag parsing goes to stderr.
W0225 03:19:48.417089 140073566517056 deprecation_wrapper.py:118] From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:517: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.

W0225 03:19:48.457948 140073566517056 deprecation_wrapper.py:118] From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4138: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.

W0225 03:19:48.498272 140073566517056 deprecation.py:506] From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/initializers.py:119: calling RandomUniform.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
W0225 03:19:49.240138 140073566517056 deprecation_wrapper.py:118] From /usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/api/_v1/estimator/__init__.py:10: The name tf.estimator.inputs is deprecated. Please use tf.compat.v1.estimator.inputs instead.

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.6/dist-packages/keras_bert/loader.py", line 170, in load_trained_model_from_checkpoint
    seq_len=seq_len,
  File "/usr/local/lib/python3.6/dist-packages/keras_bert/loader.py", line 56, in build_model_from_config
    output_layer_num=output_layer_num,
  File "/usr/local/lib/python3.6/dist-packages/keras_bert/bert.py", line 99, in get_model
    dropout_rate=dropout_rate,
  File "/usr/local/lib/python3.6/dist-packages/keras_bert/layers/embedding.py", line 56, in get_embedding
    )(embed_layer)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/base_layer.py", line 733, in __call__
    inputs, outputs, args, kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/base_layer.py", line 1833, in _set_connectivity_metadata_
    input_tensors=inputs, output_tensors=outputs, arguments=kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/base_layer.py", line 1920, in _add_inbound_node
    input_tensors)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/util/nest.py", line 527, in map_structure
    structure[0], [func(*x) for x in entries],
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/util/nest.py", line 527, in <listcomp>
    structure[0], [func(*x) for x in entries],
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/base_layer.py", line 1919, in <lambda>
    inbound_layers = nest.map_structure(lambda t: t._keras_history.layer,
AttributeError: 'tuple' object has no attribute 'layer'
```
Environment versions (pip list):


absl-py 0.7.1
asn1crypto 0.24.0
astor 0.8.0
attrs 19.1.0
backcall 0.1.0
bleach 3.1.0
cn2an 0.3.6
cryptography 2.1.4
cycler 0.10.0
decorator 4.4.0
defusedxml 0.6.0
entrypoints 0.3
enum34 1.1.6
gast 0.2.2
google-pasta 0.1.7
grpcio 1.21.1
h5py 2.9.0
idna 2.6
ipykernel 5.1.1
ipython 7.5.0
ipython-genutils 0.2.0
ipywidgets 7.4.2
jedi 0.13.3
Jinja2 2.10.1
jsonschema 3.0.1
jupyter 1.0.0
jupyter-client 5.2.4
jupyter-console 6.0.0
jupyter-core 4.4.0
jupyter-http-over-ws 0.0.6
Keras 2.2.4
Keras-Applications 1.0.8
keras-bert 0.68.1
keras-embed-sim 0.10.0
keras-layer-normalization 0.16.0
keras-multi-head 0.29.0
keras-pos-embd 0.13.0
keras-position-wise-feed-forward 0.8.0
Keras-Preprocessing 1.1.0
keras-self-attention 0.51.0
keras-transformer 0.40.0
keyring 10.6.0
keyrings.alt 3.0
kiwisolver 1.1.0
Markdown 3.1.1
MarkupSafe 1.1.1
matplotlib 3.1.0
mistune 0.8.4
nbconvert 5.5.0
nbformat 4.4.0
notebook 5.7.8
numpy 1.16.4
pandas 1.1.5
pandocfilters 1.4.2
parso 0.4.0
pexpect 4.7.0
pickleshare 0.7.5
pip 21.3.1
prometheus-client 0.7.0
prompt-toolkit 2.0.9
protobuf 3.8.0
ptyprocess 0.6.0
pycrypto 2.6.1
Pygments 2.4.2
PyGObject 3.26.1
pyparsing 2.4.0
pyrsistent 0.15.2
python-apt 1.6.4
python-dateutil 2.8.0
pytz 2021.3
pyxdg 0.25
PyYAML 6.0
pyzmq 18.0.1
qtconsole 4.5.1
scipy 1.5.4
SecretStorage 2.3.1
Send2Trash 1.5.0
setuptools 41.0.1
six 1.11.0
tb-nightly 1.14.0a20190614
termcolor 1.1.0
terminado 0.8.2
testpath 0.4.2
tf-estimator-nightly 1.14.0.dev2019061701
tf-nightly-gpu 1.14.1.dev20190617
thulac 0.2.0
tornado 6.0.2
tqdm 4.62.3
traitlets 4.3.2
wcwidth 0.1.7
webencodings 0.5.1
Werkzeug 0.15.4
wheel 0.30.0
widgetsnbextension 3.4.2
wrapt 1.11.1
zhon 1.1.5
Steps used to set up the fresh virtual environment:
pip install tensorflow-gpu==1.14
pip install -r requirements.txt

Am I missing a step in the environment setup?

Reproducing the model2 results

Hi, I've hit some issues while trying to reproduce your results and would like to ask for advice.
Following your code, model1 reproduces the 87.0% accuracy from the slides, but model2, using the hyperparameters in the code as-is, only reaches 79.5% exact logical-form match on the validation set (5 epochs, threshold at 0.9; the evaluation adds cond_value to the conds set from model1). That is some distance from the 85.37% in your slides, and the code actually sets the threshold to 0.995. Is my experiment flawed, or does model2 need further tuning or post-processing to reach the reported number?

Request for the dataset, many thanks

Hi, do you have the dataset? I missed the competition and would like to study the problem on my own. Could you share the data? Thank you very much! If possible, please send it to [email protected]. I have already starred the repo.

Why does outputs_to_sqls in model1 treat cond_op and sel_agg differently?

Hi, I'm puzzled about why outputs_to_sqls forces every selected column to take an agg for sel_agg. As I understand it, cond_op and sel_agg are analogous: both are per-column probability distributions, so it seems sel_agg could simply be argmaxed like cond_op to obtain the predicted label. Why the extra handling? Hoping for your explanation.
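For reference, a hypothetical sketch of what "just take the argmax" would mean here; the array shape and the "not used" class are illustrative assumptions, not taken from the repo:

```python
import numpy as np

def decode_per_column_labels(per_column_probs):
    # per_column_probs: (num_columns, num_classes) probability distribution per column,
    # e.g. agg operators for sel_agg or comparison operators for cond_op, where one
    # class is assumed to mean "this column is not used".
    return np.argmax(per_column_probs, axis=-1)  # one predicted label per column
```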

Request regarding the dataset

Hi, if possible, could you send me the data format specification? I can generate data myself, but without the format I have no idea where to start. Thanks!

Question about model2

Why does model1 predict the condition column and cond_op, yet after model2 some conditions end up empty?
e.g. model1 prediction: {"sel": [1, 2], "agg": [0, 0], "cond_conn_op": 0, "conds": [[3, 2]]}
result after model2: {"sel": [1, 2], "agg": [0, 0], "cond_conn_op": 0, "conds": []}
There is also the case where model2 returns fewer conditions than model1 predicted.

I don't quite understand this behavior.

predict cond_conn_op

x_for_cond_conn_op = Lambda(lambda x: x[:, 0])(x) # (None, 768)

What does this line of code mean?
Does it use only the semantic vector of the first token of the sentence to predict cond_conn_op?
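For context, a self-contained sketch of the pattern being asked about: slice out the first ([CLS]) position of the BERT sequence output and classify cond_conn_op from that single vector. The layer names and the three-class output are illustrative assumptions, not code from the repo.

```python
from keras.layers import Dense, Input, Lambda
from keras.models import Model

seq_output = Input(shape=(None, 768))                    # BERT sequence output: (batch, seq_len, 768)
cls_vec = Lambda(lambda x: x[:, 0])(seq_output)          # (batch, 768): vector at the first ([CLS]) position
cond_conn_op = Dense(3, activation='softmax')(cls_vec)   # assumed 3 classes, e.g. none / and / or
probe = Model(seq_output, cond_conn_op)
```

In this pattern, only the sentence-level [CLS] vector feeds the cond_conn_op classifier, which is a common choice for sentence-level labels.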

TensorFlow version

Hi, which TensorFlow version is required to reproduce the code?

ValueError when reloading the model for prediction after training

Hi, after training finishes, reloading the model from a separate script raises:
ValueError: You are trying to load a weight file containing 1 layers into a model with 4 layers.
I used save_weights and load_weights.
Have you run into this before?
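A generic, self-contained sketch of the usual cause (not the repo's actual model): the prediction script must rebuild exactly the same architecture before calling load_weights, otherwise the layer count in the weight file will not match the model being loaded into.

```python
from keras.layers import Dense, Input
from keras.models import Model

def build_model():
    # Stand-in for the project's model-construction code; the same function must be
    # used by both the training script and the prediction script so the layer
    # structure matches the saved weight file.
    inp = Input(shape=(10,))
    hidden = Dense(8, activation='relu')(inp)
    out = Dense(4)(hidden)
    return Model(inp, out)

# training script
model = build_model()
model.save_weights('weights.h5')

# prediction script: rebuild the identical architecture, then load the weights
model = build_model()
model.load_weights('weights.h5')   # or load_weights(path, by_name=True) for partial matches
```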
