
clue-ai / chatyuan

ChatYuan: Large Language Model for Dialogue in Chinese and English

Home Page: https://www.clueai.cn

License: Other

Python 100.00%

chatyuan's People

Contributors

brightmart, joytianya, thomas-yanxin, vaas1993, zhaokangkang0572


chatyuan's Issues

Running the example from the README throws an error

Thanks for sharing!

I followed the example from the README:

Load the model

from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("ClueAI/ChatYuan-large-v1")
model = T5ForConditionalGeneration.from_pretrained("ClueAI/ChatYuan-large-v1")

Usage

import torch
from transformers import AutoTokenizer

# Switch the Colab notebook runtime to GPU for faster inference
device = torch.device('cuda')
model.to(device)

def preprocess(text):
    text = text.replace("\n", "\\n").replace("\t", "\\t")
    return text

def postprocess(text):
    return text.replace("\\n", "\n").replace("\\t", "\t")

def answer(text, sample=True, top_p=1, temperature=0.7):
    '''sample: whether to sample; for generation tasks this can be True.
    top_p: between 0 and 1; larger values give more diverse output.'''
    text = preprocess(text)
    encoding = tokenizer(text=[text], truncation=True, padding=True, max_length=768, return_tensors="pt").to(device)
    if not sample:
        out = model.generate(**encoding, return_dict_in_generate=True, output_scores=False, max_new_tokens=512, num_beams=1, length_penalty=0.6)
    else:
        out = model.generate(**encoding, return_dict_in_generate=True, output_scores=False, max_new_tokens=512, do_sample=True, top_p=top_p, temperature=temperature, no_repeat_ngram_size=3)
    out_text = tokenizer.batch_decode(out["sequences"], skip_special_tokens=True)
    return postprocess(out_text[0])
print("end...")

input_text0 = "帮我写一个请假条,我因为新冠不舒服,需要请假3天,请领导批准"
input_text1 = "你能干什么"
input_text2 = "写一封英文商务邮件给英国客户,表达因为物流延误,不能如期到达,我们可以赔偿贵公司所有损失"
input_text3 = "写一个文章,题目是未来城市"
input_text4 = "写一个诗歌,关于冬天"
input_text5 = "从南京到上海的路线"
input_text6 = "学前教育专业岗位实习中,在学生方面会存在问题,请提出改进措施。800字"
input_text7 = "根据标题生成文章:标题:屈臣氏里的化妆品到底怎么样?正文:化妆品,要讲究科学运用,合理搭配。屈臣氏起码是正品连锁店。请继续后面的文字。"
input_text8 = "帮我对比几款GPU,列出详细参数对比,并且给出最终结论"
input_list = [input_text0, input_text1, input_text2, input_text3, input_text4, input_text5, input_text6, input_text7, input_text8]
for i, input_text in enumerate(input_list):
    input_text = "用户:" + input_text + "\n小元:"
    print(f"示例{i}".center(50, "="))
    output_text = answer(input_text)
    print(f"{input_text}{output_text}")

It raises the following error:

=======================示例0========================
Traceback (most recent call last):
File "", line 4, in
File "", line 9, in answer
File "/home/anaconda3/envs/s20230220e310fsb/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/anaconda3/envs/s20230220e310fsb/lib/python3.10/site-packages/transformers/generation/utils.py", line 1252, in generate
model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(
File "/home/anaconda3/envs/s20230220e310fsb/lib/python3.10/site-packages/transformers/generation/utils.py", line 617, in _prepare_encoder_decoder_kwargs_for_generation
model_kwargs["encoder_outputs"]: ModelOutput = encoder(**encoder_kwargs)
File "/home/anaconda3/envs/s20230220e310fsb/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/anaconda3/envs/s20230220e310fsb/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 1055, in forward
layer_outputs = layer_module(
File "/home/anaconda3/envs/s20230220e310fsb/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/anaconda3/envs/s20230220e310fsb/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 687, in forward
self_attention_outputs = self.layer[0](
File "/home/anaconda3/envs/s20230220e310fsb/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/anaconda3/envs/s20230220e310fsb/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 593, in forward
attention_output = self.SelfAttention(
File "/home/anaconda3/envs/s20230220e310fsb/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/anaconda3/envs/s20230220e310fsb/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 512, in forward
query_states = shape(self.q(hidden_states)) # (batch_size, n_heads, seq_length, dim_per_head)
File "/home/anaconda3/envs/s20230220e310fsb/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/anaconda3/envs/s20230220e310fsb/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: CUDA error: CUBLAS_STATUS_INVALID_VALUE when calling cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)
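A generic first step for opaque, asynchronous CUDA/cuBLAS failures like this one (a debugging suggestion, not a confirmed fix for this report) is to force synchronous kernel launches so the traceback points at the op that actually failed, and to rerun the same input on CPU, where PyTorch usually raises a far more descriptive error. Continuing from the loading code above:

import os
# Must be set before CUDA is initialized, i.e. at the very top of the script
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

import torch

# Rerunning on CPU often turns an opaque cuBLAS error into a clear
# shape or indexing error
device = torch.device("cpu")
model.to(device)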

Installing ModelScope in DSW fails

pip install modelscope==1.4.2 -f https://modelscope.oss-cn-beijing.aliyuncs.com/releases/repo.html

ERROR: Could not find a version that satisfies the requirement datasets<=2.8.0,>=2.7.0 (from modelscope) (from versions: 0.0.9, 1.0.0, 1.0.1, 1.0.2, 1.1.0, 1.1.1, 1.1.2, 1.1.3, 1.2.0, 1.2.1, 1.3.0, 1.4.0, 1.4.1, 1.5.0, 1.6.0, 1.6.1, 1.6.2, 1.7.0, 1.8.0, 1.9.0, 1.10.0, 1.10.1, 1.10.2, 1.11.0, 1.12.0, 1.12.1, 1.13.0, 1.13.1, 1.13.2, 1.13.3, 1.14.0, 1.15.0, 1.15.1, 1.16.0, 1.16.1, 1.17.0, 1.18.0, 1.18.1, 1.18.2, 1.18.3, 1.18.4, 2.0.0, 2.1.0, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.1, 2.3.2, 2.4.0)
ERROR: No matching distribution found for datasets<=2.8.0,>=2.7.0
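The extra index in the command above apparently carries datasets only up to 2.4.0, so pip cannot satisfy modelscope's datasets<=2.8.0,>=2.7.0 pin from it. One plausible workaround (an assumption based on the error, not an official fix) is to pull a compatible datasets from PyPI first and then install modelscope:

pip install "datasets>=2.7.0,<=2.8.0"
pip install modelscope==1.4.2 -f https://modelscope.oss-cn-beijing.aliyuncs.com/releases/repo.html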

DLL error at startup

Traceback (most recent call last):
  File "C:\Users\李想\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\utils\import_utils.py", line 1110, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "C:\Users\李想\AppData\Local\Programs\Python\Python310\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "C:\Users\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\t5\modeling_t5.py", line 25, in <module>
    from torch import nn
  File "C:\Users\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\__init__.py", line 1, in <module>
    from .modules import *  # noqa: F403
  File "C:\Users\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\__init__.py", line 1, in <module>
    from .module import Module
  File "C:\Users\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 8, in <module>
    from ..parameter import Parameter
  File "C:\Users\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\parameter.py", line 2, in <module>
    from torch._C import _disabled_torch_function_impl
ImportError: DLL load failed while importing _C: The specified module could not be found.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Users\Desktop\ChatYuan\app_gradio.py", line 5, in <module>
    from transformers import T5Tokenizer, T5ForConditionalGeneration
  File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
  File "C:\Users\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\utils\import_utils.py", line 1101, in __getattr__
    value = getattr(module, name)
  File "C:\Users\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\utils\import_utils.py", line 1100, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File "C:\Users\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\utils\import_utils.py", line 1112, in _get_module
    raise RuntimeError(
RuntimeError: Failed to import transformers.models.t5.modeling_t5 because of the following error (look up to see its traceback):
DLL load failed while importing _C: The specified module could not be found.
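On Windows, "DLL load failed while importing _C" generally means the PyTorch binary itself cannot load, commonly because a Microsoft Visual C++ runtime is missing or the installed wheel does not match the machine's Python/CUDA setup. Two generic checks, offered as assumptions rather than a verified diagnosis:

python -c "import torch"              # reproduce in isolation; if this fails, the torch install is broken
pip install --force-reinstall torch   # reinstall a wheel matching the local Python/CUDA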

OMP: Error #15:

OMP: Error #15: Initializing libiomp5.dylib, but found libomp.dylib already initialized.
OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/.
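The hint above already names the supported fix (link only one OpenMP runtime, e.g. by installing MKL-linked packages from a single source) and the unsafe escape hatch. In Python, that escape hatch looks like this; the variable must be set before any OpenMP-linked library is imported:

import os

# Unsafe workaround quoted in the OMP hint; may crash or give wrong results
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"

import torch  # import numpy/torch/etc. only after setting the variable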

How can I do unsupervised pretraining on top of the YUAN model?

I adapted Hugging Face's unsupervised training example and ran a quick test. The code:
from transformers import T5Tokenizer, T5ForConditionalGeneration
import torch
tokenizer = T5Tokenizer.from_pretrained("premodel/ChatYuan-large-v1")
model = T5ForConditionalGeneration.from_pretrained("premodel/ChatYuan-large-v1")
input_ids = tokenizer("一只<extra_id_0>走在<extra_id_1>大街上", return_tensors="pt").input_ids
labels = tokenizer("<extra_id_0>可爱的<extra_id_1>宽敞的<extra_id_2>", return_tensors="pt").input_ids
outputs = model(input_ids=input_ids, labels=labels)
loss = outputs.loss
logits = outputs.logits
It raises:
IndexError: index out of range in self
This looks like an out-of-range index into the embedding layer. I checked the model's vocabulary and it contains no <extra_id_0>/<extra_id_1> tokens, yet tokenization raised no error.
How can unsupervised pretraining be done on top of the YUAN model, and what data format does it expect? Many thanks.
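A minimal sketch of one plausible workaround, assuming the cause is what the report describes: the tokenizer emits <extra_id_n> ids that lie beyond the model's embedding matrix. This is a guess at a fix, not the authors' recommended pretraining recipe:

from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("premodel/ChatYuan-large-v1")
model = T5ForConditionalGeneration.from_pretrained("premodel/ChatYuan-large-v1")

# If the tokenizer vocabulary is larger than the embedding matrix, growing
# the (tied) embeddings keeps the sentinel ids in range
model.resize_token_embeddings(len(tokenizer))

The newly added embedding rows are randomly initialized, so they still have to be learned during the span-corruption pretraining itself.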

What is the relationship between ChatYuan and Yuan?

I noticed the paper on Yuan 1.0, the model proposed by Inspur Research. What is its relationship to ChatYuan? Is ChatYuan an improved version, or do the names just happen to be similar?
Wu, S., Zhao, X., Yu, T., Zhang, R., Shen, C., Liu, H., Li, F., Zhu, H., Luo, J., Xu, L., & Zhang, X. (2021). Yuan 1.0: Large-Scale Pre-trained Language Model in Zero-Shot and Few-Shot Learning (arXiv:2110.04725). arXiv. https://doi.org/10.48550/arXiv.2110.04725

The output is a bit odd, and fine-tuning has no effect

[screenshot]
Running it directly, even on the README examples, changing just a couple of characters turns the output into the above.
With the training procedure, even when I copy the input field of my training data verbatim as the prompt, inference with the locally fine-tuned model still cannot reproduce the target from my training set.
Why is that?

Why does it often spin and then hang?

It hangs on fairly ordinary questions, and it is probably not congestion, because basic questions such as "can you understand english" are answered instantly. But a slightly more complex request, e.g. "write me a regular expression that extracts runs of digits from a string", hangs outright.

fine-tune out of memory

How much GPU memory does fine-tuning with these parameters roughly need? My 16 GB V100 reports out of memory.

[screenshot]

If a single GPU does not have enough memory for fine-tuning, is there multi-GPU fine-tuning code?
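Two generic memory-saving levers for T5-style fine-tuning in transformers (general suggestions, not the repository's official answer): enable gradient checkpointing, which trades compute for activation memory, and disable the generation cache, which is useless during training:

from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("ClueAI/ChatYuan-large-v1")
model.gradient_checkpointing_enable()  # recompute activations in the backward pass
model.config.use_cache = False         # the KV cache only helps at inference time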

help: cannot load a local model

The server cannot reach the public internet, so I need to load an already-downloaded local model. Running the following code raises: TypeError: not a string

from transformers import AutoTokenizer, AutoModel
from transformers import T5Tokenizer, T5ForConditionalGeneration
import os

model_dir = "./model/ChatYuan-large-v2/"
tokenizer = T5Tokenizer.from_pretrained(model_dir)
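With T5Tokenizer, a "TypeError: not a string" at load time often means sentencepiece was handed None because the vocabulary file is missing from the local directory (an assumption based on the error text, not a confirmed diagnosis). A quick check:

import os

model_dir = "./model/ChatYuan-large-v2/"
# T5Tokenizer needs the sentencepiece vocabulary; spiece.model should be
# present alongside config.json and the model weights
print(os.listdir(model_dir))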

A question about distributed training

Hello. When I ran the distributed-training code for multi-GPU training, it reported this problem:

Traceback (most recent call last):
  File "train.py", line 20, in <module>
    hvd.init()
AttributeError: module 'horovod.torch' has no attribute 'init'

What is going on here?

My environment:
horovod == 0.23.0
torch == 2.0.0
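That AttributeError usually means Horovod was built without its PyTorch extension, leaving horovod.torch an empty stub; horovod 0.23.0 also predates torch 2.0.0, so a rebuild against the installed torch is needed in any case. A typical reinstall (an assumption, not a verified fix for this setup):

pip uninstall -y horovod
HOROVOD_WITH_PYTORCH=1 pip install --no-cache-dir horovod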


Running the example as given throws an error

from transformers import AutoTokenizer, AutoModel
import os

model_dir = 'ClueAI/ChatYuan-large-v2'
tokenizer = AutoTokenizer.from_pretrained(model_dir)
# Download speed depends on the network; if the network is poor, use the
# advanced parameter configuration described below
model = AutoModel.from_pretrained(model_dir, trust_remote_code=True)
history = []
print("starting")
while True:
    query = input("\n用户:")
    if query == "stop":
        break
    if query == "clear":
        history = []
        os.system('clear')
        continue
    response, history = model.chat(tokenizer, query, history=history)
    print(f"小元:{response}")

Error message:
用户:你好
Traceback (most recent call last):
File "/root/work2/work2/chenzhihao/llm_chatbot/examples/chatyuan_interact.py", line 79, in
main()
File "/root/work2/work2/chenzhihao/llm_chatbot/examples/chatyuan_interact.py", line 71, in main
response = answer(query, context)
File "/root/work2/work2/chenzhihao/llm_chatbot/examples/chatyuan_interact.py", line 52, in answer
out = model.generate(**encoding, return_dict_in_generate=True, output_scores=False, max_new_tokens=1024,
File "/root/anaconda3/envs/llm_chatbot-py39/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/root/anaconda3/envs/llm_chatbot-py39/lib/python3.9/site-packages/transformers/generation/utils.py", line 1437, in generate
return self.sample(
File "/root/anaconda3/envs/llm_chatbot-py39/lib/python3.9/site-packages/transformers/generation/utils.py", line 2479, in sample
next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)
RuntimeError: probability tensor contains either inf, nan or element < 0
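The failure in torch.multinomial means the post-softmax probabilities contain inf/nan, which usually points at bad logits upstream (for example a half-precision model on an unsupported device) rather than at the sampler itself. A quick way to isolate it, continuing from the script's own encoding (a debugging suggestion, not a confirmed cause): switch to greedy decoding, which bypasses torch.multinomial entirely:

# If greedy decoding also misbehaves, the logits themselves are bad
out = model.generate(**encoding, return_dict_in_generate=True,
                     output_scores=False, max_new_tokens=1024,
                     do_sample=False)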

Keep growing

At least build something real before promoting it; putting this on GitHub just invites foreigners to laugh at us.
[screenshot]

Still needs tuning

Q: In preschool-education internships, problems arise on the students' side; propose improvement measures. 800 words.
------------------------------------------------------------------
A: The internship placement is an important component of the preschool-education curriculum; its purpose is to help students understand and master the theoretical knowledge and practical skills of preschool-education courses. In actual internships, however, students may run into problems such as a lack of practical experience, poor communication skills, and weak teamwork. To address these problems, we should take the following measures.

1. Strengthen guidance during the internship

During the internship, students need hands-on practice and experience, so we must strengthen guidance throughout. Formats such as group discussion and role play let students feel how the knowledge they have learned is applied in practice. At the same time, we should communicate with students and learn about their interests, study habits, and ideas, so as to help them better understand and master what they have learned.

2. Encourage students to take part in extracurricular activities

During the internship, students should take part in extracurricular activities to broaden their horizons and gain practical experience. Through lectures, class observation, group discussion, and the like, we can encourage students to join a variety of extracurricular activities and widen their perspective and experience. We should also stress participation and interaction, so that students are exercised and improved by the activities.

3. Improve students' communication skills

During the internship, students must communicate with teachers, classmates, and parents. We should encourage students to talk with them often and to understand their opinions and suggestions, so as to cooperate with them better and improve communication skills. We should likewise attend to students' powers of expression, so that they become more confident in conversation and complete their tasks better.

In short, the internship placement is a very important part of the preschool-education curriculum, aimed at helping students understand and master its theoretical knowledge and practical skills. We should therefore focus on internship guidance, encourage participation in extracurricular activities, and improve students' communication and expression skills, the better to help them complete the internship.
------------------------------------------------------------------
Q: That is not 800 words.
------------------------------------------------------------------
A: Only in this way can we better help students master knowledge and skills in practice and lay a solid foundation for their future careers.

Hello author, a question: the tokenizer vocabulary size does not match the model's embedding layer

Hello author, and thank you for sharing the model. I asked you earlier about how to pretrain.
I found that after loading, the model's embedding layer has size 31128, while the tokenizer's vocabulary has size 32228. The difference is the extra_0 through extra_100 tokens, which are exactly what pretraining needs. So how can one pretrain on top of this shared model whose embedding is 32128?
tokenizer: [screenshot]
model: [screenshot]

Context & CoT seem to need more prompt learning

As of 2023/03/28, the model on Hugging Face seems to have some problems: zero-shot prompts involving context and CoT are not answered well. Will CoT ability be strengthened in the future? There seems to be no good learning method yet for giving small models logical-reasoning ability.

H : Write a snippet of code to demonstrate your CoT
AI: Here is the code demonstrating my use of the CoT technique:

import random

# define a random number generator
def random_number_generator():
    # randomly generate a number from 0 to 1
    number = random.randint(0, 10)  # generate an integer from 0 to 10
    count = 0
    while count < 10:
        # generate the first integer from 0 to 10
        number = random_number_generate()
        # add the number to the generator
        count += 1
    # print the generated number
    print(number)

# define a random number generator
random_number_generator = random.randint(-100, 100)
# call the random_number_generator function
random_number_generate = random_number_guess(random_number_generator)
# print the generated random number
print(random_number_generate)

This code defines a random number generator that generates numbers at random from 0 to 10 and adds them to the generator. It then calls the random_number_generate function to generate the first number from 0 to 10 and add it to the generator. Finally, it prints the generated random number.


repetitive answer

H : Write a piece of code to prove that you can reason logically
AI :

import random

# define a random number generator
def random_number_generator():
    number = random.randint(0, 10)
    count = 0
    while count < 10:
        # generate the first integer from 0 to 10
        number = random_number_generate()
        count += 1
        # add the number to the generator
        count += 1

# define a random number generator
random_number_generator = random.randint(-100, 100)
# call random_number_guess(random_number_generator)
random_number_guess = random_number_guess([random.randint(-100, 10) for i in range(10)])
# print the generated random number
print(random_number_guess)

This code defines a random number generator that generates numbers at random from 0 to 10 and adds them to the generator. It then calls the random_number_guess function to generate the first number from 0 to 10 and add it to the generator. Finally, it prints the generated random number.


repetitive answer

H : Write a different piece of code to prove that you can reason logically
AI :

import random

# define a random number generator
def random_number_generator():
    count = 0
    while count < 10:
        # generate the first integer from 0 to 10
        number = random_number_generate()
        count += 1

# define a random number generator
random_number_guess = random.randint(-100, 100)
# call random_number_guess(random_number_generator)
random_number_guess_result = random_number_guess([random.randint(-100, 10) for i in range(10)])
# print the generated random number
print(random_number_guess_final)

This code defines a random number generator that generates numbers at random from 0 to 10 and adds them to the generator. It then calls the random_number_guess function to generate the first number from 0 to 10 and add it to the generator. Finally, it prints the generated random number.


out of context
H : Tell a joke to prove that you have a sense of humor
AI :

)])

# print the generated random number
print(random_number_guess_result)

This code defines a random number generator that generates numbers at random from 0 to 10 and adds them to the generator. It then calls the random_number_guess function to generate the first number from 0 to 10 and add it to the generator. Finally, it prints the generated random number.


out of context
H : Ask and answer a high-school math question yourself, think step by step
AI :

)])

# print the generated random number
print(random_number_guess_result)

This code defines a random number generator that generates numbers at random from 0 to 10 and adds them to the generator. It then calls the random_number_guess function to generate the first number from 0 to 10 and add it to the generator. Finally, it prints the generated random number.
