
fantasy-studio / paint-by-example


Paint by Example: Exemplar-based Image Editing with Diffusion Models

Home Page: https://arxiv.org/abs/2211.13227

License: Other

Python 99.72% Shell 0.28%
computer-vision deep-learning diffusion-models image-editing image-generation image-manipulation pytorch stable-diffusion paint-by-example

paint-by-example's Introduction

Paint by Example: Exemplar-based Image Editing with Diffusion Models

Teaser

Binxin Yang, Shuyang Gu, Bo Zhang, Ting Zhang, Xuejin Chen, Xiaoyan Sun, Dong Chen and Fang Wen.

Abstract

Language-guided image editing has achieved great success recently. In this paper, for the first time, we investigate exemplar-guided image editing for more precise control. We achieve this goal by leveraging self-supervised training to disentangle and re-organize the source image and the exemplar. However, a naive approach causes obvious fusing artifacts. We carefully analyze it and propose an information bottleneck and strong augmentations to avoid the trivial solution of directly copying and pasting the exemplar image. Meanwhile, to ensure the controllability of the editing process, we design an arbitrary-shape mask for the exemplar image and leverage classifier-free guidance to increase the similarity to the exemplar image. The whole framework involves a single forward pass of the diffusion model without any iterative optimization. We demonstrate that our method achieves impressive performance and enables controllable editing of in-the-wild images with high fidelity.

News

  • 2023-11-28 The recent work Asymmetric VQGAN improves the preservation of details in non-masked regions. For details, please refer to the associated paper and GitHub repository.
  • 2023-05-13 Release code for quantitative results.
  • 2023-03-03 Release test benchmark.
  • 2023-02-23 Unofficial third-party app support by ModelScope (the largest model community in China).
  • 2022-12-07 Release a Gradio demo on Hugging Face Spaces.
  • 2022-11-29 Upload code.

Requirements

A suitable conda environment named Paint-by-Example can be created and activated with:

conda env create -f environment.yaml
conda activate Paint-by-Example

Pretrained Model

We provide the checkpoint (Google Drive | Hugging Face) that is trained on Open-Images for 40 epochs. By default, we assume that the pretrained model is downloaded and saved to the directory checkpoints.

Testing

To sample from our model, you can use scripts/inference.py. For example,

python scripts/inference.py \
--plms --outdir results \
--config configs/v1.yaml \
--ckpt checkpoints/model.ckpt \
--image_path examples/image/example_1.png \
--mask_path examples/mask/example_1.png \
--reference_path examples/reference/example_1.jpg \
--seed 321 \
--scale 5

or simply run:

sh test.sh

Visualization of inputs and output:

Training

Data preparing

  • Download the separately packed files of the Open Images dataset from CVDF's site and unzip them to the directory dataset/open-images/images.
  • Download the bbox annotations of the Open Images dataset from the official Open Images site and save them to the directory dataset/open-images/annotations.
  • Generate a bbox annotation file for each image in txt format:
    python scripts/read_bbox.py
    

The data structure is like this:

dataset
├── open-images
│  ├── annotations
│  │  ├── class-descriptions-boxable.csv
│  │  ├── oidv6-train-annotations-bbox.csv
│  │  ├── test-annotations-bbox.csv
│  │  ├── validation-annotations-bbox.csv
│  ├── images
│  │  ├── train_0
│  │  │  ├── xxx.jpg
│  │  │  ├── ...
│  │  ├── train_1
│  │  ├── ...
│  │  ├── validation
│  │  ├── test
│  ├── bbox
│  │  ├── train_0
│  │  │  ├── xxx.txt
│  │  │  ├── ...
│  │  ├── train_1
│  │  ├── ...
│  │  ├── validation
│  │  ├── test

Download the pretrained model of Stable Diffusion

We use the pretrained Stable Diffusion v1-4 as initialization. Please download the pretrained model from Hugging Face and save it to the directory pretrained_models. Then run the following script to add zero-initialized weights for the 5 additional input channels of the UNet (4 for the encoded masked image and 1 for the mask itself).

python scripts/modify_checkpoints.py
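For reference, here is a minimal sketch of what this channel-expansion step amounts to conceptually (scripts/modify_checkpoints.py is the authoritative version; the checkpoint key is the one Stable Diffusion v1 uses for the UNet's first convolution, and the input filename is an assumption):

import torch

# Hedged sketch: widen the UNet's first conv from 4 to 9 input channels with zero-initialized weights.
ckpt = torch.load("pretrained_models/sd-v1-4.ckpt", map_location="cpu")  # assumed filename
sd = ckpt["state_dict"]
key = "model.diffusion_model.input_blocks.0.0.weight"  # shape (320, 4, 3, 3) in SD v1 checkpoints
old_weight = sd[key]
new_weight = torch.zeros(old_weight.shape[0], 9, *old_weight.shape[2:], dtype=old_weight.dtype)
new_weight[:, :4] = old_weight  # keep the original 4 latent channels
sd[key] = new_weight            # the 5 extra channels (masked-image latent + mask) start at zero
torch.save(ckpt, "pretrained_models/sd-v1-4-modified-9channel.ckpt")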

Training Paint by Example

To train a new model on Open-Images, you can use main.py. For example,

python -u main.py \
--logdir models/Paint-by-Example \
--pretrained_model pretrained_models/sd-v1-4-modified-9channel.ckpt \
--base configs/v1.yaml \
--scale_lr False

or simply run:

sh train.sh

Test Benchmark

We build a test benchmark for quantitative analysis. Specifically, we manually select 3,500 source images from the MSCOCO validation set, each containing only one bounding box, and manually retrieve a reference image patch from the MSCOCO training set for each of them. The reference image usually shares similar semantics with the masked region so that the combination is reasonable. We name this benchmark the COCO Exemplar-based image Editing benchmark, abbreviated as COCOEE. It can be downloaded from Google Drive.

Quantitative Results

By default, we assume that COCOEE is downloaded and saved to the directory test_bench. To generate results on the test benchmark, you can use scripts/inference_test_bench.py. For example,

python scripts/inference_test_bench.py \
--plms \
--outdir results/test_bench \
--config configs/v1.yaml \
--ckpt checkpoints/model.ckpt \
--scale 5

or simply run:

bash inference_test_bench.sh

FID Score

By default, we assume that the test set of COCO2017 is downloaded and saved to the directory dataset. The data structure is like this:

dataset
├── coco
│  ├── test2017
│  │  ├── xxx.jpg
│  │  ├── xxx.jpg
│  │  ├── ...
│  │  ├── xxx.jpg

Then convert the images into 512×512 square images:

python scripts/create_square_gt_for_fid.py
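In case it helps to see what this step produces, here is a minimal sketch of one way to build the square ground-truth set (scripts/create_square_gt_for_fid.py is the authoritative version; the output directory is assumed to match the FID command below): center-crop each image to a square, then resize it to 512×512.

from pathlib import Path
from PIL import Image

src = Path("dataset/coco/test2017")
dst = Path("test_bench/test_set_GT")  # assumed output location, matching the FID command below
dst.mkdir(parents=True, exist_ok=True)
for p in sorted(src.glob("*.jpg")):
    img = Image.open(p).convert("RGB")
    side = min(img.size)
    left, top = (img.width - side) // 2, (img.height - side) // 2
    img.crop((left, top, left + side, top + side)).resize((512, 512), Image.LANCZOS).save(dst / p.name)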

To calculate FID score, simply run:

python eval_tool/fid/fid_score.py --device cuda \
test_bench/test_set_GT \
results/test_bench/results

QS Score

Please download the model weights for the QS score from Google Drive and save them to the directory eval_tool/gmm. To calculate the QS score, simply run:

python eval_tool/gmm/gmm_score_coco.py results/test_bench/results \
--gmm_path eval_tool/gmm/coco2017_gmm_k20 \
--gpu 1

CLIP Score

To calculate CLIP score, simply run:

python eval_tool/clip_score/region_clip_score.py \
--result_dir results/test_bench/results

Citing Paint by Example

@article{yang2022paint,
  title={Paint by Example: Exemplar-based Image Editing with Diffusion Models},
  author={Binxin Yang and Shuyang Gu and Bo Zhang and Ting Zhang and Xuejin Chen and Xiaoyan Sun and Dong Chen and Fang Wen},
  journal={arXiv preprint arXiv:2211.13227},
  year={2022}
}

Acknowledgements

This code borrows heavily from Stable Diffusion. We also thank the contributors of OpenAI's ADM codebase and https://github.com/lucidrains/denoising-diffusion-pytorch.

Maintenance

Please open a GitHub issue if you need any help. If you have questions regarding the technical details, feel free to contact us.

License

The code and the pretrained model in this repository are released under the CreativeML OpenRAIL-M license, as specified by the LICENSE file.

The test benchmark, COCOEE, belongs to the COCO Consortium and is licensed under a Creative Commons Attribution 4.0 License.

paint-by-example's People

Contributors

anmikh, binxinyang, chenbinghui1, elin24, eltociear, fantasy-studio, zhangmozhe


paint-by-example's Issues

finetuning model

How can I finetune the model (e.g., with LoRA)? The pytorch_lightning pipeline is hard to understand and modify. Could you provide an API or a simpler pipeline?
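For what it's worth, here is a minimal, unofficial sketch of one way to add LoRA adapters: assuming the UNet exposes nn.Linear layers named to_q/to_k/to_v (as the ldm attention modules in this repo do), freeze everything, wrap those projections, and hand only the adapter parameters to the optimizer. This is a sketch under those assumptions, not the repository's API.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    # Low-rank residual around a frozen Linear projection.
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)  # adapters start as a zero delta
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.up(self.down(x))

def inject_lora(unet: nn.Module, rank: int = 4):
    unet.requires_grad_(False)  # freeze the pretrained weights
    targets = [(m, n, getattr(m, n)) for m in unet.modules()
               for n in ("to_q", "to_k", "to_v")
               if isinstance(getattr(m, n, None), nn.Linear)]
    for module, name, child in targets:
        setattr(module, name, LoRALinear(child, rank))
    return [p for p in unet.parameters() if p.requires_grad]  # only the LoRA parameters

# Hypothetical usage: lora_params = inject_lora(model.model.diffusion_model)
#                     optimizer = torch.optim.AdamW(lora_params, lr=1e-4)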

Inference fails on rectangular (non-square) images

Inference on a rectangular image (width ≠ height) raises an error:
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 14 but got size 13 for tensor number 1 in the list.
Inference on square images works fine. Does this model only accept square images as input?

Minimum GPU VRAM for inference?

Hi. I wonder what the minimum GPU VRAM is for running inference. I've tried running this on Google Colab but it's failed.

How to apply a perspective transformation?

How can one apply a perspective transformation to place an image in the masked region seamlessly, without generating a new image variation? I want to paste an image onto the mask precisely, without any alterations. Any assistance would be greatly appreciated.
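Outside the diffusion pipeline, a plain perspective paste (no generation at all) can be done with OpenCV. A minimal sketch, assuming you already know the four destination corners (the corner coordinates below are made up and the example paths are reused from this repo):

import cv2
import numpy as np

image = cv2.imread("examples/image/example_1.png")
ref = cv2.imread("examples/reference/example_1.jpg")

h, w = ref.shape[:2]
src_pts = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
dst_pts = np.float32([[100, 120], [260, 110], [270, 300], [90, 310]])  # hypothetical target corners
M = cv2.getPerspectiveTransform(src_pts, dst_pts)

warped = cv2.warpPerspective(ref, M, (image.shape[1], image.shape[0]))
mask = cv2.warpPerspective(np.full((h, w), 255, np.uint8), M, (image.shape[1], image.shape[0]))
composite = np.where(mask[..., None] > 0, warped, image)  # hard paste of the warped patch
cv2.imwrite("composite.png", composite)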

Generate effect issues when mask size is different

Hello, I conducted a small experiment generating mud spots with masks of different sizes. (Three result images, one per mask size, were attached.)

Problems:

  1. Going from mask1 to mask2, no effect is generated.
  2. With mask3, the edit also seems to have no effect.

What is the reason for this?

Thanks.

Model checkpoint saving problem

I trained this model with my own dataset. Training works fine, but there are some strange problems when I try to use the saved checkpoints for inference.

The saved .pt files have strange dict keys: dict_keys(['module', 'buffer_names', 'optimizer', 'param_shapes', 'lr_scheduler', 'data_sampler', 'random_ltd', 'sparse_tensor_module_names', 'skipped_steps', 'global_steps', 'global_samples', 'dp_world_size', 'mp_world_size', 'ds_config', 'ds_version', 'epoch', 'global_step', 'pytorch-lightning_version']), which is different from model.ckpt.

Since there is no 'state_dict' key, the model weights cannot be loaded correctly at inference time.

Could you tell me how to save the trained model and load it correctly, please?
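The keys listed above (module, ds_config, ...) look like what the DeepSpeed strategy writes. A minimal, hedged sketch of repacking such a checkpoint into the 'state_dict' layout the inference script expects (the input path below is hypothetical):

import torch

# Hedged sketch: pull the weights out of a DeepSpeed-style checkpoint and re-save them under "state_dict".
ckpt = torch.load("logs/last.ckpt/checkpoint/mp_rank_00_model_states.pt", map_location="cpu")  # hypothetical path
state_dict = {k[len("module."):] if k.startswith("module.") else k: v
              for k, v in ckpt["module"].items()}
torch.save({"state_dict": state_dict}, "checkpoints/finetuned.ckpt")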

I have a question about the inference stage.

The UNet predicts noise, but why is this noise not subtracted from the UNet's input? That is, why isn't the function "predict_start_from_noise" called?

Doesn't this contradict the principle of DDPM?

Evaluation metrics

I noticed that the code does not contain the evaluation scripts. Would it be possible to include the evaluation code so that we can reproduce the metrics from the paper?

How does `self.learnable_vector` become learnable?

Nice work!
I'm wondering how you optimize the unconditional vector in

self.params_with_white=params + list(self.learnable_vector)

I see the intention that the unconditional input self.learnable_vector is optimized through self.opt.params=self.params_with_white in line 927, but how does this work? I haven't found any documentation about the params argument of torch.optim.AdamW.
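On the optimizer side, the mechanism is simply that AdamW updates every tensor in the iterable it is given, whether or not it belongs to a module. A tiny self-contained demonstration (the vector shape here is made up):

import torch

learnable_vector = torch.nn.Parameter(torch.zeros(1, 1, 768))  # hypothetical shape
other_params = [torch.nn.Parameter(torch.randn(4, 4))]         # stand-in for the network weights
opt = torch.optim.AdamW(other_params + [learnable_vector], lr=1e-2)

loss = ((learnable_vector - 1.0) ** 2).mean()
loss.backward()
opt.step()
print(learnable_vector.abs().sum() > 0)  # tensor(True): the "unconditional" vector was updated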

CUDA out of memory

When I run inference on my own images with a size larger than 512, I get an out-of-memory (OOM) error. How can I solve it?
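One hedged workaround (whether inference.py exposes height/width flags is not confirmed here) is to resize the inputs to the 512×512 training resolution before running the script, since memory grows roughly quadratically with resolution:

from PIL import Image

# Hypothetical file names; NEAREST keeps the mask binary.
Image.open("my_image.png").resize((512, 512), Image.LANCZOS).save("my_image_512.png")
Image.open("my_mask.png").resize((512, 512), Image.NEAREST).save("my_mask_512.png")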

Error running default command inference.py

Hi all,
I get the error below when running the command mentioned in the usage section. Please help!

Command
python scripts/inference.py --plms --outdir results --config configs/v1.yaml --ckpt checkpoints/model.ckpt --image_path examples/image/example_1.png --mask_path examples/mask/example_1.png --reference_path examples/reference/example_1.jpg --seed 321 --scale 5

Error
/Paint-by-Example/scripts/inference.py - - - - - - - - - - - - - - - - - - - - - eprint(line:60) :: Error when calling Cognitive Face API:
status_code: 401
code: 401
message: Access denied due to invalid subscription key or wrong API endpoint. Make sure to provide a valid key for an active subscription and use a correct regional API endpoint for your resource.

[2024-05-04 18:01:36] /paintByEx/Paint-by-Example/scripts/inference.py - - - - - - - - - - - - - - - - - - - - - eprint(line:60) :: img_url:https://raw.githubusercontent.com/Microsoft/Cognitive-Face-Windows/master/Data/detection1.jpg
[2024-05-04 18:01:37] /paintByEx/Paint-by-Example/scripts/inference.py - - - - - - - - - - - - - - - - - - - - - eprint(line:60) :: Error when calling Cognitive Face API:
status_code: 401
code: 401
message: Access denied due to invalid subscription key or wrong API endpoint. Make sure to provide a valid key for an active subscription and use a correct regional API endpoint for your resource.

[2024-05-04 18:01:37] /paintByEx/Paint-by-Example/scripts/inference.py - - - - - - - - - - - - - - - - - - - - - eprint(line:60) :: img_url:/data1/mingmingzhao/label/data_sets_teacher_1w/47017613_1510574400_out-video-jzc70f41fa6f7145b4b66738f81f082b65_f_1510574403268_t_1510575931221.flv_0001.jpg
Traceback (most recent call last):
File "//paintByEx/Paint-by-Example/scripts/inference.py", line 17, in
from ldm.util import instantiate_from_config
ModuleNotFoundError: No module named 'ldm.util'; 'ldm' is not a package

Notes:
I have already installed ldm==0.1.3.
Why is it requesting the image detection1.jpg?
What subscription key do we need to provide ("Access denied due to invalid subscription key or ...")?

Question about LDM.

Traceback (most recent call last):
File "inference.py", line 405, in <module>
main()
File "inference.py", line 271, in main
model = load_model_from_config(config, f"{opt.ckpt}")
File "inference.py", line 63, in load_model_from_config
model = instantiate_from_config(config.model)
File "/home/yansan/projects/PaintExam/ldm/util.py", line 85, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "/home/yansan/projects/PaintExam/ldm/models/diffusion/ddpm.py", line 448, in __init__
super().__init__(conditioning_key=conditioning_key, *args, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'u_cond_percent'

training problem “pop from empty list”

Thanks for your excellent work.

@zhangmozhe Hello, when I use my own data for training, a "pop from empty list" error appears during training, but I haven't found the cause yet. Have you encountered this, and how can it be solved?

Encountered an error

Hello, very interesting work! When I ran inference.py, I got the following error at Paint-by-Example/ldm/models/diffusion/ddim.py, line 150, in ddim_sampling:
img = torch.cat((img, kwargs['rest']), dim=1)
KeyError: 'rest'
Do you have any suggestions about this? Thank you!

What does {:06} mean?

Thank you for your outstanding contribution to the community!
I noticed that in main.py, line 621:
"filename": "{epoch:06}-{step:09}",
and line 553:
"filename": "{epoch:06}",
I wonder if "format" is missing, or what does this {:06} mean?
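These filename templates are standard Python format strings that PyTorch Lightning's ModelCheckpoint fills with the logged epoch/step values, so :06 simply means zero-padding to six digits:

print("{epoch:06}-{step:09}".format(epoch=3, step=1500))  # -> 000003-000001500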

Generates Black Image

I tried to run inference on all three example images provided in the repository. It returns zero tensors, and black images are saved as output.

The printed output flagged possible NSFW content in the images, so I removed the checkSafety function from the inference script. However, the issue remains and the model still returns only black images.

The conda environment setup had no problems, and I can find no alternative solution to the current problem.

(Attached outputs: grid-example_3_5065, grid-example_2_5876, grid-example_1_321.)

Issue when running sh test.sh

This is my code segment in inference.py:
model = torch.load("checkpoints/model.ckpt")

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model = model.to(device)

When I run sh test.sh I get this error:
Global seed set to 321
Traceback (most recent call last):
File "scripts/inference.py", line 410, in <module>
main()
File "scripts/inference.py", line 279, in main
model = model.to(device)
AttributeError: 'dict' object has no attribute 'to'

(The same traceback repeats for seeds 5876 and 5065.)
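torch.load on a .ckpt file returns a plain dict, not an nn.Module, which is why .to() fails. A minimal sketch of the intended loading path, mirroring load_model_from_config in scripts/inference.py (it assumes the released checkpoint stores its weights under the "state_dict" key):

import torch
from omegaconf import OmegaConf
from ldm.util import instantiate_from_config

config = OmegaConf.load("configs/v1.yaml")
ckpt = torch.load("checkpoints/model.ckpt", map_location="cpu")  # this is just a dict
model = instantiate_from_config(config.model)                    # build the model from the config
model.load_state_dict(ckpt["state_dict"], strict=False)

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model = model.to(device).eval()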

Washed out images

I downloaded the original sd-v1.4 from Hugging Face, did some finetuning on my dataset, and then modified the checkpoint accordingly using scripts/modify_checkpoints.py. I then ran the inpainting training using main.py, but the generated images are washed out. I read something about the VAE not being included, but that shouldn't be the case since we are loading the entire SD v1-4, right?


Training Config

Really interesting work.

I have one question about "v1.yaml" in "configs". I noticed that you set the parameter "cond_stage_trainable" to "true". Does this mean that you also finetuned CLIP during training?

Looking forward to your reply.

Problem about cross-attention during training

Hello @Fantasy-Studio,
I noticed something odd when trying to train the network with the uploaded code.
After training for some iterations, I checked the parameter values of the cross-attention modules of the saved model and found that only the parameters of the to_v projection have changed; the parameters of the to_k and to_q projections have not changed, no matter how long I train. I therefore recorded the backpropagated gradient values of the cross-attention parameters (gradient plots for to_k and to_v were attached).
This is consistent with what I have observed so far. After debugging the code, I found that the CLIP encoder used in the paper only extracts outputs.pooler_output as cond, which has dimension 1x1024. After the cross-attention projections, the q vector is 4096x40 while the k and v vectors are 1x40.
According to the cross-attention formula, the product of q and k^T is a 4096x1 vector. After softmax, because it is taken over a single element, all of its values become 1. At this point the attention mechanism degenerates and the output is just the v vector; it no longer depends on k or q.
The above is my analysis and verification of this situation. However, when I compared sd-v1-4.ckpt with the pretrained Paint-by-Example model uploaded by the authors, I found that the to_k, to_q and to_v weights of the cross-attention modules differ between the two, which confuses me. Have you encountered the same problem? Thank you very much for your reply!

Will researchers working on related topics run into this when training this part of the code? Thank you for your answers.
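The degenerate-softmax observation can be checked in isolation: with a single conditioning token, every attention weight is exactly 1, so the output is just a projection of v and no gradient reaches to_q or to_k. A tiny demo using the 4096x40 / 1x40 shapes mentioned above:

import torch

q = torch.randn(4096, 40)
k = torch.randn(1, 40, requires_grad=True)
v = torch.randn(1, 40, requires_grad=True)

attn = torch.softmax(q @ k.t() / 40 ** 0.5, dim=-1)  # shape (4096, 1), every entry is 1.0
out = attn @ v                                        # every row equals v
out.sum().backward()

print(torch.allclose(out, v.expand_as(out)))  # True: the output ignores q and k
print(k.grad.abs().max())                     # tensor(0.): the gradient into k is exactly zero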

How to train

Excellent work! How do I train a model on my own dataset?

Web demo is not working

===== Application Startup at 2023-07-08 23:27:50 =====

The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling transformers.utils.move_cache().

0it [00:00, ?it/s]
0it [00:00, ?it/s]
Couldn't connect to the Hub: 500 Server Error: Internal Server Error for url: https://huggingface.co/api/models/Fantasy-Studio/Paint-by-Example (Request ID: Root=1-64a9f0fc-6612745c1d91f5f31284905b)

Internal Error - We're working hard to fix this as soon as possible!.
Will try to load from local cache.
Traceback (most recent call last):
File "app.py", line 17, in
pipe = DiffusionPipeline.from_pretrained(
File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/diffusers/pipelines/pipeline_utils.py", line 882, in from_pretrained
cached_folder = cls.download(
File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/diffusers/pipelines/pipeline_utils.py", line 1315, in download
cached_folder = snapshot_download(
File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/huggingface_hub/_snapshot_download.py", line 169, in snapshot_download
with open(ref_path) as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/user/.cache/huggingface/hub/models--Fantasy-Studio--Paint-by-Example/refs/main'

After I open the web demo, it shows a Runtime error with message above.

question about training input

Nice work!
I'm wondering why the inputs can be set this way during training.

(Attached figure with four panels labeled image_GT?????, inpaint_image, inpaint_mask, ref_imgs, i.e., img, masked_img, msk, ref_img.)

In this work, I found that the inputs are the gt (with noise added), masked_img, mask and ref_img. As shown below, the input x_start to the UNet is the concatenation of z (the encoded gt), z_inpaint (the encoded masked_img) and mask_resize (the downsampled mask):

z_new = torch.cat((z,z_inpaint,mask_resize),dim=1)  # x_start
def p_losses(self, x_start, cond, t, noise=None, ):
    if self.first_stage_key == 'inpaint':
        # x_start=x_start[:,:4,:,:]
        noise = default(noise, lambda: torch.randn_like(x_start[:,:4,:,:]))
        x_noisy = self.q_sample(x_start=x_start[:,:4,:,:], t=t, noise=noise)
        x_noisy = torch.cat((x_noisy,x_start[:,4:,:,:]),dim=1)
    ...
    model_output = self.apply_model(x_noisy, t, cond)
    ...

    if self.parameterization == "x0":
        target = x_start
    elif self.parameterization == "eps":
        target = noise
    else:
        raise NotImplementedError()

    loss_simple = self.get_loss(model_output, target, mean=False).mean([1, 2, 3])
    ...

I am curious why the GT image can be fed directly into the UNet. Even though noise has been added to it, it is still visible to the UNet.

Using the images above as an example: during training, the input is the car image and the expected output is the car image. At inference time, the input is an image unrelated to the car (an arbitrary object or just background), and the expected output is the car image.

This is a little weird. On the one hand, the model needs the GT to be optimized, and in other generative models the GT is usually used as a target rather than as a direct input to the model. On the other hand, diffusion models usually predict Gaussian noise rather than pixels, so there seems to be no other way for the diffusion model to be constrained by the GT. I don't understand how the model learns; I'd be grateful if anyone could give me advice.
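For context, the only thing the UNet actually sees of the GT is a noised version of its latent: q_sample implements the standard DDPM forward process, and at large timesteps very little of the GT survives. A hedged stand-alone version of that step (the beta schedule below is the classic DDPM one, not necessarily this repo's):

import torch

T = 1000
betas = torch.linspace(1e-4, 2e-2, T)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def q_sample(x_start, t, noise):
    a_bar = alphas_cumprod[t]
    return a_bar.sqrt() * x_start + (1.0 - a_bar).sqrt() * noise  # x_t

x0 = torch.randn(1, 4, 64, 64)  # stands in for the encoded GT latent z
xt = q_sample(x0, t=999, noise=torch.randn_like(x0))
print(alphas_cumprod[999].sqrt())  # ~0.006: at t = T-1 almost no GT signal remains in x_t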

I get an error with EpollSelector.

(Paint-by-Example) C:\Users\username\Paint-by-Example-main>sh test.sh
Global seed set to 321
Loading model from checkpoints/model.ckpt
Global Step: 32613
Traceback (most recent call last):
  File "scripts/inference.py", line 444, in <module>
    main()
  File "scripts/inference.py", line 300, in main
    model = load_model_from_config(config, f"{opt.ckpt}")
  File "scripts/inference.py", line 65, in load_model_from_config
    model = instantiate_from_config(config.model)
  File "c:\users\username\paint-by-example-main\ldm\util.py", line 85, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "c:\users\username\paint-by-example-main\ldm\util.py", line 93, in get_obj_from_str
    return getattr(importlib.import_module(module, package=None), cls)
  File "C:\Users\username\anaconda3\envs\Paint-by-Example\lib\importlib\__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 783, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "c:\users\username\paint-by-example-main\ldm\models\diffusion\ddpm.py", line 9, in <module>
    from selectors import EpollSelector
ImportError: cannot import name 'EpollSelector' from 'selectors' (C:\Users\username\anaconda3\envs\Paint-by-Example\lib\selectors.py)

I'm using Windows 10.
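For context: epoll only exists on Linux, so Windows' selectors module has no EpollSelector, and as far as we can tell the import at the top of ldm/models/diffusion/ddpm.py is never used. A hedged workaround is to delete that line or guard it:

try:
    from selectors import EpollSelector  # Linux-only; apparently unused in this file
except ImportError:                      # e.g. on Windows
    EpollSelector = None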

Running the test examples outputs all-black images

When running the provided example test, the output images are completely black. I downloaded model.ckpt (11.7 GB) as instructed and placed it in the project's checkpoints directory, but the problem remains. What could be the reason?

Training from scratch

What are the recommended settings (number of epochs, etc.) if we were to train from scratch?

huggingface space demo not working anymore

Thanks for the great paper!

Just wanted to give an update on the Hugging Face Space demo in case the authors are not aware: it looks like the demo is not working anymore. Every time after clicking "Paint", it displays "connection error".

Hope this helps!
