janspiry / Image-Super-Resolution-via-Iterative-Refinement
Unofficial implementation of Image Super-Resolution via Iterative Refinement in PyTorch
License: Apache License 2.0
I'm following the tutorial. I downloaded the FFHQ 512×512 dataset and converted it to a 64_512 dataset with prepare_data.py.
Then I ran python sr.py -p train -c config/sr_sr3_64_512.json,
but I got a CUDA out-of-memory error.
Which parameters do I have to change to clear this error?
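Not an official answer, but in these configs the memory footprint is dominated by the training batch size and the U-Net width/depth. A sketch of lower-memory overrides, written as a Python dict that mirrors the sr_sr3_64_512.json layout (the values are illustrative, not recommendations):

```python
# Illustrative lower-memory overrides, mirroring the sr_sr3_64_512.json
# layout.  Apply them by editing the JSON config by hand; a smaller batch
# and a narrower/shallower U-Net are the main levers against CUDA OOM.
low_mem = {
    "datasets": {"train": {"batch_size": 1}},   # fewer images per step
    "model": {"unet": {
        "inner_channel": 32,                    # narrower U-Net
        "res_blocks": 1,                        # fewer residual blocks
        "channel_multiplier": [1, 2, 4, 8, 8],  # shallower channel ladder
    }},
}
```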
The repo lacks a way of pulling all the necessary dependencies before running. Something like a dependencies.txt would be helpful.
I would like to upscale an image which is 512x512 to 1024x1024 or 2048x2048, is that possible with this model? Also will this work on natural images or just faces?
Hi,
Thanks for your work. Can you please give advice, or some tips and things to try, to improve the training process?
Loss is decreasing but validation results are still poor, and quality isn't improving with new epochs, even though the dataset is fairly large (10,000 images) and homogeneous. I'm trying to train a 256 -> 1024 SR3 model.
I would appreciate any tips or ideas. Thanks!
"model": {
    "which_model_G": "sr3", // use the ddpm or sr3 network structure
    "finetune_norm": false,
    "unet": {
        "in_channel": 6,
        "out_channel": 3,
        "inner_channel": 16,
        "norm_groups": 16,
        "channel_multiplier": [
            1, 2, 4, 8,
            // 8,
            // 16, 16, 32, 32, 32
        ],
        "attn_res": [
            // 16
        ],
        "res_blocks": 1,
        "dropout": 0
    },
Seeing "Error(s) in loading state_dict for GaussianDiffusion" when trying to load the model I1560000_E91.
I tried to use pytorch 1.7.1, python 3.7 to load the model. Was this also the version used to train it?
21-11-14 04:14:05.699 - INFO: Dataset [LRHRDataset - CelebaHQ] is created.
21-11-14 04:14:05.699 - INFO: Initial Dataset Finished
21-11-14 04:14:06.367 - INFO: Loading pretrained model for G [experiments/I1560000_E91] ...
Traceback (most recent call last):
File "infer.py", line 46, in <module>
diffusion = Model.create_model(opt)
File "/home/jotschi/workspaces/ml/Image-Super-Resolution-via-Iterative-Refinement/model/__init__.py", line 7, in create_model
m = M(opt)
File "/home/jotschi/workspaces/ml/Image-Super-Resolution-via-Iterative-Refinement/model/model.py", line 42, in __init__
self.load_network()
File "/home/jotschi/workspaces/ml/Image-Super-Resolution-via-Iterative-Refinement/model/model.py", line 158, in load_network
gen_path), strict=False)
File "/home/jotschi/anaconda3/envs/py37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1052, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for GaussianDiffusion:
size mismatch for denoise_fn.downs.0.weight: copying a param with shape torch.Size([64, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 6, 3, 3]).
size mismatch for denoise_fn.downs.5.res_block.noise_func.noise_func.0.weight: copying a param with shape torch.Size([128, 64]) from checkpoint, the shape in current model is torch.Size([256, 64]).
size mismatch for denoise_fn.downs.5.res_block.noise_func.noise_func.0.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
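A size mismatch like this usually means the config used to build the model disagrees with the one the checkpoint was trained with (e.g. in_channel 3 vs 6, or a different inner_channel). One way to see every disagreement at once is to compare shape dictionaries taken from the two state_dicts; a sketch (diff_shapes is a hypothetical helper, not part of the repo):

```python
# Hypothetical helper: compare parameter shapes saved in a checkpoint with
# those of a freshly built model, to spot which config fields disagree.
# Build the dicts with e.g. {k: tuple(v.shape) for k, v in sd.items()}.
def diff_shapes(ckpt_shapes, model_shapes):
    return [
        (name, ckpt_shapes[name], model_shapes[name])
        for name in sorted(ckpt_shapes)
        if name in model_shapes and ckpt_shapes[name] != model_shapes[name]
    ]
```

Every tuple it returns names a parameter whose saved shape differs from the freshly built one, which narrows down the config field to fix.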
I tried upscaling a simple picture using the Colab notebook, but instead I get this "No such file..." error when running the last cell:
21-11-13 09:59:49.904 - INFO: Dataset [LRHRDataset - CelebaHQ] is created.
21-11-13 09:59:49.904 - INFO: Initial Dataset Finished
21-11-13 10:00:06.355 - INFO: Loading pretrained model for G [I830000_E32] ...
Traceback (most recent call last):
File "infer.py", line 46, in <module>
diffusion = Model.create_model(opt)
File "/content/Image-Super-Resolution-via-Iterative-Refinement/model/__init__.py", line 7, in create_model
m = M(opt)
File "/content/Image-Super-Resolution-via-Iterative-Refinement/model/model.py", line 42, in __init__
self.load_network()
File "/content/Image-Super-Resolution-via-Iterative-Refinement/model/model.py", line 158, in load_network
gen_path), strict=(not self.opt['model']['finetune_norm']))
File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 594, in load
with _open_file_like(f, 'rb') as opened_file:
File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 230, in _open_file_like
return _open_file(name_or_buffer, mode)
File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 211, in __init__
super(_open_file, self).__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: 'I830000_E32_gen.pth'
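As the traceback shows, the loader appends _gen.pth (and _opt.pth for the optimizer state) to the resume_state prefix, so the prefix must point at files that actually exist. A small sanity check one could run before launching (check_resume is a hypothetical helper, not part of the repo):

```python
from pathlib import Path

# Hypothetical pre-flight check: the repo's load_network() builds the
# checkpoint filenames by appending "_gen.pth" / "_opt.pth" to the
# resume_state prefix, so verify both files exist before starting a run.
def check_resume(prefix):
    return [p for p in (f"{prefix}_gen.pth", f"{prefix}_opt.pth")
            if not Path(p).exists()]
```

An empty return list means both checkpoint files are in place; otherwise it lists the missing paths.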
When running the following command:
python sr.py -p val -c config/sr_ddpm_16_128.json
I get the following error:
(pytorch_env) admin@Admins-MacBook-Pro sr3 % python sr.py -p val -c config/sr_ddpm_16_128.json
export CUDA_VISIBLE_DEVICES=0
21-09-09 17:32:40.626 - INFO: name: sr_ffhq
phase: val
gpu_ids: [0]
path:[
log: experiments/sr_ffhq_210909_173240/logs
tb_logger: experiments/sr_ffhq_210909_173240/tb_logger
results: experiments/sr_ffhq_210909_173240/results
checkpoint: experiments/sr_ffhq_210909_173240/checkpoint
resume_state: Users/admin/sr3/pretrained/I640000_E37
experiments_root: experiments/sr_ffhq_210909_173240
]
datasets:[
train:[
name: FFHQ
mode: HR
dataroot: dataset/ffhq_16_128
datatype: img
l_resolution: 16
r_resolution: 128
batch_size: 12
num_workers: 8
use_shuffle: True
data_len: -1
]
val:[
name: CelebaHQ
mode: LRHR
dataroot: dataset/celebahq_16_128
datatype: img
l_resolution: 16
r_resolution: 128
data_len: 3
]
]
model:[
which_model_G: ddpm
finetune_norm: False
unet:[
in_channel: 6
out_channel: 3
inner_channel: 64
channel_multiplier: [1, 1, 2, 2, 4, 4]
attn_res: [16]
res_blocks: 2
dropout: 0.2
]
beta_schedule:[
train:[
schedule: linear
n_timestep: 2000
linear_start: 0.0001
linear_end: 0.02
]
val:[
schedule: linear
n_timestep: 100
linear_start: 0.0001
linear_end: 0.02
]
]
diffusion:[
image_size: 128
channels: 3
conditional: True
]
]
train:[
n_iter: 1000000
val_freq: 10000.0
save_checkpoint_freq: 10000.0
print_freq: 200
optimizer:[
type: adam
lr: 0.0001
]
ema_scheduler:[
step_start_ema: 5000
update_ema_every: 1
ema_decay: 0.9999
]
]
distributed: False
21-09-09 17:32:40.643 - INFO: Dataset [LRHRDataset - CelebaHQ] is created.
21-09-09 17:32:40.644 - INFO: Initial Dataset Finished
Traceback (most recent call last):
File "sr.py", line 51, in <module>
diffusion = Model.create_model(opt)
File "/Users/admin/sr3/model/__init__.py", line 7, in create_model
m = M(opt)
File "/Users/admin/sr3/model/model.py", line 16, in __init__
self.netG = self.set_device(networks.define_G(opt))
File "/Users/admin/sr3/model/networks.py", line 91, in define_G
model = unet.UNet(
File "/Users/admin/sr3/model/ddpm_modules/unet.py", line 218, in __init__
self.final_conv = Block(pre_channel, default(out_channel, in_channel), norm_groups=norm_groups)
TypeError: __init__() got an unexpected keyword argument 'norm_groups'
Hey @Janspiry, I was trying to infer on a few images to get their super-resolution variants.
Here's the result of the inference: https://wandb.ai/ayush-thakur/sr_ffhq/runs/9zfkm2zm
I have used the following checkpoint: 16×16 -> 128×128 on FFHQ-CelebaHQ
The output images (middle column) are definitely super-resolution but not preserving the semantics very well. Can you let me know what I might be missing?
I trained, and when running inference using the following command:
python sr.py -p val -c config/myver.json
I get the following error:
PIL.UnidentifiedImageError: cannot identify image file
File "/content/ISR/data/LRHR_dataset.py", line 59, in __getitem__
img_LR = Image.open(BytesIO(lr_img_bytes)).convert("RGB")
File "/usr/local/lib/python3.7/dist-packages/PIL/Image.py", line 2896, in open
"cannot identify image file %r" % (filename if filename else fp)
PIL.UnidentifiedImageError: cannot identify image file <_io.BytesIO object at 0x7fab1d486890>
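An UnidentifiedImageError at load time usually points at a truncated or non-image file in the dataset folder. A small scan to find such files before training (a sketch using Pillow, not part of the repo):

```python
from pathlib import Path
from PIL import Image

# Scan a dataset directory for files Pillow cannot open as images
# (truncated downloads, stray non-image files, etc.).
def find_unreadable(root):
    bad = []
    for p in sorted(Path(root).rglob("*")):
        if p.is_file():
            try:
                with Image.open(p) as im:
                    im.verify()  # cheap integrity check, no full decode
            except Exception:
                bad.append(p)
    return bad
```

Deleting or re-exporting whatever it reports should make the dataloader stop raising mid-run.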
The imports
from model.base import BaseModule
from model.nn import WaveGradNN
are wrong.
21-09-13 21:50:38.132 - INFO: Loading pretrained model for G [experiments/sr_ffhq_210913_210552/checkpoint/I10000_E1] ...
Traceback (most recent call last):
File "sr.py", line 51, in <module>
diffusion = Model.create_model(opt)
File "/opt/github/Image-Super-Resolution-via-Iterative-Refinement/model/__init__.py", line 7, in create_model
m = M(opt)
File "/opt/github/Image-Super-Resolution-via-Iterative-Refinement/model/model.py", line 23, in __init__
self.load_network()
File "/opt/github/Image-Super-Resolution-via-Iterative-Refinement/model/model.py", line 162, in load_network
self.optG.load_state_dict(opt['optimizer'])
AttributeError: 'DDPM' object has no attribute 'optG'
Multi-GPU training support seems to be problematic.
When the loss type is "l1", l_pix in line 47 of model/model.py will be a list (one loss per GPU), so l_pix.backward() will not run correctly.
When the loss type is "l1", should line 47 of model/model.py be l_pix = self.netG(self.data).mean()?
I'm not sure if the change is correct.
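The symptom can be reproduced without multiple GPUs: backward() only works on scalars, so a gathered per-GPU loss has to be reduced first. A minimal demonstration:

```python
import torch

# A gathered, per-GPU loss is a vector; backward() needs a scalar.
loss_per_gpu = torch.tensor([0.5, 0.7], requires_grad=True)

try:
    loss_per_gpu.backward()        # RuntimeError: grad can be implicitly
except RuntimeError:               # created only for scalar outputs
    pass

loss_per_gpu.mean().backward()     # reduce to a scalar first, then backprop
```

This is why adding .mean() (or .sum(), with a different effective learning rate) makes the multi-GPU path run.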
RuntimeError: CUDA out of memory. Tried to allocate 48.00 MiB (GPU 0; 6.00 GiB total capacity; 4.05 GiB already allocated; 0 bytes free; 4.17 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
So, when running the command:
python data/prepare_data.py --path /content/Image-Super-Resolution-via-Iterative-Refinement/input/ --size 64,512 --out ./dataset/celebahq
I get the following output:
0/1 images processed
/usr/local/lib/python3.7/dist-packages/torchvision/transforms/functional.py:405: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.
"Argument interpolation should be of type InterpolationMode instead of int. "
0/1 images processed
I tried manually changing all instances of Image.BICUBIC to InterpolationMode.BICUBIC, and it didn't work: I still got "0/1 images processed" and that was it.
I also tried running it both in the Google Colab project and locally on Ubuntu, and got the same problem both times.
Any directions on what I could do to fix it? Thanks a lot!
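For reference, the InterpolationMode warning itself is harmless on newer torchvision. The lr/hr/sr triplet that prepare_data.py writes for --size 64,512 can be reproduced with plain Pillow to check that the inputs are readable and resizable (a sketch; the naming mirrors the script's output folders):

```python
from PIL import Image

# Sketch of the lr/hr/sr triplet prepare_data.py produces for --size 64,512:
# lr = downscaled input, hr = high-res target, sr = lr upscaled back to the
# high resolution with bicubic interpolation.
def resize_pair(img, low=64, high=512):
    lr = img.resize((low, low), Image.BICUBIC)
    hr = img.resize((high, high), Image.BICUBIC)
    sr = lr.resize((high, high), Image.BICUBIC)
    return lr, hr, sr
```

If this runs cleanly on one of your inputs, the problem is more likely the --path argument or file naming than the interpolation change.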
I come from a materials science background, and am interested in applying this technique to a non-image task. Do you know of any applications of this method that don't involve images?
Can you give me an example? Say I have a low-resolution picture; how do I produce a super-resolution image from it?
python sr.py -p val -c config/sr_sr3_64_512.json and python eval.py do not work.
Would it be possible to create a Colab notebook in order to try it?
Thank you for the release of the pretrained model for faces.
For my particular use case, a model pretrained on ImageNet would be more useful, and I can't manage to train it on my dataset (apparently a 256 GB GPU is required even when I set the parameters to lower values, and I only have an 8 GB GPU).
Could you also release the model pretrained on ImageNet?
Thank you for your work.
Hello,
I just watched a video demonstrating this :
https://youtu.be/WCAF3PNEc_c
It seems like amazing, incredible work!
Would it be possible to offer this as a library for mobile, such as Android? Could be great to import it to use on Java/Kotlin based apps...
Or it's already possible?
I requested it in the past:
https://issuetracker.google.com/issues/198311063
Also, I don't see the license information. OK to use for any purpose?
Is it possible to do inference on a single image to check accuracy, like with infer.py?
Hi,
What are the finetune_norm and attn_res parameters in the config? Why is the first false and the second commented out?
Hi @Janspiry !
The results of the models are quite impressive! I see you currently share your models with Google Drive. Would you be interested in sharing your models in the Hugging Face Hub? The Hub offers free hosting of over 25K models, and it would make your work more accessible and visible to the rest of the ML community.
Some of the benefits of sharing your models through the Hub would be:
Creating the repos and adding new models should be a relatively straightforward process if you've used Git before. This is a step-by-step guide explaining the process in case you're interested. Please let us know if you would be interested and if you have any questions.
Happy to hear your thoughts,
Omar and the Hugging Face team
tried running data/prepare_data.py on Ubuntu and received the following error.
Traceback (most recent call last):
File "data/prepare_data.py", line 10, in <module>
from torchvision.transforms import InterpolationMode
ImportError: cannot import name 'InterpolationMode' from 'torchvision.transforms'
Hi, thank you for sharing all the codes!
I have a question about the implementation and the equations of the paper.
In the testing phase, using the SR3 model and the conditional input: sqrt(1 - \alpha_t). In the same equation, why divide by 2 and take the exponential in this line? Thanks!
Hello, I have a photo of a face. How do I upscale the face?
I have an issue when loading the dataset with file data/LRHR_dataset.py
self.data_len = min(self.data_len, self.dataset_len)
TypeError: '<' not supported between instances of 'int' and 'NoneType'
Could you please share the python and torch versions?
Thanks in advance.
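A defensive version of that line would treat a missing data_len (None, when the key is absent from the config) the same as -1, meaning "use the whole dataset". A sketch (resolve_data_len is a hypothetical helper, not the repo's code):

```python
# Hypothetical rewrite of the LRHR_dataset length logic: treat a missing
# data_len (None) or -1 as "use the whole dataset".
def resolve_data_len(data_len, dataset_len):
    if data_len is None or data_len <= 0:
        return dataset_len
    return min(data_len, dataset_len)
```

Alternatively, adding an explicit data_len entry to the dataset section of the config avoids the None entirely.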
Hi. Do you plan to release 1024x1024 model?
Or can it be done by modifying the code to cascade all the models?
Because I think it is better not to downscale images to 128 pixels.
Thank you very much in advance
I trained my models on images and got a PSNR of about ~19; then I loaded the same model using infer.py and the images were just noise. I tried using sr.py (val & train) and it's like the loaded model is brand new (even though it is loaded), and the PSNR starts at ~12.
I checked the images output during the training phase and they look decent, but the second the training stops and I use other code, it doesn't work.
Any reason why that could be happening?
(windows10 / python3.6 / conda:latest / all-other-packages:latest)
There is a memory leak somewhere in the multiprocessing section:
Running 8 workers on a system with 16 GB of RAM will lead to a crash within ~1-2 minutes and a nasty pagefile increase to 40 GB! I rewrote the script using the older Process() API and no shared variables between instances, which fixed the issue, so it's probably something to do with data passing.
All of my tests were done on the full celebahq
dataset.
I was thinking: if I apply this to low-quality text images, will it improve their quality using the existing weights?
Hi,
When reading and augmenting the data we call:
Image-Super-Resolution-via-Iterative-Refinement/data/util.py
Lines 64 to 71 in 0f31547
The bugs:
This line iterates over the first dimension of the numpy image (because the function augment(img, split) is called with a single image), and the hflip gets the split value. (We should call it augment(img, split=split).)
Suggested solution:
def transform_augment(img_list, split='val', min_max=(0, 1)):
    imgs = [transform2numpy(img) for img in img_list]
    imgs = augment(imgs, split=split)
    ret_img = [transform2tensor(img, min_max) for img in imgs]
    return ret_img
Please let me know if it's correct.
Hi,
I have access to a big server with 6 GPUs and 64 CPUs. I'm using three of the GPUs in a docker container to run some images through this.
Currently my config train file looks like:
"datasets": {
"train": {
"name": "FFHQ",
"mode": "HR", // whether need LR img
"dataroot": "dataset/ffhq_64_512",
"datatype": "img", //lmdb or img, path of img files
"l_resolution": 64, // low resolution need to super_resolution
"r_resolution": 512, // high resolution
"batch_size": 2,
"num_workers": 16,
"use_shuffle": true,
"data_len": -1 // -1 represents all data used in train
},
where num_workers was changed from 8 to 16 from the default settings. However, training doesn't seem to have sped up noticeably (it's still about a minute per 50 iterations).
When I try to raise the batch_size, I start getting out-of-memory errors.
Is there a good way to speed up training by using the power of this server?
Thanks!
Hi everyone.
I tried to use this incredible tool with the data mentioned in the repository, but I get an error when I try to run [python3 sr.py -p train -config sr_sr3_16_128.json].
The edit in sr_sr3_16_128.json:
"datasets": {
    "train|val": {
        "dataroot": "/content/drive/MyDrive/sr/dataset_16_128",
        "l_resolution": 16, // low resolution need to super_resolution
        "r_resolution": 128, // high resolution
        "datatype": "img" //lmdb or img, path of img files
    }
},
The error:
export CUDA_VISIBLE_DEVICES=0
Traceback (most recent call last):
  File "/content/drive/MyDrive/sr/sr.py", line 23, in <module>
    opt = Logger.parse(args)
  File "/content/drive/MyDrive/sr/core/logger.py", line 73, in parse
    opt['datasets']['val']['data_len'] = 3
KeyError: 'val'
When I change the phase to val, i.e. [python3 sr.py -p val -config sr_sr3_16_128.json], the error is:
Traceback (most recent call last):
  File "/content/drive/MyDrive/sr/sr.py", line 144, in <module>
    for _, val_data in enumerate(val_loader):
NameError: name 'val_loader' is not defined
Even when I change the command to [python3 infer.py -config sr_sr3_16_128.json], as mentioned in the issues section, the error is:
Traceback (most recent call last):
  File "/content/drive/MyDrive/sr/infer.py", line 59, in <module>
    for _, val_data in enumerate(val_loader):
NameError: name 'val_loader' is not defined
I don't understand why I get these errors. Thanks in advance for helping ❤️
/content/Image-Super-Resolution-via-Iterative-Refinement
0/2 images processed/usr/local/lib/python3.7/dist-packages/torchvision/transforms/functional.py:405: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.
"Argument interpolation should be of type InterpolationMode instead of int. "
/usr/local/lib/python3.7/dist-packages/torchvision/transforms/functional.py:405: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.
"Argument interpolation should be of type InterpolationMode instead of int. "
0/2 images processed
Traceback (most recent call last):
File "infer.py", line 12, in <module>
import wandb
ModuleNotFoundError: No module named 'wandb'
I have tried using my own images, but I have not got any difference between hr (which is just upscaled lr) and inf.
My photos are black and white, though; I don't know if that would have any effect on it. It seems to me this is only copying whatever is in the hr folder, because it doesn't matter whether I use actual ground truths or upscaled low-res images: the inference looks the same as the hr input.
Can I train and test on images that are not square (not 128x128), but something like 200x150?
I'm a little confused about the training logic. Is the code below only used to control training for one epoch?
# lines 65-71 of sr.py
if opt['phase'] == 'train':
    while current_step < n_iter:
        current_epoch += 1
        for _, train_data in enumerate(train_loader):
            current_step += 1
            if current_step > n_iter:
                break
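For what it's worth, the loop appears to be iteration-driven rather than epoch-driven: the outer while restarts the loader (one full pass = one epoch) until current_step reaches n_iter. A stand-alone reproduction of that control flow:

```python
# Stand-alone reproduction of the sr.py control flow: the outer while keeps
# restarting the loader (one full pass over it = one epoch) until n_iter
# total steps have been taken, so it covers many epochs, not one.
def run(loader, n_iter):
    step = epoch = 0
    while step < n_iter:
        epoch += 1
        for _batch in loader:
            step += 1
            if step > n_iter:
                break
    return step, epoch
```

With a 3-batch loader and n_iter = 7, this runs three epochs before stopping, which is the behaviour the snippet above implements.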
Error while running the sr3.py file:
export CUDA_VISIBLE_DEVICES=0
21-08-16 15:31:59.624 - INFO: name: basic_sr_ffhq
phase: val
gpu_ids: [0]
path:[
log: experiments\basic_sr_ffhq_210816_153159\logs
tb_logger: experiments\basic_sr_ffhq_210816_153159\tb_logger
results: experiments\basic_sr_ffhq_210816_153159\results
checkpoint: experiments\basic_sr_ffhq_210816_153159\checkpoint
resume_state: model/pretrained/checkpoint/I640000_E37_gen.pth
experiments_root: experiments\basic_sr_ffhq_210816_153159
]
datasets:[
train:[
name: FFHQ
mode: HR
dataroot: dataset/ffhq
l_resolution: 16
r_resolution: 128
batch_size: 4
num_workers: 8
use_shuffle: True
data_len: -1
]
val:[
name: CelebaHQ
mode: LRHR
dataroot: dataset/celebahq
l_resolution: 16
r_resolution: 128
data_len: 50
]
]
model:[
which_model_G: sr3
finetune_norm: False
unet:[
in_channel: 6
out_channel: 3
inner_channel: 64
channel_multiplier: [1, 2, 4, 8, 8]
attn_res: [16]
res_blocks: 2
dropout: 0.2
]
beta_schedule:[
train:[
schedule: linear
n_timestep: 2000
linear_start: 1e-06
linear_end: 0.01
]
val:[
schedule: linear
n_timestep: 2000
linear_start: 1e-06
linear_end: 0.01
]
]
diffusion:[
image_size: 128
channels: 3
conditional: True
]
]
train:[
n_iter: 1000000
val_freq: 10000.0
save_checkpoint_freq: 10000.0
print_freq: 200
optimizer:[
type: adam
lr: 0.0001
]
ema_scheduler:[
step_start_ema: 5000
update_ema_every: 1
ema_decay: 0.9999
]
]
distributed: False
Traceback (most recent call last):
File "sr.py", line 45, in <module>
val_set = Data.create_dataset(dataset_opt, phase)
File "D:\rp\Projects\SuperResolution\sr3\data\__init__.py", line 33, in create_dataset
need_LR=(mode == 'LRHR')
File "D:\rp\Projects\SuperResolution\sr3\data\LRHR_dataset.py", line 14, in __init__
readahead=False, meminit=False)
lmdb.Error: dataset/celebahq: The system cannot find the path specified.
(facerec) D:\rp\Projects\SuperResolution\sr3>python eval.py -p boot_img
Traceback (most recent call last):
File "eval.py", line 36, in <module>
avg_psnr = avg_psnr / idx
ZeroDivisionError: float division by zero
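The division by zero means eval.py matched no hr/sr image pairs at the given path, so the image counter stayed at zero. A guarded sketch of the averaging step (average_psnr is a hypothetical helper, not the repo's function):

```python
# Hypothetical guarded version of the eval.py averaging step: fail loudly
# when no image pairs were found instead of dividing by zero.
def average_psnr(psnr_values):
    if not psnr_values:
        raise SystemExit("no image pairs found - check the -p path")
    return sum(psnr_values) / len(psnr_values)
```

In the case above, the real fix is pointing -p at a results folder that actually contains the generated images.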
Please pardon my inexperience using SR GANs. I was wondering if you could clarify the role of the data input folders: do the l_resolution/h_resolution folders contain the same images (at different sizes)? I assume so for the testing/validation steps. And what does sr_16_64 contain initially? Thank you in advance for your answer, and thank you for sharing your awesome work!
File "data/prepare_data.py", line 85, in prepare
UnboundLocalError: local variable 'env' referenced before assignment
Probably fixable by putting an "if lmdb_save" guard around lines 85-86?
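A sketch of that guard, with prepare() reduced to just the lmdb handling (lmdb_save mirrors the script's --lmdb option; everything else is elided):

```python
# Sketch of the suggested fix: only create the lmdb environment when the
# --lmdb flag was given, so `env` is never referenced before assignment
# on the plain-files path.
def prepare(out_path, lmdb_save=False):
    env = None
    if lmdb_save:
        import lmdb                      # only needed for --lmdb runs
        env = lmdb.open(out_path, map_size=1024 ** 4)
    # ... resize and save the image files here ...
    if env is not None:
        with env.begin(write=True) as txn:
            txn.put(b"length", b"0")
```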
Do you have plans to train this network on ImageNet, COCO, or other datasets? Can the model demonstrate good results on photos of common objects (nature, city) after that?
Hi @Janspiry ,
Thanks for the great repo. Really nice work.
I was wondering is there any plan to extend this repo to more image-to-image tasks?
For example https://iterative-refinement.github.io/palette/
Traceback (most recent call last):
File "infer.py", line 40, in <module>
val_set = Data.create_dataset(dataset_opt, phase)
File "/content/Image-Super-Resolution-via-Iterative-Refinement/data/__init__.py", line 34, in create_dataset
need_LR=(mode == 'LRHR')
File "/content/Image-Super-Resolution-via-Iterative-Refinement/data/LRHR_dataset.py", line 30, in __init__
'{}/sr_{}_{}'.format(dataroot, l_resolution, r_resolution))
File "/content/Image-Super-Resolution-via-Iterative-Refinement/data/util.py", line 16, in get_paths_from_images
assert os.path.isdir(path), '{:s} is not a valid directory'.format(path)
AssertionError: dataset/celebahq_64_512/sr_64_512 is not a valid directory
This may be a dumb question, but is there a way to change the number of "diffusion steps" or iterations the model does when upscaling? Right now I think it is 2000. Thank you so much in advance! 👍 Amazing project, might I add!
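As far as the config goes, the number of refinement steps at inference time comes from the "val" beta_schedule (the SR3 configs use 2000; the DDPM config dumped above uses 100 for validation). A lower-step variant, shown as a Python dict mirroring the JSON layout (the 200 is illustrative, and fewer steps trade quality for speed):

```python
# Illustrative "val" beta_schedule fragment, as a Python dict mirroring the
# JSON config: n_timestep is the number of sampling (refinement) steps at
# inference time; fewer steps run faster at some cost in quality.
beta_schedule_val = {
    "schedule": "linear",
    "n_timestep": 200,       # e.g. 200 instead of 2000
    "linear_start": 1e-6,
    "linear_end": 0.01,
}
```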
Hello, I'm currently having a problem while running the setup. I'm also quite inexperienced with Conda and Python. What am I doing wrong here?
$ conda env create -f core/environment.yml
Collecting package metadata (repodata.json): failed
CondaHTTPError: HTTP 000 CONNECTION FAILED for url <https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch/linux-64/repodata.json>
Elapsed: -
An HTTP error occurred when trying to retrieve this URL.
HTTP errors are often intermittent, and a simple retry will get you on your way.
'https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch/linux-64'
Specs:
5.10.60.1-microsoft-standard-WSL2
with Ubuntu
conda 4.11.0
Windows 10
NVIDIA GeForce RTX 2060 SUPER