
mil-nature-medicine-2019's Issues

Input Tiles to RNN aren't ranked

I was cross-referencing the publication with the code here and ran into a bit of an inconsistency. From the RNN-based slide integration section:

Given a slide and model f, we can obtain a list of the S most interesting tiles within the slide in terms of positive class probability. The ordered sequence of vector representations e = e1, e2, …, eS is the input to an RNN along with a state vector h.

However, in the code the tiles are fed at random (if shuffled) or otherwise in no particular ranked order of class probability, line 236 of RNN_train.py:

    if self.shuffle:
        grid = random.sample(grid,len(grid))
    out = []
    s = min(self.s, len(grid))
    for i in range(s):
        img = slide.read_region(grid[i], self.level, (self.size, self.size)).convert('RGB')

Just a little clarification would be greatly appreciated.
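
For reference, a minimal sketch of the ordering the quoted passage describes, assuming per-tile positive-class probabilities from the MIL stage are already available (the variable names and values below are hypothetical, not the repository's):

    import numpy as np

    # Hypothetical inputs: probs[i] is the positive-class probability of tile i in one slide,
    # grid[i] is that tile's (x, y) coordinate.
    probs = np.array([0.12, 0.97, 0.55, 0.88, 0.30])
    grid = [(0, 0), (224, 0), (448, 0), (0, 224), (224, 224)]

    S = 3
    order = np.argsort(probs)[::-1][:S]        # indices of the S most "interesting" tiles, highest first
    ranked_grid = [grid[i] for i in order]     # ordered sequence whose embeddings would feed the RNN
    print(ranked_grid)                         # [(224, 0), (0, 224), (448, 0)]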

Generate grids

Hi!
First, thanks for the great tool!
I was wondering how you generate the grid tuples in the dictionary for a given image. I am struggling with the coordinates, and I guess I have misunderstood how read_region works.
Any hints on this?
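
In case it helps while waiting for an answer, here is a minimal sketch of one way to build such a grid, assuming non-overlapping 224x224 tiles at a chosen pyramid level. The key point is that read_region expects its (x, y) location in level-0 coordinates, so the step between tiles has to be scaled by the level downsample (a generic example, not the authors' exact preprocessing):

    import openslide

    slide = openslide.OpenSlide('example.svs')      # hypothetical path
    level = 0
    size = 224

    w_lvl, h_lvl = slide.level_dimensions[level]    # slide dimensions at the chosen level
    ds = slide.level_downsamples[level]             # downsample factor relative to level 0

    # (x, y) pairs in level-0 coordinates, as read_region expects
    grid = [(int(col * size * ds), int(row * size * ds))
            for row in range(h_lvl // size)
            for col in range(w_lvl // size)]

    tile = slide.read_region(grid[0], level, (size, size)).convert('RGB')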

Dimension problem with RNN_train

I performed some MIL training using resnet101 instead of the default resnet34. The MIL training goes nicely and I could save the .pth without problems. I also used --batch-size 64 because of memory issues.

I changed the code to use resnet101 in RNN_train.py and changed line 200 to use the batch dimension: self.fc1 = nn.Linear(64, ndims). However, I'm getting a size mismatch error:

Traceback (most recent call last):
  File "RNN_train.py", line 256, in <module>
    main()
  File "RNN_train.py", line 78, in main
    train_loss, train_fpr, train_fnr = train_single(epoch, embedder, rnn, train_loader, criterion, optimizer)
  File "RNN_train.py", line 110, in train_single
    output, state = rnn(input, state)
  File "/usr/local/lib64/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "RNN_train.py", line 200, in forward
    input = self.fc1(input)
  File "/usr/local/lib64/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib64/python3.6/site-packages/torch/nn/modules/linear.py", line 87, in forward
    return F.linear(input, self.weight, self.bias)
  File "/usr/local/lib64/python3.6/site-packages/torch/nn/functional.py", line 1370, in linear
    ret = torch.addmm(bias, input, weight.t())
RuntimeError: size mismatch, m1: [64 x 2048], m2: [64 x 128] at /pytorch/aten/src/THC/generic/THCTensorMathBlas.cu:290

I am not quite sure where the 2048 comes from. Any ideas where the problem could be? Also, I am not sure what --ndims 128 means.
Thanks!
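
For reference (not an official answer): resnet101's penultimate layer produces 2048-dimensional embeddings, whereas resnet34 produces 512-dimensional ones, so the input size of fc1 has to match the embedder output rather than the batch size; --ndims appears to set the size of the RNN's internal state. A minimal sketch of the dimensions involved, with hypothetical variable names:

    import torch
    import torch.nn as nn
    import torchvision.models as models

    # resnet34 embeddings are 512-dim, resnet50/101 embeddings are 2048-dim
    embedder = models.resnet101(pretrained=False)
    embedder.fc = nn.Identity()            # drop the classification head, keep the 2048-dim features

    ndims = 128
    fc1 = nn.Linear(2048, ndims)           # input size = embedding size, not the batch size

    x = torch.randn(64, 3, 224, 224)       # a batch of 64 tiles
    emb = embedder(x)                      # -> [64, 2048]
    out = fc1(emb)                         # -> [64, 128], i.e. --ndims
    print(emb.shape, out.shape)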

low volatile gpu-util in data loading

Congratulations on your work. We would like to adapt this method to a multi-output network to detect diseases in the liver. We are facing a problem when loading the dataset; maybe you have had the same problem and could share your experience.

I understand that openslide and DataLoader are used to load tiles. With multiple workers and the data stored on a local disk, GPU memory usage is high, but Volatile GPU-Util stays at 0 most of the time and only jumps to 100 for a few seconds. This makes the training stage quite long. Any suggestions for solving this bottleneck? Thanks in advance.
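
Not a fix specific to this repository, but the usual knobs for this kind of input bottleneck are the DataLoader's num_workers, pin_memory and (on newer PyTorch) prefetch_factor, so that tile decoding overlaps with GPU compute. A generic sketch with a stand-in dataset (the real one would read tiles with openslide):

    import torch
    from torch.utils.data import Dataset, DataLoader

    class DummyTileDataset(Dataset):
        """Stand-in for the openslide-backed dataset; returns random 'tiles'."""
        def __len__(self):
            return 1024
        def __getitem__(self, idx):
            return torch.randn(3, 224, 224)

    if __name__ == '__main__':
        loader = DataLoader(
            DummyTileDataset(),
            batch_size=128,
            num_workers=8,        # more workers overlap tile reading/decoding with GPU compute
            pin_memory=True,      # faster host-to-GPU copies
            prefetch_factor=2,    # batches pre-loaded per worker (PyTorch >= 1.7)
        )
        for batch in loader:
            pass                  # batch.cuda(non_blocking=True) in a real training loop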

RuntimeError: Dataloader worker is killed by signal: Killed

Hi,
I have just tried to start some training, and the MIL_train.py script fails with the error in the title. Searching a bit, I found that it seems to be related to multiprocessing. If I set --workers to 0 the error is no longer there; however, the training time increases a lot. I guess it is something related to openslide, but I am not able to find anything about how it handles multiprocessing.

Do you have any hints on this?
Thanks

Problem about dataset dimensions

In RNN_train.py, assuming s is 10, the output of __getitem__() in rnndata should be [10, 3, 224, 224] for each WSI.

In the DataLoader, with a batch size of 128, the inputs in each loop of train_single() should be [128, 10, 3, 224, 224].

According to the code, however, the batch_size comes out as 10 (it should be 128) and len(inputs) is 128 (it should be 10). These two dimensions appear to be swapped. I just want to verify this.
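
If the two dimensions really are swapped, a permute at the top of the loop would restore the expected order. A hedged sketch, assuming the loop wants the sequence (tile) dimension first:

    import torch

    inputs = torch.randn(128, 10, 3, 224, 224)   # [batch, s, C, H, W] as produced by the DataLoader
    inputs = inputs.permute(1, 0, 2, 3, 4)       # -> [s, batch, C, H, W]: iterate over the s tiles,
                                                 #    feeding a [batch, C, H, W] tensor to the embedder each step
    print(inputs.shape, len(inputs))             # torch.Size([10, 128, 3, 224, 224]) 10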

A few questions on grid point sizes and GPU starvation

Congratulations on this significant contribution. We are experimenting with your method to detect cancer in soft tissues (Aperio slides). I have a few questions, rather than issues, that I hope you can help with.

I understand that the grid should contain the coordinates of square patches within the slides that you want to include. Are these patches intended to be the same size as the native model input (224x224 for resnet34), or do you list larger patches, which are then either downsized or scanned by the system?

I have noticed that with a simple model like resnet34 and small patch sizes, I can't load data to the GPU fast enough to keep it busy. Perhaps this is normal, but did you experience such an issue? Perhaps you put everything in shared memory first?

Thanks in advance.

Memory leak while obtaining data from DataLoader

Hello, thanks for the great work.

There seems to be a memory issue when acquiring tile images from the Dataset. Memory increases linearly with every iteration and eventually eats up the entire RAM. From the code, it doesn't look like anything should be taking up that much memory. Any advice on how to get around this? It seems to have to do with openslide.

How to get data?

I want to download the data from http://thomasfuchslab.org/data/, but I could not find it on that website. I hope I can get the data to test the effectiveness of this method.
Could you provide a download link, for example on Google Drive?

About ZeroDivisionError when I run MIL_train.py

"dataPrepare_for_CNN.py" is okay
but
when I run the "MIL_train.py",
there is a error:

return running_loss/len(loader.dataset)
ZeroDivisionError: float division by zero

Sadly, I cannot solve this error; please help me.

Error happened while downloading data using "download_dataset.py"!

Hello! @gabricampanella Thanks for sharing your code and data! I'm very interested in your work on the proposed deep learning framework, so I tried to download the data with the file "download_dataset.py". But the following errors happened when I ran the code:

downloading target.csv (file 1 of 132)
Client-Request-ID=1f339462-b26d-11e9-9582-7085c27ccf92 Retry policy did not allow for a retry: Server-Timestamp=Tue, 30 Jul 2019 01:55:37 GMT, Server-Request-ID=16869c09-d01e-0146-6379-465ad4000000, HTTP status code=403, Exception=Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature. ErrorCode: AuthenticationFailed<?xml version="1.0" encoding="utf-8"?><Error><Code>AuthenticationFailed</Code><Message>Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.RequestId:16869c09-d01e-0146-6379-465ad4000000Time:2019-07-30T01:55:38.5826474Z</Message><AuthenticationErrorDetail>Signature not valid in the specified time frame: Start [Fri, 28 Jun 2019 00:52:42 GMT] - Expiry [Sun, 28 Jul 2019 08:52:42 GMT] - Current [Tue, 30 Jul 2019 01:55:38 GMT]</AuthenticationErrorDetail></Error>.
failed to download target.csv on try 0 of 3
failed to download target.csv on try 1 of 3
failed to download target.csv on try 2 of 3
downloading README.txt (file 2 of 132)
failed to download README.txt on try 0 of 3
failed to download README.txt on try 1 of 3
failed to download README.txt on try 2 of 3
downloading HobI18-323369624610.svs (file 3 of 132)
failed to download HobI18-323369624610.svs on try 0 of 3
failed to download HobI18-323369624610.svs on try 1 of 3
failed to download HobI18-323369624610.svs on try 2 of 3
downloading HobI18-331819024579.svs (file 4 of 132)
......

(Every retry for every file fails with the same HTTP 403 AuthenticationFailed error shown above; only the request IDs and timestamps differ.)

So, could you help me with this? Many thanks!

Permission Error

Hi GabriCampanella, I'm working on running the MIL_train.py script on the WSI images from the PANDA challenge Kaggle dataset. The convergence.csv file is created, but the script then fails while pickling.
Errors are:
1)File "C:\Users\4472829\AppData\Local\Programs\Python\Python37\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
ValueError: ctypes objects containing pointers cannot be pickled
2)File "C:\Users\4472829\AppData\Local\Programs\Python\Python37\lib\multiprocessing\reduction.py", line 87, in steal_handle
_winapi.DUPLICATE_SAME_ACCESS | _winapi.DUPLICATE_CLOSE_SOURCE)
PermissionError: [WinError 5] Access is denied.

Note:
I'm running the program as administrator, and I do have admin rights on the PC I'm using.

Specifications:
OS - Windows
IDE - Pycharm with Python 3.7.7.

I came across a few solutions stating that this is a Windows bug and that one should use Linux; please let me know if there is any alternative other than running it on Linux.

Thanks in advance.

Image normalization

Hello everyone!

I have a question regarding the preprocessing of the images that go into the CNN portion of this system. The CNN starts out as a pretrained resnet34 from torchvision. The PyTorch docs for these models provide the following info:

The images have to be loaded in to a range of [0, 1] and then normalized using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225].

Literally while writing this issue I figured out that the [0, 1] range is achieved via transforms.ToTensor (just in case anybody else reading this was wondering, that part is fine). However, I'm still a little confused about the normalization. In the code, mean = [0.5, 0.5, 0.5] and std = [0.1, 0.1, 0.1] are used instead of the values proposed by the docs. Is there a special reason for this? While researching, I found some people claiming that when your training data is very different from ImageNet data (which is certainly the case here), using the mean and std of your own training data might be worthwhile. Is this what happened here? I would appreciate some clarification on the matter. :-)

Thank you and have a nice holiday!
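
For comparison, the two choices side by side (generic torchvision code, not the repository's exact transform pipeline):

    from torchvision import transforms

    # ImageNet statistics recommended by the torchvision docs for pretrained models
    imagenet_norm = transforms.Compose([
        transforms.ToTensor(),                               # scales pixels to [0, 1]
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    # Values used in this repository's scripts
    repo_norm = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.5, 0.5, 0.5],
                             std=[0.1, 0.1, 0.1]),
    ])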

Function group_argtopk() can't take top K tiles

I tried inputting some samples into the function group_argtopk() in MIL_train.py. It should return the indices of the top-k tiles, but in some cases it actually returns more than k indices. There might be a mistake in line 148: index[:-k] = groups[k:] != groups[:-k]
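
For anyone comparing outputs, here is a simple (unoptimized) reference for what a per-group top-k should return; it is a hypothetical reimplementation for checking against group_argtopk, not the repository's function:

    import numpy as np

    def group_topk_indices(groups, probs, k=2):
        """For each group (slide), return the indices of its k highest-probability tiles."""
        groups = np.asarray(groups)
        probs = np.asarray(probs)
        out = []
        for g in np.unique(groups):
            idx = np.where(groups == g)[0]           # tiles belonging to slide g
            top = idx[np.argsort(probs[idx])[-k:]]   # k highest probabilities (fewer if the group is small)
            out.extend(top.tolist())
        return np.array(out)

    groups = [0, 0, 0, 1, 1]
    probs = [0.1, 0.9, 0.4, 0.7, 0.2]
    print(group_topk_indices(groups, probs, k=2))    # [2 1 4 3]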

Error while running MIL_train.py

Hello, I'm working on running the MIL_train script and it's throwing the error below:

  • raise exception ctypes.ArgumentError: Caught ArgumentError in DataLoader worker process 0.
  • read_region(slide, buf, x, y, level, w, h), ctypes.ArgumentError: argument 3: <class 'TypeError'>: wrong type. I have tried the various fixes available online, but none seems to work.

Any suggestions on what's going on?

OS - Linux
Openslide-python version 1.1.2
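
One common cause of that particular ArgumentError is passing coordinates that are not plain Python ints to read_region, for example floats produced by computing the grid with / instead of //. Whether that is the cause here is only an assumption, but casting is cheap to try:

    # Hypothetical guard: make sure every grid coordinate is a plain Python int
    # (ctypes rejects float coordinates with exactly this kind of "wrong type" error).
    grid = [(224.0, 0.0), (448.0, 0.0)]           # example of a "bad" grid with float coordinates
    grid = [(int(x), int(y)) for x, y in grid]    # cast before calling slide.read_region(...)
    print(grid)                                   # [(224, 0), (448, 0)]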

the questions about the code

Code:

    parser.add_argument('--train_lib', type=str, default='', help='path to train MIL library binary')

Code:

    class MILdataset(data.Dataset):
        def __init__(self, libraryfile='', transform=None):
            lib = torch.load(libraryfile)
            slides = []
            for i, name in enumerate(lib['slides']):
                sys.stdout.write('Opening SVS headers: [{}/{}]\r'.format(i+1, len(lib['slides'])))
                sys.stdout.flush()
                slides.append(openslide.OpenSlide(name))
            print('')

Question: the parameter "train_lib" is passed in as libraryfile, and I am confused about the meaning of "train_lib". By my understanding it should be the path to the data, but it looks like the path to a trained model. If it is, where is the model?
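
If it helps, the library file appears to be a torch-saved Python dict describing the dataset (slide paths, tile coordinates and labels), not a trained model. A minimal sketch of building one, assuming the key layout described in the repository's README (slides, grid, targets, mult, level); all paths and values below are placeholders:

    import torch

    library = {
        'slides': ['/data/slide_001.svs', '/data/slide_002.svs'],   # full paths to the WSIs
        'grid': [
            [(0, 0), (224, 0), (0, 224)],                            # tile coordinates for slide_001
            [(0, 0), (448, 448)],                                    # tile coordinates for slide_002
        ],
        'targets': [0, 1],                                           # slide-level labels
        'mult': 1.0,                                                 # tile scale factor
        'level': 0,                                                  # pyramid level used by read_region
    }
    torch.save(library, 'train_lib.pth')    # pass this path as --train_lib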

Generate top tiles

Could you please tell me how you extract the top tiles shown in Fig. 1c of the paper? I also want to have a look at the top tiles from my WSIs, but I still can't figure it out. Thank you!
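
Not the authors' code, but the usual recipe is: run inference to get a probability per tile, sort, and read the highest-scoring regions back out of the slide. A minimal sketch with placeholder paths and values:

    import numpy as np
    import openslide

    slide = openslide.OpenSlide('example.svs')          # hypothetical slide
    grid = [(0, 0), (224, 0), (448, 0), (0, 224)]       # tile coordinates for this slide
    probs = np.array([0.05, 0.93, 0.71, 0.42])          # per-tile probabilities from the trained model

    top_n = 2
    for rank, i in enumerate(np.argsort(probs)[::-1][:top_n]):
        tile = slide.read_region(grid[i], 0, (224, 224)).convert('RGB')
        tile.save('top_tile_{}_p{:.2f}.png'.format(rank, probs[i]))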

Confusion about RNN codes

Hi Gabriele! I have read your article and the code of this MIL work, and there is a point where our understandings differ. In your workflow, the top-128 features generated by the resnet are collected and the WSI-level result is then predicted with an RNN. In your scripts, however, this is implemented with only fully connected layers and ReLU activations, which does not look like a recurrent neural network at all, and that surprised me a lot. Is there something I have misunderstood about your work? I would appreciate a reply when it is convenient for you. Thanks.
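
For what it is worth, a module built only from fully connected layers and a ReLU can still be recurrent if the same layers are applied at every step of a sequence while carrying a state vector. A minimal generic vanilla-RNN sketch of that pattern (not a claim about the repository's exact architecture):

    import torch
    import torch.nn as nn

    class TinyRNN(nn.Module):
        """Vanilla RNN cell: state = ReLU(W_in x_t + W_h state), built from two Linear layers."""
        def __init__(self, input_dim, ndims):
            super().__init__()
            self.fc_in = nn.Linear(input_dim, ndims)
            self.fc_state = nn.Linear(ndims, ndims)
            self.fc_out = nn.Linear(ndims, 2)
            self.relu = nn.ReLU()

        def forward(self, sequence):                          # sequence: [S, batch, input_dim]
            state = torch.zeros(sequence.size(1), self.fc_state.in_features)
            for x_t in sequence:                              # same weights reused at every step -> recurrence
                state = self.relu(self.fc_in(x_t) + self.fc_state(state))
            return self.fc_out(state)                         # slide-level prediction from the final state

    rnn = TinyRNN(input_dim=512, ndims=128)
    out = rnn(torch.randn(10, 4, 512))                        # 10 top tiles, batch of 4 slides, 512-dim embeddings
    print(out.shape)                                          # torch.Size([4, 2])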

ValueError: ctypes objects containing pointers cannot be pickled

I got error when iter Dataloader like following:

Traceback (most recent call last):
  File "D:\PyCharm\helpers\pydev\pydevd.py", line 1664, in <module>
    main()
  File "D:\PyCharm\helpers\pydev\pydevd.py", line 1658, in main
    globals = debugger.run(setup['file'], None, None, is_module)
  File "D:\PyCharm\helpers\pydev\pydevd.py", line 1068, in run
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "D:\PyCharm\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "D:/Projects/Colon/Code/Samsung_Colon_MIL/MIL_train_V2.py", line 244, in <module>
    main()
  File "D:/Projects/Colon/Code/Samsung_Colon_MIL/MIL_train_V2.py", line 86, in main
    probs = inference(epoch, train_loader, model)
  File "D:/Projects/Colon/Code/Samsung_Colon_MIL/MIL_train_V2.py", line 134, in inference
    for input in loader:
  File "D:\Anaconda\envs\Colon_Metastasis\lib\site-packages\torch\utils\data\dataloader.py", line 819, in __iter__
    return _DataLoaderIter(self)
  File "D:\Anaconda\envs\Colon_Metastasis\lib\site-packages\torch\utils\data\dataloader.py", line 560, in __init__
    w.start()
  File "D:\Anaconda\envs\Colon_Metastasis\lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "D:\Anaconda\envs\Colon_Metastasis\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "D:\Anaconda\envs\Colon_Metastasis\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "D:\Anaconda\envs\Colon_Metastasis\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
    reduction.dump(process_obj, to_child)
  File "D:\Anaconda\envs\Colon_Metastasis\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
ValueError: ctypes objects containing pointers cannot be pickled

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "D:\Anaconda\envs\Colon_Metastasis\lib\multiprocessing\spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "D:\Anaconda\envs\Colon_Metastasis\lib\multiprocessing\spawn.py", line 115, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input

The reason for the error seems to be that openslide holds the WSI as a pointer (a ctypes object), which cannot be pickled when the DataLoader spawns worker processes.
Any help?
Thanks in advance
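
A workaround that is often suggested for this situation (spawn-based multiprocessing on Windows has to pickle the dataset, and OpenSlide handles are ctypes pointers) is to store only the slide paths in the dataset and open the handles lazily inside each worker. A hedged sketch, not the repository's code:

    import openslide
    from torch.utils.data import Dataset

    class LazySlideDataset(Dataset):
        """Stores paths only, so the dataset pickles cleanly; slide handles are opened per worker."""
        def __init__(self, slide_paths, grid, level=0, size=224):
            self.slide_paths = slide_paths     # picklable strings instead of OpenSlide objects
            self.grid = grid                   # list of (slide_index, (x, y)) tuples
            self.level = level
            self.size = size
            self._handles = {}                 # filled lazily inside each worker process

        def _slide(self, idx):
            if idx not in self._handles:
                self._handles[idx] = openslide.OpenSlide(self.slide_paths[idx])
            return self._handles[idx]

        def __len__(self):
            return len(self.grid)

        def __getitem__(self, i):
            slide_idx, coord = self.grid[i]
            img = self._slide(slide_idx).read_region(coord, self.level, (self.size, self.size))
            return img.convert('RGB')          # a transform would normally turn this into a tensor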

A Question about the Best Accuracy in the codes

Congratulations on this significant contribution. We are experimenting with your method to detect cancer in Camelyon16.

I want to know the specific value of the best accuracy (best_acc) in MIL_train.py and RNN_train.py.

Its default value is 0 in your code; can you tell me the specific value?

Thanks in advance.
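
For context, best_acc is not a fixed constant of the method: in the usual checkpointing pattern it starts at 0 and is overwritten whenever the validation accuracy improves, so its final value depends on your own run. A hypothetical sketch of that bookkeeping (stand-in model and accuracies):

    import torch
    import torch.nn as nn

    model = nn.Linear(10, 2)                     # stand-in for the real model
    val_accuracies = [0.71, 0.78, 0.76, 0.81]    # stand-in per-epoch validation accuracies

    best_acc = 0.0                               # same starting value as in the scripts
    for epoch, acc in enumerate(val_accuracies, 1):
        if acc > best_acc:                       # keep only the best-so-far checkpoint
            best_acc = acc
            torch.save({'epoch': epoch,
                        'state_dict': model.state_dict(),
                        'best_acc': best_acc}, 'checkpoint_best.pth')
    print(best_acc)                              # 0.81 for this toy run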
