jewettaij / visfd

volumetric image toolkit for simple feature detection (segment tomograms)

License: MIT License

Makefile 1.70% C++ 94.51% Python 2.05% Shell 1.73%
image-processing tomography mrc-format tensor-voting electron-microscopy 3d-images segmentation cryo-em cryoem membrane-detection

visfd's Introduction


visfd

Volumetric Image toolkit for Simple Feature Detection

VISFD is a small C++ template library for 3D image processing ("visfd.hpp"), useful for extracting geometric shapes, volumes, and other features from 3D volumetric images (e.g. tomograms). VISFD also includes a basic C++ library for reading and writing MRC files ("mrc_simple.hpp").

VISFD's most useful feature is probably its ability to segment cell volumes in Cryo-ET images, as well as membrane-bound organelles within cells.

VISFD is also a collection of stand-alone programs which use this library (including "filter_mrc", "combine_mrc", "voxelize_mesh", "sum_voxels", and "pval_mrc"). They are documented here. Multiprocessor support is implemented using OpenMP.

Segmenting cell volumes from 3D Cryo-EM tomograms

Machine-learning based detectors for membranes and large molecular complexes have been implemented in programs like EMAN2. VISFD complements this software by providing tools for geometry extraction, curve and surface detection, signed-normal determination, and robust connected-component analysis. In a typical workflow, features of interest (like membranes or filaments) would be detected and emphasized using programs like EMAN2. Then the coordinates of these geometric shapes would be extracted using VISFD and analyzed using 3rd-party tools like SSDRecon/PoissonRecon. This makes it possible to detect and close holes in incomplete membrane surfaces automatically. However, VISFD can detect curves and surfaces on its own. (EMAN2 is recommended but not required.)

Alternatives to VISFD

Although VISFD has similarities to general libraries for 3-D image processing, such as scikit-image and scipy.ndimage, VISFD is nowhere near as general or well documented as these tools. However, VISFD has additional features (such as tensor-voting) which are useful for extracting geometry from Cryo-EM tomograms of living cells.

Programs included with this repository:

After compilation, all programs will be located in the "bin/" subdirectory. Here is a brief description of some of them:

filter_mrc

filter_mrc is a stand-alone program which uses many of the features of the visfd library. This program was intended to be used for automatic detection and segmentation of closed membrane-bound compartments in Cryo-EM tomograms. Other features include filtering, annotation, scale-free blob-detection, morphological noise removal, connected component analysis, filament (curve) detection (planned), and edge detection (planned). Images can be segmented hierarchically into distinct contiguous objects, using a variety of strategies. This program currently only supports the .MRC (a.k.a. .REC or .MAP) image file format. As of 2021-9-13, this program does not have a graphical user interface.

Tutorials for using filter_mrc are available here. A (long) reference manual for this program is available here. The source code for the VISFD filters used by this program is located here.

voxelize_mesh.py

voxelize_mesh.py is a program that finds the voxels in a volumetric image that lie within the interior of a closed surface mesh. It was intended for segmenting the interiors of membrane-bound compartments in tomograms of cells. The mesh files that this program reads are typically generated by filter_mrc (together with other tools). However, it can read any standard PLY file containing a closed polyhedral mesh. This program currently only supports the .mrc/.rec image file format. Documentation for this program is located here. WARNING: This experimental program is very slow and currently requires a very large amount of RAM.
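For readers curious about the underlying computation, the classic way to test whether a point lies inside a closed mesh is ray-casting parity: a ray cast from the point crosses the surface an odd number of times if and only if the point is inside. Below is a minimal, self-contained C++ sketch of that idea (purely illustrative; the actual program delegates this work to pyvista/vtk, and all names here are hypothetical):

    #include <array>
    #include <cmath>
    #include <vector>

    using Vec3 = std::array<double, 3>;
    struct Triangle { Vec3 a, b, c; };

    static Vec3 Sub(const Vec3 &p, const Vec3 &q) {
      return {p[0]-q[0], p[1]-q[1], p[2]-q[2]};
    }
    static Vec3 Cross(const Vec3 &p, const Vec3 &q) {
      return {p[1]*q[2]-p[2]*q[1], p[2]*q[0]-p[0]*q[2], p[0]*q[1]-p[1]*q[0]};
    }
    static double Dot(const Vec3 &p, const Vec3 &q) {
      return p[0]*q[0] + p[1]*q[1] + p[2]*q[2];
    }

    // Moller-Trumbore ray/triangle intersection test.
    // (Grazing hits on edges/vertices are ignored for brevity.)
    static bool RayHitsTriangle(const Vec3 &orig, const Vec3 &dir,
                                const Triangle &t) {
      const double EPS = 1e-12;
      Vec3 e1 = Sub(t.b, t.a), e2 = Sub(t.c, t.a);
      Vec3 h = Cross(dir, e2);
      double det = Dot(e1, h);
      if (std::fabs(det) < EPS) return false;   // ray parallel to triangle
      double f = 1.0 / det;
      Vec3 s = Sub(orig, t.a);
      double u = f * Dot(s, h);
      if (u < 0.0 || u > 1.0) return false;
      Vec3 q = Cross(s, e1);
      double v = f * Dot(dir, q);
      if (v < 0.0 || u + v > 1.0) return false;
      return f * Dot(e2, q) > EPS;              // intersection in front of orig
    }

    // A point lies inside a closed mesh iff a ray from it crosses the
    // surface an odd number of times. Looping this over every voxel center
    // voxelizes the mesh interior.
    bool PointInsideMesh(const Vec3 &p, const std::vector<Triangle> &mesh) {
      Vec3 dir = {1.0, 0.0, 0.0};               // arbitrary ray direction
      int crossings = 0;
      for (const Triangle &t : mesh)
        if (RayHitsTriangle(p, dir, t))
          ++crossings;
      return (crossings % 2) == 1;
    }

Testing every voxel against every triangle this way is O(voxels × triangles), which hints at why a naive implementation is slow; practical implementations use spatial acceleration structures.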

combine_mrc

combine_mrc is a program for combining two volumetric images (i.e. tomograms, both of identical size) into one image/tomogram, using a combination of addition, subtraction, multiplication, division, and thresholding operations. These operations can be used to perform binary operations between two images (similar to "and", "or", and "not" operations). Documentation for this program is located here.

histogram_mrc.py

histogram_mrc.py is a graphical python program which displays the histogram of voxel intensities contained in an MRC file. It can be useful when deciding what thresholds to use with the "filter_mrc" and "combine_mrc" programs. Voxels and regions in the image can be excluded from consideration by using the "-mask" and "-mask-select" arguments. This software requires the matplotlib and mrcfile python modules (both of which can be installed using pip). Documentation for this program is located here.
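The histogram itself is a one-pass computation. Here is a hedged C++ sketch, assuming the voxel intensities have already been loaded into a flat array (the real program does this in Python with the mrcfile module; the function name is illustrative):

    #include <algorithm>
    #include <vector>

    // Count voxel intensities into nbins equal-width bins spanning the
    // intensity range. Assumes "voxels" is non-empty.
    std::vector<long> IntensityHistogram(const std::vector<float> &voxels,
                                         int nbins) {
      float vmin = *std::min_element(voxels.begin(), voxels.end());
      float vmax = *std::max_element(voxels.begin(), voxels.end());
      std::vector<long> counts(nbins, 0);
      float width = (vmax - vmin) / nbins;
      for (float v : voxels) {
        int bin = (width > 0) ? (int)((v - vmin) / width) : 0;
        if (bin >= nbins) bin = nbins - 1; // so v == vmax lands in the last bin
        counts[bin]++;
      }
      return counts;
    }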

sum_voxels

sum_voxels is a simple program for estimating volumes. It reads an MRC (.REC) file as an argument and computes the sum of all the voxel intensities. (Typically the voxel intensities are either 1 or 0. The resulting sum can be converted into a volume either by multiplying by the volume-per-voxel, or by specifying the voxel width using the "-w" argument and including the "-volume" argument.) For convenience, a thresholding operation can be applied (using the "-thresh", "-thresh2", and "-thresh4" arguments) so that the voxel intensities vary between 0 and 1 before the sum is calculated. The sum can be restricted to certain regions (using the "-mask" and "-mask-select" arguments). Documentation for this program is located here.
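The computation is simple enough to state directly. A minimal C++ sketch of the sum-then-convert logic (function names are illustrative; "voxel_width" plays the role of the "-w" argument, assumed to be in the same units on all three axes):

    #include <vector>

    // Sum all voxel intensities (typically 0 or 1 after thresholding).
    double SumVoxels(const std::vector<float> &voxels) {
      double sum = 0.0;
      for (float v : voxels)
        sum += v;
      return sum;
    }

    // Convert the sum to a physical volume, as with the "-w" and "-volume"
    // arguments: each voxel contributes voxel_width^3 worth of volume.
    double VolumeFromSum(double sum, double voxel_width) {
      return sum * voxel_width * voxel_width * voxel_width;
    }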

pval_mrc

pval_mrc is a program for estimating the probability that a cloud of points in an image is distributed randomly. It looks for regions of high (or low) density in an image. (The user can specify a -mask argument to perform the analysis in small, confined, irregularly-shaped subvolumes from a larger image.) Documentation for this program is located here.

Development Status: alpha

As of 2023, development on this software has been postponed. If I have time to work on it again, some program names, command-line arguments, file names, and function names (in the API) may be altered.

Installation

The INSTALL.md file has instructions for installing VISFD and its dependencies.

Requirements

  • At least 16GB of RAM. (For membrane detection, your RAM must exceed 11x-44x the size of the tomogram you are analyzing. This does not include the memory needed by the OS, browser, or other programs you are running. The voxelize_mesh.py program requires even more memory. You can reduce the memory needed and the computation time dramatically by cropping or binning your tomogram.)
  • A terminal (running BASH) where you can enter commands.
  • A C++ compiler
  • make
  • Software to visualize MRC/REC/MAP files (such as IMOD/3dmod)
  • python (version 3.0 or later)
  • The "numpy", "matplotlib", "mrcfile", and "pyvista" python modules (These are are installable via "pip3" or "pip".)
  • SSDRecon/PoissonRecon.
  • Software to visualize mesh files (such as meshlab).

Recommended:

  • A text editor. (Such as vi, emacs, atom, VisualStudio, Notepad++, ... Apple's TextEdit can be used if you save the file as plain text.)
  • A computer with at least 4 CPU cores (8 threads).
  • 32GB of RAM (depending on image size, this still might not be enough)
  • ChimeraX is useful for visualizing MRC files in 3-D and also includes simple graphical volume editing capability.

License

All of the code in this repository (except for code located in "lib/mrc_simple" and "lib/visfd/eigen_simple.hpp") is available under the terms of the MIT license. (See "LICENSE.md")

Additional license dependencies

MPL-2.0 licensed code (eigen_simple)

The "lib/visfd/eigen3_simple.hpp" file contains code from Eigen which requires the MPL-2.0 license.

GPLv2 licensed code (mrc_simple)

A small subset of the code in "lib/mrc_simple" was adapted from IMOD. The IMOD code uses the GPL license (version 2), which is more restrictive. License details for the "mrc_simple" library can be found in the COPYRIGHT.txt file located in that directory. If you write your own code using the "visfd" library to analyze 3D images (which you have loaded into memory by some other means), then you can ignore this notice.

Funding

VISFD was funded by NIH grant R01GM120604.

Citation

If you find this program useful, please cite DOI 10.5281/zenodo.5559243


visfd's Issues

voxelize_mesh.py consumes too much time and memory

The RAM required by this program is 25-100 times larger than the size of the original 3D image (tomogram) that we used to extract the surface mesh. This occurs because I am using 3rd-party tools (pyvista, vtk) to handle the computation, instead of writing a new program from scratch. The computation is also slow. Unfortunately, fixing this is not a priority for me yet. -A 2020-12-15

blob detection stopped working on 2021-10-18

Forgive me. I was being sloppy. I broke this feature tonight, but I will fix it tomorrow. (I thought the most recent commits were a trivial small change, so I did not create a new branch. Instead, they were ~1000 lines of infrastructure changes which broke several features. I am too sleepy to fix the bug tonight, but I'll correct this in the next day or two.)

If you run into this problem, try downloading a version from a previous commit, such as this one.

masking is broken

I sometimes notice significant artifacts when using filter_mrc with the -mask argument. (They appear as bright white blobs, or white streaks in the x direction, or xz plane, on the scale of the Gaussian width.) I have noticed these artifacts when also using filter_mrc with the -dog argument or the -fluct argument; however, I'm sure this also manifests itself in nearly all of the other separable filters used.

For now, avoid the -mask argument. I'll remove this issue once I have patched the bug.

PoissonRecon generated surfaces are sometimes wrong when using -must-link constraints

Sometimes when you use PoissonRecon to generate a smooth, closed surface, you will get something completely strange that appears to have been turned inside out. This happens rarely, but when it does, it's baffling. The resulting pointcloud mesh file created by the "-normals-file" argument will seem fine. But when you run PoissonRecon, the results will be terrifying. When this happens, it is almost always because you are also using the "-must-link" argument.

Background information

Sometimes when you attempt to detect a surface using filter_mrc with the "-membrane" and "-connect" arguments, it will only be able to detect little fragments of the surface. Typically when this happens, the "-connect" argument will fail to link all of these little surface fragments together. The "-must-link" argument is used (together with "-connect") to help merge these little surface fragments into larger surfaces.

Why this happens

The PoissonRecon program needs to know the outward direction of the surface at each visible point on the surface. The filter_mrc program can detect the membrane surface, as well as the direction of its surface normal. But, in the case of a closed surface, it cannot tell whether this direction is pointing outward or inward. (In other words, there is an ambiguity in the sign of the surface-normal direction.) Sometimes two different surface fragments will be assigned (by filter_mrc) orientations which are not compatible with each other. (The direction of one surface fragment points inward, the other one outward.) This makes it impossible to determine which voxels are inside the surface and which are outside. This is why PoissonRecon fails. This is more likely to happen if the disconnected surfaces are not nearby, and/or if they are pointing in radically different directions. (For example, if you have two perpendicular surfaces that form a T-junction, it's not clear which side of each surface should be on the inside or outside.)
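To make the ambiguity concrete, here is a toy C++ sketch of one greedy way to reconcile two fragments: flip all of one fragment's normals if the normals at the closest pair of points between the fragments disagree. (This is purely illustrative and not the VISFD code; it fails in exactly the hard cases described above, such as distant fragments and T-junctions.)

    #include <array>
    #include <cstddef>
    #include <vector>

    using Vec3 = std::array<double, 3>;
    struct Fragment { std::vector<Vec3> points, normals; };

    // Reverse the sign of every normal in a fragment.
    void FlipNormals(Fragment &f) {
      for (Vec3 &n : f.normals)
        for (double &x : n) x = -x;
    }

    // Flip fragment "b" so its orientation agrees with fragment "a" at the
    // closest pair of points between them (a simple, greedy heuristic).
    void OrientAgainst(const Fragment &a, Fragment &b) {
      double best = 1e300;
      std::size_t ia = 0, ib = 0;
      for (std::size_t i = 0; i < a.points.size(); ++i)
        for (std::size_t j = 0; j < b.points.size(); ++j) {
          double d2 = 0;
          for (int k = 0; k < 3; ++k) {
            double d = a.points[i][k] - b.points[j][k];
            d2 += d * d;
          }
          if (d2 < best) { best = d2; ia = i; ib = j; }
        }
      double dp = 0;
      for (int k = 0; k < 3; ++k)
        dp += a.normals[ia][k] * b.normals[ib][k];
      if (dp < 0) FlipNormals(b);  // normals disagree -> flip whole fragment
    }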

Workaround

Fortunately, there is a way for the user to manually specify the orientation of each surface fragment. It is explained in the next post...

blob-detection is slow for large blobs

Optimization 1 (not implemented yet)

In order to reduce computation time, some blob-detection implementations automatically down-sample the image when the blob size (sigma parameter) reaches a certain width. This is an optimization I should add eventually. However, blob-detection in its current form is efficient for detecting small blobs (such as ribosomes, nucleosomes, and other small complexes in the cell).

Optimization 2 (not implemented yet)

I could also reduce the computation time by a factor of 2 when the -dog-delta parameter δ satisfies

    δ <= gratio - 1

where "gratio" is one of the parameters to the "-blob" argument.

Perhaps later, I will add both optimizations.

poor signal-to-noise near mask or image boundaries

I've noticed that, when using the "-mask" argument, the detection of objects near the boundary of the mask is less reliable. I think this is because the detectors are more susceptible to noise near the boundary of a mask. The technical reason for this is explained below.

On 2021-7-08, I posted a commit that significantly reduces this problem, but the problem has not been completely solved.

Workaround

Strategies to get around this problem have been posted in the documentation here (for surface and curve detection), and also here (for blob detection).

If your goal when using the filter_mrc program is to generate a new filtered image (rather than to detect surfaces or blobs), then you may have better luck running that program initially on the entire image (without the mask). Afterwards, you can set the brightness of the voxels in the new image outside the mask to zero. You can do that using the combine_mrc program. For example:

combine_mrc  filtered_image.rec  "*"  mask.rec  filtered_clipped_image.rec

Explanation and details (feel free to skip)

As a first step in many operations (such as blob detection, ridge detection, or edge detection), a Gaussian blur is applied to the image. When applying a Gaussian filter to an image, the contribution of the nearby voxels to the brightness at the target voxel is normalized by dividing by the total (Gaussian) weight of the nearby voxels which were eligible to contribute.

If a mask image is supplied by the user, then only voxels in the mask are eligible. If you are near the boundary of the mask, then a small number of voxels are eligible to contribute, and this amplifies the noise (since you are effectively averaging the brightness over fewer nearby voxels). (I can't recall, but I think I normalize tensor-voting results in a similar way. Perhaps I should make this kind of normalization optional. I'm probably trying to be too clever and it's backfiring.)
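In code, this normalization looks roughly like the following 1-D sketch (illustrative only, not the VISFD implementation): the denominator "wsum" shrinks near a mask boundary, so the output becomes an average over fewer samples, and hence noisier.

    #include <cmath>
    #include <vector>

    // Gaussian blur restricted to a mask (1-D for brevity; mask and img are
    // assumed to have the same length). Out-of-mask voxels are ignored and
    // the result is renormalized by the total eligible weight.
    std::vector<float> GaussianBlurMasked(const std::vector<float> &img,
                                          const std::vector<bool> &mask,
                                          double sigma) {
      int n = (int)img.size();
      int halfwidth = (int)std::ceil(3.0 * sigma);
      std::vector<float> out(n, 0.0f);
      for (int i = 0; i < n; ++i) {
        if (!mask[i]) continue;
        double sum = 0.0, wsum = 0.0;
        for (int j = -halfwidth; j <= halfwidth; ++j) {
          int k = i + j;
          if (k < 0 || k >= n || !mask[k]) continue;  // ineligible voxel
          double w = std::exp(-0.5 * j * j / (sigma * sigma));
          sum  += w * img[k];
          wsum += w;  // near a mask boundary wsum is small -> noisier average
        }
        out[i] = (wsum > 0.0) ? (float)(sum / wsum) : 0.0f;
      }
      return out;
    }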

warning: incompatible with old versions of GCC

I encountered several perplexing problems when compiling this code with version 7.5.0 of gcc/g++. Those problems seem to have gone away since upgrading to version 9.3.0. If you are using an older version of the GCC compiler, please upgrade it or install Clang.

-auto-thresh and -supervised fail if there are a large number of negative controls on both sides of the accepted data

The -auto-thresh and -supervised arguments to "filter_mrc" were intended to be used to determine the optimal score threshold. It was assumed that blobs with scores above this threshold are blobs we want to keep, and blobs below this threshold are blobs we want to discard. If this is the case with your data, then don't worry: the -auto-thresh and -supervised arguments work fine. (They behave just like a decision tree, ...but with only one decision. This is lame, but it works.)

The problem is this: later I attempted to augment this by allowing for a score upper bound as well as a lower bound. (In other words, to make it possible to discard blobs if their score is either too low or too high.) After implementing that feature, I noticed that the current code fails if the total number of negative training examples with high scores exceeds the total number of positive training examples. In that case, it will pick the lower and upper thresholds one at a time, independently of each other. This causes it to pick a threshold that discards ALL the training data (in order to prevent the large number of negative training examples from being misclassified as accepted) ...instead of hunting for the narrow interval in which the few positive training examples lie. (A decision tree would fail to get the optimal answer in that case as well, and for the same reason.)

This was some sloppy thinking on my part. I suppose I should fix this; however, since I anticipate that most users won't even have negative training examples with high scores, I will postpone dealing with it for now.

Note to self: One brute-force way to fix this is to use dynamic programming. (But this really feels like overkill.)

Sketch:
Assume the blobs have been sorted in order of increasing score.
    n     = the total number of training examples (positive plus negative)
    n-    = the number of negative training examples
    n+    = the number of positive training examples (= n - n-)
    s_i   = the score of the i-th blob (in sorted increasing order)
    a_i   = 1 if blob i is accepted (i.e. if it is in the positive training set), and 0 if it is in the negative training set
    m_i,j = the number of mistakes using s_i as a lower bound and s_j as an upper bound (inclusive)
recursion:
    m_i,j = min(m_i-1,j + (1-a_i),  m_i,j-1 + a_j)
base case:
    m_1,n = n-   (with the widest interval every blob is accepted, so each of the n- negative examples is a mistake)

I don't like using an O(n^2) algorithm, but this is certainly fast enough as long as there are no more than about 10,000 training examples being supplied manually by the user. (On the other hand, if some other automatic program is supplying ~10^6 training examples, then we might have a problem, and I should probably find a less stupid way to do this in that case.)
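For what it's worth, the same optimum can also be found without the recursion, by brute force over all (lower, upper) index pairs, using prefix sums so each pair costs O(1). A hedged C++ sketch (function and variable names hypothetical, following the a_i convention above):

    #include <utility>
    #include <vector>

    // Try every (lo, hi) pair of thresholds and count misclassifications
    // with prefix sums: O(n^2) pairs, O(1) per pair. "accepted[i]" is 1 if
    // the i-th blob (in sorted-score order) is a positive example, else 0.
    // Returns indices (lo, hi) so that accepting blobs lo..hi (inclusive)
    // minimizes the number of misclassified training examples.
    std::pair<int, int> BestScoreInterval(const std::vector<int> &accepted) {
      int n = (int)accepted.size();
      std::vector<int> pos(n + 1, 0), neg(n + 1, 0);  // prefix counts
      for (int i = 0; i < n; ++i) {
        pos[i + 1] = pos[i] + accepted[i];
        neg[i + 1] = neg[i] + (1 - accepted[i]);
      }
      int best = n + 1, best_lo = 0, best_hi = n - 1;
      for (int lo = 0; lo < n; ++lo)
        for (int hi = lo; hi < n; ++hi) {
          // mistakes = positives outside [lo,hi] + negatives inside [lo,hi]
          int mistakes = (pos[n] - (pos[hi + 1] - pos[lo]))
                       + (neg[hi + 1] - neg[lo]);
          if (mistakes < best) { best = mistakes; best_lo = lo; best_hi = hi; }
        }
      return {best_lo, best_hi};
    }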

I don't like to spend time on this because there are much more sophisticated ways to do supervised learning than the horrifically crude threshold method I am using here. Perhaps I should revert to the previous behavior, assuming that blobs with high scores are always accepted.

I don't have time to revisit this today. I'll deal with this later.

Most users hopefully won't even notice that this bug is present.

-Andrew

-connect (ClusterConnected()) behaves strangely when compiled without -g

"Heisenbug" ?
The "-connect* argument of "filter_mrc" (which invokes the "ClusterConnected()" function in filter3d.hpp), is behaving strangely, but only when compiled in gcc with optimizations and OpenMP enabled.

If you compile it using the settings located in "for_debugging_and_profiling/setup_gcc_linux_dbg.sh" (which uses the -g3 flag), then these problems go away. This is a serious bug, partly because running the code without OpenMP makes it almost intolerably slow. I will look into this soon.

avoid using the "-Ofast" compiler flag

If you compile "filter_mrc" using the "-Ofast" compiler flag, then the filter_mrc program will be mysteriously "Killed" when run with the -surface argument (usually when or after diagonalizing the Hessian located at every voxel). The problem might be in DiagonalizeHessianImage() (which is located in the lib/visfd/feature.hpp file), but I have not bothered to verify this is exactly where the error is occurring.

In any case, the problem can be avoided by using the clang++ compiler with the "-O3 -ffast-math" compiler flags. (...instead of using "-Ofast". See setup_clang.sh for compilation flags.)

Note also that the clang++ compiler is recommended over g++ (GCC), because even the -O3 flag fails when compiling with an old version of the GCC compiler. (See issue #2)

[Feature Request] add a tutorial

note-to-self:

I should add a tutorial explaining how to do at least two different tasks:

  1. Detect the outer membrane of a cell and use it together with PoissonRecon, meshlab, and voxelize_mesh.py to segment the interior of that cell. (I.e., to create a new image whose voxel brightnesses indicate whether the voxel lies inside the cell.) (The existing sketch in example3 of the "filter_mrc" documentation is a short version of this tutorial, but it is too short and lacks enough detail to explain the potential issues one could run into.)
  2. Detect all of the ribosomes in a cell (or other objects of similar size) using blob-detection. I could use the segmented cell created in step 1 as a mask to prevent the detection of blobs outside the cell.

Once I have written a paper that uses these features, I will have plenty of material to create this kind of tutorial.
-Andrew 2020-12-28

[Feature Request] improve signal-to-noise of surface geometry near edges

Note: This is a note to remind myself to implement a new feature. Feel free to ignore this rambling discussion.

Statement of the problem

Currently (as of v0.29.3), when saving detected surface geometry to a file (e.g. a PLY file), an attempt is made to make the resulting surface as thin and 2-dimensional as possible. To do that, when a voxel which resembles a point on the surface is detected (i.e. a voxel with high "salience"), its position is shifted to the (estimated) nearest point on the surface before it is saved to the (PLY) file, instead of using the original position of the voxel. (If I don't do something like this, the resulting point cloud can be quite thick, being many voxels wide. This will make surface reconstruction difficult later on. The point cloud really should resemble a thin surface before surface reconstruction is attempted.)

Details: This is done by diagonalizing the local Hessian of the salience at that voxel's location, and moving in the direction of the eigenvector with the largest eigenvalue until it reaches the ridge peak. This works fine in regions where the surface is clearly visible. But near the edge of a detected surface, it often fails. (By "edge", I mean a location where the surface comes to an abrupt end, or where there is a hole in the surface. In Cryo-EM tomograms, surface edges are often due to the "missing wedge" artifact.) The problem is: at surface edges, the Hessian is not very accurate. As a result, this strategy shifts the point to the wrong location, and the resulting surfaces tend to have flayed and tattered edges. Later, when we attempt to close these holes (using "surface reconstruction" methods), these tattered edges can cause big spikes in the reconstructed surface (or worse).

Suggested alternative strategy:

While there will never be a way to completely eliminate this problem, we can try to minimize it. As mentioned above, we cannot infer where the surface actually is by considering only the local Hessian at this voxel's location. A single voxel's Hessian is not trustworthy. But after looking at the results of tensor-voting visually, I concluded that, overall, the voxels with high surface salience are in the right place. We can probably get a much better idea of where the actual surface lies by looking at all of the nearby surface-voxel locations and averaging their positions somehow. Averaging many nearby voxel locations would give us an idea of where the surface actually is, and would be much less subject to noise. But how can we do this without sacrificing resolution?

One strategy would be to start from the current voxel's location and move in the direction perpendicular (normal) to the surface, keeping track of which voxels were encountered along the way. As you move, the direction of the surface normal may change, so this will trace out a curve. Keep going in both directions along this curve until the salience drops below the threshold value. (See implementation details below.) Then either: 1) use the position of the voxel with the highest salience on this entire path, or 2) compute a weighted average of the positions of the points on this curve, and use that as the position of this point in the point cloud.

Implementation details:
I am inclined to prefer method 2 (averaging the voxel positions on the curve). Suggested implementation: Assign a weight to each of the points on this curve based on the salience at those locations. Use these weights to compute the average distance along this 1-D curve. Then find the 3-D location corresponding to that distance along the curve, and use the normal direction of the voxel at that point on the curve, not the original voxel's normal direction. (That way, the resulting average position will be a point on this curve.)

Either way, you can use the results of tensor voting to estimate the surface-normal direction instead of the local Hessian of the salience. Even if you can't trust the direction of the surface normal, this curve-tracing strategy ensures that the 3D location you pick will be in a region where the bulk of the salience is located, even if it's not the location on the surface closest to the original voxel's starting location.
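A sketch of method 2 in C++ (illustrative only, assuming the traced voxel positions and their saliences are already in hand): compute each point's arc length along the traced curve, take the salience-weighted mean arc length, and interpolate the 3-D position there.

    #include <array>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    using Vec3 = std::array<double, 3>;

    // Return the point on the traced curve at the salience-weighted mean
    // arc length. "curve" holds the voxel positions visited while tracing
    // along the surface normal; "salience" holds the salience at each.
    // Assumes at least one point and positive total salience.
    Vec3 WeightedCurvePosition(const std::vector<Vec3> &curve,
                               const std::vector<double> &salience) {
      // arc length of each point along the curve
      std::vector<double> s(curve.size(), 0.0);
      for (std::size_t i = 1; i < curve.size(); ++i) {
        double d2 = 0;
        for (int k = 0; k < 3; ++k) {
          double d = curve[i][k] - curve[i-1][k];
          d2 += d * d;
        }
        s[i] = s[i-1] + std::sqrt(d2);
      }
      // salience-weighted average arc length
      double wsum = 0.0, swavg = 0.0;
      for (std::size_t i = 0; i < curve.size(); ++i) {
        wsum  += salience[i];
        swavg += salience[i] * s[i];
      }
      swavg /= wsum;
      // interpolate the 3-D position at that arc length
      for (std::size_t i = 1; i < curve.size(); ++i) {
        if (s[i] < swavg) continue;
        double seg = s[i] - s[i-1];
        double f = (seg > 0.0) ? (swavg - s[i-1]) / seg : 0.0;
        Vec3 p;
        for (int k = 0; k < 3; ++k)
          p[k] = (1.0 - f) * curve[i-1][k] + f * curve[i][k];
        return p;
      }
      return curve.back();   // swavg equals the total length (or 1-point curve)
    }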

tensor-voting is slow

Tensor-voting is slower than the implementation from Martinez-Sanchez et al., J. Struct. Biol. (2014).
This is for two reasons:

  1. I am using true 3D tensor voting. (Martinez-Sanchez et al. are applying 2D tensor voting to each XY slice.) Using 3D tensor voting should improve the signal-to-noise ratio, although I have not tried comparing it with the 2D version. 3D tensor voting also enables more accurate calculation of the surface-normal directions. (These normal directions are very useful later on for clustering and surface reconstruction.)
  2. I am not using the fast "steerable filters" optimization. Steerable filters are a bit messier to implement in 3D compared to 2D. However, I was at least careful to choose an equation for tensor-voting which should be compatible with steerable filters. This hopefully means that the behavior of the program should not change if we choose to implement this speed optimization later on.

Using GPUs (once I get OpenACC working) should also help.
I don't have plans to address this issue in the immediate future.

Andrew

The "-must-link" argument stopped working on 2021-10-18

The "-must-link" argument is typically used during membrane detection. It is a way to try and manually fuse disconnected membrane fragments together (when automatic membrane detection failed to do this automatically).

If you run into this problem, try downloading a version from a previous commit, such as this one.

Forgive me. I was being sloppy. I broke this feature tonight, but I will fix it tomorrow. (I thought the most recent commits were a trivial small change, so I did not create a new branch. Instead, they were ~1000 lines of infrastructure changes which broke several features. I am too sleepy to fix the bug tonight, but I'll correct this in the next day or two. -A)

[Feature Request] add "must-not-link" constraints

The ClusterConnected() function (in clustering.hpp) already gives users the ability to merge different clusters, even if they do not appear to be connected. These are called "must-link" constraints. They are very useful when automatic detection fails and manual user intervention is needed. However, I have not yet added "must-not-link" constraints. These are also very important because they provide a way for users to use more sensitive thresholds (allowing them to see more of the object they want to see) without accidentally merging two different blobs together. To prevent such merging, the ClusterConnected() function should accept an additional argument containing an array of pairs of voxels (triplets of integers) that should not be merged into the same cluster. (Those pairs of voxels can be specified manually by the user.)

Implementation details

This issue was put here to remind me to add this feature later. The rest of its description is a note-to-myself to be read at some future date when I decide to work on this.

Suppose the additional argument added to ClusterConnected() is implemented this way:

 const vector<vector<array<Coordinate, 3> > > *pMustNotLinkConstraints=nullptr

If pMustNotLinkConstraints != nullptr, then convert this into a lookup table. Create a new variable "mustnot_partners" (of type map<array<Coordinate, 3>, array<Coordinate, 3>>) which can be used to quickly look up whether a voxel belongs to a must-not-link pair, and what its paired voxel is. Each cluster should maintain a set of "mustnot_voxels" (of type set<array<Coordinate, 3>>). Before a voxel is added to a cluster, see if it belongs to another cluster already. If not, loop through the mustnot_partners of that voxel (if any) and see if they appear in the list of mustnot_voxels for this cluster; if so, don't include this voxel. If it does belong to another cluster, then loop through all the mustnot_voxels in both that cluster and this cluster (a double loop) and check whether they appear in the forbidden list of mustnot_partners. If not, merge the clusters (and the mustnot_voxels).

This implementation behaves reasonably regardless of which pairs of voxels are supplied. (I.e., it behaves gracefully even if the user is careless and the two voxels in a must-not-link pair clearly belong to the same blob (watershed basin).)
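A compressed C++ sketch of the bookkeeping just described (types abbreviated; the real function lives in clustering.hpp, and these helper names are hypothetical):

    #include <array>
    #include <map>
    #include <set>

    using Voxel = std::array<int, 3>;   // stands in for array<Coordinate, 3>

    // lookup built from *pMustNotLinkConstraints:
    // voxel -> its forbidden partner
    std::map<Voxel, Voxel> mustnot_partners;

    struct Cluster {
      std::set<Voxel> mustnot_voxels;   // constrained voxels in this cluster
    };

    // Merging "a" and "b" is allowed only if no must-not-link pair would
    // end up inside the same cluster.
    bool CanMerge(const Cluster &a, const Cluster &b) {
      for (const Voxel &v : a.mustnot_voxels) {
        auto it = mustnot_partners.find(v);
        if (it != mustnot_partners.end() &&
            b.mustnot_voxels.count(it->second))
          return false;                 // would join a forbidden pair
      }
      return true;
    }

    void Merge(Cluster &a, Cluster &b) {  // call only after CanMerge(a, b)
      a.mustnot_voxels.insert(b.mustnot_voxels.begin(),
                              b.mustnot_voxels.end());
      b.mustnot_voxels.clear();
    }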

Running time

The running time is proportional to O(n m^2 log(m)), where n is the number of voxels and m is the number of must-not-link constraints. (More precisely, m is the maximum number of must-not-link constraints that fall into the same cluster (watershed basin), and n is the number of voxels that fall into any cluster, which for membranes is typically about 1% of the number of voxels in the image.) If O(n m^2 log(m)) is too slow, I can implement the mustnot_partners lookup table as a 3D array, but this would waste a lot of memory, and I'm not worried about speed. I don't expect m to be very large. Most of the time these constraints are not necessary, so m=0. At other times I expect m=1 or m=2.
