
cuhornet's Introduction

Hornet

NOTE: The cuhornet repository is a copy of https://github.com/hornet-gt/hornet that is being maintained by the RAPIDS cugraph team while we use it in our library. We currently only use headers to provide the ktruss implementation.

This library does not currently build. Since we only use headers, we are not maintaining the build processes for this library. We expect to drop support for this entirely in early 2024.

This repository provides the Hornet data structure and algorithms on sparse graphs and matrices.

Getting Started


Requirements

  • CUDA toolkit 9 or greater.
  • GCC or Clang host compiler with support for C++14. Note that the host compiler must be compatible with the installed CUDA toolkit version; for more information, see the CUDA Installation Guide.
  • CMake v3.8 or greater.
  • 64-bit Operating System (Ubuntu 16.04 or above suggested).

Quick start

The following basic steps are required to build and execute Hornet:

git clone --recursive https://github.com/hornet-gt/hornet
export CUDACXX=<path_to_CUDA_nvcc_compiler>
cd hornet/build
cmake ..
make -j

To build HornetsNest (algorithms directory):

cd hornetsnest/build
cmake ..
make -j

By default, the CUDA compiler nvcc uses the gcc/g++ found in the current execution search path as the host compiler (run cc --version to see the system default). To force a different host compiler for compiling C++ files (*.cpp), set the following environment variables:

CC=<path_to_host_C_compiler>
CXX=<path_to_host_C++_compiler>

Note: the host .cpp compiler and the host-side .cu compiler may be different. The host-side compiler must be compatible with the CUDA Toolkit version installed on the system (see the CUDA Installation Guide).
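As a concrete sketch, the full environment setup before configuring might look like the following (compiler paths are illustrative assumptions; substitute the versions installed on your system):

```shell
# Illustrative paths -- adjust to the compilers on your machine.
export CC=/usr/bin/gcc-7                 # host C compiler
export CXX=/usr/bin/g++-7                # host C++ compiler for *.cpp files
export CUDACXX=/usr/local/cuda/bin/nvcc  # CUDA compiler
# Then configure and build as usual:
#   cd hornet/build && cmake .. && make -j
```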

The syntax and the input parameters of Hornet are explained in detail in docs/Syntax.txt. They can also be found by typing ./HornetTest --help.

Supported graph formats

Hornet supports several graph input formats and directly deduces the graph structure (directed/undirected) from the input file header.

Hornet can also read the input graph from a fixed binary format to speed up file loading. The binary file is generated by Hornet with the --binary command-line option.

Code Documentation

The code documentation is located in the docs directory (doxygen html format).

Notes

Reporting bugs and contributing

If you find any bugs, please report them through the repository's GitHub issues panel. We are also happy to engage in improving and extending the framework if you request new features.

Hornet Algorithms

Algorithm                              Static    Dynamic
(BFS) Breadth-First Search             yes       on-going
(SSSP) Single-Source Shortest Path     yes       on-going
(CC) Connected Components              yes       on-going
(SCC) Strongly Connected Components    to-do     to-do
(MST) Minimum Spanning Tree            on-going  to-do
(BC) Betweenness Centrality            yes       on-going
(PR) PageRank                          yes       yes
(TC) Triangle Counting                 yes       on-going
(KC) Katz Centrality                   yes       yes
(MIS) Maximal Independent Set          on-going  to-do
(MF) Maximum Flow                      to-do     to-do
(CC) Clustering Coefficient            yes       to-do
(ST) St-Connectivity                   to-do     to-do
(TC) Transitive Closure                to-do     to-do
Community Detection                    on-going  to-do
Temporal Motif Finding                 on-going  to-do
Sparse Vector-Matrix Multiplication    yes       to-do
Jaccard Indices                        on-going  to-do
Energy/Parity Game                     on-going  to-do

Publications

  • F. Busato, O. Green, N. Bombieri, D. Bader, "Hornet: An Efficient Data Structure for Dynamic Sparse Graphs and Matrices", IEEE High Performance Extreme Computing Conference (HPEC), Waltham, Massachusetts, 2018. link
  • O. Green, D.A. Bader, "cuSTINGER: Supporting dynamic graph algorithms for GPUs", IEEE High Performance Extreme Computing Conference (HPEC), Waltham, MA, 13-15 September 2016, pp. 1-6. link
  • J. Fox, O. Green, K. Gabert, X. An, D. Bader, "Fast and Adaptive List Intersections on the GPU", IEEE High Performance Extreme Computing Conference (HPEC), Waltham, Massachusetts, 2018. (HPEC Graph Challenge Finalist)
  • O. Green, J. Fox, A. Tripathy, A. Watkins, K. Gabert, E. Kim, X. An, K. Aatish, D. Bader, "Logarithmic Radix Binning and Vectorized Triangle Counting", IEEE High Performance Extreme Computing Conference (HPEC), Waltham, Massachusetts, 2018. (HPEC Graph Challenge Innovation Award)
  • A. van der Grinten, E. Bergamini, O. Green, H. Meyerhenke, D. Bader, "Scalable Katz Ranking Computation in Large Dynamic Graphs", European Symposium on Algorithms (ESA), Helsinki, Finland, 2018.
  • O. Green, J. Fox, E. Kim, F. Busato, N. Bombieri, K. Lakhotia, S. Zhou, S. Singapura, H. Zeng, R. Kannan, V. Prasanna, D.A. Bader, "Quickly Finding a Truss in a Haystack", IEEE/Amazon/DARPA Graph Challenge. (Innovation Award)
  • D. Makkar, D.A. Bader, O. Green, "Exact and Parallel Triangle Counting in Streaming Graphs", IEEE Conference on High Performance Computing, Data, and Analytics (HiPC), Jaipur, India, 18-21 December 2017, pp. 1-10.
  • A. Tripathy, F. Hohman, D.H. Chau, O. Green, "Scalable K-Core Decomposition for Static Graphs Using a Dynamic Graph Data Structure", IEEE International Conference on Big Data, Seattle, Washington, 2018. link

If you find this software useful in academic work, please acknowledge Hornet.


Hornet Developers

  • Federico Busato, Ph.D. Student, University of Verona (Italy)
  • Oded Green, Researcher, Georgia Institute of Technology
  • James Fox, Ph.D. Student, Georgia Institute of Technology : Maximal Independent Set, Temporal Motif Finding
  • Devavret Makkar, Ph.D. Student, Georgia Institute of Technology : Triangle Counting
  • Elisabetta Bergamini, Ph.D. Student, Karlsruhe Institute of Technology (Germany) : Katz Centrality
  • Euna Kim, Ph.D. Student, Georgia Institute of Technology : Dynamic PageRank
  • ...

License

BSD 3-Clause License

Copyright (c) 2017, Hornet. All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  • Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

  • Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

  • Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

cuhornet's Issues

Build Failure

I am unable to build, even in Release mode. These are the commands I used:

cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CUDA_ARCHITECTURES="70" ..
make

And then make throws this error:

[ 79%] Building CUDA object CMakeFiles/hornet_insert_weighted_test.dir/test/HornetInsertTestWeighted.cu.o
/Data/hornet/hornet/include/Core/HornetDevice/../SoA/impl/SoAPtr.i.cuh(251): error: calling a __host__ function("thrust::detail::vector_base<int,  ::thrust::device_allocator<int> > ::begin() const") from a __host__ __device__ function("hornet::RecursiveGather<(unsigned int)2u, (unsigned int)4u> ::assign<int,  ::hornet::CSoAPtr<int, int, int, float > > ") is not allowed

/Data/hornet/hornet/include/Core/HornetDevice/../SoA/impl/SoAPtr.i.cuh(251): error: calling a __host__ function("thrust::detail::vector_base<int,  ::thrust::device_allocator<int> > ::begin() const") from a __host__ __device__ function("hornet::RecursiveGather<(unsigned int)2u, (unsigned int)4u> ::assign<int,  ::hornet::CSoAPtr<int, int, int, float > > ") is not allowed

/Data/hornet/hornet/include/Core/HornetDevice/../SoA/impl/SoAPtr.i.cuh(251): error: calling a __host__ function("thrust::detail::vector_base<int,  ::thrust::device_allocator<int> > ::begin() const") from a __host__ __device__ function("hornet::RecursiveGather<(unsigned int)2u, (unsigned int)4u> ::assign<int,  ::hornet::SoAPtr<int, int, int, float > > ") is not allowed

/Data/hornet/hornet/include/Core/HornetDevice/../SoA/impl/SoAPtr.i.cuh(251): error: calling a __host__ function("thrust::detail::vector_base<int,  ::thrust::device_allocator<int> > ::begin() const") from a __host__ __device__ function("hornet::RecursiveGather<(unsigned int)2u, (unsigned int)4u> ::assign<int,  ::hornet::SoAPtr<int, int, int, float > > ") is not allowed

/Data/hornet/hornet/include/Core/HornetDevice/../SoA/impl/SoAPtr.i.cuh(251): error: calling a __host__ function("thrust::detail::vector_base<int,  ::thrust::device_allocator<int> > ::begin() const") from a __host__ __device__ function("hornet::RecursiveGather<(unsigned int)3u, (unsigned int)4u> ::assign<int,  ::hornet::CSoAPtr<int, int, int, float > > ") is not allowed

/Data/hornet/hornet/include/Core/HornetDevice/../SoA/impl/SoAPtr.i.cuh(251): error: calling a __host__ function("thrust::detail::vector_base<int,  ::thrust::device_allocator<int> > ::begin() const") from a __host__ __device__ function("hornet::RecursiveGather<(unsigned int)3u, (unsigned int)4u> ::assign<int,  ::hornet::CSoAPtr<int, int, int, float > > ") is not allowed

/Data/hornet/hornet/include/Core/HornetDevice/../SoA/impl/SoAPtr.i.cuh(251): error: calling a __host__ function("thrust::detail::vector_base<int,  ::thrust::device_allocator<int> > ::begin() const") from a __host__ __device__ function("hornet::RecursiveGather<(unsigned int)3u, (unsigned int)4u> ::assign<int,  ::hornet::SoAPtr<int, int, int, float > > ") is not allowed

/Data/hornet/hornet/include/Core/HornetDevice/../SoA/impl/SoAPtr.i.cuh(251): error: calling a __host__ function("thrust::detail::vector_base<int,  ::thrust::device_allocator<int> > ::begin() const") from a __host__ __device__ function("hornet::RecursiveGather<(unsigned int)3u, (unsigned int)4u> ::assign<int,  ::hornet::SoAPtr<int, int, int, float > > ") is not allowed

8 errors detected in the compilation of "/Data/ullas/hornet/hornet/test/HornetInsertTestWeighted.cu".
make[2]: *** [CMakeFiles/hornet_insert_weighted_test.dir/build.make:77: CMakeFiles/hornet_insert_weighted_test.dir/test/HornetInsertTestWeighted.cu.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:201: CMakeFiles/hornet_insert_weighted_test.dir/all] Error 2
make: *** [Makefile:91: all] Error 2

How do I fix this?

batch update and load balancing

Hi,

I downloaded the repo from the master and I'm encountering the following problems when using the bfs2 algorithm.

The first one is a problem with the binary-search load balancing: when it is used to perform the BFS, each run returns a different distance despite starting from the same root.
I worked around the problem by switching the load balancing to vertex-based, but I'm not sure what is causing this behavior.

The second and bigger problem arises when I try to use update batches.
I can generate batches and correctly insert them into the graph (edge insertion, for example), but when I then run any operation on the graph, some updated edges are wrong: they are present, but with an incorrect destination; they all terminate at node 0.

For example, if I insert the batch (100, 200), the graph is updated with an edge from node 100 to node 200. But sometimes, when the batch is larger than a couple of thousand edges (e.g. 5000 new edges), the newly inserted edges start from the correct source but terminate at node 0; other times the behavior is inverted: all the pre-existing edges start from the correct source but terminate at node 0, while all the new edges are correct.
This behavior affects only some of the nodes updated in the batch.

I've tried different batch updates, but I cannot figure out why these problems occur.

For more details, here are my machine specs:
GPU: GTX 1060 with 6 GB of RAM
CUDA toolkit: V10.2.89
NVIDIA driver: 440.82
OS: Ubuntu 18.04

Thanks a lot for any answer.

Best,
Samuele

Facing Problems in Dynamic PageRank

Hi
I am currently working on Dynamic PageRank but am encountering the following problems:

  1. Initially I created a file PageRankDynamic.cu in the test folder of hornetsnest and added it to the CMakeLists.txt so it would be compiled into the build folder. However, I was met with several errors.

  2. HornetGPU and BatchUpdate were undefined in /hornetsnest/src/Dynamic/PageRank.cu, so I defined them in /hornetsnest/include/Dynamic/PageRank.cuh as shown below. I would like to know whether this is correct.
    image
    I also changed line number 112 of /hornetsnest/include/Dynamic/PageRank.cuh to
    image

  3. Currently, in lines 210 and 213 of /hornetsnest/src/Dynamic/PageRank.cu, the forAllEdges function is called as below:
    image
    However, on executing it, the following errors arise:
    image

Could anyone help me with this matter?

Q: Compute complete Betweenness Centrality

TL;DR: What is the difference between the three variants of BC, and how can one compute the complete BC for a graph (as in Hybrid_BC)?

I have a few questions about computing the BC of a graph.
There seem to be three variants in the repository:

  • BCCentrality
  • ExactBC
  • ApproximateBC

What is the difference between these three variants? A few of them are commented out in BCTest.cu, so I'm not completely sure whether they currently work.

Furthermore, as far as I can see, these algorithms require a root node to be passed. Does this mean I would have to call the algorithm sequentially for every node in the graph to actually compute the BC?
I guess some form of Brandes' algorithm is used internally, so does passing a root node compute only a partial BC result, covering all paths from that given root?
Could I then simply do something like this:

BCCentrality bc(hornet_graph);
for (auto i = 0; i < graph.nV(); ++i)
{
  bc.setRoot(i);
  bc.run();
}

or is there a canonical approach to compute the BC for a graph I'm missing?

Thanks very much! 😄

removed master

By removing master, all old scripts from rapidsai now break, FYI. One can no longer use the build scripts or even make.

Copy constructor

SCC deletes portions of the graph recursively. Since we don't want to modify the original graph, we need to make a copy. Hornet doesn't have a copy constructor right now. A copy can be created with batch operations, but that would not be efficient. We should create a raw copy of the underlying data directly.

Batch reuse and idempotence

The following do not work:

  1. Idempotence: G.delete(batch); G.delete(batch);
  2. Reuse: G1.insert(batch); G2.insert(batch);

If you want to delete some edges from a graph and insert them into a new graph, you should be able to reuse the same batch, as in G_new.insert(batch); G_orig.delete(batch);.

The current workaround for this issue is to create a different batch from the same COO pointers. That works.

Unable to remove duplicated edges in batchUpdate

Thanks for your great work!
I read through the batchUpdate functions and I'm confused about the following points:

  1. Does Hornet keep the nodes inside each chunk sorted? If so, why does a batch update finish without further sorting?

template <typename... EdgeMetaTypes,
          typename vid_t, typename degree_t>
void
BATCH_UPDATE::
print(bool sort) noexcept {
    auto ptr = in_edge().get_soa_ptr();
    std::cout << "src, dst : " << size() << "\n";
    rmm::device_vector<vid_t> src(size());
    rmm::device_vector<vid_t> dst(size());
    thrust::copy(ptr.template get<0>(), ptr.template get<0>() + size(), src.begin());
    thrust::copy(ptr.template get<1>(), ptr.template get<1>() + size(), dst.begin());
    if (!sort) {
        thrust::copy(src.begin(), src.end(), std::ostream_iterator<vid_t>(std::cout, " "));
        std::cout << "\n";
        thrust::copy(dst.begin(), dst.end(), std::ostream_iterator<vid_t>(std::cout, " "));
        std::cout << "\n";
    } else {
        thrust::host_vector<vid_t> hSrc = src;
        thrust::host_vector<vid_t> hDst = dst;
        std::vector<std::pair<vid_t, vid_t>> e;
        for (int i = 0; i < size(); ++i) {
            e.push_back(std::make_pair(hSrc[i], hDst[i]));
        }
        std::sort(e.begin(), e.end());
        for (unsigned i = 0; i < e.size(); ++i) { std::cout << e[i].first << " "; }
        std::cout << "\n";
        for (unsigned i = 0; i < e.size(); ++i) { std::cout << e[i].second << " "; }
        std::cout << "\n";
    }
}

In the above code, the writer seems to assume that the edges are not sorted.

https://github.com/rapidsai/cuhornet/blob/ab70d14a562bdaa950e820b412fe827c570c0ca3/hornet/include/Core/HornetOperations/HornetInsert.i.cuh#LL12C1-L27C2

The code does not sort the edges after the batch update.

  2. If the edges are not sorted, then the remove_graph_duplicates function is actually wrong:

int found = xlib::lower_bound_left(batch_dst_ids + start,
                                   end - start,
                                   dst);

It uses xlib::lower_bound_left, which requires the array to be sorted.

  3. More importantly, the delete function also assumes the chunks are sorted, but they are not:

https://github.com/rapidsai/cuhornet/blob/ab70d14a562bdaa950e820b412fe827c570c0ca3/hornet/include/Core/BatchUpdate/BatchUpdateKernels.cuh#L293-L296C6

Failing tests

Hi,
I have downloaded the repo from master and I have noticed that multiple tests do not work on my system.
I am using an RTX 3060 12 GB, CUDA 11.8, Ubuntu 20.04, driver version 531.41.

hornet_insert_test and hornet_delete_test fail with all graphs (directed and undirected) and all batch sizes.
Here I have used cage15.mtx as an example with a batch size of 10000; I get an error for all the edges.

Screenshot 2023-04-13 121302
Screenshot 2023-04-13 121415

The same test using hornet_insert_weighted_test seems to work at first:

Screenshot 2023-04-13 122607

But I have noticed that if I remove the line responsible for inserting the update batch into Hornet, the test still passes.

Screenshot 2023-04-13 123709
Immagine 2023-04-13 124335

Am I missing something? Thanks a lot for any reply.

cuda-memcheck failures with batch ops

I'm hitting these sorts of errors in my SCC code, but I can also reproduce them with kcore on a small graph.

Graph:

# Directed graph (each unordered pair of nodes is saved once): Slashdot0811.txt 
# Slashdot Zoo social network from Noveber 6 2008                               
# Nodes: 5 Edges: 5                                                             
# FromNodeId    ToNodeId                                                        
0       2                                                                       
0       3                                                                       
1       0                                                                       
2       1                                                                       
3       4 
cuda-memcheck ./kcore foo.txt 
========= CUDA-MEMCHECK

Graph File: foo                Size: 0 MB        format: (SNAP)

@File    V: 5             E: 5             Structure: Directed     avg. deg: 1.0
@User    V: 5             E: 10            Structure: Undirected   avg. deg: 2.0

   100%
Directed to Undirected: Removing duplicated edges...COO to CSR...	Complete!

ne: 10
========= Invalid __global__ read of size 4
=========     at 0x00000200 in void cub::DeviceScanKernel<cub::DispatchScan<int const *, int*, cub::Sum, int, int>::PtxAgentScanPolicy, int const *, int*, cub::ScanTileState<int, bool=1>, cub::Sum, int, int>(int*, cub::Sum, int, int, int, cub::DispatchScan<int const *, int*, cub::Sum, int, int>::PtxAgentScanPolicy, int const *)
=========     by thread (2,0,0) in block (0,0,0)
=========     Address 0x7fdcc4a04a08 is out of bounds
=========     Device Frame:void cub::DeviceScanKernel<cub::DispatchScan<int const *, int*, cub::Sum, int, int>::PtxAgentScanPolicy, int const *, int*, cub::ScanTileState<int, bool=1>, cub::Sum, int, int>(int*, cub::Sum, int, int, int, cub::DispatchScan<int const *, int*, cub::Sum, int, int>::PtxAgentScanPolicy, int const *) (void cub::DeviceScanKernel<cub::DispatchScan<int const *, int*, cub::Sum, int, int>::PtxAgentScanPolicy, int const *, int*, cub::ScanTileState<int, bool=1>, cub::Sum, int, int>(int*, cub::Sum, int, int, int, cub::DispatchScan<int const *, int*, cub::Sum, int, int>::PtxAgentScanPolicy, int const *) : 0x200)
=========     Saved host backtrace up to driver entry point at kernel launch time
=========     Host Frame:/usr/lib/x86_64-linux-gnu/libcuda.so.1 (cuLaunchKernel + 0x2cd) [0x24f88d]
=========     Host Frame:./kcore [0x93852]
=========     Host Frame:./kcore [0x93a47]
=========     Host Frame:./kcore [0xc7e05]
=========     Host Frame:./kcore [0x50cd1]
=========     Host Frame:./kcore [0x4a241]
=========     Host Frame:./kcore [0x4ae65]
=========     Host Frame:./kcore [0x4b73a]
=========     Host Frame:./kcore [0x361ec]
=========     Host Frame:./kcore [0x716c]
=========     Host Frame:/lib/x86_64-linux-gnu/libc.so.6 (__libc_start_main + 0xf0) [0x20830]
=========     Host Frame:./kcore [0x8919]
=========
========= Program hit cudaErrorLaunchFailure (error 4) due to "unspecified launch failure" on CUDA API call to cudaMemcpyAsync. 
=========     Saved host backtrace up to driver entry point at error
=========     Host Frame:/usr/lib/x86_64-linux-gnu/libcuda.so.1 [0x357283]
=========     Host Frame:./kcore [0xcc4e3]
=========     Host Frame:./kcore [0x4a03c]
=========     Host Frame:./kcore [0x4ae65]
=========     Host Frame:./kcore [0x4b73a]
=========     Host Frame:./kcore [0x361ec]
=========     Host Frame:./kcore [0x716c]
=========     Host Frame:/lib/x86_64-linux-gnu/libc.so.6 (__libc_start_main + 0xf0) [0x20830]
=========     Host Frame:./kcore [0x8919]
=========
========= Program hit cudaErrorLaunchFailure (error 4) due to "unspecified launch failure" on CUDA API call to cudaStreamSynchronize. 
=========     Saved host backtrace up to driver entry point at error
=========     Host Frame:/usr/lib/x86_64-linux-gnu/libcuda.so.1 [0x357283]
=========     Host Frame:./kcore [0xc802e]
=========     Host Frame:./kcore [0x4a049]
=========     Host Frame:./kcore [0x4ae65]
=========     Host Frame:./kcore [0x4b73a]
=========     Host Frame:./kcore [0x361ec]
=========     Host Frame:./kcore [0x716c]
=========     Host Frame:/lib/x86_64-linux-gnu/libc.so.6 (__libc_start_main + 0xf0) [0x20830]
=========     Host Frame:./kcore [0x8919]
=========
terminate called after throwing an instance of 'thrust::system::system_error'
  what():  trivial_device_copy D->H failed: unspecified launch failure
========= Error: process didn't terminate successfully
========= No CUDA-MEMCHECK results found
