inducer / grudge

Grand Unified Discontinuous Galerkin Environment? A DG code in training.

grudge's Introduction

grudge


grudge helps you discretize discontinuous Galerkin operators, quickly and accurately.

It relies on

  • numpy for arrays
  • modepy for modes and nodes on simplices
  • meshmode for discretizations on unstructured meshes
  • loopy for fast array operations
  • pytest for automated testing

and, indirectly,

  • PyOpenCL as computational infrastructure

PyOpenCL is likely the only package you'll have to install by hand; all the others will be installed automatically.


Resources:

grudge's People

Contributors

alexfikl, dependabot[bot], ellishg, inducer, kaushikcfd, majosm, matthiasdiener, mattwala, mtcam, nchristensen, thomasgibson


grudge's Issues

Reductions returning array containers

Mention by @majosm here: #154 (comment).

Based on a Slack discussion, here is a proposal: nodal_{min, max, sum} and norm should return array containers (by computing reductions on each component of the underlying array container). We probably also want to have a flat_norm routine.
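A minimal sketch of the proposed behavior, assuming grudge.op.nodal_min for the DOFArray case and arraycontext.map_array_container for the recursion (the wrapper itself is hypothetical, not current grudge API):

from arraycontext import map_array_container
from meshmode.dof_array import DOFArray

import grudge.op as op


def container_nodal_min(dcoll, dd, ary):
    # Reduce each DOFArray leaf separately and return a container of the
    # same structure holding the per-component results.
    if isinstance(ary, DOFArray):
        return op.nodal_min(dcoll, dd, ary)
    return map_array_container(
        lambda subary: container_nodal_min(dcoll, dd, subary), ary)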

Feature request: N to M restart capability

The MIRGE-Com simulation application would like an N-to-M restart capability (i.e. where simulations originally run on N processors may be restarted on M processors) so that the simulations can adapt to changing resource availability.

This is an important capability for running production-scale problems on lab-based machines where resource availability is highly variable, and often includes dedicated pushes where we have temporary access to large portions of the machine.

The capability need not be fully integrated with the simulation code; it is OK if we need to run an "adapter" code in-between runs to serialize and then re-partition N-to-M.

Diff operator broken

I'm not sure this was as intended...

@property
def diff(self):
    """A :class:`~meshmode.dof_array.DOFArray` or
    :class:`~arraycontext.ArrayContainer` of them representing the
    difference (exterior - interior) of the pair values.
    """
    return self.ext - self.ext
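Given the docstring, the intended return value is presumably the exterior minus the interior trace; a sketch of the fix:

@property
def diff(self):
    """A :class:`~meshmode.dof_array.DOFArray` or
    :class:`~arraycontext.ArrayContainer` of them representing the
    difference (exterior - interior) of the pair values.
    """
    return self.ext - self.int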

Eager distributed: Send array containers together

Right now, a separate send/receive round-trip is initiated for every component of a vector/array container. It would likely be more efficient to combine them and send them together (a sketch of a combined send follows the listing below).

grudge/grudge/trace_pair.py

Lines 266 to 379 in 1c44c4b

# {{{ Distributed-memory functionality

@memoize_on_first_arg
def connected_ranks(dcoll: DiscretizationCollection):
    from meshmode.distributed import get_connected_partitions
    return get_connected_partitions(dcoll._volume_discr.mesh)


class _RankBoundaryCommunication:
    base_tag = 1273

    def __init__(self, dcoll: DiscretizationCollection,
                 remote_rank, vol_field, tag=None):
        self.tag = self.base_tag
        if tag is not None:
            self.tag += tag

        self.dcoll = dcoll
        self.array_context = vol_field.array_context
        self.remote_btag = BTAG_PARTITION(remote_rank)
        self.bdry_discr = dcoll.discr_from_dd(self.remote_btag)

        from grudge.op import project
        self.local_dof_array = project(dcoll, "vol", self.remote_btag, vol_field)

        local_data = self.array_context.to_numpy(flatten(self.local_dof_array))
        comm = self.dcoll.mpi_communicator

        self.send_req = comm.Isend(local_data, remote_rank, tag=self.tag)
        self.remote_data_host = np.empty_like(local_data)
        self.recv_req = comm.Irecv(self.remote_data_host, remote_rank, self.tag)

    def finish(self):
        self.recv_req.Wait()

        actx = self.array_context
        remote_dof_array = unflatten(
            self.array_context, self.bdry_discr,
            actx.from_numpy(self.remote_data_host)
        )

        bdry_conn = self.dcoll.distributed_boundary_swap_connection(
            dof_desc.as_dofdesc(dof_desc.DTAG_BOUNDARY(self.remote_btag))
        )
        swapped_remote_dof_array = bdry_conn(remote_dof_array)

        self.send_req.Wait()

        return TracePair(self.remote_btag,
                         interior=self.local_dof_array,
                         exterior=swapped_remote_dof_array)


def _cross_rank_trace_pairs_scalar_field(
        dcoll: DiscretizationCollection, vec, tag=None) -> list:
    if isinstance(vec, Number):
        return [TracePair(BTAG_PARTITION(remote_rank), interior=vec, exterior=vec)
                for remote_rank in connected_ranks(dcoll)]
    else:
        rbcomms = [_RankBoundaryCommunication(dcoll, remote_rank, vec, tag=tag)
                   for remote_rank in connected_ranks(dcoll)]
        return [rbcomm.finish() for rbcomm in rbcomms]


def cross_rank_trace_pairs(
        dcoll: DiscretizationCollection, ary, tag=None) -> list:
    r"""Get a :class:`list` of *ary* trace pairs for each partition boundary.

    For each partition boundary, the field data values in *ary* are
    communicated to/from the neighboring partition. Presumably, this
    communication is MPI (but strictly speaking, may not be, and this
    routine is agnostic to the underlying communication).

    For each face on each partition boundary, a
    :class:`TracePair` is created with the locally, and
    remotely owned partition boundary face data as the `internal`, and `external`
    components, respectively. Each of the TracePair components are structured
    like *ary*.

    :arg ary: a single :class:`~meshmode.dof_array.DOFArray`, or an object
        array of :class:`~meshmode.dof_array.DOFArray`\ s
        of arbitrary shape.
    :returns: a :class:`list` of :class:`TracePair` objects.
    """
    if isinstance(ary, np.ndarray):
        oshape = ary.shape
        comm_vec = ary.flatten()
        n, = comm_vec.shape
        result = {}
        # FIXME: Batch this communication rather than
        # doing it in sequence.
        for ivec in range(n):
            for rank_tpair in _cross_rank_trace_pairs_scalar_field(
                    dcoll, comm_vec[ivec]):
                assert isinstance(rank_tpair.dd.domain_tag, dof_desc.DTAG_BOUNDARY)
                assert isinstance(rank_tpair.dd.domain_tag.tag, BTAG_PARTITION)
                result[rank_tpair.dd.domain_tag.tag.part_nr, ivec] = rank_tpair

        return [
            TracePair(
                dd=dof_desc.as_dofdesc(
                    dof_desc.DTAG_BOUNDARY(BTAG_PARTITION(remote_rank))),
                interior=make_obj_array([
                    result[remote_rank, i].int for i in range(n)]).reshape(oshape),
                exterior=make_obj_array([
                    result[remote_rank, i].ext for i in range(n)]).reshape(oshape)
            ) for remote_rank in connected_ranks(dcoll)
        ]
    else:
        return _cross_rank_trace_pairs_scalar_field(dcoll, ary, tag=tag)

# }}}
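A rough sketch of what a combined send could look like, assuming numpy object-array components and mpi4py; the function name and buffer layout are illustrative, not grudge's actual API:

import numpy as np
from mpi4py import MPI


def send_components_together(comm, remote_rank, components, tag):
    # Flatten all components into one contiguous buffer and post a single
    # Isend/Irecv pair instead of one round-trip per component.
    flat_parts = [np.ravel(c) for c in components]
    sizes = [p.size for p in flat_parts]
    send_buf = np.concatenate(flat_parts)

    recv_buf = np.empty_like(send_buf)
    send_req = comm.Isend(send_buf, dest=remote_rank, tag=tag)
    recv_req = comm.Irecv(recv_buf, source=remote_rank, tag=tag)

    def finish():
        MPI.Request.Waitall([send_req, recv_req])
        # Split the received buffer back into per-component pieces.
        offsets = np.cumsum([0] + sizes)
        return [recv_buf[offsets[i]:offsets[i + 1]].reshape(components[i].shape)
                for i in range(len(components))]

    return finish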

Eager distributed: Permit instrumentation

A simple thing we could do is to collect MPI wall time at:

  • post send
  • post receive
  • begin wait for {send, receive} completion
  • end wait for {send, receive} completion

grudge should not have a direct dependency on logpyle. Instead, aggregate data on this could be collected in the discretization collection.
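A minimal sketch of collecting those timestamps without a logpyle dependency, aggregating them on the discretization collection; the attribute and event names are made up for illustration:

import time
from contextlib import contextmanager


def _record_mpi_time(dcoll, event, elapsed):
    # Aggregate wall times on the discretization collection, keyed by event
    # name ("post_send", "wait_recv", ...).
    timings = getattr(dcoll, "_mpi_timings", None)
    if timings is None:
        timings = dcoll._mpi_timings = {}
    timings[event] = timings.get(event, 0.0) + elapsed


@contextmanager
def _timed(dcoll, event):
    start = time.perf_counter()
    yield
    _record_mpi_time(dcoll, event, time.perf_counter() - start)


# Usage sketch inside _RankBoundaryCommunication:
#     with _timed(dcoll, "post_send"):
#         self.send_req = comm.Isend(local_data, remote_rank, tag=self.tag)
#     with _timed(dcoll, "wait_recv"):
#         self.recv_req.Wait()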

cc @matthiasdiener

Operator interface

@alexfikl brought up a great point about the current operator interface: #172 (comment)

As already pointed out, grudge operators typically have a function signature like: operator(dcoll, *args), such as:

def weak_local_div(dcoll: DiscretizationCollection, *args):

The issue here is that we're not quite enforcing anything on *args. So, for example, if a user passes in (vec, dd) (instead of the expected (dd, vec) ordering), the code will break without actually catching this problem. Currently, we have the docstrings elaborate on *args, such as:

grudge/grudge/op.py

Lines 438 to 441 in 2af3528

r"""Return the element-local weak divergence of the vector volume function
represented by *vecs*.
May be called with ``(vecs)`` or ``(dd, vecs)``.

We could strengthen the checks inside each function. However, I can see this becoming a bit unwieldy. We could also revamp the interface and impose a fixed signature for each function (meaning we always require args: dcoll, dd, vec --- no more *args).
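A sketch of what the fixed-signature variant might look like (illustrative only, not the current interface), with dd as a keyword-only argument so the (vec, dd)/(dd, vec) ambiguity cannot arise:

from grudge.dof_desc import DD_VOLUME, DOFDesc


def weak_local_div(dcoll, vec, *, dd=None):
    # *dd* is keyword-only with a default, so positional mix-ups like
    # (vec, dd) cannot silently pass; type-check it for good measure.
    if dd is None:
        dd = DD_VOLUME
    if not isinstance(dd, DOFDesc):
        raise TypeError(f"'dd' must be a DOFDesc, got {type(dd).__name__}")
    # ... actual operator evaluation would follow here ...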

I thought I'd raise this issue so we can discuss this. I don't think this is a huge problem right now, but definitely worth noting.

New CI failures

I'm not sure what's going on exactly, but ever since #201, main has been failing CI. It's not clear at the moment what went wrong.

WADG should use overintegration

@thomasgibson pointed out that our WADG could/should be able to take advantage of overintegration, which it currently doesn't. Consider the matrix we're supposed to apply:

[figure from the paper omitted]

Consider the current implementation:

grudge/grudge/op.py

Lines 645 to 653 in 2af3528

group_data.append(
    # Based on https://arxiv.org/pdf/1608.03836.pdf
    # true_Minv ~ ref_Minv * ref_M * (1/jac_det) * ref_Minv
    actx.einsum("ei,ij,ej->ei",
        jac_inv,
        ref_mass_inverse,
        vec_i,
        tagged=(FirstAxisIsElementsTag(),))
)

The ref_Minv * ref_M * (1/jac_det) * ref_Minv * u should really be (in weird pseudocode):

ref_Minv * ref_M("overintegrated") * (1/project("overintegrated", jac_det)) * project("overintegrated", ref_Minv * u)
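In per-element numpy terms, that pseudocode amounts to something like the following sketch (matrix names and shapes are illustrative assumptions, not grudge's internals):

import numpy as np


def wadg_apply_overintegrated(ref_Minv, ref_M_quad, vol_to_quad, jac_det_quad, u):
    # ref_Minv:      (n, n)   reference inverse mass matrix on the volume nodes
    # ref_M_quad:    (n, nq)  reference mass matrix assembled against quadrature nodes
    # vol_to_quad:   (nq, n)  interpolation ("project") from volume to quadrature nodes
    # jac_det_quad:  (nq,)    Jacobian determinant sampled on the quadrature grid
    # u:             (n,)     element-local DOFs
    u_quad = vol_to_quad @ (ref_Minv @ u)      # project("overintegrated", ref_Minv * u)
    scaled = u_quad / jac_det_quad             # multiply by 1/project("overintegrated", jac_det)
    return ref_Minv @ (ref_M_quad @ scaled)    # apply ref_M("overintegrated"), then ref_Minv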

cc @lukeolson

Fluxes get computed redundantly

Currently, we compute fluxes redundantly on both sides of the interface. To avoid that, we'd need an actual mortar discretization, i.e. something with one (surface) element per (facial) element interface. The closest we currently get is "two elements per interface". Of course, we would also need connections (likely in meshmode) to get onto and off of that mortar space.

cc @kaushikcfd @thomasgibson @lukeolson @MTCam @majosm

Grudge keeps a high number of active blocks

When I run the wave-eager case with a memory pool, I observe that the number of active blocks in the memory pool can reach almost 250. Nearly every call to actx.empty or actx.from_numpy adds another active block. Assuming pn=3: for nel_1d=32 (~173,000 elements) and below, there is sufficient device memory to execute. For nel_1d=64 (1.5 million elements), the allocations are ~0.25 GB and all of device memory is eventually consumed.

I presume there are a lot of old allocations hanging around that ought to be freed. Perhaps this is related to inducer/pyopencl#450.

`DiscretizationCollection._base_to_geoderiv_connection` is broken when using overintegration in lazy-mode

The relatively recent change for storing affine geometric terms as constants for the pytato array context is broken when using overintegration. The code currently assumes that the from_discr is interpolatory, which is just not true when you have terms defined on a quadrature grid.

I have attempted to circumvent this issue in #172 but I would really like to settle this once and for all. And by this, I mean actually taking advantage of affineness even when overintegrating. Especially since having working overintegration with lazy evaluation is a high priority right now.

@inducer --- I don't know if I'm being dense here, but I don't see this as a simple 2-line change in the method. The problem is getting meshmode to do the right thing by transferring a DOFArray defined on a quadrature grid to the appropriate shape for the geo_deriv_discr. Is there something we can do in grudge to spoon-feed meshmode something it can work with?

Expensive-looking oddities in CUDA profile

Running this benchmark based on wave-op-mpi.py on 1c44c4b with the command

PYTHONHASHSEED=17 PYOPENCL_TEST=port:nvid setarch -R numactl -C 2 -m 0 nvprof -f -o yoink.nvvp python -O wave-op-mpi.py --dim=3 --order=4   

on dunkel gives me the following profile in Nvidia's visual profiler:
[profiler screenshot omitted]

There are at least two things wrong here (both circled):

  • There are a bunch of big gaps where nothing seems to be happening. Why?
  • Every now and then, a cuLaunchKernel seems to take a very long time. Why?

Curiously, there seem to be periods that don't suffer from this:
[profiler screenshot omitted]

If we could fix these two types of stalls, I suspect our performance story would look quite a bit different.

cc @matthiasdiener @lukeolson

Other versions in use, for reproducibility:

QTAG_XXX is misnamed

With the introduction of QTAG_MODAL in #77, it's apparent that the QTAG_ prefixes (for "quadrature tag") are misnamed. "Modal" just isn't a form of quadrature, unless you twist your head in a very special way. "Discretization tag" would make sense, but the DTAG_ is taken ("domain tag"). DISCR_TAG_XXX?

cc @thomasgibson @majosm @alexfikl

Add interpolation-to-quadrature to interior/cross-rank trace pairs

This issue is a reminder for us to add in the interpolation steps to the trace pair routines. A clean way to do this is to make DiscretizationCollection.connection_from_dds do the right thing when from_dd is a base volume descriptor and to_dd is a face descriptor on the quadrature grid. This will likely need to use meshmode's chained connections to make this happen.
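A hedged sketch of the chaining, assuming the two constituent connections (base volume to base faces, base faces to quadrature faces) are already available from meshmode/grudge; only ChainedDiscretizationConnection itself is real API here:

from meshmode.discretization.connection import ChainedDiscretizationConnection


def volume_to_quad_face_connection(vol_to_faces, faces_to_quad_faces):
    # Compose base-volume -> base-faces -> quadrature-faces into a single
    # connection that connection_from_dds could return. The two arguments
    # are assumed to exist already; they are hypothetical for this sketch.
    return ChainedDiscretizationConnection([vol_to_faces, faces_to_quad_faces])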

`_apply_stiffness_transpose_operator`: uses einsum in a way that causes high cost

Since our einsum is not smart (it doesn't identify possible common subexpressions), this:

grudge/grudge/op.py

Lines 296 to 306 in 1c44c4b

actx.einsum("dij,ej,ej,dej->ei",
reference_stiffness_transpose_matrix(
actx,
out_element_group=out_grp,
in_element_group=in_grp
),
ae_i,
vec_i,
inv_jac_t_i,
arg_names=("ref_stiffT_mat", "jac", "vec", "inv_jac_t"),
tagged=(FirstAxisIsElementsTag(),))

amounts to computing the rst derivatives once per dimension. (Together with #141, we're computing them 9 times in 3D, when once would suffice.) The problem is that the sum is computed once per d, when it is actually independent of d.

This is something I should have caught during review of #74.

cc @thomasgibson @lukeolson

Make a central utility for rank reductions

To replace this:

mpi_comm = dcoll.mpi_communicator
if mpi_comm is None:
    return dt_factor * (1 / c)

return (1 / c) * mpi_comm.allreduce(dt_factor, op=MPI.MIN)

and implement the nodal reductions in a distributed setting. Initially, we'll just use MPI, but likely we'll need to use a different abstraction. That abstraction doesn't exist yet, hence the request for this stopgap. Once it exists, it'll be convenient if we can change that centrally.
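A sketch of such a stopgap utility (the name rank_allreduce is made up), assuming mpi4py:

from mpi4py import MPI


def rank_allreduce(dcoll, local_value, op=MPI.MIN):
    # Reduce a rank-local scalar across the collection's communicator;
    # act as a no-op for serial (communicator-less) runs.
    comm = dcoll.mpi_communicator
    if comm is None:
        return local_value
    return comm.allreduce(local_value, op=op)


# The snippet above would then shrink to:
#     return (1 / c) * rank_allreduce(dcoll, dt_factor, op=MPI.MIN)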

Spotted while reviewing #108.

@thomasgibson Could you put that on your pile?

cc @matthiasdiener

Porting external functions (such as `bessel_j` / `bessel_y`) to New World Grudge

A reminder to convert these functions to the revamped Grudge framework. Currently, #74 still calls out to grudge.sym to handle certain external functions, in particular the Bessel functions.

grudge/test/test_grudge.py

Lines 990 to 994 in a494ba0

# FIXME: Bessel functions need to brought out of the symbolic
# layer. Related issue: https://github.com/inducer/grudge/issues/93
def bessel_j(actx, n, r):
    from grudge import sym, bind
    return bind(dcoll, sym.bessel_j(n, sym.var("r")))(actx, r=r)

WADG mass-matrix inverse: correctness, excessive flop count

Missed this when reviewing #74.

grudge/grudge/op.py

Lines 767 to 785 in 30db06b

group_data.append(
    # Based on https://arxiv.org/pdf/1608.03836.pdf
    # true_Minv ~ ref_Minv * ref_M * (1/jac_det) * ref_Minv
    actx.einsum("ik,km,em,mj,ej->ei",
        # FIXME: Should we manually create a temporary for
        # the mass inverse?
        ref_mass_inverse,
        reference_mass_matrix(
            actx,
            out_element_group=grp,
            in_element_group=grp
        ),
        jac_inv,
        # FIXME: Should we manually create a temporary for
        # the mass inverse?
        ref_mass_inverse,
        vec_i,
        tagged=(HasElementwiseMatvecTag(),))
)

If Ndof is the number of DOFs, then the code above uses Ndof**4 flops per element, when Ndof**2 would suffice. What should happen here?

  • First of all, I'm not sure the implementation is correct. This computes inv(ref_M) @ ref_M as the leading term, which is an identity. To be fair, this came from the original implementation:
    return op.RefInverseMassOperator(dd_in, dd_out)(
        op.RefMassOperator(dd_in, dd_out)(
  • Then, any matrices that can be premultiplied, should be premultiplied.
  • Last, any einsums involved should have at most one reduction variable. (Our current einsum isn't smart enough to do anything else efficiently.) A staged sketch follows this list.
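A hedged numpy sketch of the staged evaluation suggested by the last two points (shapes are illustrative: ref_Minv and ref_M are (ndofs, ndofs), jac_inv and vec are (nelements, ndofs)):

import numpy as np


def apply_wadg_inverse_mass_staged(ref_Minv, ref_M, jac_inv, vec):
    A = ref_Minv @ ref_M                          # element-independent; premultiply once
    t1 = np.einsum("mj,ej->em", ref_Minv, vec)    # single reduction index (j)
    t2 = jac_inv * t1                             # pointwise, no reduction
    return np.einsum("im,em->ei", A, t2)          # single reduction index (m)

This keeps the per-element cost at O(Ndof**2), at the expense of one temporary per stage.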

cc @thomasgibson

`_apply_stiffness_transpose_operator`: Don't `stack`

This actx.np.stack shows up fairly prominently in profiles:

grudge/grudge/op.py

Lines 288 to 292 in 1c44c4b

inverse_jac_t = actx.np.stack(
    [inverse_surface_metric_derivative(actx, dcoll,
        rst_axis, xyz_axis, dd=dd_in)
     for rst_axis in range(dcoll.dim)]
)

There's no reason to do this repeatedly (it could simply be memoized; see the sketch after the list below). But perhaps more importantly, there's no reason for this stack to exist at all:

  • If we write a loopy kernel to do the reference derivative and the geometric factor arithmetic, we can just operate directly on the separate arrays. (sep array tag, requires that the geometric factor loop be unrolled)
  • Or we could simply do the geometric factor math on the vectors after the fact.
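A sketch of the memoization variant mentioned above; the function name is made up, and the import path for inverse_surface_metric_derivative is assumed:

from pytools import memoize_on_first_arg

from grudge.geometry import inverse_surface_metric_derivative


@memoize_on_first_arg
def stacked_inverse_metric_derivatives(dcoll, actx, xyz_axis, dd_in):
    # Build the stacked geometric factors once per (dcoll, actx, xyz_axis, dd_in)
    # instead of on every operator application.
    return actx.np.stack(
        [inverse_surface_metric_derivative(actx, dcoll,
                                           rst_axis, xyz_axis, dd=dd_in)
         for rst_axis in range(dcoll.dim)])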

This is something I should have caught during review of #74.

cc @thomasgibson @lukeolson

Timestep estimate might be unnecessarily conservative

This code:

max_lambda = self.max_characteristic_velocity(t, fields, dcoll)

dt_factor = \
    (dt_non_geometric_factor(dcoll)
     * op.nodal_min(dcoll, "vol", dt_geometric_factors(dcoll)))

takes the min of the geometric factors and the min of the non-geometric factors over all groups and then multiplies them. Technically, I think it would suffice to multiply them together (still obtaining a nodal quantity) and then take the min. Reducing to per-group quantities, multiplying, and then taking the min would also work. This is a bit hypothetical for now, since most use is single-group, but it might bite us in the future.
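A tiny numerical illustration of why reducing first and multiplying afterwards is the more conservative order (the values are made up):

import numpy as np

non_geo = np.array([1.0, 4.0])   # hypothetical per-node non-geometric factors
geo = np.array([4.0, 1.0])       # hypothetical per-node geometric factors

print(non_geo.min() * geo.min())   # 1.0 -- current approach: reduce each, then multiply
print((non_geo * geo).min())       # 4.0 -- multiply nodally, then reduce: less conservative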

I missed this while reviewing #108.

cc @thomasgibson @MTCam

Interface for distributed reductions

I'm a bit worried about the interface we currently have for nodal reductions. There are FIXMEs in there that suggest that we'll be changing them to include the nodal reductions. At the same time, to write correct code against the current interface, they need to have a rank reduction wrapped around them. This situation is a bit untenable IMO. Suppose we change those functions to do reductions, and code gets written against them that also includes reductions. We'll end up double-reducing, which may be OK for max/min, but not for sums, and it's inefficient in either case. I think we ought to make up our mind pretty soon on this, to avoid pain down the road.

Here's what I'd propose as a solution: Rename all the current functions to include a _loc suffix. Create a rank-reducing wrapper (cf. #112) that, for now, raises an error if they're used in a distributed-memory setting. This will break all current distributed-memory uses of those functions. This is my favorite solution because we need to look at all the call sites for these functions anyhow to make sure we don't double-reduce. I also like this because I'm convinced we'll want the rank-local values to be accessible.
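A sketch of the proposed wrapper (nodal_min_loc is the hypothetical renamed rank-local routine):

def nodal_min(dcoll, dd, vec):
    # Rank-local reduction first; refuse to run distributed rather than
    # silently returning a rank-local value.
    result = nodal_min_loc(dcoll, dd, vec)
    if dcoll.mpi_communicator is not None:
        raise NotImplementedError(
            "distributed-memory nodal reductions are not implemented yet; "
            "use nodal_min_loc and reduce across ranks explicitly")
    return result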

x-ref: #112

cc @thomasgibson @matthiasdiener @MTCam @anderson2981

Empty state mismatch

This appears to be the problem causing this MIRGE-Com issue. There is a reproducing snippet in this MIRGE-Com PR.

Here's the gist:

bnd_discr = discr.discr_from_dd(btag)
bnd_nodes = thaw(actx, bnd_discr.nodes())
bnd_normal = thaw(actx, discr.normal(btag))
result = bnd_nodes @ bnd_normal

The bnd_nodes @ bnd_normal operation fails when the boundary represented by btag has 0 points. (This seems like an odd thing, but it happens often in parallel runs, as not all partitions own points on all boundaries.) The failure seems to be due to this difference in the data structures:

bnd_nodes=array([DOFArray((cl.Array([], shape=(0, 3), dtype=float64),)),
       DOFArray((cl.Array([], shape=(0, 3), dtype=float64),)),
       DOFArray((cl.Array([], shape=(0, 3), dtype=float64),))],
      dtype=object)
bnd_normal=array([DOFArray(()), DOFArray(()), DOFArray(())], dtype=object)

Teach grudge about array containers

There is a workaround for this, but currently this happens when calling cross_rank_trace_pairs with a mirgecom.fluid.ConservedVars array container:

  File "/Users/mtcampbe/CEESD/devel/emirge/grudge/grudge/op.py", line 554, in cross_rank_trace_pairs
    return _cross_rank_trace_pairs_scalar_field(dcoll, ary, tag=tag)
  File "/Users/mtcampbe/CEESD/devel/emirge/grudge/grudge/op.py", line 505, in _cross_rank_trace_pairs_scalar_field
    rbcomms = [_RankBoundaryCommunication(dcoll, remote_rank, vec, tag=tag)
  File "/Users/mtcampbe/CEESD/devel/emirge/grudge/grudge/op.py", line 505, in <listcomp>
    rbcomms = [_RankBoundaryCommunication(dcoll, remote_rank, vec, tag=tag)
  File "/Users/mtcampbe/CEESD/devel/emirge/grudge/grudge/op.py", line 472, in __init__
    local_data = self.array_context.to_numpy(flatten(self.local_dof_array))
  File "/Users/mtcampbe/CEESD/devel/emirge/arraycontext/arraycontext/impl/pyopencl.py", line 292, in to_numpy
    return array.get(queue=self.queue)
AttributeError: 'ConservedVars' object has no attribute 'get'

Eager distributed: support array containers

Currently, only object arrays are supported.

grudge/grudge/trace_pair.py

Lines 266 to 379 in 1c44c4b

(This is the same listing as shown above under "Eager distributed: Send array containers together".)
