
[CVPR 2023 - Highlight] Accelerated Coordinate Encoding (ACE): Learning to Relocalize in Minutes using RGB and Poses

Home Page: https://nianticlabs.github.io/ace

License: Other

camera-localization cvpr cvpr2023 dsac computer-vision machine-learning pose-estimation visual-localization

ace's Introduction

Accelerated Coordinate Encoding: Learning to Relocalize in Minutes using RGB and Poses


This repository contains the code associated with the ACE paper:

Accelerated Coordinate Encoding: Learning to Relocalize in Minutes using RGB and Poses

Eric Brachmann, Tommaso Cavallari, and Victor Adrian Prisacariu

CVPR 2023, Highlight

For further information, please visit the project page: https://nianticlabs.github.io/ace

Table of contents:

  • Installation
  • Datasets
  • Usage
  • Pretrained ACE Networks
  • Encoder Training
  • Publications
  • License

Installation

This code uses PyTorch to train and evaluate the scene-specific coordinate prediction head networks. It has been tested on Ubuntu 20.04 with an Nvidia T4 GPU, although it should also run on other Linux distributions and GPUs.

We provide a pre-configured conda environment containing all the dependencies required to run our code. You can re-create and activate the environment with:

conda env create -f environment.yml
conda activate ace

All the following commands in this file need to be run in the ace environment.

The ACE network predicts dense 3D scene coordinates associated with the pixels of the input images. To estimate the 6DoF camera poses, it relies on the RANSAC implementation of the DSAC* paper (Brachmann and Rother, TPAMI 2021), which is written in C++. As such, you need to build and install the C++/Python bindings of those functions. You can do this with:

cd dsacstar
python setup.py install
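
To quickly check that the bindings were built and installed correctly, you can try importing the module from within the ace environment (a simple sanity check, not part of the official setup):

python -c "import dsacstar"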

(Optional) If you want to create videos of the training/evaluation process:

sudo apt install ffmpeg

Having done the steps above, you are ready to experiment with ACE!

Datasets

The ACE method has been evaluated using multiple published datasets: 7-Scenes, 12-Scenes, Cambridge Landmarks, and Niantic Wayspots.

We provide scripts in the datasets folder to automatically download and extract the data in a format that can be readily used by the ACE scripts. The format is the same as the one used by the DSAC* codebase; see the DSAC* repository for details.
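
As a rough orientation, the per-scene directory layout produced by the setup scripts looks approximately like this (a sketch only; the exact subfolders can vary per dataset, so treat the DSAC* documentation as authoritative):

datasets/7scenes_chess/
├── train/
│   ├── rgb/            # images
│   ├── poses/          # 4x4 camera-to-world pose files, one per image
│   └── calibration/    # per-image focal length
└── test/
    ├── rgb/
    ├── poses/
    └── calibration/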

Important: make sure you have checked the license terms of each dataset before using it.

{7, 12}-Scenes:

You can use the datasets/setup_{7,12}scenes.py scripts to download the data. As mentioned in the paper, we experimented with two variants of each of these datasets: one using the original D-SLAM ground truth camera poses, and one using Pseudo Ground Truth (PGT) camera poses obtained by running SfM on the scenes (see the ICCV 2021 paper and the associated code for details).

To download and prepare the datasets using the D-SLAM poses:

cd datasets
# Downloads the data to datasets/7scenes_{chess, fire, ...}
./setup_7scenes.py
# Downloads the data to datasets/12scenes_{apt1_kitchen, ...}
./setup_12scenes.py

To download and prepare the datasets using the PGT poses:

cd datasets
# Downloads the data to datasets/pgt_7scenes_{chess, fire, ...}
./setup_7scenes.py --poses pgt
# Downloads the data to datasets/pgt_12scenes_{apt1_kitchen, ...}
./setup_12scenes.py --poses pgt

Cambridge Landmarks / Niantic Wayspots:

We used a single variant of these datasets. Simply run:

cd datasets
# Downloads the data to datasets/Cambridge_{GreatCourt, KingsCollege, ...}
./setup_cambridge.py
# Downloads the data to datasets/wayspots_{bears, cubes, ...}
./setup_wayspots.py

Usage

We provide scripts to train and evaluate ACE scene coordinate regression networks. In the following sections we detail the main command-line options that can be used to customize the behavior of both the training and the pose estimation scripts.

ACE Training

The ACE scene-specific coordinate regression head can be trained using the train_ace.py script. Basic usage:

./train_ace.py <scene path> <output map name>
# Example:
./train_ace.py datasets/7scenes_chess output/7scenes_chess.pt

The output map file contains just the weights of the scene-specific head network -- encoded as half-precision floating point -- for a size of ~4MB when using default options, as mentioned in the paper. The testing script will use these weights, together with the scene-agnostic pretrained encoder (ace_encoder_pretrained.pt) we provide, to estimate 6DoF poses for the query images.
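
If you are curious about what the map file contains, a minimal inspection sketch like the one below should work, assuming the file is a plain PyTorch state dict of half-precision tensors (the exact serialization may differ depending on the code version, so adapt as needed):

import torch

# Load the scene-specific head weights produced by train_ace.py.
state_dict = torch.load("output/7scenes_chess.pt", map_location="cpu")

total_params = 0
for name, tensor in state_dict.items():
    total_params += tensor.numel()
    print(f"{name}: shape={tuple(tensor.shape)}, dtype={tensor.dtype}")

# With default options this should correspond to roughly 4MB of fp16 weights.
print(f"Total parameters: {total_params} (~{total_params * 2 / 2 ** 20:.1f} MB at fp16)")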

Additional parameters that can be passed to the training script to alter its behavior:

  • --training_buffer_size: Changes the size of the training buffer containing decorrelated image features (see paper) that is created at the beginning of the training process. The default size is 8M.
  • --samples_per_image: How many features to sample from each image during the buffer generation phase. This affects the time needed to fill the training buffer, as well as how decorrelated the features in the buffer are. The default is 1024 samples per image.
  • --epochs: How many full passes over the training buffer are performed during the training. This directly affects the training time. Default is 16.
  • --num_head_blocks: The depth of the head network. Specifically, the number of extra 3-layer residual blocks to add to the default head depth. Default value is 1, which results in a head network composed of 9 layers, for a total of 4MB weights.

Clustering parameters: these are used for the ensemble experiments (ACE Poker variant) we ran on the Cambridge Landmarks dataset. They are used to split the input scene into multiple independent clusters and train the head network on one of them (see Section 4.2 of the main paper and Section 1.3 of the supplementary material for details).

  • --num_clusters: How many clusters to split the training scene into. Default: None (disabled).
  • --cluster_idx: Selects a specific cluster for training.

Visualization parameters: these are used to generate the videos available in the project page (they actually generate individual frames that can be collated into a video later). Note: enabling them will significantly slow down the training.

  • --render_visualization: Set to True to enable generating frames showing the training process. Default False.
  • --render_target_path: Base folder where the frames will be saved. The script automatically appends the current map name to the folder. Default is renderings.

There are other options available; they can be discovered by running the script with the --help flag.
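
For example, the training options above can be combined into a single invocation (an illustrative command using the documented flags; the values shown are simply the documented defaults made explicit):

./train_ace.py datasets/7scenes_chess output/7scenes_chess.pt \
  --training_buffer_size 8000000 \
  --samples_per_image 1024 \
  --epochs 16 \
  --num_head_blocks 1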

ACE Evaluation

The pose estimation for a testing scene can be performed using the test_ace.py script. Basic usage:

./test_ace.py <scene path> <output map name>
# Example:
./test_ace.py datasets/7scenes_chess output/7scenes_chess.pt

The script loads (a) the scene-specific ACE head network and (b) the pre-trained scene-agnostic encoder and, for each testing frame:

  • Computes its per-pixel 3D scene coordinates, resulting in a set of 2D-3D correspondences.
  • Passes the correspondences to a RANSAC algorithm that estimates a 6DoF camera pose.
  • Compares the estimated camera pose with the ground truth; various cumulative metrics are computed and printed at the end of the script.

The metrics include: the percentage of frames within certain translation/angle thresholds of the ground truth, the median translation error, and the median rotation error.

The script also creates a file containing per-frame results so that they can be parsed by other tools or analyzed separately. The output file is located alongside the head network and is named: poses_<map name>_<session>.txt.

Each line in the output file contains the results for an individual query frame, in this format:

file_name rot_quaternion_w rot_quaternion_x rot_quaternion_y rot_quaternion_z translation_x translation_y translation_z rot_err_deg tr_err_m inlier_count
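
For example, a small helper along these lines can parse the per-frame results (a sketch that assumes whitespace-separated fields in exactly the order shown above):

from collections import namedtuple

Result = namedtuple(
    "Result",
    "file_name qw qx qy qz tx ty tz rot_err_deg tr_err_m inlier_count",
)

def read_pose_results(path):
    """Parse a poses_<map name>_<session>.txt file produced by test_ace.py."""
    results = []
    with open(path) as f:
        for line in f:
            tokens = line.split()
            if not tokens:
                continue
            results.append(Result(
                tokens[0],                          # query image file name
                *[float(t) for t in tokens[1:10]],  # quaternion, translation, errors
                int(float(tokens[10])),             # inlier count
            ))
    return results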

There are some parameters that can be passed to the script to customize the RANSAC behavior:

  • --session: Custom suffix to append to the name of the file containing the estimated camera poses (see paragraph above).
  • --hypotheses: How many pose hypotheses to generate and evaluate (i.e. the number of RANSAC iterations). Default is 64.
  • --threshold: Inlier threshold (in pixels) to consider a 2D-3D correspondence as valid.
  • --render_visualization: Set to True to enable generating frames showing the evaluation process. Will slow down the testing significantly if enabled. Default False.
  • --render_target_path: Base folder where the frames will be saved. The script automatically appends the current map name to the folder. Default is renderings.

There are other options available; they can be discovered by running the script with the --help flag.
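
For instance, an evaluation run with custom RANSAC settings might look like this (an illustrative command using the documented flags; the threshold value is just an example):

./test_ace.py datasets/7scenes_chess output/7scenes_chess.pt \
  --session my_run \
  --hypotheses 64 \
  --threshold 10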

Ensemble Evaluation

To deploy the ensemble variants (such as the 4-cluster ACE Poker variant), we simply need to run the training script multiple times (once per cluster), thus training multiple head networks.

At localization time we run the testing script once for each trained head and save the per-frame poses_... files, passing the --session parameter to the test script to tag each file according to the network used to generate it.

We provide two more scripts:

  1. merge_ensemble_results.py: Merges multiple poses_... files, choosing for each frame the pose with the highest inlier count.
  2. eval_poses.py: Computes the overall accuracy metrics (percentage of poses within threshold and median errors) that we reported in the paper.

ACE Poker example for a scene in the Cambridge dataset:

mkdir -p output/Cambridge_GreatCourt

# Head training:
./train_ace.py datasets/Cambridge_GreatCourt output/Cambridge_GreatCourt/0_4.pt --num_clusters 4 --cluster_idx 0
./train_ace.py datasets/Cambridge_GreatCourt output/Cambridge_GreatCourt/1_4.pt --num_clusters 4 --cluster_idx 1
./train_ace.py datasets/Cambridge_GreatCourt output/Cambridge_GreatCourt/2_4.pt --num_clusters 4 --cluster_idx 2
./train_ace.py datasets/Cambridge_GreatCourt output/Cambridge_GreatCourt/3_4.pt --num_clusters 4 --cluster_idx 3

# Per-cluster evaluation:
./test_ace.py datasets/Cambridge_GreatCourt output/Cambridge_GreatCourt/0_4.pt --session 0_4
./test_ace.py datasets/Cambridge_GreatCourt output/Cambridge_GreatCourt/1_4.pt --session 1_4
./test_ace.py datasets/Cambridge_GreatCourt output/Cambridge_GreatCourt/2_4.pt --session 2_4
./test_ace.py datasets/Cambridge_GreatCourt output/Cambridge_GreatCourt/3_4.pt --session 3_4

# Merging results and computing metrics.

# The merging script takes a --poses_suffix argument that's used to select only the 
# poses generated for the requested number of clusters. 
./merge_ensemble_results.py output/Cambridge_GreatCourt output/Cambridge_GreatCourt/merged_poses_4.txt --poses_suffix "_4.txt"

# The merged poses output by the previous script are then evaluated against the scene ground truth data.
./eval_poses.py datasets/Cambridge_GreatCourt output/Cambridge_GreatCourt/merged_poses_4.txt

Complete training and evaluation scripts

We provide several scripts to run training and evaluation on the various datasets we tested our method with. These allow replicating the results we showcased in the paper. They are located under the scripts folder: scripts/train_*.sh.

In the same folder we also provide scripts to generate videos of the training/testing protocol, as shown on the project page. They are under scripts/viz_*.sh.

Pretrained ACE Networks

We also make available the set of pretrained ACE Heads we used for the experiments in the paper. Each head was trained for 5 minutes on one of the scenes in the various datasets, and was used to compute the accuracy metrics we showed in the main text.

Each network can be passed directly to the test_ace.py script, together with the path to its dataset scene, to run camera relocalization on the images of the testing split and compute the accuracy metrics, like this:

./test_ace.py datasets/7scenes_chess <Downloads>/7Scenes/7scenes_chess.pt

The data is available at this location.

Encoder Training

As mentioned above, this repository provides a set of weights for the pretrained feature extraction backbone (ace_encoder_pretrained.pt) that was used in our experiments. You are welcome to use them to experiment with ACE in novel scenes. As shown in the paper, these weights perform reasonably well in both indoor and outdoor environments.

Unfortunately, we cannot provide the code to train the encoder as part of this release.

The feature extractor has been trained on 100 scenes from ScanNet, in parallel, for ~1 week, as described in Section 3.3 of the paper and Section 1.1 of the supplementary material. It is possible to reimplement the encoder training protocol following those instructions.
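
While the actual encoder training code is not released, a toy sketch of the core idea (one shared encoder, N scene-specific heads trained in parallel, with gradients from all heads flowing into the shared backbone) could look roughly like the following. The modules, loss, and data below are simplified placeholders, not the networks or the objective used in the paper:

import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins for the real encoder/head architectures (NOT the ones in this repo).
class ToyEncoder(nn.Module):
    def __init__(self, out_channels=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_channels, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class ToyHead(nn.Module):
    def __init__(self, in_channels=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 512, 1), nn.ReLU(),
            nn.Conv2d(512, 3, 1),  # dense 3D scene coordinates
        )

    def forward(self, features):
        return self.net(features)

num_scenes = 4  # the paper uses 100 ScanNet scenes
encoder = ToyEncoder()
heads = nn.ModuleList(ToyHead() for _ in range(num_scenes))
optimizer = torch.optim.AdamW(
    list(encoder.parameters()) + list(heads.parameters()), lr=1e-3)

# One illustrative optimization step: every scene contributes a batch that is
# passed through the shared encoder and its own head; the summed loss trains
# the backbone on all scenes at once.
batches = [(torch.rand(2, 1, 64, 64), torch.rand(2, 3, 16, 16))
           for _ in range(num_scenes)]

optimizer.zero_grad()
loss = 0.0
for head, (images, gt_scene_coords) in zip(heads, batches):
    features = encoder(images)
    pred_scene_coords = head(features)
    loss = loss + F.l1_loss(pred_scene_coords, gt_scene_coords)
loss.backward()
optimizer.step()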

Publications

If you use ACE or parts of its code in your own work, please cite:

@inproceedings{brachmann2023ace,
    title={Accelerated Coordinate Encoding: Learning to Relocalize in Minutes using RGB and Poses},
    author={Brachmann, Eric and Cavallari, Tommaso and Prisacariu, Victor Adrian},
    booktitle={CVPR},
    year={2023},
}

This code builds on previous camera relocalization pipelines, namely DSAC, DSAC++, and DSAC*. Please consider citing:

@inproceedings{brachmann2017dsac,
  title={{DSAC}-{Differentiable RANSAC} for Camera Localization},
  author={Brachmann, Eric and Krull, Alexander and Nowozin, Sebastian and Shotton, Jamie and Michel, Frank and Gumhold, Stefan and Rother, Carsten},
  booktitle={CVPR},
  year={2017}
}

@inproceedings{brachmann2018lessmore,
  title={Learning less is more - {6D} camera localization via {3D} surface regression},
  author={Brachmann, Eric and Rother, Carsten},
  booktitle={CVPR},
  year={2018}
}

@article{brachmann2021dsacstar,
  title={Visual Camera Re-Localization from {RGB} and {RGB-D} Images Using {DSAC}},
  author={Brachmann, Eric and Rother, Carsten},
  journal={TPAMI},
  year={2021}
}

License

Copyright © Niantic, Inc. 2023. Patent Pending. All rights reserved. Please see the license file for terms.

ace's People

Contributors

ebrach, mickmcginnis, tcavallari


ace's Issues

Support on macOS

Is there any support for building dsacstar on macOS?
How can I specify opencv_inc_dir when using my own Python environment? Does it refer to the C++ OpenCV or the Python OpenCV package? Thanks

render_visualization error

Hi, I am getting this error after setting --render_visualization=True; any idea what causes it?
INFO:OpenGL.platform.ctypesloader:Failed to load library ( 'libEGL.so.0' ): libEGL.so.0: cannot open shared object file: No such file or directory
WARNING:ace_visualizer:Rendering failed, trying again!

Confusion about ACE coordinate frame

Hi! I'm coming back again because I tried ACE with my new dataset and I'm quite confused by the results. I'm aware that ACE expects GT poses as 4x4 camera-to-world matrices. But here is the problem.

  1. Suppose my robot system is like the picture below. When I perform the transformation from the lidar frame to the camera frame, I assume the resulting 6DoF pose of the lidar (in the camera frame) fulfills the camera-to-world matrix requirement.
    [image: frame]

  2. Using that 6DoF camera pose (in world coordinates) as the GT for my image dataset, I passed it to ACE, but the result is quite strange: the camera motion along the x axis is mirrored (please see the picture below).
    [image: frame2]

  3. If I multiply the x value of my GT by -1, ACE follows my trajectory in world coordinates well.

Could you elaborate on this? Does the problem come from the 1st step? Thank you so much!

Running inference on mobile

I was wondering if there would be any technical challenges that would make running inference on mobile impossible. The models themselves seem small but I'm not familiar enough to know if there are any other challenges/considerations.

Update 1

Currently I am stuck on the following error when trying to convert the model to TorchScript. It seems like it would be easy enough to fix, but there would probably be more things that don't work afterwards as well.

Module 'Head' has no attribute 'res_blocks' (This attribute exists on the Python module, but we failed to convert Python type: 'list' to a TorchScript type. Could not infer type of list element: Cannot infer concrete type of torch.nn.Module. Its type was inferred; try adding a type annotation for the attribute.):
  File "/home/powerhorse/Desktop/daniel_tmp/benchmark/anchor/third_party/ace/ace_network.py", line 128
        res = self.head_skip(res) + x
    
        for res_block in self.res_blocks:
                         ~~~~~~~~~~~~~~~ <--- HERE
            x = F.relu(res_block[0](res))
            x = F.relu(res_block[1](x))

Question about sensitivity to ground truth pose accuracy

Hi,

We have been playing around with this approach, trying to train models from data supplied by ARKit on iPhones. Unfortunately, we have not been able to achieve good results when trying to localize 20 seconds worth of frames against 20 seconds worth of mapping frames (around 30 fps). The data is recorded by physically aligning the phone at the same place. The median rotation error is usually above 30 degrees and the translation error is around 50-100 cm, indoors in a small room.

I'm wondering if you have any speculation on what the problem might be. It's clear that the poses provided by ARKit are not fully accurate, as they are running their own black-box SLAM system. However, they are generally consistent over time. Is it really possible that a few percentage points of error in the ground truth can blow up the model error by this much?

Thank you so much

Performance vs hloc

1. I found the accuracy at low translation-error thresholds to be lower than hloc's on my datasets. The encoder network downsamples by 8, which means every feature represents an 8x8 grid and the target pixel position is the center of that grid; does this make the position inaccurate?
2. Did you use a U-Net as the encoder network to improve performance? And how can I train the encoder network from scratch? Very nice work and thanks a lot.

Support principle point intrinsic parameters in training

    # Create the intrinsics matrix.
    intrinsics = torch.eye(3)
    intrinsics[0, 0] = focal_length
    intrinsics[1, 1] = focal_length
    # Hardcode the principal point to the centre of the image.
    intrinsics[0, 2] = image.shape[2] / 2
    intrinsics[1, 2] = image.shape[1] / 2

(See ace/dataset.py, line 489 at commit 2507cdb.)

Assuming I'm understanding this correctly, I would be happy to make a PR for this. We are looking to train from ARKit iOS data, where the optical center can vary from frame to frame as a result of some distortion correction that Apple seems to be doing.

support fx and fy

Hi,
Thank you for the amazing paper and for sharing the code.
From this, I saw the code is using single focal length f.
Currently, my colmap result returned fx & fy.
I would like to ask if is it ok to run with only fx from your code.
Or can you support the fx and fy?

About _convert_cv_to_gl(pose)

Hi, I wonder why a pose in OpenCV convention is converted to OpenGL convention using this matrix.

@staticmethod
def _convert_cv_to_gl(pose):
    """
    Convert a pose from OpenCV to OpenGL convention (and vice versa).

    @param pose: 4x4 camera pose.
    @return: 4x4 camera pose.
    """
    gl_to_cv = np.array([[1, -1, -1, 1], [-1, 1, 1, -1], [-1, 1, 1, -1], [1, 1, 1, 1]])
    return gl_to_cv * pose

In other places, you just apply

        scene_coordinates[:, 1] = -scene_coordinates[:, 1]
        scene_coordinates[:, 2] = -scene_coordinates[:, 2]

Thanks!

About 7_scenes rgb calibration

Hi, your work is great.
I notice that you obtain camera poses of 7scenes in setup_7scenes.py via

cam_pose = np.matmul(cam_pose, np.linalg.inv(d_to_rgb))

Why not

cam_pose = np.matmul(d_to_rgb, cam_pose)

And I found there is a line

# transform from depth sensor to RGB sensor
eye_coords = np.matmul(d_to_rgb, eye_coords)

Can you help me to understand it? Thank you!

Encoder Training

Are there plans to publish the encoder training code? And when will it be available?

Inference code

Hello,
I would like to run the model to estimate camera poses only, after training it on my custom dataset, which means I don't have GT camera poses for the query images.
To do that, I would have to modify CamLocDataset to disable reading the pose_dir at https://github.com/nianticlabs/ace/blob/main/dataset.py#L123 and make it return the pose as None at https://github.com/nianticlabs/ace/blob/main/dataset.py#L496,
and also turn off the evaluation here: https://github.com/nianticlabs/ace/blob/main/test_ace.py#L236
Would you mind checking this for me?

Where does ACE freeze the backbone when training the head?

Hi,

I can't seem to find where ACE freezes the backbone during training. Since testing uses the same pre-trained encoder, it would probably be better to freeze the backbone during training so that it doesn't change, right?

Or is the backbone intentionally left unfrozen?

Need some clarification on what pose file mean

Hi,
I am trying to generate some training data directly to test ACE out. One question about the pose files: are the poses under train/poses camera_to_world transforms or world_to_camera transforms?
Here, an A_to_B transform is defined as one that transforms a point from coordinate frame A to coordinate frame B, e.g.

point_B = A_to_B_transform * point_A

Thanks!

Does ACE not support large dataset?

After reproducing ACE on 7-Scenes, Cambridge, and my own dataset, I found the translation and rotation errors to be quite good. The problem is that when I visualize the result (with --render_visualization set to True) for my own dataset, which is quite a bit larger than the Cambridge scenes, the result looks quite strange.

My own dataset ( A big building)
image

The test result
image

The mapping result of ACE
image

Is it safe to say that ACE doesn't support large datasets? The mappings of the 7-Scenes and Cambridge datasets are pretty good and I can see the details. I'm just curious. Hope to hear from you soon!

Failed to Load Encoder

Hi, I tried to reproduce the results on the 7-Scenes chess scene but I got this error:

INFO:ace_trainer:Loaded training scan from: datasets/7scenes_chess -- 4000 images, mean: 0.45 -0.74 0.30
Traceback (most recent call last):
  File "train_ace.py", line 126, in <module>
    trainer = TrainerACE(options)
  File "/home/nanda/dev/ace/ace_trainer.py", line 92, in __init__
    encoder_state_dict = torch.load(self.options.encoder_path, map_location="cpu")
  File "/home/nanda/.conda/envs/ace/lib/python3.8/site-packages/torch/serialization.py", line 815, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "/home/nanda/.conda/envs/ace/lib/python3.8/site-packages/torch/serialization.py", line 1033, in _legacy_load
    magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: invalid load key, 'v'.

Seems like I can't load the pre-trained encoder that you already shared. Do you have any idea about this? Thanks!

Cannot download encoder weights

Hi,

I tried to pull the code and got this error. Is it possible to upload the encoder weights somewhere else? Thanks.

Downloading ace_encoder_pretrained.pt (22 MB)
Error downloading object: ace_encoder_pretrained.pt (c69ca93): Smudge error: Error downloading ace_encoder_pretrained.pt (c69ca934d82056c9f2e9be9582d85841bd53355ac04a007c11618b1760c00a8c): batch response: This repository is over its data quota. Account responsible for LFS bandwidth should purchase more data packs to restore access.

Wayspot Datasets Visualization

Hi, I tried to reproduce results with your Wayspots dataset. For the statue scene visualization, I still can't understand why it looks like this. Should I rotate your dataset first, or is something else going on?

image

Hope to hear from you soon! Thanks!

Custom Dataset.

Thank you for releasing the code. Can you please give some guidance on how to generate the output map file on a custom dataset? I tried running the code on my custom dataset and it shows the following error. Once again, thank you.

serialization.py", line 1033, in _legacy_load
magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: invalid load key, 'v'

Question with regards to results on Indoor6 dataset

Hi, thank you for the great work.
I tried to use ACE on the Indoor6 dataset provided here: https://github.com/microsoft/SceneLandmarkLocalization. However, the results are not very good: the translation error reaches 1 meter and the rotation error can be up to 100 degrees. Because the Indoor6 dataset is collected at different times and days, it contains high illumination variations. Could ACE work in such cases, or is there any configuration that I missed?
01-frame000072
04-frame000022
15-frame000143

Reprojection Error

Could you please explain what the reprojection error shown on the right side of the video during mapping represents?

generate reconstructions

Hi

Thanks for your work. I'm just wondering if it's possible to generate the point cloud of the scene from the trained weights, as shown in the demo video?

Thanks

Pretrain Feature Extractor Backbone Network from ScanNet

Thank you very much for the code on GitHub. In Section 3.3 (Backbone Training) of the paper, you say, "Instead of training the backbone with one regression head for a single scene, we train it with N regression heads for N scenes, in parallel." I'd like to know exactly how many scenes this N represents in the parallel training. In addition, would you mind sharing how the 100 ScanNet scenes were selected? Very much looking forward to your reply.

Consistency in pose format for train/validation

It seems to me that the training data wants poses in a 4x4 rotation/translation matrix and the evaluation wants the pose rotation in a quaternion format. For the sake of consistency, would it make sense for these two formats to be aligned?

fisheye camera case

Thank you for this great relocalization framework!
I found that the demo results with the datasets you provided are impressive on my machine.

I have a simple question while trying to apply this framework in my project.
Could it be applied to the fisheye camera case if we have correct extrinsic camera poses?
I guess the reprojection error needs to be changed to an appropriate form that accounts for the lens distortion.
Could I get some advice?

Thank you in advance.
