
drl_based_selfdrivingcarcontrol's Introduction

DRL Based Self Driving Car Control

Version 1.8

Version information of this project

Unity Project is provided! Please refer to the simulator section.


Introduction

[2018. 10. 22] The paper of this project was accepted to the Intelligent Vehicles Symposium (IV) 2018!! 😄

[2019. 05. 28] The paper of this project was accepted to IEEE Transactions on Intelligent Vehicles!! 😄

IV2018 PPT

Link of IV2018 Paper

Link of IEEE Transactions on Intelligent Vehicles Paper



This repository is for the Deep Reinforcement Learning Based Self-Driving Car Control project from ML Jeju Camp 2017.

There are two main goals for this project.

  • Making a vehicle simulator with Unity ML-Agents.

  • Controlling a self-driving car in the simulator with some safety systems.

    As a self-driving car engineer, I used lots of vehicle sensors (e.g. RADAR, LIDAR, ...) to perceive the environment around the host vehicle. Also, there are a lot of Advanced Driver Assistance Systems (ADAS) that are already commercialized. I wanted to combine these things with my deep reinforcement learning algorithms to control a self-driving car.

A simple overview of my project is as follows.

Sensor data plotting

I use sensor data and camera images as the inputs of the DRL algorithm. The DRL algorithm decides an action according to these inputs. If that action could cause a dangerous situation, ADAS takes control of the vehicle to avoid a collision.
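
A minimal Python sketch of this decision loop is below; the object and method names (drl_agent, adas, select_action, ...) are illustrative assumptions, not the repo's actual API.

# Hypothetical sketch of the control loop described above.
def control_step(camera_image, sensor_data, drl_agent, adas):
    # The DRL agent picks an action from the camera image and sensor data.
    action = drl_agent.select_action(camera_image, sensor_data)
    # If the chosen action could cause a dangerous situation,
    # ADAS overrides it to avoid a collision.
    if adas.is_dangerous(action, sensor_data):
        action = adas.safe_action(sensor_data)
    return action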


Software and Hardware configuration of this project

Software

  • Windows 10 (64-bit)
  • Python 3.6.5
  • Anaconda 5.2.0
  • TensorFlow 1.8.0

Hardware

  • CPU: Intel(R) Core(TM) i7-4790K CPU @ 4.00 GHz

  • GPU: GeForce GTX 1080 Ti

  • Memory: 8 GB


How to Run this Project

  1. Download the GitHub repo.
  2. Open an ipynb file in the RL_algorithm folder.
  3. Set the environment path and run it!

Environment Path Setting

Env path

You should select your OS from Windows, Mac, or Linux.

Also, to use this environment on Linux, you should run chmod to change the access mode of the .x86_64 file. An example chmod command is as follows.

chmod -R 775 *.x86_64
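
As a rough guide, loading the environment in the notebooks looks like the sketch below. It assumes the old unityagents API that the notebooks import, and the build file names are examples only (the Linux build is the .x86_64 file mentioned above).

# Minimal sketch of selecting the environment path per OS (illustrative paths).
import platform
from unityagents import UnityEnvironment

system = platform.system()
if system == "Windows":
    env_name = "../environment/jeju_camp"         # Windows build (example name)
elif system == "Darwin":
    env_name = "../environment/jeju_camp.app"     # Mac build (example name)
else:
    env_name = "../environment/jeju_camp.x86_64"  # Linux build (run chmod first)

env = UnityEnvironment(file_name=env_name)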


Description of files

  • ***_Image.ipynb: network using only the camera image from the vehicle.
  • ***_sensor.ipynb: network using only the sensor data from the vehicle.
  • ***_image_sensor.ipynb: network using both the image and the sensor data of the vehicle.

I have also uploaded the other DQN-based algorithms, which I tested on games that I made. Check out my DRL GitHub repo.

This is the PPT file of my final presentation (Jeju Camp).


A specific explanation of my simulator and model follows.


Simulator

Sensor data plotting

I made this simulator to test my DRL algorithms. To test them, I need both sensor data and camera images as inputs, but there was no driving simulator that provides both. Therefore, I made one myself.

The simulator is made with Unity ML-Agents.


The Unity project of the simulator is provided without the paid assets!! Please open the Unity_SDK folder with Unity. You will need to buy and import the following assets into the project to use it.

SimpleSky

SimpleTown

SimpleRacer


Inputs

As I mentioned, the simulator provides two inputs to the DRL algorithm: the forward camera image and the sensor data. Examples of those inputs are as follows.

Front Camera Image | Sensor Data Plotting
Sensor data plotting | Sensor data plotting

Also, the vehicles in this simulator have some safety functions. These functions are applied to the other vehicles and to the ADAS version of the host vehicle. The sensor overview is as follows.

Sensor data plotting

The safety functions are as follows (a rough sketch in code follows the list).

  • Forward warning
    • Control the velocity of the host vehicle to match the velocity of the vehicle in front.
    • If the distance between the two vehicles gets too small, rapidly drop the velocity to the lowest velocity.
  • Side warning: no lane change.
  • Lane keeping: if the vehicle is not in the center of the lane, move it back to the center of the lane.
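
A rough Python sketch of these safety overrides, with assumed threshold values and names (not taken from the simulator):

# Hypothetical sketch of the ADAS safety overrides (thresholds assumed).
MIN_SPEED = 10.0  # assumed lowest velocity

def apply_safety(action, host_speed, front_speed, front_distance,
                 left_warning, right_warning):
    # Forward warning: match the front vehicle's speed; brake hard if too close.
    if front_distance < 5.0:           # assumed "too close" threshold
        host_speed = MIN_SPEED
    elif front_distance < 20.0:        # assumed following threshold
        host_speed = min(host_speed, front_speed)
    # Side warning: block lane changes toward an occupied side.
    if (action == "lane_change_left" and left_warning) or \
       (action == "lane_change_right" and right_warning):
        action = "do_nothing"
    return action, host_speed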

Vector Observation information

In this simulator, the size of the vector observation is 373.

  • 0 ~ 359: LIDAR data (1 reading per degree)
  • 360 ~ 362: left warning, right warning, forward warning (0: False, 1: True)
  • 363: normalized forward distance
  • 364: forward vehicle speed
  • 365: host vehicle speed

Indices 0 ~ 365 are used as the sensor input data, and indices 366 ~ 372 are used for sending information:

  • 366: number of overtakes in an episode
  • 367: number of lane changes in an episode
  • 368 ~ 372: longitudinal reward, lateral reward, overtake reward, violation reward, collision reward

(Specific information about the rewards is given in the Rewards section below.)
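
The index layout above can be sliced as follows; variable names are illustrative, and obs stands for one vector observation (e.g. env_info.vector_observations[0]).

# Slicing the 373-dimensional vector observation by the layout above.
import numpy as np

obs = np.zeros(373)              # placeholder for one vector observation

lidar            = obs[0:360]    # one range reading per degree
warnings         = obs[360:363]  # left, right, forward warning flags
forward_distance = obs[363]      # normalized forward distance
forward_speed    = obs[364]      # forward vehicle speed
host_speed       = obs[365]      # host vehicle speed

sensor_input = obs[0:366]        # fed to the network as the sensor input
info         = obs[366:373]      # counters and the five reward components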


Actions

The actions of the vehicle are as follows (an illustrative index mapping follows the list).

  • Do nothing
  • Acceleration
  • Deceleration
  • Lane change to left lane
  • Lane change to right lane
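
As an index mapping, this could look like the sketch below; the ordering of the actions is an assumption for illustration, not confirmed by the simulator.

# Hypothetical index mapping for the 5-action discrete space.
ACTIONS = {
    0: "do_nothing",
    1: "accelerate",
    2: "decelerate",
    3: "lane_change_left",
    4: "lane_change_right",
}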

Rewards

In this simulator, five different kinds of reward are used.

Longitudinal reward: (vehicle_speed - vehicle_speed_min) / (vehicle_speed_max - vehicle_speed_min)

  • 0: minimum speed, 1: maximum speed

Lateral reward: -0.5

  • During a lane change, the vehicle continuously receives the lateral reward.

Overtake reward: 0.5 * (num_overtake - num_overtake_old)

  • 0.5 per overtake

Violation reward: -0.1

  • Example: if the vehicle makes a left lane change while the left warning is active, it gets the violation reward (likewise for the forward and right warnings).

Collision reward: -10

  • If a collision happens, the vehicle gets the collision reward.

The sum of these five rewards is the final reward of this simulator.
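
Putting the five components together, the final reward can be sketched as below; the function signature and flags are illustrative, but the formulas are the ones listed above.

# Sketch of the final reward as the sum of the five components above.
def total_reward(speed, speed_min, speed_max, changing_lane,
                 num_overtake, num_overtake_old, violated, collided):
    longitudinal = (speed - speed_min) / (speed_max - speed_min)
    lateral      = -0.5 if changing_lane else 0.0
    overtake     = 0.5 * (num_overtake - num_overtake_old)
    violation    = -0.1 if violated else 0.0
    collision    = -10.0 if collided else 0.0
    return longitudinal + lateral + overtake + violation + collision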


Sliders

Slider

You can change some parameters with the sliders on the left side of the simulator.

  • Number of Vehicles (0 ~ 32): changes the number of other vehicles.
  • Random Action (0 ~ 6): changes the random-action level of the other vehicles (a higher value means more random actions).

Additional Options

Foggy Weather

If you set the Foggy Weather dropdown menu to On, fog will disturb the camera image as follows.

Foggy Option

The driver-view images in foggy weather are as follows.

Foggy Examples

Sensor Noise

Sensor Noise

Sensor noise can be applied!!

If you set the Sensor Noise dropdown to On, you can control the Noise Weight using the slider. The equation for adding noise to a parameter a is as follows.

a = a + (noise_weight * Random.Range(-a, a))
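
For reference, a Python equivalent of this Unity noise equation could look like the following sketch (Unity's Random.Range draws uniformly, which random.uniform mirrors here).

# Python equivalent of the noise equation above (for reference only).
import random

def add_noise(a, noise_weight):
    # Uniform noise in [-a, a], scaled by the Noise Weight slider value.
    return a + noise_weight * random.uniform(-a, a)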


DRL Model

For this project, I read the following papers.

  1. Human-level Control Through Deep Reinforcement Learning

  2. Deep Reinforcement Learning with Double Q-Learning

  3. Prioritized Experience Replay

  4. Dueling Network Architectures for Deep Reinforcement Learning

You can find the code for those algorithms in my DRL GitHub repo.

I applied algorithms 1 ~ 4 to my DRL model. The network model is as follows.

Sensor data plotting
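
As a rough illustration of how papers 2 and 4 combine, below is a minimal sketch of the Double DQN target and the dueling aggregation; this is a generic formulation, not code taken from this repo.

# Minimal sketch: dueling aggregation + Double DQN target (generic, assumed).
import numpy as np

def dueling_q(value, advantage):
    # Dueling head: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
    return value + advantage - advantage.mean(axis=1, keepdims=True)

def double_dqn_target(rewards, next_q_online, next_q_target, dones, gamma=0.99):
    # Online network selects the greedy action; target network evaluates it.
    greedy = np.argmax(next_q_online, axis=1)
    evaluated = next_q_target[np.arange(len(rewards)), greedy]
    return rewards + gamma * (1.0 - dones) * evaluated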


Result

Graphs

Average Speed | Average # of Lane Changes | Average # of Overtakes
Graph(Lane Change) Graph(Lane Change) Graph(Lane Change)

Input Configuration | Speed (km/h) | Number of Lane Changes | Number of Overtakes
Camera Only         | 71.0776      | 15                     | 35.2667
LIDAR Only          | 71.3758      | 14.2667                | 38.0667
Multi-Input         | 75.0212      | 19.4                   | 44.8

Before Training

Result(Before Training)

After Training

Result(After Training)

After Training (with fog)

Result(After Training_fog)

After training, the host vehicle drives much faster (almost at the maximum speed!!!) with fewer lane changes!! Yeah! 😄

Citation

@inproceedings{min2018deep,
  title={Deep Q Learning Based High Level Driving Policy Determination},
  author={Min, Kyushik and Kim, Hayoung},
  booktitle={2018 IEEE Intelligent Vehicles Symposium (IV)},
  pages={226--231},
  year={2018},
  organization={IEEE}
}
@article{min2019deep,
  title={Deep Distributional Reinforcement Learning Based High Level Driving Policy Determination},
  author={Min, Kyushik and Kim, Hayoung and Huh, Kunsoo},
  journal={IEEE Transactions on Intelligent Vehicles},
  year={2019},
  publisher={IEEE}
}


drl_based_selfdrivingcarcontrol's Issues

Mac version

Hey, I plan to use this simulator for my class project, and I was wondering if the Mac version is going to be released any time soon?

Thanks for this!

The use of the simulator on Ubuntu 16.04!

Thanks for your wonderful code!

My issue:
(I am following your work on Ubuntu 16.04 with a 2080 Ti.)
When I enter 'jupyter notebook' in the terminal and then run the file 'Double_Dueling_image.ipynb', the initial interface of Unity appears. Then nothing happens, unlike this:
image

Simulating the Training for a Semi-Autonomous Car (Level 2 Autonomy)

@Kyushik

Good Day!

Can you please clarify my questions?

  1. Can you please let me know if I can train "DRL_based_SelfDrivingCarControl" for a semi-autonomous car with Level 2 autonomy?

I am looking to train the following scenario on an Indian road (please review the attached screenshots) - the scenario is similar to the application of ADAS in cars like the Volvo XC90.

Scenario 1 - Car A and Car B are on the same side of the road (the lane concept is not included); Car B is stopped in front of Car A

Unity_Simulation_1

Scenario 2 - Car A moves to a particular distance (behind Car B), alerted by Radar / LIDAR (the cube's color changes from black to red) - Car A is manually driven by a human driver

Unity_Simulation_2

Scenario 3 - Realizing the Radar alert, Car A switches to autonomous mode, takes control from the driver, and steers Car A sideways to avoid a crash

Unity_Simulation_3

Can you please let me know if I can simulate such a training scenario in DRL? I am confused about how to drive the car manually during the DRL training.

Can you please help me?

Thanks
Guru

question about saved_networks

I ran the program and simulator according to your method and set it to training mode, but the trained model cannot be saved. Num_training is 1M, but on my computer the program runs slowly once the number of steps reaches 10K. Do I need to reset Num_training? The problem is that it does not generate a ckpt file; the file type it produces is different from a ckpt file, such as "events.out.tfevents.1527528894". I would like to ask if you have any good suggestions or a trained network that I can use. Thank you so much for helping me.

question about project

I'm sorry for disturbing you. I'm a beginner at DRL and interested in your project. I downloaded your project and the simulator, but I don't know how to run it successfully. I hope to get your help. Thank you.

How to run the simulator?

Hello @Kyushik,
I want to get the code running in this environment, but I get this error, and the simulator closes after a few moments of not moving:

UnityTimeOutException Traceback (most recent call last)
in <module>
----> 1 env = UnityEnvironment(file_name=env_name)
2
3 # Examine environment parameters
4 print(str(env))
5

~\anaconda3\lib\site-packages\mlagents_envs\environment.py in init(self, file_name, worker_id, base_port, seed, no_graphics, timeout_wait, additional_args, side_channels, log_folder)
215 )
216 try:
--> 217 aca_output = self._send_academy_parameters(rl_init_parameters_in)
218 aca_params = aca_output.rl_initialization_output
219 except UnityTimeOutException:

~\anaconda3\lib\site-packages\mlagents_envs\environment.py in _send_academy_parameters(self, init_parameters)
459 inputs = UnityInputProto()
460 inputs.rl_initialization_input.CopyFrom(init_parameters)
--> 461 return self._communicator.initialize(inputs)
462
463 @staticmethod

~\anaconda3\lib\site-packages\mlagents_envs\rpc_communicator.py in initialize(self, inputs)
102
103 def initialize(self, inputs: UnityInputProto) -> UnityOutputProto:
--> 104 self.poll_for_timeout()
105 aca_param = self.unity_to_external.parent_conn.recv().unity_output
106 message = UnityMessageProto()

~\anaconda3\lib\site-packages\mlagents_envs\rpc_communicator.py in poll_for_timeout(self)
94 """
95 if not self.unity_to_external.parent_conn.poll(self.timeout_wait):
---> 96 raise UnityTimeOutException(
97 "The Unity environment took too long to respond. Make sure that :\n"
98 "\t The environment does not need user interaction to launch\n"

UnityTimeOutException: The Unity environment took too long to respond. Make sure that :
The environment does not need user interaction to launch
The Agents' Behavior Parameters > Behavior Type is set to "Default"
The environment and the Python interface have compatible versions.

How can I fix this problem?

Clarification about sensor input

Hi @Kyushik
Can you please clarify the following questions for me?

  1. Is the LIDAR data (env_info.vector_observations[0]) received from the jeju_camp.x86_64 file?
  2. How are the distance / speed calculated by the LIDAR (as it is not physically present)?

Thanks in advance

Questions about project

Hello,
Is this an academic project? If so, were the results published elsewhere, and where can I find the corresponding paper?

All the best,
Eduardo

Reg- unable to run code

Hi,

I am trying to run your code through a Jupyter notebook, connecting from an Anaconda prompt using the method you described earlier (unzipping etc.), and I am getting a few issues like:

ModuleNotFoundError Traceback (most recent call last)
in <module>()
8 import time
9
---> 10 from unityagents import UnityEnvironment
11
12 get_ipython().run_line_magic('matplotlib', 'inline')

ModuleNotFoundError: No module named 'unityagents'

NameError: name 'UnityEnvironment' is not defined


NameError Traceback (most recent call last)
in <module>()
----> 1 env = UnityEnvironment(file_name=env_name)
2
3 # Examine environment parameters
4 print(str(env))
5

NameError: name 'UnityEnvironment' is not defined


error Traceback (most recent call last)
in <module>()
17
18 # Get information for update
---> 19 env_info = env.step(action_in)[default_brain]
20
21 next_observation_stack, observation_set, next_state_stack, state_set = resize_input(env_info, observation_set, state_set)

C:\Users\Q\Desktop\JejuCamp_ML_Agents\unityagents\environment.py in step(self, vector_action, memory, text_action)
469 self._conn.send(b"STEP")
470 self._send_action(vector_action, memory, text_action)
--> 471 return self._get_state()
472 elif not self._loaded:
473 raise UnityEnvironmentException("No Unity environment is loaded.")

C:\Users\Q\Desktop\JejuCamp_ML_Agents\unityagents\environment.py in _get_state(self)
285 self._data = {}
286 while True:
--> 287 state_dict, end_of_message = self._get_state_dict()
288 if end_of_message is not None:
289 self._global_done = end_of_message

C:\Users\Q\Desktop\JejuCamp_ML_Agents\unityagents\environment.py in _get_state_dict(self)
241 :return:
242 """
--> 243 state = self._recv_bytes().decode('utf-8')
244 if state[:14] == "END_OF_MESSAGE":
245 return {}, state[15:] == 'True'

C:\Users\Q\Desktop\JejuCamp_ML_Agents\unityagents\environment.py in _recv_bytes(self)
217 try:
218 s = self._conn.recv(self._buffer_size)
--> 219 message_length = struct.unpack("I", bytearray(s[:4]))[0]
220 s = s[4:]
221 while len(s) != message_length:

error: unpack requires a bytes object of length 4

Once I open the simulator app, I get the following exception at the end of the log file. I cannot find the .cs file. Please help me in this regard.

Mono path[0] = 'C:/DL/adas/jas/DRL_based_SelfDrivingCarControl-master (1)/DRL_based_SelfDrivingCarControl-master/environment/jeju_camp_Data/Managed'
Mono config path = 'C:/DL/adas/jas/DRL_based_SelfDrivingCarControl-master (1)/DRL_based_SelfDrivingCarControl-master/environment/jeju_camp_Data/MonoBleedingEdge/etc'
PlayerConnection initialized from C:/DL/adas/jas/DRL_based_SelfDrivingCarControl-master (1)/DRL_based_SelfDrivingCarControl-master/environment/jeju_camp_Data (debug = 0)
PlayerConnection initialized network socket : 0.0.0.0 55335
Multi-casting "[IP] 10.0.3.178 [Port] 55335 [Flags] 2 [Guid] 3442019743 [EditorId] 2618453742 [Version] 1048832 [Id] WindowsPlayer(LAPTOP-MDH1861V) [Debug] 0" to [225.0.0.222:54997]...
Started listening to [0.0.0.0:55335]
PlayerConnection already initialized - listening to [0.0.0.0:55335]
Initialize engine version: 2017.3.1f1 (fc1d3344e6ea)
GfxDevice: creating device client; threaded=1
Direct3D:
Version: Direct3D 11.0 [level 11.1]
Renderer: Intel(R) HD Graphics 5500 (ID=0x1616)
Vendor: Intel
VRAM: 1130 MB
Driver: 20.19.15.4642
Begin MonoManager ReloadAssembly

  • Completed reload, in 7.260 seconds
    Initializing input.

XInput1_3.dll not found. Trying XInput9_1_0.dll instead...
Input initialized.

Initialized touch support.

Setting up 2 worker threads for Enlighten.
Thread -> id: 3714 -> priority: 1
Thread -> id: 342c -> priority: 1
UnloadTime: 4.280462 ms
UnityAgentsException: The brain Brain was set to External mode but Unity was unable to read the arguments passed at launch.
at CoreBrainExternal.InitializeCoreBrain (Communicator communicator) [0x00059] in D:\UnityGames\ML_Agent_Jejucamp2017\Assets\Scripts\CoreBrainExternal.cs:37
at Brain.InitializeBrain (Academy aca, Communicator communicator) [0x0000e] in D:\UnityGames\ML_Agent_Jejucamp2017\Assets\Scripts\Brain.cs:209
at Academy.InitializeEnvironment () [0x00056] in D:\UnityGames\ML_Agent_Jejucamp2017\Assets\Scripts\Academy.cs:230
at Academy.Awake () [0x00002] in D:\UnityGames\ML_Agent_Jejucamp2017\Assets\Scripts\Academy.cs:208

(Filename: D:/UnityGames/ML_Agent_Jejucamp2017/Assets/Scripts/CoreBrainExternal.cs Line: 37)

Can you please suggest a step-by-step procedure to execute this, as I am a beginner in RL concepts... Thank you.

Could you tell me the code execution process for each ipynb file in the RL algorithms folder?

Hello, some questions about the simulator

Thanks for providing the DRL algorithm and the Unity environment. I wonder how the agent (self-driving car) in the Unity environment communicates with the other vehicles, how the self-driving car's state transitions, and how the reward is defined in the code of the Unity environment. Because the Unity environment is an executable program, we can't know the detailed information; is there some reference material about the internal code of the Unity environment? Thanks very much!

Clarification on working with the project in unity hub

Hi @Kyushik
My system specifications are,
Ubuntu 18.04(64 bit)
Processor Intel® Core™ i3-8109U CPU @ 3.00GHz × 4
Graphics Intel® HD Graphics (Coffeelake 3x8 GT3)
Memory 7.7 GiB

I would like to modify your code using Unity Hub. After opening the Unity SDK, when using the play option I got the following error, stating "The communicator was unable to connect. Please make sure the external process is ready to accept communication with Unity", as below.

Screenshot from 2020-01-10 09-47-15

As I am new to Unity development, can you please suggest how to run your car simulator like any other game in Unity, so that I can check and modify it as I wish?

Thanks in advance.

Do the float values equal meters in distance?

Hi,
In the DRL simulator, the distance between the agent and other obstacles is calculated as float values.
Are these float values equal to meters? If not, what do these float values represent?

Please help me to understand this.

Thanks in Advance
Malathi K

About training

Hi @Kyushik, I'm using Ubuntu 16.04. I have run the file Double_Dueling_image.ipynb. Num_training = 1000000 is done, but the training still has not stopped.

When will the training stop, or how can I stop the training?

Thanks in Advance

the simulation screen is stuck

Hello, I encountered a strange problem during development: the simulation screen is stuck. My operating system is Win7 64-bit; may I ask where the problem might be? If I load the simulator from the previous version or use env_name = "../environment/Planning/Windows/Planning", the picture does not freeze.

Lidar data plotting

How can you plot the LIDAR data in xy coordinates? I mean, how do you convert it?

Running Simulator

image

Do I need to install other dependencies, like Unity3D, to run this simulator? The simulator always crashes and stops responding while the progress is still at "observing". Also, do you have any idea how to make the simulator run smoothly on my PC?
BTW, I run this simulator with:
CPU: AMD Ryzen 5 4600H @ 3.00 GHz
GPU: NVIDIA GeForce GTX 1650
Memory: 8 GB

The simulator!

I am glad to work on your code. While working on it, the simulator is not loading. Can you describe the simulator connections? Does it use sockets, or does it start automatically with the Python code that you've provided?

Thanks,
Fayjie

Unity simulator crashing

The Unity simulator opens correctly when launched manually the first time. But when I run it with the Python code (DQN_image), it shows the Unity splash-screen logo and crashes. After that, when I try to open it manually, it crashes too. I have tried this with Windows 10 and Ubuntu 20.04.

Additionally, I am not able to run the .app of the simulator on macOS Catalina.

Thanks,
Alex

SocketException: Unable to connect because the target computer actively refused.

Hello, can you help me solve this problem? Thanks very much.
I really don't know where this path (D:\Github\ML_Agent_Jeju_Simulator\ML_Agent_Jejucamp2017\Assets\Scripts) comes from.

SocketException: Unable to connect because the target computer actively refused.

at System.Net.Sockets.Socket.Connect (System.Net.IPAddress[] addresses, System.Int32 port) [0x000c3] in <4b9f316768174388be8ae5baf2e6cc02>:0
at System.Net.Sockets.Socket.Connect (System.String host, System.Int32 port) [0x00007] in <4b9f316768174388be8ae5baf2e6cc02>:0
at ExternalCommunicator.InitializeCommunicator () [0x000b2] in D:\Github\ML_Agent_Jeju_Simulator\ML_Agent_Jejucamp2017\Assets\Scripts\ExternalCommunicator.cs:128
at Academy.InitializeEnvironment () [0x00094] in D:\Github\ML_Agent_Jeju_Simulator\ML_Agent_Jejucamp2017\Assets\Scripts\Academy.cs:235
at Academy.Awake () [0x00002] in D:\Github\ML_Agent_Jeju_Simulator\ML_Agent_Jejucamp2017\Assets\Scripts\Academy.cs:208

(Filename: <4b9f316768174388be8ae5baf2e6cc02> Line: 0)

Question about ".ckpt" files

Hi,

I have downloaded the GitHub repo and started the training process. May I know where the models get dumped?
I could not find any ".ckpt" files in that directory while the training was going on.

Permission Error

@Kyushik I'm using Ubuntu 16.04. When I run the Double_Dueling_image.ipynb file, I get a permission error.

env = UnityEnvironment(file_name=env_name)

# Examine environment parameters
print(str(env))

# Set the default brain to work with
default_brain = env.brain_names[0]
brain = env.brains[default_brain]

Screenshot from 2019-11-04 15-14-10

Can you please help me with this?

Thanks in Advance

Clarification about ML Agent

Hi,
I am new to this project. I would like to know whether you used Unity ML-Agents in this project. If so, can you please guide me with tutorials/links to learn more about it?
I would like to recreate this on my own for my college project; can you please guide me?

With curiosity,
Ranjani N

ipynb file

Which ipynb file should we run in the ML algorithms folder??
Sorry... I am a beginner.

Skip Frame = 4

I would like to ask: is this frame skip necessary, since the agent does not act according to pixels (frames) but acts according to sensor and camera features?

About the sensors and simulator scripts

Hi, congratulations on winning the camp, and on the paper being accepted to IV 2018!

In the README.md, you have mentioned: "I used lots of vehicle sensors (e.g. RADAR, LIDAR, ...) to perceive the environment around the host vehicle. Also, there are a lot of Advanced Driver Assistance Systems (ADAS) that are already commercialized."

  • I'm wondering if you can tell us where I can find those sensors and how I should use them.

  • And since the environment is made with purchased models, could you upload the Agent, Brain, and Academy scripts?

Thanks.
