utiasdsl / gym-pybullet-drones

PyBullet Gymnasium environments for single and multi-agent reinforcement learning of quadcopter control

Home Page: https://utiasDSL.github.io/gym-pybullet-drones/

License: MIT License

Python 98.88% Shell 1.12%
betaflight control crazyflie gym gymnasium multi-agent pybullet quadcopter quadrotor reinforcement-learning robotics sitl stable-baselines3 uav

gym-pybullet-drones's People

Contributors

ana-lys, beomyeolyu, bttner, dippa-1, hany606, hehuizheng, hidal00p, jacopopan, jamesjrxu, olmpya12, siqizhou, spencerteetaert, umutkavakli, ykeuter


gym-pybullet-drones's Issues

Google colab support

Will this repo work on Google Colab, since Colab supports both gym and pybullet? I don't have any GPU available for RL training. Is there any workaround for that?

How to modify singleagent.py with a PID controller to move in XYZ space?

I tried running singleagent.py with a 'pid' argument for the action space and 'hover' for the environment, but I am getting this error: AttributeError: 'HoverAviary' object has no attribute 'DRONE_MODEL'. I've been trying to modify the code, but I'm fairly stuck on adapting it to a PID controller instead of a single input. Any suggestions? Thanks!

Question about _computeReward in Multi Agent environment

First of all, thank you for your work on this simulator. I have a question: why does the code below pass index 0 to _getDroneStateVector instead of i? Is it a typo, or is there another meaning behind it?

  rewards = {}
  states = np.array([self._getDroneStateVector(0) for i in range(self.NUM_DRONES)])
  rewards[0] = -1 * np.linalg.norm(np.array([1, 1, 1]) - states[0, 0:3])**2
  for i in range(1, self.NUM_DRONES):
      rewards[i] = -1 * np.linalg.norm(states[i-1, 0:3] - states[i, 0:3])**2
  return rewards

This is from LeaderFollowerAviary.py.
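
For reference, this is presumably what the per-drone indexing would look like if the 0 is indeed a typo (a hypothetical correction only, not a confirmed fix from the maintainers):

  rewards = {}
  states = np.array([self._getDroneStateVector(i) for i in range(self.NUM_DRONES)])   # index i instead of 0
  rewards[0] = -1 * np.linalg.norm(np.array([1, 1, 1]) - states[0, 0:3])**2
  for i in range(1, self.NUM_DRONES):
      rewards[i] = -1 * np.linalg.norm(states[i-1, 0:3] - states[i, 0:3])**2
  return rewards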

Change of documentation format

This is not a feature request, but more of a suggestion. So far the code is nicely documented, but the format is not very conventional, as seen below.


   #### Initialize the environment ####################################################################
   ####################################################################################################
   #### Arguments #####################################################################################
   #### - drone_model (DroneModel)         desired drone type (associated to an .urdf file) ###########
   #### - num_drones (int)                 desired number of drones in the aviary #####################
   #### - neighbourhood_radius (float)     used to compute the drones' adjacency matrix, in meters ####
   #### - initial_xyzs ((3,1) array)       initial XYZ position of the drones #########################
   #### - initial_rpys ((3,1) array)       initial orientations of the drones (radians) ###############
   #### - physics (Physics)                desired implementation of physics/dynamics #################
   #### - freq (int)                       the frequency (Hz) at which the physics engine advances ####
   #### - aggregate_phy_steps (int)        number of physics updates within one call of .step() #######
   #### - gui (bool)                       whether to use PyBullet's GUI ##############################
   #### - record (bool)                    whether to save a video of the simulation ##################
   #### - obstacles (bool)                 whether to add obstacles to the simulation #################
   #### - user_debug_gui (bool)            whether to draw the drones' axes and the GUI sliders #######
   ####################################################################################################

I would suggest a more commonly used docstring format for Python (as used in the OpenAI gym repo), which is supported by multiple IDEs in terms of snippets and extra information when hovering over a function or class. This kind of formatting is well documented: https://www.python.org/dev/peps/pep-0257/

Awesome library!


   '''
       Parameters
       ----------
       action : TYPE
           DESCRIPTION.

       Returns
       -------
       clipped_action : TYPE
           DESCRIPTION.
   '''
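
As an illustration of the suggestion, the argument block quoted above could be rewritten as a NumPy-style docstring along these lines (a sketch only, abbreviated to a few of the parameters listed earlier):

    """Initialize the environment.

    Parameters
    ----------
    drone_model : DroneModel
        Desired drone type (associated to an .urdf file).
    num_drones : int
        Desired number of drones in the aviary.
    freq : int
        The frequency (Hz) at which the physics engine advances.
    aggregate_phy_steps : int
        Number of physics updates within one call of .step().
    gui : bool
        Whether to use PyBullet's GUI.
    """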

Reversing the Reward Function

Hi,

First of all, great work!!!!

I wanted to reverse the reward function (basically use a positive reward), but doing so makes the training end with the message "Stopping training because the mean reward 0.00 is above the threshold 0".

Is there a way to increase this threshold? I am working with HoverAviary and A2C. Also, do I need to make any other changes besides the reward function?

P.S. Is there also a way to increase the episode duration? From what I read in other issues, it has been set to 5 seconds, right?

Thanks.

Collision detection

Hey,

I am trying to find out whether there is any way to detect collisions with obstacles. Do you know if there is any such feature in this implementation, or in PyBullet in general?

For context, I am planning to add a cost function that assigns high costs to collisions with objects in the environment.
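
PyBullet itself exposes contact queries, so one possible sketch (a hypothetical helper; obstacle_id is assumed to be the body id returned by p.loadURDF for the obstacle, and DRONE_IDS/CLIENT are the attributes used elsewhere in this repo) would be:

    import pybullet as p

    def drone_hit_obstacle(env, drone_idx, obstacle_id):
        # getContactPoints returns a (possibly empty) list of contacts
        # between the two bodies at the current simulation step.
        contacts = p.getContactPoints(bodyA=env.DRONE_IDS[drone_idx],
                                      bodyB=obstacle_id,
                                      physicsClientId=env.CLIENT)
        return len(contacts) > 0

A cost term could then add a large penalty whenever this returns True.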

Thanks

Alter the Reward function used in learn.py

Hi Jacopo, could you please share the reward function you used in learn.py? Could you also suggest how I can alter the reward function already used in learn.py?
And how much time does it take to train and reach satisfactory results with learn.py?
Best Wishes.
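
A common pattern (a sketch only, not an official recipe from the maintainers) is to subclass the aviary you train on and override _computeReward:

    import numpy as np
    from gym_pybullet_drones.envs.single_agent_rl.HoverAviary import HoverAviary  # import path may differ across versions

    class MyHoverAviary(HoverAviary):
        def _computeReward(self):
            # Hypothetical reward: negative squared distance to a 1 m hover point.
            state = self._getDroneStateVector(0)
            return -np.linalg.norm(np.array([0, 0, 1]) - state[0:3])**2

The custom class can then be passed to the RL algorithm in place of the original environment.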

Additional action spaces

Hi,

Further to my earlier question about RL with vision, I wanted to see how a 3D action space is implemented through the flythrugate aviary in the single-agent script. Is that a feature currently implemented in the example? I can't seem to find the option anywhere.

How to move drone instantaneously?

Is there a way to set the drone's position, RPY, and velocities instantaneously without having to use physics?

Also, I'm having a hard time learning about the features of this library. Is there any documentation besides the readme?
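
At the PyBullet level, the reset functions can teleport a body without stepping the physics; a minimal sketch (using the DRONE_IDS and CLIENT attributes seen elsewhere in this repo, with a hypothetical target pose) would be:

    import pybullet as p

    p.resetBasePositionAndOrientation(env.DRONE_IDS[0],
                                      posObj=[0, 0, 1],                            # new XYZ position
                                      ornObj=p.getQuaternionFromEuler([0, 0, 0]),  # new RPY as a quaternion
                                      physicsClientId=env.CLIENT)
    p.resetBaseVelocity(env.DRONE_IDS[0],
                        linearVelocity=[0, 0, 0],
                        angularVelocity=[0, 0, 0],
                        physicsClientId=env.CLIENT)

Note that the aviary caches kinematic information internally, so its own state arrays may not reflect the teleport until the next step.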

[Question] Frequency

What does the freq parameter do? When I change it from 240 to 1, it makes no difference in the amount of time it takes to complete a step. Also, in what scenario would you want to change aggregate_phy_steps from 1?
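
From the argument descriptions quoted earlier (freq is the rate at which the physics engine advances, aggregate_phy_steps is the number of physics updates folded into one .step() call), the simulated time per step works out roughly as in this sketch (an illustration, not repository code):

    freq = 240               # physics updates per simulated second
    aggregate_phy_steps = 5  # physics updates per env.step() call

    sim_seconds_per_step = aggregate_phy_steps / freq   # ~0.021 s of simulated time per action
    control_rate_hz = freq / aggregate_phy_steps        # ~48 actions per simulated second

    # Wall-clock time per step mostly depends on CPU speed, not on freq,
    # which is presumably why lowering freq does not make a step feel faster.
    # Raising aggregate_phy_steps is typically of interest for RL, where one may
    # want fine-grained physics but a lower-frequency action/observation loop.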

Action explain

Can someone explain what each element of the action is? I didn't find any wiki for this project!

Cannot connect to X server

Describe the bug
When running on either Ubuntu 18 or Google Colab, I consistently receive the following error when using learn.py: Cannot connect to X server.
To Reproduce
Steps to reproduce the behavior:

  1. Run the script examples/learn.py in Google Colab.

Expected behavior
I would expect to be able to see the videos of the drone flying, but I'm not able to.

fly.py is slow with vision = True (VisionAviary)

Hi,
I'd like to get depth images from the drones and have a few questions.
I tried fly.py with the VisionAviary object by setting vision = True. However, the GUI shows the drones moving very slowly, even though the camera image size is only 64x48. Is this expected? Is there any way to speed it up? And is there a laser scanner available so that I can get depth measurements faster?

Thank you so much.

Subprocess issue

Hi there,
Thanks for your amazing contribution and for putting together such an awesome project.

I have been testing all of your code and algorithms, which all work fine for me except singleagent.py and multiagent.py, where I get this error; hopefully you can help me with it:

Traceback (most recent call last):
  File "C:\Users\yacine\Desktop\gym-pybullet-drones-master\gym-pybullet-drones-master\experiments\learning\multiagent.py", line 157, in <module>
    git_commit = subprocess.check_output(["git", "describe", "--tags"]).strip()
  File "C:\Users\yacine\AppData\Local\Programs\Python\Python39\lib\subprocess.py", line 419, in check_output
    return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
  File "C:\Users\yacine\AppData\Local\Programs\Python\Python39\lib\subprocess.py", line 500, in run
    with Popen(*popenargs, **kwargs) as process:
  File "C:\Users\yacine\AppData\Local\Programs\Python\Python39\lib\subprocess.py", line 946, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "C:\Users\yacine\AppData\Local\Programs\Python\Python39\lib\subprocess.py", line 1413, in _execute_child
    hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
FileNotFoundError: [WinError 2] Le fichier spécifié est introuvable (translation: the specified file could not be found)

P.S.: I tried commenting out

    git_commit = subprocess.check_output(["git", "describe", "--tags"]).strip()
    with open(filename+'/git_commit.txt', 'w+') as f:
        f.write(str(git_commit))

and it worked for the single-agent script but not for the multi-agent one.

Thanks!
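
A possible workaround (a sketch, not the maintainers' fix) is to guard the git call so a missing git executable on Windows does not crash the script:

    import subprocess

    try:
        git_commit = subprocess.check_output(["git", "describe", "--tags"]).strip()
    except (subprocess.CalledProcessError, FileNotFoundError, OSError):
        # git is not on PATH (common on Windows) or no tags are available.
        git_commit = b"unknown"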

Possibility of software-in-the-loop (SIL/SITL) testing

Thanks for this cool project - I'm sure it will push research a step forward!

I came across it while working with Crazyflies and was excited that you use the Crazyflie as your default quadrotor. Please let me know if I understand the details correctly: you use only the dynamical properties of the Crazyflie for this default quadrotor, but the project does not involve the Crazyflie firmware at all, right? If that is correct, software-in-the-loop (SIL/SITL) testing for the Crazyflie is something your approach has not tackled (yet).

Thanks for your help! Best,
Dominik

ValueError in _updateAndStoreKinematicInfo

I have a fresh install of the repo.

I tried running the examples/fly.py file. The GUI opened and everything initialized, then I got an error in _updateAndStoreKinematicInfo during the first step.

I am running on Windows.

self.pos[i], self.quat[i] = p.getBasePositionAndOrientation(self.DRONE_IDS[i], physicsClientId=self.CLIENT)
ValueError: setting an array element with a sequence.
numActiveThreads = 0
stopping threads
Thread with taskId 0 with handle 0000000000000D44 exiting
Thread TERMINATED
finished
numActiveThreads = 0
btShutDownExampleBrowser stopping threads
Thread with taskId 0 with handle 00000000000002A4 exiting
Thread TERMINATED

GUI rendering does not work properly

Describe the bug
GUI rendering does not work properly

However, the backend output looks fine:

jobs@gdnpc:~/sim/gym-pybullet-drones/examples$ python3 fly.py
pybullet build time: Apr 25 2021 07:44:46
[INFO] BaseAviary.init() loaded parameters from the drone's .urdf:
[INFO] m 0.027000, L 0.039700,
[INFO] ixx 0.000014, iyy 0.000014, izz 0.000022,
[INFO] kf 0.000000, km 0.000000,
[INFO] t2w 2.250000, max_speed_kmh 30.000000,
[INFO] gnd_eff_coeff 11.368590, prop_radius 0.023135,
[INFO] drag_xy_coeff 0.000001, drag_z_coeff 0.000001,
[INFO] dw_coeff_1 2267.180000, dw_coeff_2 0.160000, dw_coeff_3 -0.110000
startThreads creating 1 threads.
starting thread 0
started thread 0
argc=2
argv[0] = --unused
argv[1] = --start_demo_name=Physics Server
ExampleBrowserThreadFunc started
X11 functions dynamically loaded using dlopen/dlsym OK!
X11 functions dynamically loaded using dlopen/dlsym OK!
Creating context
Created GL 3.3 context
Direct GLX rendering context obtained
Making context current
GL_VENDOR=NVIDIA Corporation
GL_RENDERER=GeForce RTX 2080 Ti/PCIe/SSE2
GL_VERSION=3.3.0 NVIDIA 460.73.01
GL_SHADING_LANGUAGE_VERSION=3.30 NVIDIA via Cg compiler
pthread_getconcurrency()=0
Version = 3.3.0 NVIDIA 460.73.01
Vendor = NVIDIA Corporation
Renderer = GeForce RTX 2080 Ti/PCIe/SSE2
b3Printf: Selected demo: Physics Server
startThreads creating 1 threads.
starting thread 0
started thread 0
MotionThreadFunc thread started
viewMatrix (-0.8660253882408142, -0.2499999701976776, 0.4330126941204071, 0.0, 0.0, 0.8660253286361694, 0.4999999701976776, 0.0, -0.4999999701976776, 0.4330126643180847, -0.75, 0.0, -0.0, 5.960464477539063e-08, -2.999999761581421, 1.0)
projectionMatrix (1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, -1.0000200271606445, -1.0, 0.0, 0.0, -0.02000020071864128, 0.0)
/home/geesara/.local/lib/python3.6/site-packages/gym/logger.py:30: UserWarning: WARN: Box bound precision lowered by casting to float32
warnings.warn(colorize('%s: %s'%('WARN', msg % args), 'yellow'))
ven = NVIDIA Corporation
ven = NVIDIA Corporation

...
To Reproduce
Just running the python3 fly.py example.

Expected behavior
The GUI should render properly; it is not clear why it does not.

Screenshots
This is the kind of output I am getting

[screenshot of the broken GUI rendering omitted]

Desktop (please complete the following information):

  • OS: Ubuntu 18,
  • Software Python 3.6

setStepTime and simulation frequency

Hey,
thank you so much for this great repository, I really enjoy it!!

From lectures in signal theory I remember the Nyquist-Shannon sampling theorem, which says that you should sample at twice the frequency of the highest frequency component in the signal.
Basically, we set this sampling rate by choosing the simulation frequency in each environment class, so that for each step(..) call the physics engine advances its simulation by 1/frequency. The default rate is set to 240 Hz.

My question is: how did you choose this value? I guess the signal components of an IMU measurement can be higher than 240 Hz / 2 = 120 Hz (sampling theorem).

Thank you so much for your help in advance.
Best regards from Germany - Simon

DynAviary nnlsRPM bug

Describe the bug
There's a bug when I try to use DynAviary on fly.py

To Reproduce
Steps to reproduce the behavior:

  1. To fly.py, import DynAviary e.g. on line 33:
    from gym_pybullet_drones.envs.DynAviary import DynAviary

  2. Comment out the VisionAviary and CtrlAviary envs.

  3. Add

                    num_drones=ARGS.num_drones,
                    initial_xyzs=INIT_XYZS,
                    initial_rpys=INIT_RPYS,
                    physics=ARGS.physics,
                    neighbourhood_radius=10,
                    freq=ARGS.simulation_freq_hz,
                    aggregate_phy_steps=AGGR_PHY_STEPS,
                    gui=ARGS.gui,
                    record=ARGS.record_video,
                    obstacles=ARGS.obstacles,
                    user_debug_gui=ARGS.user_debug_gui
                    )

in place of the if ARGS.vision if-else statements.
4. Run python fly.py --physics dyn

Expected behavior
I'm expecting the drones to fly similarly to CtrlAviary. Instead, the drones immediately fly out of view on the 2nd step of the simulation.

The warnings are:

[WARNING] iter 5 in utils.nnlsRPM(), unfeasible thrust 17106.24 outside range [0, 0.60]
[WARNING] iter 5 in utils.nnlsRPM(), unfeasible roll torque 17106.24 outside range [-0.01, 0.01]
[WARNING] iter 5 in utils.nnlsRPM(), unfeasible pitch torque 17106.24 outside range [-0.01, 0.01]
[WARNING] iter 5 in utils.nnlsRPM(), unfeasible yaw torque 17106.24 outside range [-0.01, 0.01]
[WARNING] iter 5 in utils.nnlsRPM(), unfeasible squared rotor speeds, using NNLS
Negative sq. rotor speeds:       [-525076195938723.56, 1516331668969410.00, -525076195938723.62, -412045601889035.19]           Normalized: [-0.30, 0.87, -0.30, -0.24]
NNLS:                            [0.00, 1378983135006398.25, 0.00, 0.00]                        Normalized: [0.00, 1.00, 0.00, 0.00]            Residual: 1630457287724757.50
[WARNING] iter 5 in utils.nnlsRPM(), unfeasible thrust 17965.44 outside range [0, 0.60]


Desktop (please complete the following information):
Ubuntu 18.04, Python 3.7 (everything else works as normal)

Thanks for your time!

Support for external radio controllers [Enhancement]

Considering that drones are controlled using a radio controller in real life, it would be a great addition to add support for an external radio controller to control drones in the pybullet simulator. Connection shouldn't be an issue since these controllers come with a wired connection to a PC. (Similar to ones used in FPV racing simulators. Example: https://www.velocidrone.com/)

  • It would also be great to add bindings for additional switches on the radio controller to enable/disable a custom control script, such as when one wants to trigger an RL control algorithm to take over from manual control while in flight.

Failed to make context current (eglError: 12288)

Hello, @JacopoPan.
I tried to look at the result of learning, but I get an error: Failed to make context current (eglError: 12288).

Steps to reproduce the behavior:
Learning commands:
conda activate env
python singleagent.py --env takeoff --algo ppo --pol mlp --input rpm

Test command: python test_singleagent.py --file save-takeoff-ppo-mlp-rpm-10.28.2020_09.57.45

This is the result of the test command:

pybullet build time: Oct 8 2020 00:10:46
[INFO] BaseAviary.init() loaded parameters from the drone's .urdf:
[INFO] m 0.027000, L 0.039700,
[INFO] ixx 0.000014, iyy 0.000014, izz 0.000022,
[INFO] kf 0.000000, km 0.000000,
[INFO] t2w 2.250000, max_speed_kmh 30.000000,
[INFO] gnd_eff_coeff 11.368590, prop_radius 0.023135,
[INFO] drag_xy_coeff 0.000001, drag_z_coeff 0.000001,
[INFO] dw_coeff_1 2267.180000, dw_coeff_2 0.160000, dw_coeff_3 -0.110000
argv[0]=--opengl2
startThreads creating 1 threads.
starting thread 0
started thread 0
argc=3
argv[0] = --unused
argv[1] = --opengl2
argv[2] = --start_demo_name=Physics Server
ExampleBrowserThreadFunc started
X11 functions dynamically loaded using dlopen/dlsym OK!
X11 functions dynamically loaded using dlopen/dlsym OK!
4

visual 0x24 selected

GL_VENDOR=NVIDIA Corporation
GL_RENDERER=GeForce GTX 1050/PCIe/SSE2
GL_VERSION=4.6.0 NVIDIA 455.28
GL_SHADING_LANGUAGE_VERSION=4.60 NVIDIA
pthread_getconcurrency()=0
b3Printf: Selected demo: Physics Server
startThreads creating 1 threads.
starting thread 0
started thread 0
MotionThreadFunc thread started
viewMatrix (-0.8660253882408142, -0.2499999701976776, 0.4330126941204071, 0.0, 0.0, 0.8660253286361694, 0.4999999701976776, 0.0, -0.4999999701976776, 0.4330126643180847, -0.75, 0.0, -0.0, 5.960464477539063e-08, -2.999999761581421, 1.0)
projectionMatrix (1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, -1.0000200271606445, -1.0, 0.0, 0.0, -0.02000020071864128, 0.0)
/home/yoko/anaconda3/envs/UAV/lib/python3.7/site-packages/gym/logger.py:30: UserWarning: WARN: Box bound precision lowered by casting to float32
warnings.warn(colorize('%s: %s'%('WARN', msg % args), 'yellow'))
[INFO] BaseAviary.init() loaded parameters from the drone's .urdf:
[INFO] m 0.027000, L 0.039700,
[INFO] ixx 0.000014, iyy 0.000014, izz 0.000022,
[INFO] kf 0.000000, km 0.000000,
[INFO] t2w 2.250000, max_speed_kmh 30.000000,
[INFO] gnd_eff_coeff 11.368590, prop_radius 0.023135,
[INFO] drag_xy_coeff 0.000001, drag_z_coeff 0.000001,
[INFO] dw_coeff_1 2267.180000, dw_coeff_2 0.160000, dw_coeff_3 -0.110000
Loaded EGL 1.5 after reload.
Failed to make context current (eglError: 12288)

Simple question

Firstly, I'm sorry to disturb you! I want to know whether the model is an accurate model of the Crazyflie 2; I don't know whether the aerodynamics are considered. If I implement a low-level controller and it performs well in gym-pybullet-drones, will the controller also perform well on a hardware Crazyflie 2? I want to transfer an RL attitude controller from the simulation environment to a real platform, and I don't know if this simulation supports that. Thanks.

[QUESTION] Training complex scenarios

When using HoverAviary with singleagent.py I can nicely simulate training toward mission points via parameters like MAX_Z (max height to reach) or MAX_LIN_VEL_Z (how fast to move up).

I can see the RPM curves being perfectly smooth for my mission goal with less than 10 minutes of training, awesome!

Now I am trying to understand how to proceed from here to more complex scenarios. I introduced the concept of wind, so I applied a constant Y-directed force of 0.04 at every step in BaseAviary with p.applyExternalForce.

This seems to do the job for the wind simulation when running fly.py, but when training on it I cannot see any RPM variation being tried during training (RPM plot omitted).

The UAV consequently "moves with the wind"


My question is: how can I add variation to the motor RPMs during the training steps, so that I end up with a wind-compensating controller like the one described in this example?

When running my training with PPO I do get variation in RPM, but it is synced across all rotors, so no net force or moment is created to counteract the wind (RPM plot omitted).

P.S. This toolset gives one of the best experiences for working with a UAV sim: I can manipulate my UAV, see how it flies with fly.py, train a controller and watch its progress while it's training with test_singleagent.py, and all your code is well structured and documented. Big thanks!

EDIT:

I might have messed up the wind idea; I've added

        # wind
        p.applyExternalForce(self.DRONE_IDS[nth_drone],
                             -1,
                             forceObj=[0, 0.10, 0],
                             posObj=[0, 0, 0],
                             flags=p.LINK_FRAME,
                             physicsClientId=self.CLIENT
                             )

inside BaseAviary just before the propeller forces are added

EDIT2:
I made sure to constrain the mission by setting

MAX_XY=0.5
MAX_LIN_VEL_XY=0.4
MAX_Z = 1

Question on drone dynamics

Hi Jacopo,
Thank you for working on this simulator. I have a question on the dynamics of the drones. In gym-pybullet-drones/gym_pybullet_drones/envs/BaseAviary.py, line 851:
rpy = rpy + self.TIMESTEP * ang_v

Please correct me if I'm wrong here. I think the angular velocity is generally not the same as the roll, pitch and yaw rates, so I wonder if this is a bug. To calculate the new rotation matrix after one TIMESTEP, I think it should be something like:
next_rotation = matmul(rotation,expm(skew(self.TIMESTEP * ang_v)))
And then the new roll, pitch, yaw can be calculated from this new rotation matrix.
Best.
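
A minimal sketch of the integration step suggested above, assuming scipy is available, that rotation is the current 3x3 body-to-world rotation matrix, and that ang_v is the body-frame angular velocity (an illustration of the proposal, not the repository's implementation):

    import numpy as np
    from scipy.linalg import expm
    from scipy.spatial.transform import Rotation

    def skew(v):
        # Skew-symmetric matrix so that skew(v) @ w == np.cross(v, w).
        return np.array([[0, -v[2], v[1]],
                         [v[2], 0, -v[0]],
                         [-v[1], v[0], 0]])

    next_rotation = rotation @ expm(skew(TIMESTEP * ang_v))
    next_rpy = Rotation.from_matrix(next_rotation).as_euler('xyz')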

Installation of the conda environment for windows

Is your feature request related to a problem? Please describe.
The file files/conda_req_list is set up for macOS (OSX).

Describe the solution you'd like
Is it possible to upload a files/conda_req_list for Windows x64?

Ray dependency outdated

Describe the bug
When installing as described and running python examples/learn.py --rllib True I'm faced with an ImportError: libIlmImf-2_3.so.24: cannot open shared object file: No such file or directory (Arch Linux).

When looking at /usr/lib/ I can see that pip install ray[rllib] (or something different) installed libIlmImf-2_5.so.25 which seems to be coming from openexr 2.5.3-6

To Reproduce
Steps to reproduce the behavior:
python examples/learn.py --rllib True

Expected behavior
The correct version of the shared library should be found and loaded.


Desktop (please complete the following information):

  • Arch (up2date)
  • python 3.7
  • conda environment


EDIT: symlinking the files (it's 4 or 5 dependencies) from 2.5.so.25 to 2.3.so.24 does not work and will result in an undefined symbol error

Has anyone successfully trained the multi-agent problem FlockAviary?

First of all, thank you for your work on this simulator.

After training the FlockAviary environment using multiagent.py for 200 iterations and then testing it using test_multiagent.py, the result is not as expected (plots omitted): the z position goes above 100, even though from the _computeReward function the goal is to make multiple drones hover at z = 1.

After looking at progress.csv, the reward is already steady (around -29), so I assume that increasing the number of iterations will not affect the performance.

Is this a problem with the environment (FlockAviary.py), with the algorithm (multiagent.py), or with something else?

How can I change the collision set of the cf2p

I use your cf2p drone model for collision avoidance. However, I have a problem: collisions are only detected for the main body, and the prop links are ignored, so the prop links just pass through the wall without any collision happening. What can I do so that a collision also happens when the prop links hit the wall, not only the base body?
Waiting for your response, thank you!

Navigation with a Learned Policy

Hi,

Firstly, thank you for making this amazing piece of work available. I've learned a lot from your code.

I have mainly been interested in the singleagent.py problems. I've worked with all of the different aviaries and the agent does not seem to be able to solve any of the control problems. So for now, I have just been working on the simplest problem, hoveraviary.py, where the agent learns to move the drone from its initial conditions to the position [0, 0, 1] and then hover at that position. I've tried quite a few different things to get the agent to solve the problem, and what I've mostly been looking at are different approaches to the reward function _computeReward() and the termination conditions _computeDone().

One of the many iterations of the reward function for the hoveraviary.py, which I tried before in many different forms, resembles the following:

    def _computeReward(self):

        state = self._getDroneStateVector(0)

        pos = state[0:3]
        att = state[7:10]
        vel = state[10:13]
        ang_vel = state[13:16]

        target_pos = np.array([0, 0, 1])
        pos_err = np.linalg.norm(target_pos - pos)

        att_err = np.linalg.norm(att)
        vel_err = np.linalg.norm(vel)
        ang_vel_err = np.linalg.norm(ang_vel)

        W_pos_err = 1
        W_att = 0
        W_vel = 0
        W_ang_vel = 0

        reward = ( -1 * W_pos_err * pos_err ) + ( -1 * W_att * att_err ) + ( -1 * W_vel * vel_err ) + ( -1 * W_ang_vel * ang_vel_err )

        if pos_err < 0.0001:
            reward += 1

        return reward

where W_pos_err, W_att, W_vel and W_ang_vel are weightings applied to different state errors from the desired set-point values. I've tested out a few different weightings, but none have worked out so far. In the above block of code, I just set the weightings such that the reward function more closely resembles the original. The function includes a conditional reward bonus as well if the drone is able to stay within a 0.0001 error of the set-point.

I also tried entirely different approaches to the reward function. Unfortunately, none of the iterations of this function (including the one above) worked out, even after training for over 30 million time steps. It seems that between the 4-dimensional continuous action space, the complexity of learning the drone dynamics through exploring this massive action space, and the challenging task of simultaneously learning these dynamics and using them to move the drone from one point to a completely different point in space in a stable fashion, the problem is just far too complicated to learn within a reasonable time.

So I had a simple idea of how to better guide learning, and the results were very promising:
I initialised the drone such that it was already located at the desired set-point. The objective of the agent was now only to keep the drone at this set-point.

Of course, the intuitive reason why this works so much better is that the problem is much simpler than moving from point A to point B, but I think another way of understanding it is that the agent can now focus on learning the basic dynamics of the drone, instead of simultaneously trying to learn how to move the drone around and how to manipulate these dynamics to move from one point to another.

The idea is that once the model is trained, it can be used as a basis for learning much more complex problems by continuing to train the same model.

But even with this much simpler problem, after more than 20 million time steps, the agent would only manage to get the drone to hover at the set-point for about 1 second before beginning to oscillate around and crash.

So, another change which I made was to the termination conditions in _computeDone(). If the drone began to drift too far away from the desired set-point, I would terminate the episode immediately so that the agent does not spend any more time exploring an action space that was already a lost cause. This immensely improved the learning rate, and the agent was able to figure out how to stabilise the drone in about 5 million time steps! The termination condition function was as follows:

    def _computeDone(self):

        state = self._getDroneStateVector(0)

        curr_pos = state[0:3]
        curr_att = state[7:10]
        curr_vel = state[10:13]
        curr_ang_vel = state[13:16]

        target_pos = np.array([0, 0, 1])
        pos_err = np.linalg.norm(target_pos - curr_pos)

        done = False

        if self.step_counter/self.SIM_FREQ > self.EPISODE_LEN_SEC:
            done = True

        if pos_err > 0.01:
            done = True

        return done

This was something I could not do as easily with the original hover aviary problem description (point A to point B), because terminating early would create inconsistent episode rewards. Recall that the original reward function was really a "cost function" where the agent receives penalties for increasing state errors. What would happen, at times, is that the agent would notice that if it went extremely off course from reaching point B, the episode would terminate earlier and it would receive fewer penalties. As a result, the agent would intentionally fly off course as soon as possible to terminate the episode and receive a lesser penalty (i.e. receive a higher reward). The agent is so greedy smh - it only wants one thing.
I guess the issue could be fixed by having a positive-valued reward function instead of penalties, so that the agent keeps trying to reach the set-point by collecting the most reward instead of avoiding the most penalty. Another option was to give the drone a massive negative reward if the episode terminated early. These are just some of the options I briefly tried out.

But anyway, for this new hover problem, another useful feature of having the episode terminate early when the drone looks like it is about to crash is that, during training, the episode length ep_len_mean printed in the terminal becomes a direct indicator of whether the agent has converged to a valid solution. So now, instead of monitoring the average reward ep_rew_mean, which has some unknown value at the converged solution, you can just look at the episode length, because the agent can only have solved the stabilisation problem if it keeps the drone at the set-point for the entire duration of the episode.

Hopefully this is helpful to anyone who is trying to get, at the very least, a trained agent that works. As mentioned earlier, this hover model could be used as a base for training other, more complicated tasks. However, for now, my hope is that with this simple problem which the agent was able to solve, the agent will be able to generalise to other more complicated navigational problems, which brings me to my actual question:

How can I change the set-point which a trained model is trying to reach? In all the aviaries, the objective position is specified internally within _computeReward(). What if I wanted to feed a stream of new set-points to the agent to have it track a trajectory instead of training the agent to follow a specific trajectory?

What I've thought of so far is that maybe the desired set-point could be an additional argument to the environment class. Something resembling the following:

class BaseAviary(gym.Env):

    def __init__(self,
                 drone_model: DroneModel=DroneModel.CF2X,
                 num_drones: int=1,
                 neighbourhood_radius: float=np.inf,
                 initial_xyzs=None,
                 initial_rpys=None,
                 physics: Physics=Physics.PYB,
                 freq: int=240,
                 aggregate_phy_steps: int=1,
                 gui=False,
                 record=False,
                 obstacles=False,
                 user_debug_gui=True,
                 vision_attributes=False,
                 dynamics_attributes=False,
                 set_point=None ########## Additional set_point argument ##########
                 ):

and from there, the set-point would be passed to _computeReward() and _computeDone() like this:

def _computeReward(self, set_point):
    ...
    return reward

def _computeDone(self, set_point):
    ...
    return done

_computeReward() would use the set-point to calculate the reward as the agent tries to reduce the state error, and _computeDone() would tell the agent whether it has reached the set-point and can move on to the next. So, there would be some sort of outer loop that moves between set-points and creates an instance of the environment with the current set-point as an argument. The environment is then passed to the trained model to predict the next action. The state and the required action for that state are calculated continuously with an inner loop that only terminates when _computeDone() returns True, which tells us that the drone has reached the set-point. From there, we go back to the outer loop, which steps to the next set-point, and a new instance of the environment is created. The new set-point is passed to the new environment, and the loop continues until the entire trajectory has been tracked. Something like this:

target_positions = np.array([[x_1, y_1, z_1], [x_2, y_2, z_2], ...])

for target_pos in target_positions:

    env = HoverAviary(initial_xyzs=None,
                      initial_rpys=None,
                      obs=ObservationType.KIN,
                      act=ActionType.RPM,
                      target_pos=target_pos  ########## Additional set_point argument ##########
                      )

    model = PPO.load("Name_of_Model.zip", env=env)

    obs = env.reset()

    done = False
    while not done:
        action, _states = model.predict(obs)
        obs, rewards, done, info = env.step(action)
        env.render()

My issue is that I don't think this works. Does an already trained model still use the reward function at all, or is the specific problem the agent was trained to solve something internal to the model (i.e. does passing a set-point to the reward function do nothing for an already trained model)?

If this is the case, then how would I go about testing the ability of a trained model to solve other navigational problems other than the one it was trained for?

Thanks again for sharing this amazing project.

Episode vs timesteps

Hi there,
Thank you again for your amazing contribution. I learned a lot from this project.

  1. How does this project define an EPISODE?

  2. Is an EPISODE equivalent to one flight?

  3. Or is an EPISODE defined as a few flights until the reward converges?

  4. In singleagent.py the model.learn method receives total_timesteps as a parameter;
    there is also a stopTrainingRewardThreshold callback with a constant reward threshold of -0.

  5. Can you clarify how this model learns in units of steps/episodes/timesteps?

  6. Is there any way to review the learning process in episode iterations rather than timesteps?

Thanks,

setRealTimeSimulation

Hi!

Is there a way to apply controls in a way that allows for p.setRealTimeSimulation(1)? I'd like to eventually port the controls to a real crazyflie and would like to see the drone running in real time first.

Currently, BaseAviary uses p.applyExternalForce, but it seems PyBullet doesn't support real-time simulation when applyExternalForce is used, according to the PyBullet Quickstart Guide.

Or, would you have any advice on how to convert the current method of applying controls to something that works with pybullet's real time simulation?

Thank you!

[EDITED] `singleagent.py` with `cpu > 1`

Hi, I tried to run python3 singleagent.py --algo ppo as mentioned in your paper. This trains 1-dimensional hover stabilization from a height of 0 m to 1 m. In the paper you mention that "all algorithms succeed albeit with very different learning curves", with an accompanying plot whose x-axis spans up to 3.5e4 steps. I executed with the same default settings, but on running test_singleagent.py the drone seems to drift off past 2.5 m. So I tried with more steps (1e5) and increased the evaluation time in test_singleagent.py from 6 to 20 seconds; then the drone oscillates along the z-axis and settles around 0.8 m. Even after increasing the steps to 1e6, the loss didn't seem to drop much. From your past benchmarks, can you suggest after how many timesteps you were able to hover successfully? I don't know how I will be able to handle 3D dynamics when I can't even succeed in the 1D case.

The link to the Paper you used for implementing PID Control

Hey Jacopo!

I'm quite new to drones. I saw your controller but was a little confused, so I wanted to replicate your PID controller on my model. I tried using your gains, but X and Y did not settle. Could you maybe reference the paper you used to implement the PID controller, or the architecture you have used? Since it is a cascaded control architecture, I don't know at what frequency the attitude controller runs and at what frequency the position controller runs.

Occupancy grid

Thanks for the great project and codebase! Is there a way to get a 3D occupancy grid with which I can do obstacle avoidance trajectory generation?

Simulation in the presence of a wind field.

Hi. Thank you for this amazing project. I have a question: given a wind field f: R^3 x t -> R^3, is it possible to simulate how the robots would behave in it, meaning that the wind will drag the robots around in the simulation? Thanks.
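
One possible approach (a sketch only, reusing the p.applyExternalForce pattern from the wind discussion in an earlier issue, with a hypothetical wind_field function) is to evaluate the field at each drone's position every physics step and apply the result as a world-frame external force:

    import numpy as np
    import pybullet as p

    def wind_field(pos, t):
        # Hypothetical wind field f: R^3 x t -> R^3, here a time-varying gust along +y.
        return np.array([0.0, 0.02 * np.sin(t), 0.0])

    # Inside the simulation loop, before stepping the physics
    # (t is the current simulated time):
    for i in range(env.NUM_DRONES):
        force = wind_field(env.pos[i], t)
        p.applyExternalForce(env.DRONE_IDS[i],
                             -1,                          # apply to the base link
                             forceObj=force.tolist(),
                             posObj=env.pos[i].tolist(),  # point of application in world coordinates
                             flags=p.WORLD_FRAME,         # world frame, unlike the link-frame snippet above
                             physicsClientId=env.CLIENT)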

Envs difference

Hi. Could someone explain what the difference is between the "BaseAviary" env and "takeoff-aviary-v0", "hover-aviary-v0", and "flythrugate-aviary-v0"?

env.step(env.action_space.sample()) vs env.pos

Hello,
I am very new to this. I want to use TF-Agents to train models, as I don't know much about stable-baselines3. I generated an environment using TakeoffAviary.py:
env = TakeoffAviary(gui=False)

env.step(env.action_space.sample())
(array([-4.51520228e-08, -3.96662917e-08,  2.25000240e-02,  3.55740654e-04,
        -6.93952996e-04, -6.59155518e-05, -3.26110701e-05, -2.43708535e-05,
        -4.18719402e-05,  4.62792489e-01, -8.66369966e-01, -1.87686425e-01]), -0.8888879398403151, False, {'answer': 42})

env.pos
array([[-6.77280342e-07, -5.94994376e-07,  1.12500120e-01]])

env.rpy
array([[ 0.00111759, -0.00218012, -0.00020708]])

env.ang_v
array([[ 0.06450283, -0.12075242, -0.02615925]])

env.vel
array([[-9.78332104e-05, -7.31125605e-05, -1.25615821e-04]])

with obs, reward, done, info = env.step(env.action_space.sample()). So why is obs[:3] != env.pos[0] and obs[3:6] != env.rpy[0], and similarly for velocity and angular velocity? (The obs space is Box(12,) -> x, y, z, r, p, y, vx, vy, vz, wx, wy, wz respectively, in the world frame? Am I correct?) What do env.pos, env.rpy, env.ang_v, and env.vel return?

Different formula for computing RPM

Hi Jacopo,
Is the matrix A below used to compute the RPMs given the torques and thrust? If so, why are the x and y torque rows divided by 2 rather than by the square root of 2? As far as I know, for the X drone model you divide by the square root of two.

if self.DYNAMICS_ATTR:
    if self.DRONE_MODEL == DroneModel.CF2X:
        self.A = np.array([ [1, 1, 1, 1], [.5, .5, -.5, -.5], [-.5, .5, .5, -.5], [-1, 1, -1, 1] ])
    elif self.DRONE_MODEL in [DroneModel.CF2P, DroneModel.HB]:
        self.A = np.array([ [1, 1, 1, 1], [0, 1, 0, -1], [-1, 0, 1, 0], [-1, 1, -1, 1] ])

Different result with explicit dynamics and Pybullet

I tried to build my own PID controller (instead of the one provided, because I could not find documentation for it). I created a single-drone environment like this:

INIT_XYZS = np.array([[0, 0, 0]])
INIT_RPYS = np.array([[0, 0, 0]])
env = CtrlAviary(drone_model=DroneModel.CF2X,  # See the DroneModel Enum class for other quadcopter models
                 num_drones=1,                 # Number of drones
                 initial_xyzs=INIT_XYZS,       # Initial XYZ position of the drone
                 initial_rpys=INIT_RPYS,       # Initial orientation of the drone
                 neighbourhood_radius=np.inf,  # Distance at which drones are considered neighbors, only used for multiple drones
                 physics=Physics.DYN,          # Choice of (PyBullet) physics implementation
                 freq=500,                     # Stepping frequency of the simulation
                 aggregate_phy_steps=1,        # Number of physics updates within each call to BaseAviary.step()
                 gui=True,                     # Whether to display PyBullet's GUI, only use this for debugging
                 record=False,                 # Whether to save a .mp4 video (if gui=True) or .png frames (if gui=False) in gym-pybullet-drones/files/, see script /files/ffmpeg_png2mp4.sh for encoding
                 obstacles=False,              # Whether to add obstacles to the environment
                 user_debug_gui=False)         # Whether to use addUserDebugLine and addUserDebugParameter calls (it can slow down the GUI)
The PID controller's architecture and design are similar to "Design of a trajectory tracking controller for a nanoquadcopter" (Luis C., 2016). The controller works well in the model used in that paper and in my own nonlinear state-space model with an RK4 solver (timestep = 0.002; this is the reason I used 500 Hz as the simulation frequency). The drone almost stabilizes at a point with Physics.DYN, but Physics.PYB gives a bad result. Can you guide me to the changes I would need to make, or to the right files I need to check?

record the learning process

Hi there,
Very impressive work !

When I run learn.py I can see the learning process as the quadcopter attempts to fly; however, not all attempts are shown, only a few.
Is there any way to see all attempts? I would like to preview the learning process visually.

Also, is this available in singleagent.py as well?

Thanks,

PyBullet GUI problem on Ubuntu 18 with NVIDIA driver

Describe the bug
Same as #41.
Installation steps are applied properly.

Trying to run fly.py.

stavrako@stavrako:~/gym-pybullet-drones/examples$ python3 fly.py
pybullet build time: May 14 2021 14:07:06
[INFO] BaseAviary.init() loaded parameters from the drone's .urdf:
[INFO] m 0.027000, L 0.039700,
[INFO] ixx 0.000014, iyy 0.000014, izz 0.000022,
[INFO] kf 0.000000, km 0.000000,
[INFO] t2w 2.250000, max_speed_kmh 30.000000,
[INFO] gnd_eff_coeff 11.368590, prop_radius 0.023135,
[INFO] drag_xy_coeff 0.000001, drag_z_coeff 0.000001,
[INFO] dw_coeff_1 2267.180000, dw_coeff_2 0.160000, dw_coeff_3 -0.110000
startThreads creating 1 threads.
starting thread 0
started thread 0
argc=2
argv[0] = --unused
argv[1] = --start_demo_name=Physics Server
ExampleBrowserThreadFunc started
X11 functions dynamically loaded using dlopen/dlsym OK!
X11 functions dynamically loaded using dlopen/dlsym OK!
Creating context
Created GL 3.3 context
Direct GLX rendering context obtained
Making context current
GL_VENDOR=NVIDIA Corporation
GL_RENDERER=GeForce MX110/PCIe/SSE2
GL_VERSION=3.3.0 NVIDIA 460.73.01
GL_SHADING_LANGUAGE_VERSION=3.30 NVIDIA via Cg compiler
pthread_getconcurrency()=0
Version = 3.3.0 NVIDIA 460.73.01
Vendor = NVIDIA Corporation
Renderer = GeForce MX110/PCIe/SSE2
b3Printf: Selected demo: Physics Server
startThreads creating 1 threads.
starting thread 0
started thread 0
MotionThreadFunc thread started
viewMatrix (-0.8660253882408142, -0.2499999701976776, 0.4330126941204071, 0.0, 0.0, 0.8660253286361694, 0.4999999701976776, 0.0, -0.4999999701976776, 0.4330126643180847, -0.75, 0.0, -0.0, 5.960464477539063e-08, -2.999999761581421, 1.0)
projectionMatrix (1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, -1.0000200271606445, -1.0, 0.0, 0.0, -0.02000020071864128, 0.0)
/home/stavrako/.local/lib/python3.6/site-packages/gym/logger.py:30: UserWarning: WARN: Box bound precision lowered by casting to float32
warnings.warn(colorize('%s: %s'%('WARN', msg % args), 'yellow'))
ven = NVIDIA Corporation
ven = NVIDIA Corporation

[INFO] BaseAviary.render() ——— it 0005 ——— wall-clock time 1.5s, simulation time 0.0s@240Hz (0.01x)
[INFO] BaseAviary.render() ——— drone 0 ——— x +00.00, y +00.00, z +00.10 ——— velocity +00.00, +00.00, +00.00 ——— roll +00.00, pitch -00.00, yaw +00.00 ——— angular velocity +0.0000, +0.0000, +0.0000 ———
[INFO] BaseAviary.render() ——— drone 1 ——— x -00.26, y -00.15, z +00.15 ——— velocity +00.00, +00.00, +00.00 ——— roll -00.00, pitch -00.00, yaw +30.00 ——— angular velocity +0.0000, +0.0000, +0.0000 ———
[INFO] BaseAviary.render() ——— drone 2 ——— x -00.26, y -00.45, z +00.20 ——— velocity +00.00, +00.00, +00.00 ——— roll -00.00, pitch -00.00, yaw +60.00 ——— angular velocity +0.0000, +0.0000, +0.0000 ———

...

numActiveThreads = 0
stopping threads
Thread with taskId 0 exiting
Thread TERMINATED
destroy semaphore
semaphore destroyed
destroy main semaphore
main semaphore destroyed
finished
numActiveThreads = 0
btShutDownExampleBrowser stopping threads
Thread with taskId 0 exiting
Thread TERMINATED
destroy semaphore
semaphore destroyed
destroy main semaphore
main semaphore destroyed


To Reproduce
Steps to reproduce the behavior:
Running fly.py

Expected behavior
1.

I run a minimal example that has been referred in #41

import pybullet as p
import pybullet_data
from time import sleep

def main():
    PYB_CLIENT = p.connect(p.GUI)
    p.setGravity(0, 0, -9.8, physicsClientId=PYB_CLIENT)
    p.setRealTimeSimulation(0, physicsClientId=PYB_CLIENT)
    p.setTimeStep(1/240, physicsClientId=PYB_CLIENT)
    p.setAdditionalSearchPath(pybullet_data.getDataPath(), physicsClientId=PYB_CLIENT)
    p.loadURDF("plane.urdf", physicsClientId=PYB_CLIENT)
    p.loadURDF("duck_vhacd.urdf", [0, 0, 1], physicsClientId=PYB_CLIENT)
    for _ in range(240*10):
        p.stepSimulation(physicsClientId=PYB_CLIENT)
        sleep(1/(240*10))

if __name__ == "__main__":
    main()


  1. "replace p.connect(p.GUI) with p.connect(p.GUI, options="--opengl2")" didn't work as well.

Desktop (please complete the following information):

  • OS: Ubuntu 18.04
  • Software Python 3.6
 stavrako@stavrako:~$ glxinfo | grep "OpenGL version"
OpenGL version string: 4.6.0 NVIDIA 460.73.01
stavrako@stavrako:~$ glxinfo | grep 'version'
server glx version string: 1.4
client glx version string: 1.4
GLX version: 1.4
OpenGL core profile version string: 4.6.0 NVIDIA 460.73.01
OpenGL core profile shading language version string: 4.60 NVIDIA
OpenGL version string: 4.6.0 NVIDIA 460.73.01
OpenGL shading language version string: 4.60 NVIDIA
OpenGL ES profile version string: OpenGL ES 3.2 NVIDIA 460.73.01
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20
    GL_EXT_shader_group_vote, GL_EXT_shader_implicit_conversions, 
stavrako@stavrako:~$ nvidia-smi
Tue May 18 19:18:46 2021       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.73.01    Driver Version: 460.73.01    CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  GeForce MX110       Off  | 00000000:01:00.0 Off |                  N/A |
| N/A   49C    P0    N/A /  N/A |    136MiB /  2004MiB |      4%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      5480      G   /usr/lib/xorg/Xorg                 88MiB |
|    0   N/A  N/A      5642      G   /usr/bin/gnome-shell               44MiB |
+-----------------------------------------------------------------------------+

Additional context
I have tried Windows and it works fine there.

accessing camera for RL

Hi. I was trying to figure out how I can access the depth camera stream for some RL applications.

Is there any documentation on how I can access this information? I noticed older iterations had a __dev.py file with a list of features, but the newest master doesn't seem to contain it.

Thanks!
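
At the PyBullet level, depth images can be rendered directly; a minimal sketch (an illustration with a hypothetical forward-looking camera pose, not the repo's VisionAviary pipeline) might look like:

    import numpy as np
    import pybullet as p

    def get_depth_image(env, drone_idx, width=64, height=48):
        # Hypothetical camera: placed at the drone's position, looking along +x.
        pos = env.pos[drone_idx]
        view = p.computeViewMatrix(cameraEyePosition=pos.tolist(),
                                   cameraTargetPosition=(pos + np.array([1, 0, 0])).tolist(),
                                   cameraUpVector=[0, 0, 1],
                                   physicsClientId=env.CLIENT)
        proj = p.computeProjectionMatrixFOV(fov=60, aspect=width/height,
                                            nearVal=0.1, farVal=100,
                                            physicsClientId=env.CLIENT)
        _, _, rgb, depth, seg = p.getCameraImage(width, height,
                                                 viewMatrix=view,
                                                 projectionMatrix=proj,
                                                 physicsClientId=env.CLIENT)
        return np.reshape(depth, (height, width))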

Questions about Downwash

Hi,

I am not a low-level control guy; according to my understanding, _downwash() in BaseAviary.py is for automatically avoiding potential collisions with other drones.

But it seems to only work for drones flying at different heights. For example, if I set the initial z=1 for both drones in your examples/downwash.py, the two drones collide, or one of the drones suddenly flies off somewhere.

I was wondering:

  1. Is it possible to achieve downwash when the drones fly at the same height, and how?
  2. How do I set the downwash distance?
  3. I modified downwash.py to use 3 drones (with initial z = [0.99, 0.999, 1.0]) in the following video (omitted). I didn't get how the mechanism decides which drone is affected by downwash; could you explain it to me? It seems it is not always the lower one, right?

Thank you!

voxel_grid

Describe the bug
I want to test the script mentioned at the end of the linked page.

When I run this script:

import os
import airsim

c = airsim.VehicleClient()
center = airsim.Vector3r(0, 0, 0)
output_path = os.path.join(os.getcwd(), "map.binvox")
c.simCreateVoxelGrid(center, 100, 100, 100, 0.5, output_path)

Python says: VehicleClient has no attribute simCreateVoxelGrid.
Do I need to write anything before or after this?

Desktop (please complete the following information):

  • OS: win10
  • Software and Packages Versions Python 3.9 & 3.8

Inverse Reinforcement Learning

Hi,

Firstly, thanks for putting together such an awesome project!

I've been playing around with the singleagent.py problems recently and was wondering if there is any way to incorporate demonstration learning / inverse reinforcement learning into these tasks. Instead of hard-coding the reward functions in _computeReward, I wanted to try learning a reward function from expert demonstrations. (See: Let's do IRL.) I'd appreciate it if you could give me some pointers on how to modify _computeReward and the learning process to include an IRL reward.

Cheers
