
Comments (40)

mgschwan commented:

The blog entry for this video is here: http://www.blensor.org/blog_entry_20180323.html
We can do a more sophisticated simulation if necessary; this one was just to show that the person can easily be picked up from the LIDAR data.


mgschwan commented:

I have now made a more extensive simulation: a 50m approach with some surrounding terrain.

http://www.blensor.org/lidar_accident.html


bradtem commented:

To model this accident, you would want to consider the following:

a) Velodyne range starts at a little over 100m depending on reflectivity. At that point there is some curve to the road but the left lanes should be visible.
b) There is a slight grade up to the road, it just went under an underpass.
c) Car travels at 17 m/s according to reports
d) At that speed, stopping distance is just under 25m. Thus the target must be decently sensed (though it need not be classified) by then. Generally, this involves seeing it in 3-4 frames of the Velodyne, which sweeps at 10hz. This adds perhaps 8m of distance (see the back-of-envelope sketch after this list).
e) In particular, you want enough frames to not just detect the obstacle but track its movement and velocity so you can model a vector of its future trajectory.
f) An unsophisticated system which does not have a vector may only get concerned when it sees the obstacle in its own lane. The dashcam video shows the bicycle already in the lane when it appears from the darkness 1.4 seconds prior to impact. What is the lane width? I would guess 3.5m? For a pedestrian at 3mph this suggests she entered the lane 2 to 2.5 seconds prior to impact.
g) For radar, it is much harder to get the horizontal velocity vector, so in a simple system you want a clear signal that the target will encroach into your lane. However, she crossed 3.5 full lanes, which suggests there should have been an ability to get this vector from radar as well.
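
A minimal back-of-envelope sketch of the arithmetic behind items d) and f); the deceleration, sweep count and lane width are assumptions, not measured values:

    # Rough numbers for items d) and f) above; deceleration, sweep count
    # and lane width are assumed, speed is the reported 17 m/s.
    v_car = 17.0                                # m/s
    decel = 6.0                                 # m/s^2, assumed hard braking on dry pavement
    stop_dist = v_car ** 2 / (2 * decel)        # ~24 m, i.e. "just under 25 m"

    sweeps = 4                                  # frames wanted before committing
    sweep_period = 1.0 / 10                     # s per sweep, Velodyne at 10 Hz
    detect_dist = stop_dist + sweeps * sweep_period * v_car   # ~31 m

    v_ped = 3 * 0.44704                         # 3 mph walking speed, in m/s
    lane_width = 3.5                            # m, the guess above
    t_cross = lane_width / v_ped                # ~2.6 s to cross one lane

    print(round(stop_dist, 1), round(detect_dist, 1), round(t_cross, 1))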


bradtem commented:

Looks good, and confirms what has been said by Velodyne itself and everybody else. No way this LIDAR doesn't readily spot the pedestrian. Now we have to wait for more data to see what failed.


mslavescu commented:

It would be super if we could also get full datasets from different SDC prototypes (including from Udacity, Waterloo, MIT and other universities) driving in similar conditions, to see whether current sensor technology is enough (I assume it is) to detect a moving target (a person with a bike) at 50m while driving at 60km/h. The system should then notify the safety driver that an obstacle is in range (FCW), automatically switch to the long beam range to allow the long-range cameras to see better, and slow down while approaching the target, to increase the chances of stopping or avoiding impact.

We should also try to see how recent ADAS systems behave in those conditions, in a Tesla, Volkswagen, etc.

If anyone has access to these kinds of cars (SDC or advanced ADAS), please share your investigations and tests.

This is the minimum that should have happened in the accident scenario; it would be good to see how cars (as many as possible) with this feature activated would have behaved:

Forward Collision Warning (FCW) Systems
http://brainonboard.ca/safety_features/active_safety_features_fcw.php

"However, if an obstacle like a pedestrian or vehicle approaches quickly from the side, for example, the FCW system will have less time to assess the situation and therefore a driver relying exclusively on this safety feature to provide a warning of a collision will have less time to react."


mslavescu commented:

A nice SDC related dataset from a Velodyne HDL 64E S2 LIDAR:

Project page:
http://www.pges.fr/2rt3d/

LIDAR scans:
http://www.pges.fr/2rt3d/downloads.php

Videos:
https://m.youtube.com/channel/UCQgt3-fe79kJhrR-tuK0M5Q

Let's see how visible a person (or a similarly shaped object) is at around 50m and beyond.


mslavescu commented:

To make it easier for everyone to understand the accident scenario, it would be good to produce videos like these from real and simulated LIDAR scans:

Virtual scan of a pedestrian with a bicycle
https://youtu.be/zauRVtAcXUc


mslavescu commented:

See the comments on this video; it looks like the Volvo XC90 has the best illumination among the 3 cars tested:

Volvo XC90, MB GLC & Audi Q7, night drive

https://youtu.be/9UCz5tm0cKo

We'll need to find out whether the Uber car had an active/dynamic beam mode on; that should have improved the illumination range a lot.


mslavescu commented:

@mgschwan would it be possible to convert some of the scans from the 2rt3d dataset above, so they can be viewed in your tool?

It would also be good to have similar videos with simulations at 25m and 50m for easy comparison.

Please add a bounding box with dimensions around the person+bike so we get a better feel for the scale and can compare the point clouds more easily.


mgschwan commented:

@mslavescu the data is generated by our sensor simulation software. It can export to PCD point cloud data and be imported into any software that can read PCD files.

The visualization was done with Blender itself, no specialized tool was used. The colored pointcloud was generated with pcl_viewer from the pointcloud library.

I can do a scan at 25m and 50m, no problem, but I was wondering if there is some more information about the scene to make it a bit closer to reality:

  • Lane width
  • Location of the accident to look it up in google streetview


mslavescu commented:

This shows how bad the video from the Uber dash cam is; here it is compared with another video of the same road spot in similar night conditions:
https://twitter.com/emilyabell/status/977064719419879424

https://video.twimg.com/tweet_video/DY868_BXUAAfxt6.mp4


mslavescu commented:

@mgschwan here is a picture of the Uber crash location; I added the Google StreetView link to the issue description:

https://twitter.com/Rashomon2/status/975808319200796672

https://pbs.twimg.com/media/DYrD3hoVoAAMci5?format=jpg


mslavescu commented:

National Transportation Safety Board Office of Public Affairs (NTSB) current report (3/21/2018):

https://www.ntsb.gov/news/press-releases/Pages/NR20180320.aspx

Developments on Tuesday include:

  • Meeting with representatives of Uber, the National Highway Traffic Safety Administration and the Tempe Police Department
  • Beginning examination of the accident vehicle, a Volvo XC90
  • Beginning examination of the accident site
  • Viewing a copy of a video that captured the crash from an aftermarket dash camera mounted in the test vehicle
  • Gathering information about the technology on the test vehicle
  • Collecting information about the pedestrian and the safety driver
  • Beginning collection of any and all electronic data stored on the test vehicle or transmitted to Uber


mslavescu commented:

Velodyne HDL 64E S2 dataset (contains 152 pedestrians and 8 bicycles):

http://www-personal.acfr.usyd.edu.au/a.quadros/objects4.html

This object dataset was collected from the robot Shrimp (left), which has a Velodyne LIDAR. It spins around at 20Hz to produce a dynamic 3D point cloud of the scene. The dataset consists of 631 object instances segmented from Velodyne scans of the Sydney CBD. A few instances are shown below.

[Image: sample object classes]


mslavescu commented:

VeloView

Performs real-time visualization and processing of live captured 3D LiDAR data from Velodyne's HDL sensors (HDL-64E, HDL-32E, VLP-32, VLP-16, Puck, Puck Lite, Puck HiRes). An introduction to the features of VeloView is available as a video.

https://www.paraview.org/Wiki/VeloView

Code and datasets:

https://github.com/Kitware/VeloView

https://midas3.kitware.com/midas/community/29


mslavescu commented:

Velodyne sample datasets (including from HDL 64E)

http://velodynelidar.com/downloads.html


mslavescu commented:

The Ford Campus Vision and Lidar Data Set:

http://robots.engin.umich.edu/SoftwareData/Ford

The vehicle is outfitted with a professional (Applanix POS LV) and consumer (Xsens MTI-G) Inertial Measuring Unit (IMU), a Velodyne HDL 64E 3D-lidar scanner, two push-broom forward looking Riegl lidars, and a Point Grey Ladybug3 omnidirectional camera system.

Current data sets were collected during November-December 2009; in the future we plan to host more data sets corresponding to some other times of the year.

Paper:
http://robots.engin.umich.edu/uploads/SoftwareData/Ford/ijrr2011.pdf

Videos:

https://youtube.com/watch?v=igG5-vCF2ac

https://youtube.com/watch?v=JtkwUl-lCf0


mslavescu commented:

The University of Michigan North Campus Long-Term Vision and LIDAR Dataset (contains pedestrians too)

http://robots.engin.umich.edu/nclt/
http://robots.engin.umich.edu/SoftwareData/NCLT

Sensors:

Velodyne HDL-32E LIDAR
Ladybug3 Omnidirectional Vision
Microstrain GX3 IMU
KVH Fiber Optic Gyro
Wheel Odometry
Standard and RTK GPS

Paper:
http://robots.engin.umich.edu/nclt/nclt.pdf

Video:
4D Mapping with Segway Robot
https://youtube.com/watch?v=rD7VvAIbeag


mslavescu commented:

Video with 2 cameras:
Stereoscopic Reference for UBER accident location - Northbound on Mill approaching Curry
https://youtu.be/8p0lpe-puOI

Here is a review of the camera used:
https://www.cnet.com/products/drift-stealth-2/review/

It has a 1/3" 3 Mpix Aptina CMOS sensor with 2.0V/Lux-sec Low Light Sensitivity
https://driftinnovation.com/products/drift-stealth2


mslavescu commented:

Thanks @bradtem and @mgschwan !

If they used 64E I think the full scan rate is 20Hz.

We can use simulated data for now, hopefully they will release the raw data at some point.

Besides braking, I think turning a bit to the left in this situation would have completely avoided the impact, which happened on the right side of the car while the pedestrian was moving towards the right side.

A human driver may have reacted differently and turned right instead, but software should be able to calculate the optimum path considering all the surrounding objects (static or mobile).
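
A rough sanity check of the swerve-vs-brake tradeoff; the braking deceleration, lateral acceleration and required offset below are assumptions, not measured values:

    import math

    # Rough swerve-vs-brake comparison (all parameters are assumptions
    # except the reported ~17 m/s speed).
    v = 17.0          # m/s
    a_brake = 6.5     # m/s^2, hard braking on dry pavement
    a_lat = 3.0       # m/s^2, moderate lateral acceleration for a swerve
    offset = 1.0      # m, lateral offset assumed enough to clear the pedestrian

    t_stop = v / a_brake                      # ~2.6 s to a full stop
    d_stop = v**2 / (2 * a_brake)             # ~22 m of braking distance
    t_swerve = math.sqrt(2 * offset / a_lat)  # ~0.8 s to shift 1 m sideways
    d_swerve = v * t_swerve                   # ~14 m travelled while swerving

    print(f"brake: {t_stop:.1f} s / {d_stop:.1f} m, "
          f"swerve 1 m: {t_swerve:.1f} s / {d_swerve:.1f} m")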

We'll need to find an accurate way to simulate different scenarios.

The CARLA simulator may be an option; I'm not sure how good their car model and physics engine are, though.

If you know better options (open source if possible) please list them here.


mgschwan commented:

I think the 20 Hz is only available in the latest revision of the HDL-64e, please correct me if I am wrong.


mslavescu commented:

HDL-64E S2 from 2010, has 5-15Hz scan rate:
http://velodynelidar.com/lidar/products/brochure/HDL-64E%20S2%20datasheet_2010_lowres.pdf

HDL-64E S3 from 2013, has 5-20Hz scan rate:
https://www.autonomoustuff.com/wp-content/uploads/2016/08/hdl-64E.pdf

Not sure which one was mounted on the Uber car, or at what scan rate they were running it.

@mgschwan could you please do a video at 15Hz and 20Hz as well?


mgschwan commented:

I did a simulation at 20Hz, which reduces the angular resolution from the default 0.1728 degrees (at 10Hz) to 0.3456 degrees.

At a distance of 50 meters this makes a difference; at 25 meters it is still quite noticeable, but there are definitely enough points to detect the target. At 12.5 meters there are enough points to discern the bike from the person.

[Images: simulated scans at 50 meters (20Hz and 10Hz), 25 meters (20Hz and 10Hz) and 12.5 meters (20Hz and 10Hz)]
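
A rough way to sanity-check the point counts in these scans, using the horizontal angular resolutions above; the ~0.8 m person+bike width is an assumption, and the vertical channel spacing is not modeled:

    import math

    # Approximate horizontal points-per-scan-line on a target of given width,
    # for the 10 Hz (0.1728 deg) and 20 Hz (0.3456 deg) resolutions above.
    def points_on_target(distance_m, target_width_m=0.8, ang_res_deg=0.1728):
        spacing = distance_m * math.tan(math.radians(ang_res_deg))
        return target_width_m / spacing

    for d in (50, 25, 12.5):
        print(d, "m:",
              round(points_on_target(d, ang_res_deg=0.1728)), "pts/line @10Hz,",
              round(points_on_target(d, ang_res_deg=0.3456)), "pts/line @20Hz")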


mslavescu commented:

Thanks @mgschwan!

How does BlenSor compare with the CARLA LIDAR sensor, which seems to support at least the HDL-32E (since yesterday):
https://github.com/carla-simulator/carla/blob/master/CHANGELOG.md#carla-080
https://carla.readthedocs.io/en/latest/cameras_and_sensors/

This is HDL 64E S3 manual:
http://velodynelidar.com/lidar/products/manual/HDL-64E%20S3%20manual.pdf

And here is the list of angular resolutions at different scan rates supported by HDL 64E S3 (at least since 2013):
[Image: table of angular resolutions at the different scan rates]

An interesting discussion about angular resolution of LIDAR:

LIDAR also has limitations on angular resolution just as a function of how the sensor works. It's entirely possible that the size of the person/bike on LIDAR was just too small until it was too late to stop.
https://news.ycombinator.com/item?id=16645627


mslavescu commented:

Comparison between Velodyne Models - VLP-16, HDL-32, HDL-64:
http://velodynelidar.com/docs/guides/LiDAR%20Comparison%20chart_Rev-A_2_Web.pdf

The VLS-128 seems to be much better, but it was just released (last year) and I couldn't find the manual or specs; these articles are representative:

Velodyne’s latest LIDAR lets driverless cars handle high-speed situations
Discerning a butterfly from a broken tire at 70mph
https://www.theverge.com/2017/11/29/16705674/velodyne-lidar-128-autonomous-vehicles-driverless-cars

128 Lasers on the Car Go Round and Round: David Hall on Velodyne’s New Sensor
http://velodynelidar.com/blog/128-lasers-car-go-round-round-david-hall-velodynes-new-sensor/


bradtem commented:

A proper recreation also should involve the slight upward grade of the road, which may affect the line density on the target. Scans at 25 meters are interesting but not too valuable, in that this is the stopping distance at 40mph, and so an obstacle must be clearly visible well before that. In fact, it is generally desired to begin braking even earlier to avoid the need for a hard full brake. This makes 50m a good distance to consider. At 10hz, you get one sweep every 1.7m, and you want a few sweeps for solid perception. As such, early detection by 35m is good and 50m is even better.


mslavescu commented:

This is a nice video with DARPA 2007 challenge data:

Visualization of LIDAR data
https://youtu.be/nXlqv_k4P8Q

@mgschwan how can we generate a 360° video with the LIDAR point cloud (colored by distance) from car level in BlenSor, while the car moves from 50m to 5m towards the person? Ideally even live, with interaction, so it can be viewed in VR systems (Oculus, Vive or Google Cardboard-like).

@bradtem to summarise a few points:

  • from 50m we have about 3s till impact, driving at 17m/s (approx 61km/h)
  • that means we get about 30 LIDAR scans in those 3s (at 10Hz)
  • the closer we get to the person, the more points we see on the person
  • the bike is a non-deformable object, so we should get a consistent position of the points on the bike, but it is not static since the person was moving
  • the person was close to the lane the car was in, and the road makes a slight right turn there, so the person was more or less in the way for the last part before impact
  • I would say at least the last 30m, unless they have a very accurate 3D map of that road and very accurate localization to tell that the person was outside the lane
  • even if the person(+bike) was not identified as a person, it should have been identified as an obstacle (static or mobile) positioned in the way, not just some light flying thing (like a big paper bag)
  • based on the short beam range in the video, it would be interesting to see how they did lane keeping (I assume using camera input), considering the lane is a bit curved in that area
  • given the short beam range and the fact that the person was not looking towards the car (maybe there was wind from the same direction), she may not even have noticed the car coming

This last point makes it very important to have adjustable beams: use them at long range where possible, and reduce the range/brightness and lower the angle when getting closer to an oncoming car, in such a way that the combined beams are still sufficient.

The same applies to animals, especially once very silent electric cars are deployed, or when the wind covers the car noise.

We also need elevated long-range beams to see stop signs and other traffic signs.


bradtem commented:

Yes, you have 3 seconds to impact from 50m. However, generally impact is not your goal! To avoid impact, you must hard brake before 25m. To avoid hard braking it would be nice to start much earlier, like 40m. Which is fine, you get 6 sweeps at 10hz from 50m to 40m, and then you actuate the brakes.

However, for the pure emergency situation, you can consider starting at 35.5m, get 4 sweeps on your way to 25m, and then hard brake on dry pavement. So with a 400ms perception/analysis time, you really want to perceive before 36m.
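
The same budget, written out; the speed and frame rate come from the thread, while the deceleration and perception latency are assumptions:

    # Detection-distance budget along the lines of the comment above.
    v = 17.0                  # m/s, reported speed
    a = 6.0                   # m/s^2, assumed hard braking -> ~24 m stopping distance
    sweeps = 4                # LIDAR sweeps wanted before committing to a brake
    frame_dt = 0.1            # s per sweep at 10 Hz
    t_perception = 0.4        # s of perception/analysis latency

    d_stop = v**2 / (2 * a)
    d_needed = d_stop + v * (sweeps * frame_dt + t_perception)
    # ~38 m with these assumptions, in the same ballpark as the ~36 m above;
    # the exact figure depends mostly on the assumed deceleration.
    print(f"need first returns on the target by ~{d_needed:.0f} m")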


mgschwan commented:

@bradtem Do we have any info about the upward slope? I can't get that info from Google Maps. But if the slope remains constant up to the pedestrian, it would hardly change the output, as the car would already be following the slope.


mslavescu commented:

@bradtem I agree; for a full stop before the impact the car needs to act (start to brake) at around 25m.
Also, by braking at 25m we give the person more time to get out of the way; we should see how far the person would get in that extra fraction of a second.
It would be good to simulate different braking + steering (left or right) scenarios and compare the impact (direct or lateral) forces.

@mgschwan I found this topo map, it may help a bit:
https://www.topozone.com/arizona/maricopa-az/park/papago-park-tempe/
https://www.topozone.com/map-print/?lat=33.4422671&lon=-111.9420887&title=Papago%20Park-Tempe%20Topo%20Map%20in%20Maricopa%20County%20Arizona
[topo map image]
Here is the satellite view; the accident was a bit north-east of the Marquee Theatre sign:
[satellite view image]

And this one:
http://en-gb.topographic-map.com/maps/33.4354265,-111.9445283,33.4381245,-111.9418303/?Name=642+N+Mill+Ave%2C+Tempe%2C+AZ+85281%2C+USA
[topographic map image]


mslavescu commented:

The Tempe weather on March 18 (the accident happened at 10PM):
https://www.timeanddate.com/weather/usa/tempe/historic?month=3&year=2017#
[weather history image]

There was no wind around 10PM, temperature 15C, humidity 25%, visibility 16km:
Tempe Weather History for March 18, 2018
https://www.timeanddate.com/weather/usa/tempe/historic?hd=20180318
[weather history image]


mgschwan commented:

Here is a scan from a distance of 34 meters
10Hz scanning speed
Angular resolution: 0.1728 degrees
Speed of car: 17m/s

https://skfb.ly/6xMCz

I can't do an animation in VR, but Sketchfab has a VR mode, so if you have a headset you can step into the scan.


mslavescu commented:

@mgschwan thanks!
That is super cool!
I'll try it tonight.


mslavescu commented:

A very nice video with object detection:

Motion-based Detection and Tracking in 3D LiDAR Scans
https://youtu.be/cyufiAyTLE0

I'll try to see how we can run these kinds of algorithms on BlenSor and CARLA Simulator output.

Besides helping us understand and reproduce the Uber accident scenario (more accurately once we get the car's onboard sensor data), this exercise will help us build tests and simulate different (edge case) scenarios for the OSSDC Initiative.


mgschwan commented:

This looks quite promising as well:

https://www.youtube.com/watch?v=0Q1_nYwLhD0

But I think 3-year-old papers are already out of date in this fast-moving field.

If you have working implementations of those algorithms, getting simulated data from BlenSor into them probably works through a series of PCD or PLY files. Alternatively, I think I can create a ROS bag file to replay them for algorithms that work directly as a ROS node.
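
For the PCD route, a minimal sketch of feeding a folder of exported frames into a detector; open3d is just one convenient reader, and the folder/file names are placeholders:

    # Minimal sketch for replaying a folder of exported PCD frames
    # (open3d is an assumed dependency; file names are hypothetical).
    import glob
    import numpy as np
    import open3d as o3d

    for path in sorted(glob.glob("scans/frame_*.pcd")):
        cloud = o3d.io.read_point_cloud(path)
        points = np.asarray(cloud.points)     # (N, 3) array of x, y, z
        # hand `points` to whatever detector/tracker is being tested
        print(path, points.shape)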


mslavescu commented:

@mgschwan how hard would it be to write a ROS LIDAR/camera publisher in BlenSor?

That way we could easily create ROS bags and also use it in live scenarios.
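
For context, a rough sketch of the kind of publisher meant here: replaying BlenSor-exported PCD frames as sensor_msgs/PointCloud2 in ROS1. The topic, frame id and file names are placeholders, and the real integration inside BlenSor/Blender would differ:

    # Replay exported PCD frames on a ROS1 topic (names are placeholders).
    import glob
    import numpy as np
    import open3d as o3d
    import rospy
    from std_msgs.msg import Header
    from sensor_msgs.msg import PointCloud2
    from sensor_msgs import point_cloud2

    rospy.init_node("blensor_replay")
    pub = rospy.Publisher("/velodyne_points", PointCloud2, queue_size=1)
    rate = rospy.Rate(10)                    # match the simulated 10 Hz scan rate

    for path in sorted(glob.glob("scans/frame_*.pcd")):
        if rospy.is_shutdown():
            break
        pts = np.asarray(o3d.io.read_point_cloud(path).points)
        header = Header(stamp=rospy.Time.now(), frame_id="velodyne")
        pub.publish(point_cloud2.create_cloud_xyz32(header, pts.tolist()))
        rate.sleep()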

Take a look at the CARLA (https://github.com/carla-simulator/carla) and TrueVision (https://www.truevision.ai/) simulators, both built on top of a game engine; if we could build similar features in a fully open source simulator based on Blender (with BlenSor as the base), that would be super!

Here is a more recent work with neural nets based detection:

Vote3Deep: Fast Object Detection in 3D Point Clouds Using Efficient Convolutional Neural Networks
https://youtu.be/WUOSmAfeXIw

We will try to integrate something like this in Colaboratory, like we did with other CNN based image detection/segmentation methods:

Try live: SSD object detection, Mask R-CNN object detection and instance segmentation, SfMLearner depth and ego motion estimation, directly from your browser!
https://medium.com/@mslavescu/try-live-ssd-object-detection-mask-r-cnn-object-detection-and-instance-segmentation-sfmlearner-df62bdc97d52


mslavescu commented:

Thanks @bradtem for this article:

How does a robocar see a pedestrian and how might Uber have gone wrong?
http://ideas.4brad.com/how-does-robocar-see-pedestrian-and-how-might-uber-have-gone-wrong


mgschwan commented:

Here is the link to a bag file with the 20Hz simulation.

http://www.blensor.org/pedestrian_approach.html

Can anyone with a running obstacle detection algorithm check if the bag file works? I created it inside a ROS docker container, so I can't run a GUI to check whether the data makes sense.
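
A quick non-GUI sanity check can be done with the rosbag Python API (the bag file name below is a placeholder):

    # Inspect topics, message types and counts without a GUI.
    import rosbag

    with rosbag.Bag("pedestrian_approach.bag") as bag:
        info = bag.get_type_and_topic_info()
        for topic, t in info.topics.items():
            print(topic, t.msg_type, t.message_count, "msgs")
        # peek at the first message to confirm the bag replays at all
        for topic, msg, stamp in bag.read_messages():
            print("first message on", topic, "at", stamp.to_sec())
            break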


mslavescu commented:

A great update by @bradtem:
https://www.forbes.com/sites/bradtempleton/2019/11/06/new-ntsb-reports-on-uber-fatality-reveal-major-errors-by-uber/


At 2.6 seconds out, the classifier thought she was a bicyclist. She was indeed walking a bicycle. Again, with no history, her path was unknown. Oddly, after another LIDAR frame she was classed as moving along the lane to the left of the Uber car. This may be because the system doesn’t expect bicycles to be going sideways in the middle of a traffic lane, which would be another error. Either way, it isn’t until 1.5 seconds out that the system (switching to Unknown again) realizes she is coming into the Uber’s lane. Correctly, it plots to swerve around her.

The fatal moment comes at 1.2 seconds out. She is reclassified as a bicyclist and in the path of the vehicle. The swerving plan no longer can help. It’s time to do an emergency braking. It’s really time.


mslavescu commented:

Some extra details here:
https://www.azcentral.com/story/money/business/tech/2019/11/06/new-details-emerge-fatal-tempe-self-driving-uber-crash-pedestrian-elaine-herzberg/2508011001/

The new report offers the following timeline:

  • At 5.6 seconds before impact, the car’s radar detected Herzberg, classified her as a car and estimated her speed.
  • At 5.2 seconds the lidar system detected her and classified her as an unknown object with a static path. Unknown objects were not considered obstacles unless they were in the path of the car, which she wasn't at this time.
  • At 4.2 seconds before impact the lidar again detected her but reclassified her as a vehicle, predicting the “vehicle” would continue to travel in the same direction the Uber was heading because it was in the left travel lane.
  • In the next seconds the lidar reclassified her several times as either “vehicle” or “other.”
  • At 2.6 seconds before impact, lidar classified the woman as a bicycle. The new classification restarted the tracking history, and the car predicted her path as "static."
  • At 2.5 seconds it again classified her as a bike that, like the vehicle classification earlier, would continue to travel in the same direction as the Uber.
  • At 1.5 seconds before impact, the lidar reclassified her again to an unknown object, but now being partially in the Uber’s travel lane. The Uber planned to steer right to avoid the object.
  • At 1.2 seconds the lidar classified her as a bike moving into the path of the Uber, and determined the prior plan to steer around her as no longer possible.

At this point, the Uber went into its 1-second “action suppression” mode.

It came out of the suppression mode 0.2 seconds before impact. The car could not safely brake and swerve to avoid impact, so it began a “controlled slowdown” and gave the operator an auditory alert. The Uber was traveling 39 mph at this instant.

The vehicle operator hit the brakes 0.7 seconds after the impact.
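
For a rough sense of distances, the timeline above can be converted using the ~39 mph reported near impact; the actual speed was somewhat higher earlier in the sequence, so these are approximations:

    # Convert the timeline above to approximate distances, assuming the
    # ~39 mph (17.4 m/s) reported near impact held through the sequence.
    v = 39 * 0.44704                       # m/s
    for t in (5.6, 5.2, 4.2, 2.6, 2.5, 1.5, 1.2, 0.2):
        print(f"{t:>4} s before impact ~ {v * t:5.1f} m away")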

