stead's Introduction

STanford EArthquake Dataset (STEAD): A Global Data Set of Seismic Signals for AI



[Figure: maps of the dataset]

https://www.youtube.com/watch?v=Nn8KJFJu-V0




Note:

Please note that some of the back azimuths in the current version have been misplaced. If you plan to use the back-azimuth labels, you can recalculate them from the station and event locations. Here is code to do so using ObsPy:

import obspy.geodetics.base

distance_m, azimuth, back_azimuth = obspy.geodetics.base.gps2dist_azimuth(
    float(event_lat),
    float(event_lon),
    float(station_lat),
    float(station_lon),
    a=6378137.0,
    f=0.0033528106647474805)
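
For example, here is a minimal sketch that recalculates the label for every earthquake trace in the metadata CSV. The column names source_latitude, source_longitude, receiver_latitude, and receiver_longitude are assumptions here; check them against the header of your CSV file.

import pandas as pd
from obspy.geodetics.base import gps2dist_azimuth

df = pd.read_csv("merge.csv")
eq = df[df.trace_category == 'earthquake_local'].copy()

def recompute_back_azimuth(row):
    # WGS84 ellipsoid, matching the call shown above
    _, _, back_azimuth = gps2dist_azimuth(
        float(row['source_latitude']), float(row['source_longitude']),
        float(row['receiver_latitude']), float(row['receiver_longitude']),
        a=6378137.0, f=0.0033528106647474805)
    return back_azimuth

eq['back_azimuth_deg'] = eq.apply(recompute_back_azimuth, axis=1)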

You can get the waveforms from here:

Each of the following files contains one HDF5 (data) file and one CSV (metadata) file for roughly 200k three-component waveforms. You can download the chunks you need and then merge them into a single file using the code provided in the repository (a minimal merging sketch is also shown after the notes below).

https://rebrand.ly/chunk1 (chunk1 ~ 14.6 GB) Noise

https://rebrand.ly/chunk2 (chunk2 ~ 13.7 GB) Local Earthquakes

https://rebrand.ly/chunk3 (chunk3 ~ 13.7 GB) Local Earthquakes

https://rebrand.ly/chunk4 (chunk4 ~ 13.7 GB) Local Earthquakes

https://rebrand.ly/chunk5 (chunk5 ~ 13.7 GB) Local Earthquakes

https://rebrand.ly/chunk6 (chunk6 ~ 15.7 GB) Local Earthquakes

If you have a fast internet connection, you can download the entire dataset as a single file using the following link:

https://rebrand.ly/whole (merged ~ 85 GB) Local Earthquakes + Noise

  • Note 1: some unzip programs for Windows and Linux have file-size limits. Try the '7-Zip' software if you have problems unzipping the files.

  • Note 2: all the metadata are also available in the HDF5 file (as attributes associated with each waveform).

  • Note 3: for some of the noise data, the waveforms are identical across the three components. These come from single-channel stations, where we duplicated the vertical channel for the horizontal ones. However, these make up less than 4% of the noise data; for the rest, the noise is different for each channel.
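
As mentioned above, the repository provides its own script for merging the chunks. As a rough sketch of what that merging looks like (not the repository's script itself, and assuming the downloaded files are named chunk*.csv and chunk*.hdf5):

import glob
import h5py
import pandas as pd

# merge the metadata CSVs into one file
csv_files = sorted(glob.glob("chunk*.csv"))
pd.concat([pd.read_csv(f) for f in csv_files]).to_csv("merge.csv", index=False)

# copy every waveform (with its attributes) into a single HDF5 file
with h5py.File("merge.hdf5", "w") as out:
    out.create_group("data")
    for path in sorted(glob.glob("chunk*.hdf5")):
        with h5py.File(path, "r") as src:
            for name in src["data"]:
                src.copy("data/" + name, out["data"])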

If you have trouble downloading the data from the above links or unzipping them, you can get the dataset from SeisBench.
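
For instance, a minimal sketch using the SeisBench data API (assuming a recent SeisBench release; the exact method names may differ between versions):

import seisbench.data as sbd

# downloads and caches STEAD on first use (this is a large download)
data = sbd.STEAD()

print(data.metadata.head())        # metadata as a pandas DataFrame
waveform = data.get_waveforms(0)   # waveform array of the first trace
print(waveform.shape)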


You can get the paper from here:

https://rebrand.ly/STEADrg or https://rebrand.ly/STEADac

You can use QuakeLabeler (https://maihao14.github.io/QuakeLabeler/) or SeisBench (https://github.com/seisbench/seisbench) to label and convert your own data into the STEAD format.

Last Update in the Dataset:

May 25, 2020

Reporting Bugs:

Report bugs at https://github.com/smousavi05/STEAD/issues.

or send me an email: [email protected]


Reference:

Mousavi, S. M., Sheng, Y., Zhu, W., and Beroza, G. C. (2019). STanford EArthquake Dataset (STEAD): A Global Data Set of Seismic Signals for AI, IEEE Access, doi:10.1109/ACCESS.2019.2947848.

BibTeX:

@article{mousavi2019stanford,
  title={STanford EArthquake Dataset (STEAD): A Global Data Set of Seismic Signals for AI},
  author={Mousavi, S Mostafa and Sheng, Yixiao and Zhu, Weiqiang and Beroza, Gregory C},
  journal={IEEE Access},
  year={2019},
  publisher={IEEE}
}

The CSV file can be used to easily select a specific part of the dataset and read only the associated waveforms from the HDF5 file for efficiency.

Example of data selection and accessing (earthquake waveforms):

import pandas as pd
import h5py
import numpy as np
import matplotlib.pyplot as plt

file_name = "merge.hdf5"
csv_file = "merge.csv"

# reading the csv file into a dataframe:
df = pd.read_csv(csv_file)
print(f'total events in csv file: {len(df)}')
# filtering the dataframe
df = df[(df.trace_category == 'earthquake_local') & (df.source_distance_km <= 20) & (df.source_magnitude > 3)]
print(f'total events selected: {len(df)}')

# making a list of trace names for the selected data
ev_list = df['trace_name'].to_list()

# retrieving selected waveforms from the hdf5 file: 
dtfl = h5py.File(file_name, 'r')
for c, evi in enumerate(ev_list):
    dataset = dtfl.get('data/'+str(evi)) 
    # waveforms, 3 channels: first column: E channel, second column: N channel, third column: Z channel
    data = np.array(dataset)

    fig = plt.figure()
    ax = fig.add_subplot(311)         
    plt.plot(data[:,0], 'k')
    plt.rcParams["figure.figsize"] = (8, 5)
    legend_properties = {'weight':'bold'}    
    plt.tight_layout()
    ymin, ymax = ax.get_ylim()
    pl = plt.vlines(dataset.attrs['p_arrival_sample'], ymin, ymax, color='b', linewidth=2, label='P-arrival')
    sl = plt.vlines(dataset.attrs['s_arrival_sample'], ymin, ymax, color='r', linewidth=2, label='S-arrival')
    cl = plt.vlines(dataset.attrs['coda_end_sample'], ymin, ymax, color='aqua', linewidth=2, label='Coda End')
    plt.legend(handles=[pl, sl, cl], loc = 'upper right', borderaxespad=0., prop=legend_properties)        
    plt.ylabel('Amplitude counts', fontsize=12) 
    ax.set_xticklabels([])

    ax = fig.add_subplot(312)         
    plt.plot(data[:,1], 'k')
    plt.rcParams["figure.figsize"] = (8, 5)
    legend_properties = {'weight':'bold'}    
    plt.tight_layout()
    ymin, ymax = ax.get_ylim()
    pl = plt.vlines(dataset.attrs['p_arrival_sample'], ymin, ymax, color='b', linewidth=2, label='P-arrival')
    sl = plt.vlines(dataset.attrs['s_arrival_sample'], ymin, ymax, color='r', linewidth=2, label='S-arrival')
    cl = plt.vlines(dataset.attrs['coda_end_sample'], ymin, ymax, color='aqua', linewidth=2, label='Coda End')
    plt.legend(handles=[pl, sl, cl], loc = 'upper right', borderaxespad=0., prop=legend_properties)        
    plt.ylabel('Amplitude counts', fontsize=12) 
    ax.set_xticklabels([])

    ax = fig.add_subplot(313)         
    plt.plot(data[:,2], 'k')
    plt.rcParams["figure.figsize"] = (8,5)
    legend_properties = {'weight':'bold'}    
    plt.tight_layout()
    ymin, ymax = ax.get_ylim()
    pl = plt.vlines(dataset.attrs['p_arrival_sample'], ymin, ymax, color='b', linewidth=2, label='P-arrival')
    sl = plt.vlines(dataset.attrs['s_arrival_sample'], ymin, ymax, color='r', linewidth=2, label='S-arrival')
    cl = plt.vlines(dataset.attrs['coda_end_sample'], ymin, ymax, color='aqua', linewidth=2, label='Coda End')
    plt.legend(handles=[pl, sl, cl], loc = 'upper right', borderaxespad=0., prop=legend_properties)        
    plt.ylabel('Amplitude counts', fontsize=12) 
    ax.set_xticklabels([])
    plt.show() 

    for at in dataset.attrs:
        print(at, dataset.attrs[at])    

    inp = input("Press a key to plot the next waveform!")
    if inp == "r":
        continue             

[Figure: example earthquake waveforms with P-arrival, S-arrival, and coda-end picks]


Example of data selection and accessing (noise waveforms):

# reading the csv file into a dataframe:
df = pd.read_csv(csv_file)
print(f'total events in csv file: {len(df)}')
# filtering the dataframe
df = df[(df.trace_category == 'noise') & (df.receiver_code == 'PHOB') ]
print(f'total events selected: {len(df)}')

# making a list of trace names for the selected data
ev_list = df['trace_name'].to_list()[:200]

# retrieving selected waveforms from the hdf5 file: 
dtfl = h5py.File(file_name, 'r')
for c, evi in enumerate(ev_list):
    dataset = dtfl.get('data/'+str(evi)) 
    # waveforms, 3 channels: first column: E channel, second column: N channel, third column: Z channel
    data = np.array(dataset)

    fig = plt.figure()
    ax = fig.add_subplot(311)         
    plt.plot(data[:,0], 'k')
    plt.rcParams["figure.figsize"] = (8, 5)
    legend_properties = {'weight':'bold'}    
    plt.tight_layout()
    plt.ylabel('Amplitude counts', fontsize=12) 
    ax.set_xticklabels([])

    ax = fig.add_subplot(312)         
    plt.plot(data[:,1], 'k')
    plt.rcParams["figure.figsize"] = (8, 5)
    legend_properties = {'weight':'bold'}    
    plt.tight_layout()     
    plt.ylabel('Amplitude counts', fontsize=12) 
    ax.set_xticklabels([])

    ax = fig.add_subplot(313)         
    plt.plot(data[:,2], 'k')
    plt.rcParams["figure.figsize"] = (8,5)
    legend_properties = {'weight':'bold'}    
    plt.tight_layout()     
    plt.ylabel('Amplitude counts', fontsize=12) 
    ax.set_xticklabels([])
    plt.show() 

    for at in dataset.attrs:
        print(at, dataset.attrs[at])    

    inp = input("Press a key to plot the next waveform!")
    if inp == "r":
        continue       

[Figure: example noise waveforms]


How to convert raw waveforms into Acceleration, Velocity, or Displacement:

import obspy
import h5py
from obspy import UTCDateTime
import numpy as np
from obspy.clients.fdsn.client import Client
import matplotlib.pyplot as plt

def make_stream(dataset):
    '''
    input: hdf5 dataset
    output: obspy stream

    '''
    data = np.array(dataset)

    tr_E = obspy.Trace(data=data[:, 0])
    tr_E.stats.starttime = UTCDateTime(dataset.attrs['trace_start_time'])
    tr_E.stats.delta = 0.01
    tr_E.stats.channel = dataset.attrs['receiver_type']+'E'
    tr_E.stats.station = dataset.attrs['receiver_code']
    tr_E.stats.network = dataset.attrs['network_code']

    tr_N = obspy.Trace(data=data[:, 1])
    tr_N.stats.starttime = UTCDateTime(dataset.attrs['trace_start_time'])
    tr_N.stats.delta = 0.01
    tr_N.stats.channel = dataset.attrs['receiver_type']+'N'
    tr_N.stats.station = dataset.attrs['receiver_code']
    tr_N.stats.network = dataset.attrs['network_code']

    tr_Z = obspy.Trace(data=data[:, 2])
    tr_Z.stats.starttime = UTCDateTime(dataset.attrs['trace_start_time'])
    tr_Z.stats.delta = 0.01
    tr_Z.stats.channel = dataset.attrs['receiver_type']+'Z'
    tr_Z.stats.station = dataset.attrs['receiver_code']
    tr_Z.stats.network = dataset.attrs['network_code']

    stream = obspy.Stream([tr_E, tr_N, tr_Z])

    return stream
 
def make_plot(tr, title='', ylab=''):
    '''
    input: trace

    '''
    fig = plt.figure()
    ax = fig.add_subplot(1, 1, 1)
    ax.plot(tr.times("matplotlib"), tr.data, "k-")
    ax.xaxis_date()
    fig.autofmt_xdate()
    plt.ylabel(ylab)
    plt.title(title)
    plt.show()
    
    
if __name__ == '__main__':

    # reading one sample trace from STEAD
    file_name = "merge.hdf5"
    dtfl = h5py.File(file_name, 'r')
    dataset = dtfl.get('data/109C.TA_20061103161223_EV')

    # converting the hdf5 dataset into an obspy stream
    st = make_stream(dataset)

    # plotting the vertical component of the raw data
    make_plot(st[2], title='Raw Data', ylab='counts')

[Figure: raw data, vertical component]

    # downloading the instrument response of the station from IRIS
    client = Client("IRIS")
    inventory = client.get_stations(network=dataset.attrs['network_code'],
                                    station=dataset.attrs['receiver_code'],
                                    starttime=UTCDateTime(dataset.attrs['trace_start_time']),
                                    endtime=UTCDateTime(dataset.attrs['trace_start_time']) + 60,
                                    loc="*", 
                                    channel="*",
                                    level="response")  

    # converting into displacement
    st = make_stream(dataset)
    st = st.remove_response(inventory=inventory, output="DISP", plot=False)

    # plotting the vertical component
    make_plot(st[2], title='Displacement', ylab='meters')
    

[Figure: displacement, vertical component]

    # converting into velocity
    st = make_stream(dataset)
    st = st.remove_response(inventory=inventory, output='VEL', plot=False)

    # plotting the vertical component
    make_plot(st[2], title='Velocity', ylab='meters/second')

[Figure: velocity, vertical component]

    # converting into acceleration
    st = make_stream(dataset)
    st.remove_response(inventory=inventory, output="ACC", plot=False)

    # plotting the vertical component
    make_plot(st[2], title='Acceleration', ylab='meters/second**2')

[Figure: acceleration, vertical component]


These are some of the studies that have used STEAD.

You can check out the code repositories of these studies as examples of how a Keras or TensorFlow model can be trained on STEAD in a memory-efficient fashion (a minimal generator sketch is also shown after this list):

  • Earthquake transformer—an attentive deep-learning model for simultaneous earthquake detection and phase picking, SM Mousavi, WL Ellsworth, W Zhu, LY Chuang, GC Beroza, Nature Communications 11 (1), 1-12.

  • Bayesian-deep-learning estimation of earthquake location from single-station observations, SM Mousavi, GC Beroza, IEEE Transactions on Geoscience and Remote Sensing, 1 - 14.

  • A machine‐learning approach for earthquake magnitude estimation, SM Mousavi, GC Beroza, Geophysical Research Letters 47 (1), e2019GL085976.

  • Complex Neural Networks for Estimating Epicentral Distance, Depth, and Magnitude of Seismic Waves, Ristea, Nicolae-Cătălin, and Anamaria Radoi., IEEE Geoscience and Remote Sensing Letters.

  • Earthquake detection and P-wave arrival time picking using capsule neural network. Saad, and Chen. IEEE Transactions on Geoscience and Remote Sensing, 59(7), 6234-6243.

  • Prediction of intensity and location of seismic events using deep learning. Nicolis, Plaza, & Salas. Spatial Statistics, 42, 100442.
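
As a rough illustration of the memory-efficient approach mentioned above (a minimal sketch, not the code of those repositories), a Keras Sequence can read waveform batches from the HDF5 file on the fly instead of loading the whole dataset into memory. The binary earthquake/noise label derived from trace_category is only an example target:

import h5py
import numpy as np
import pandas as pd
import tensorflow as tf

class STEADGenerator(tf.keras.utils.Sequence):
    # reads waveform batches from the STEAD hdf5 file on the fly
    def __init__(self, hdf5_path, csv_path, batch_size=32):
        self.batch_size = batch_size
        self.file = h5py.File(hdf5_path, 'r')
        df = pd.read_csv(csv_path)
        self.trace_names = df['trace_name'].to_list()
        # example label: 1 for local earthquakes, 0 for noise
        self.labels = (df['trace_category'] == 'earthquake_local').astype(int).to_numpy()

    def __len__(self):
        return int(np.ceil(len(self.trace_names) / self.batch_size))

    def __getitem__(self, idx):
        batch = slice(idx * self.batch_size, (idx + 1) * self.batch_size)
        X = np.stack([np.array(self.file.get('data/' + name))
                      for name in self.trace_names[batch]])
        y = self.labels[batch]
        return X, y

# usage with a hypothetical compiled model:
# gen = STEADGenerator("merge.hdf5", "merge.csv")
# model.fit(gen, epochs=10)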

License

For more details on the license of this repository (Creative Commons Attribution 4.0 International), see LICENSE.


stead's Issues

Can't download data

SavedModel file does not exist at: EqT_model.h5/{saved_model.pbtxt|saved_model.pb}

pick data

Dear author, how can I select traces whose third number (that is, the Z component) of the attribute snr_db is greater than 50? Can this be done? Looking forward to your reply, thank you!

the transform of raw waveforms and noise waveforms

Hello, I am a first-year student at Harbin Institute of Technology in China, and my recent study needs to use the STEAD dataset.
First of all, I am very grateful for your efforts to create such a global seismic dataset, which greatly helps subsequent researchers.
I noticed that you provided a method to convert the raw ground-motion waveforms into acceleration, velocity, and displacement.
What I need is the acceleration records, but I also need the noise acceleration records, so I want to ask: how can I convert the noise waveforms into acceleration records? And what are the physical units of the raw noise waveform records?
Thank you very much for resolving my confusion!

Downloading only the csv files

Hi,
Is it possible to download only the metadata CSV files for STEAD? Ideally, I would love to retrieve the CSVs for each chunk of data, but the CSV for the merged data would be great too. Thank you.

wrong label data of back_azimuth_deg

Hello.
I'm Sangwoo Han from the Seismology, Geophysics and Tectonophysics Lab at Seoul National University in South Korea.
First of all, thank you for your effort to make a very cool global benchmark dataset.

I found some problems in the label data of 'back_azimuth_deg'.
There are 180-degree differences between some 'back_azimuth_deg' values in STEAD and the back azimuth obtained using the locations of the source and receiver.
I downloaded the STEAD data from the MEGA cloud link.
I have attached an image of the back-azimuth differences.
[Figure: back-azimuth differences]

Also, here is one example of this problem.

  • Trace name: 'AKGG.AV_20060814223729_EV'

I used the function gps2dist_azimuth() from obspy.geodetics.base to get the back azimuth.
I would also like to know the formula you used to calculate 'back_azimuth_deg'.
Could you please check this problem and reply?

Acceleration Data Acquisition Method

Thank you very much for your contribution to building the STEAD dataset, which I also intend to use.

For my work, we are more interested in the ground-motion acceleration waveforms, and you give a method for obtaining the acceleration waveforms in the article.

client = Client("IRIS")
inventory = client.get_stations(network=quake_wave_set.attrs['network_code'],
    station=quake_wave_set.attrs['receiver_code'],
    starttime=UTCDateTime(quake_wave_set.attrs['trace_start_time']),
    endtime=UTCDateTime(quake_wave_set.attrs['trace_start_time']) + 60,
    loc="*",
    channel="*",
    level="response")
st = make_stream(quake_wave_set)
st = st.remove_response(inventory=inventory, output="ACC", plot=False)

When I use this method to acquire acceleration waveforms, the data are fetched from IRIS over the network, which is simple and feasible for a small number of seismic waveforms. But when I request more, IRIS rejects my requests. I also realized that for millions of records, it is almost impossible to do it this way.

I would like to ask whether you have the ground-motion acceleration waveform data, or could you please suggest another way to get the ground-motion acceleration waveforms for the entire dataset?

Thank you so much!

STEAD Data filtering

Thank you for your efforts to build the STEAD dataset. I also tried to use it for my analysis.

Regarding the STEAD dataset, I would like to ask whether the data are filtered or not. According to your papers "A machine-learning approach for earthquake magnitude estimation" and "Earthquake transformer: an attentive deep-learning model for simultaneous earthquake detection and phase picking", the STEAD data are filtered with band-pass filters using different frequency ranges, 1-40 Hz and 1-45 Hz. I would like to know why they use different frequency ranges.

Thanks

Inquiry about Computation Algorithm for coda_end_sample

Hi,
I'm doing research with STEAD. I would like to know the algorithm used to compute the value of 'coda_end_sample'. According to the STEAD paper, coda_end_sample represents the sample point at which the dominance of scattered energy from the earthquake signal ends and the noise begins to take over, but I still do not understand how it is computed.

Best regards,
yuming

LICENSE Missing

It seems a license is missing for the dataset. Suggest CC, CC-BY, or CC-BY-SA.

Obtaining STEAD data

Hey,

I'm having trouble obtaining the STEAD dataset. I downloaded the files from GDrive and tried unpacking them with both unzip and 7zip, which both failed, claiming the zip file was corrupted. I tried unpacking the archive on two different linux machines and tried both the full dataset and the first chunk. I pasted the output from unzip below.

I tried downloading the uncompressed files directly from mega, but got a warning that the file size exceeds the free download quota. While there are no clear numbers from mega, this site suggests that the quota is around 1 GB per 6 hours.

Therefore I was not able to obtain the STEAD dataset. Do you have any suggestion how I might be able to obtain the STEAD dataset? Or maybe which techniques I could try to unpack the zip archive?

Thanks!

 $ unzip chunk1.zip 
Archive:  chunk1.zip
warning [chunk1.zip]:  12884901888 extra bytes at beginning or within zipfile
  (attempting to process anyway)
file #1:  bad zipfile offset (local header sig):  12884901888
  (attempting to re-compensate)
  inflating: chunk1.hdf5             
  error:  invalid compressed data to inflate

Data download

Hello, when downloading the data with the link, I get:
Sorry, you can't view or download this file at this time.

Too many users have viewed or downloaded this file recently. Please try accessing the file again later. If the file you are trying to access is particularly large or is shared with many people, it may take up to 24 hours before you can view or download it. If you still cannot access the file after 24 hours, contact your domain administrator.
Is there a detailed solution to this problem?

removal of instrument response

Hello, thank you very much for sharing this large dataset. I am new to ObsPy, and during the removal of the instrument response I got an error for the trace AAK.II_20031118065024_EV.

code:

dtfl = h5py.File(file_name, 'r')
dataset = dtfl.get('data/AAK.II_20031118065024_EV')
network = dataset.attrs['trace_start_time']
print(network)
# converting the hdf5 dataset into an obspy stream
st = make_stream(dataset)
client = Client("IRIS")
inventory = client.get_stations(network=dataset.attrs['network_code'],
                                station=dataset.attrs['receiver_code'],
                                starttime=UTCDateTime(dataset.attrs['trace_start_time']),
                                endtime=UTCDateTime(dataset.attrs['trace_start_time']) + 60,
                                loc="*",
                                channel="*",
                                level="response")
print(inventory)
print(dataset.attrs['receiver_type'])
st = make_stream(dataset)
st.remove_response(inventory=inventory, output="VEL", plot=False)

error:

ValueError                                Traceback (most recent call last)
in <module>
     16 print(dataset.attrs['receiver_type'])
     17 st = make_stream(dataset)
---> 18 st.remove_response(inventory=inventory, output="VEL", plot=False)

3 frames
/usr/local/lib/python3.7/dist-packages/obspy/core/trace.py in _get_response(self, inventories)
   2635         elif len(responses) < 1:
   2636             msg = "No matching response information found."
-> 2637             raise ValueError(msg)
   2638         return responses[0]
   2639

ValueError: No matching response information found.

Thanks,
Sai

Trouble in citing STEAD dataset for AGU submission

Hello, thanks for your great data.
I ran into an odd problem when citing the STEAD dataset in my work for an AGU submission.
I stated in 'Data and materials availability': STanford EArthquake Dataset (STEAD) used for training is available at https://github.com/smousavi05/STEAD.
However, I got the following submission error:

  • Github is not an archival repository; please instead deposit your data/code in a suitable repository and update your data availability statement. To facilitate this, Github offers a one-click link to make your code and data citable by depositing it in Zenodo or figshare. Information on preserving your data or code in Zenodo and making it citable can be found here: https://guides.github.com/activities/citable-code/

So, could you generate a DOI for the STEAD dataset so that I can cite it in my AGU submission?

Sorry for the trouble. Thanks again for your work : )

Site-characterization parameter

Is it possible to add some site-characterization parameters, such as the site classification according to codes or guidelines, or the average shear-wave velocity?
If so, this work would have many more application scenarios.

Instrument Response

Hi!

This dataset is great! Thanks for putting it together.

I ran into a problem of not finding the instrument response for a lot of the events. I am only trying to find them from the IRIS website. Is there another place other than IRIS where instrument responses can be found for this dataset?

Thanks!

Data-download

Hello, there is currently no way to download the data automatically. Google Drive throws the following error when accessed from the terminal:

Access denied with the following error: Too many users have viewed or downloaded this file recently. Please try accessing the file again later. If the file you are trying to access is particularly large or is shared with many people, it may take up to 24 hours to be able to view or download the file. If you still can't access a file after 24 hours, contact your domain administrator. You may still be able to access the file from the browser.

MEGA is also not really suitable for wget or curl.

sample rate

What is the sample rate of the dataset? Is it 100 Hz for all files and all samples?
