mbostock / shapefile
A cross-platform streaming parser for the ESRI Shapefile spatial data format.
License: Other
The text-encoding dependency used in this project has been marked with the following warning:
npm WARN deprecated text-encoding@…: no longer maintained
It would be nice if this package could switch to an alternate implementation.
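For what it's worth, one possible direction (a sketch only, under the assumption that the built-in implementation covers the encodings this library needs): modern Node and all current browsers ship a native TextDecoder that could stand in for the deprecated package.
// Sketch: use the platform TextDecoder instead of the text-encoding package.
const { TextDecoder } = require('util'); // also available as a global in browsers and recent Node
const decoder = new TextDecoder('utf-8');
console.log(decoder.decode(new Uint8Array([0x48, 0x69]))); // "Hi"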
Hi,
I found that the shape.open() method emits an error for shapefiles with an uppercase extension. For example:
path = "/user/TEST_002.SHP";
It emits this error:
{ [Error: ENOENT: no such file or directory, open '/user/TEST_002.SHP.shp']
  errno: -2,
  code: 'ENOENT',
  syscall: 'open',
  path: '/user/TEST_002.SHP.shp' }
Do you think it would be a good idea for the library to support uppercase extensions?
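A rough sketch of the kind of change this would take (purely illustrative, not the library's actual path handling): keep the caller's extension if one is already present, in any case, instead of always appending a lowercase .shp.
// Hypothetical helper: preserve an existing .shp/.SHP extension so that
// "/user/TEST_002.SHP" is opened as-is instead of as "/user/TEST_002.SHP.shp".
function shpPath(path) {
  return /\.shp$/i.test(path) ? path : path + '.shp';
}
shpPath('/user/TEST_002.SHP'); // "/user/TEST_002.SHP"
shpPath('/user/TEST_002');     // "/user/TEST_002.shp"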
Are there any plans to add PolylineM/PolylineZ support? I wouldn't care if it didn't return the M or Z values; just the shapes (X, Y) would be enough.
It would be easy to just add the type numbers, 13 and 23, to shp/index.js and then call the readPolyLine function, since all the M and Z values come after the Point X Y values.
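A rough sketch of what that could look like, assuming (as the issue suggests) that shp/index.js maps shape-type numbers to parse functions; the names below are illustrative, not the library's actual internals.
// Hypothetical: reuse the existing 2D polyline parser for PolyLineZ (13) and
// PolyLineM (23), since the X/Y point data comes first and the trailing Z/M
// sections can simply be left unread.
parsers[13] = parsePolyLine; // PolyLineZ
parsers[23] = parsePolyLine; // PolyLineM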
I'm having trouble getting the streaming parser implemented. Are there any samples around? Here's my test code; it never reaches console.log('HERE!').
TIA
var shapefile = require('shapefile');
var shpfile = "shpfile";
var reader = shapefile.reader(shpfile);
reader.readHeader(function(err, header) {
  console.log('HERE!');
  if (err) console.log('error: %s', err);
  else {
    reader.readRecord(function(error, rec) {
      if (error) console.log('error: %s', error);
      console.log(rec);
    });
  }
});
reader.close();
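For anyone landing here later: if you are on a newer release of the library, the README-style promise API iterates records like this (a sketch assuming a local example.shp with an optional sibling example.dbf):
var shapefile = require('shapefile');

shapefile.open('example.shp')
  .then(source => source.read().then(function log(result) {
    if (result.done) return;
    console.log(result.value);
    return source.read().then(log);
  }))
  .catch(error => console.error(error.stack));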
The .dbf file downloaded here returns null values for each district. See this notebook for an example.
Other parsers (such as the one in R) detect the values.
@Fil noted that:
the file is full of \x00 and then fails on !(value = value.trim()) || isNaN(value = +value) ? null : value; in the readNumber function. I guess we should trim the \x00 characters, not only spaces.
Every time I run shp2json from the command line, I get the error:
error: Decoder not present. Did you forget to include encoding-indexes.js first?
I've used shp2json successfully before, and this error only started occurring after a full reinstall of OS X. It happens on Node 6.9.3 as well as 7.3 and 7.4.
Please let me know if there's any other information useful for debugging this.
I see you have an example that does exactly what I want:
https://gist.github.com/mbostock/5557726
My node version:
node --version
v4.2.6
shp2json --version
0.6.4
v4.2.6 and 0.6.4 are the defaults for the Ubuntu LTS release.
lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.3 LTS
Release: 16.04
Codename: xenial
Here is what I do:
curl -o estados.zip http://mapserver.inegi.org.mx/MGN/mge2010v5_0.zip
curl -o municipios.zip http://mapserver.inegi.org.mx/MGN/mgm2010v5_0.zip
unzip estados.zip
unzip municipios.zip
ogr2ogr states.shp Entidades_2010_5.shp -t_srs "+proj=longlat +ellps=WGS84 +no_defs +towgs84=0,0,0"
ogr2ogr municipalities.shp Municipios_2010_5.shp -t_srs "+proj=longlat +ellps=WGS84 +no_defs +towgs84=0,0,0"
shp2json fails in both cases:
shp2json states.shp -o states2.json
(node) Buffer.set is deprecated. Use array indexes instead.
error: Invalid array length
shp2json municipalities.shp -o municipalities2.json
(node) Buffer.set is deprecated. Use array indexes instead.
error: Offset is outside the bounds of the DataView
Thanks.
It would be handy if multiple input files could work:
shp2json *.shp > merged.json
Thanks for developing this tool!
I'm using it to import Canadian electoral districts into MongoDB. Here's a sample file: http://ftp2.cits.rncan.gc.ca/pub/geott/electoral/2011/fed308.2011.zip
The shapefile is in NAD83 CNT and the points have to be transformed to WGS84 for visualization. Are you planning on implementing a tool that performs such calculations, and if not, would you be willing to incorporate one if it were to be written?
Hi @mbostock
Following on from #34, I was wondering if you'd be interested in receiving a pull request that provides full support for returning z values from a shapefile?
The GeoJSON spec supports having a z value, and if the data is there in the shapefile it would be nice to pass it through to the GeoJSON.
If you're happy to receive the pull request I'm happy to do the work on it :)
Cheers
Rowan
Hello. I have an issue with your module. When I convert a big shapefile (about 3,000,000 records) to objects, the property data is null from around record 1,900,000 onward. Please help me!
I have seen this with several projects that read array buffers and streams and are failing. It seems to be an issue with the underlying stream-source library, but I can't be 100% sure.
Error is:
Error: unsupported shape type: 1120127468
at new Shp (/home/ec2-user/Learning/shapefile/dist/shapefile.node.js:245:33)
at /home/ec2-user/Learning/shapefile/dist/shapefile.node.js:239:12
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async Promise.all (index 0)
Hi,
Thanks for making a good library. I found that it does not properly read holes that stick outside of their outer rings, even though the coordinates are in CCW order. Instead, it reads the holes as normal overlapping polygons. Please refer to the screenshot.
Shouldn't we check that the holes are fully within their outer rings?
Thank you.
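For reference, a minimal sketch (not the library's code) of the kind of containment check the issue asks about: test one vertex of the candidate hole against the outer ring with a ray-casting point-in-polygon test. The function names here are hypothetical.
// Ray-casting point-in-ring test.
function pointInRing(point, ring) {
  var x = point[0], y = point[1], inside = false;
  for (var i = 0, j = ring.length - 1; i < ring.length; j = i++) {
    var xi = ring[i][0], yi = ring[i][1], xj = ring[j][0], yj = ring[j][1];
    if ((yi > y) !== (yj > y) && x < (xj - xi) * (y - yi) / (yj - yi) + xi) inside = !inside;
  }
  return inside;
}

// A hole plausibly belongs to an outer ring if one of its vertices lies inside it.
function holeBelongsTo(hole, outerRing) {
  return pointInRing(hole[0], outerRing);
}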
Hi,
I posted this question: http://stackoverflow.com/questions/29548558/how-to-iterate-over-a-shapefile-in-node-js
the code is based on https://github.com/mbostock/shapefile/blob/master/test/shp-test.js
I am unable to iterate over the shapefile. Any idea why?
I'm trying to convert a file with shp2json, and getting an error:
$ curl -o street-tree-data-wgs84.zip https://ckan0.cf.opendata.inter.prod-toronto.ca/dataset/6ac4569e-fd37-4cbc-ac63-db3624c5f6a2/resource/c1229af1-8ab6-4c71-b131-8be12da59c8e/download/street-tree-data-wgs84.zip
$ unzip street-tree-data-wgs84.zip
Archive: street-tree-data-wgs84.zip
inflating: TMMS_Open_Data_WGS84.shp.xml
inflating: TMMS_Open_Data_WGS84.shx
extracting: TMMS_Open_Data_WGS84.cpg
inflating: TMMS_Open_Data_WGS84.dbf
inflating: TMMS_Open_Data_WGS84.prj
inflating: TMMS_Open_Data_WGS84.sbn
inflating: TMMS_Open_Data_WGS84.sbx
inflating: TMMS_Open_Data_WGS84.shp
$ shp2json TMMS_Open_Data_WGS84.shp -o trees.json
Error: error in unzip: code 9
The file in question is here.
It's possible this is me doing something wrong, as I don't know much about SHP files (which is why I was trying to convert them). I've filed this in case it's a real bug.
Hello there!
First of all, thanks for this package!
But I was unable to build shapefile.js and shapefile.min.js correctly. I tried to build using npm run prepublishOnly and got some errors.
The first one I encountered is in the pretest script: the rm command is Linux-only (sorry, I'm on Windows, like 90% of people in the world 😩). Maybe you could use a remover like del-cli, in conjunction with make-dir-cli for the folder creation; they are cross-platform.
The second was the cp command, which is Linux-only too. Maybe using cp-cli could help here as well (cross-platform, of course).
The third: tail, same problem, maybe the same solution!
Then, the $preamble is not processed, so I got a $(preamble) at the top of shapefile.js...
Phew...
A lot of the stuff you do in the prepublishOnly script could be integrated into rollup.config, like the uglifyjs part.
The only thing I was unable to fix is the tail part for building the test file, which, IMO, could be bundled entirely by rollup with another config file.
If you need some help making this package more cross-platform, I could send you a little PR for it.
Best regards,
Itee
I have eight concurrent Node.js processes running with PM2, each running an app with this code:
const shapefile = require('shapefile')
// ....
// ....
shapefile.read(
  path.join(__dirname, 'res', value.unzippedFilenamesWithoutExtension + '.shp'),
  path.join(__dirname, 'res', value.unzippedFilenamesWithoutExtension + '.dbf'),
  { encoding: 'utf-8' }
).then(geojson => {
  regions[key].geojson = geojson
  console.log(
    `Shapefiles read from ${colors.cyan(value.unzippedFilenamesWithoutExtension + '.shp')} ` +
    `and from ${colors.cyan(value.unzippedFilenamesWithoutExtension + '.dbf')}`
  )
  callback()
}).catch((error) => {
  console.error('Error reading shapefile', error.stack)
  callback(Error(error))
})
and this error is thrown by ONLY 2 of those 8 processes; it seems something collides when reading the files.
/home/jfolpf/.pm2/logs/geoptapi-error-8.log last 15 lines:
8|geoptapi | at wrapper (/var/www/geoptapi/node_modules/async/dist/async.js:268:20)
8|geoptapi | at iterateeCallback (/var/www/geoptapi/node_modules/async/dist/async.js:413:21)
8|geoptapi | at /var/www/geoptapi/node_modules/async/dist/async.js:321:20
8|geoptapi | at /var/www/geoptapi/prepareServer.js:110:7
8|geoptapi | 10-07-2021: Error: Error: Error: Error: RangeError: Offset is outside the bounds of the DataView
8|geoptapi | at /var/www/geoptapi/server.js:27:16
8|geoptapi | at /var/www/geoptapi/prepareServer.js:18:9
8|geoptapi | at /var/www/geoptapi/node_modules/async/dist/async.js:2955:19
8|geoptapi | at wrapper (/var/www/geoptapi/node_modules/async/dist/async.js:268:20)
8|geoptapi | at iterateeCallback (/var/www/geoptapi/node_modules/async/dist/async.js:413:21)
8|geoptapi | at /var/www/geoptapi/node_modules/async/dist/async.js:321:20
8|geoptapi | at /var/www/geoptapi/node_modules/async/dist/async.js:2953:17
8|geoptapi | at /var/www/geoptapi/prepareServer.js:114:7
8|geoptapi | at wrapper (/var/www/geoptapi/node_modules/async/dist/async.js:268:20)
8|geoptapi | at iterateeCallback (/var/www/geoptapi/node_modules/async/dist/async.js:413:21)
Do you have an idea of what the cause might be?
450:return shapefile(shp$$1, dbf$$1, dbf$$1 && new TextDecoder(encoding));
I was loosely certain that #59 was the same issue I'm seeing, but now I'm not so sure, and I wanted to open a new issue just in case.
A link to the source shapefile: wsp_120hr5km_latest.zip
The shapefile in question is one of the NHC's wind speed probabilities releases. It contains multiple concentric polygons that represent the different probabilities of winds. When I convert this shapefile with mapshaper (via the CLI, but with the same result in the browser interface), it seems to come over correctly:
Output GeoJSON file is here: https://gist.github.com/rdmurphy/d283421b5b9cfa42f1fe29e390ce9230
But when I pass this same shapefile through shapefile, I get this:
GeoJSON here: https://gist.github.com/rdmurphy/3a44e9be9736ec0364df6a926571b50a
Is this the same issue as #59, or something else?
And to be clear: I totally understand if this is an accepted difference in how shapefile works! I just wasn't sure if perhaps I'd surfaced a different issue.
Thank you!
The coordinate reference system should be included, at least in the GeoJSON returned by read.
http://geojson.org/geojson-spec.html#coordinate-reference-system-objects
shapefile.read('example.shp')
  .then(data => {
    data.type
    data.bbox
    data.features
    data.crs //=> undefined
  })
There's going to be a .prj file associated with the shapefile; you might want to parse that and place the information into the GeoJSON crs. The projection file is a readable string:
GEOGCS["GCS_WGS_1984",DATUM["D_WGS_1984",SPHEROID["WGS_1984",6378137,298.257223563]],PRIMEM["Greenwich",0],UNIT["Degree",0.017453292519943295]]
Proj4 does a good job at parsing these CRS codes:
https://github.com/proj4js/proj4js/blob/master/lib/parseCode.js
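A hypothetical sketch of how a caller could stitch this together today (not part of the library; the file names and the crs shape below are illustrative only):
var fs = require('fs');
var proj4 = require('proj4');
var shapefile = require('shapefile');

// Read the GeoJSON and the sibling .prj, then attach the projection information.
Promise.all([
  shapefile.read('example.shp', 'example.dbf'),
  fs.promises.readFile('example.prj', 'utf8')
]).then(function(results) {
  var geojson = results[0], wkt = results[1];
  geojson.crs = {type: 'name', properties: {name: wkt}}; // simplistic; real crs names are identifiers
  var toWgs84 = proj4(wkt, 'EPSG:4326');                 // proj4 can parse the WKT string directly
  console.log(geojson.crs, typeof toWgs84.forward);
});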
This might sound odd, but it would actually be helpful to have a command line option --ignore-geometry which does not include geometries in the output. I'm finding that relatively often I want to examine the contents of some spatial files (e.g., statistical boundaries), and all the geometry makes it very inconvenient: huge files, hard to load in a text editor, hard to see the properties I care about.
The downside is: either the output is malformed GeoJSON (containing either no geometry attribute or a blank one), or it's not GeoJSON at all (in which case, what?). OTOH, the tool is called shp2json, not shp2geojson, so maybe that's not disastrous. :)
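One possible workaround in the meantime (hypothetical, not a built-in option): pipe the newline-delimited output through a tiny filter that nulls out each feature's geometry.
// strip-geometry.js: usage `shp2json -n file.shp | node strip-geometry.js`
var readline = require('readline');

readline.createInterface({input: process.stdin}).on('line', function(line) {
  var feature = JSON.parse(line);
  feature.geometry = null; // or delete feature.geometry for even smaller output
  process.stdout.write(JSON.stringify(feature) + '\n');
});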
When trying to read a UTF-8 encoded .dbf, non-ASCII characters unfortunately get mangled. I used the usual suspects at https://github.com/interactivethings/swiss-maps as input.
>>> dbfcat shp/swiss-cantons/swiss-cantons.dbf
header { version: 3,
date: Wed Jul 26 1995 00:00:00 GMT+0200 (CEST),
count: 26,
fields:
[ { name: 'NR', type: 'N', length: 5 },
{ name: 'ABKUERZUNG', type: 'C', length: 2 },
{ name: 'NAME', type: 'C', length: 80 },
{ name: 'COUNTRY', type: 'C', length: 20 } ] }
record [ 1, 'ZH', 'Z�rich', 'CH' ]
record [ 2, 'BE', 'Bern/Berne', 'CH' ]
record [ 3, 'LU', 'Luzern', 'CH' ]
record [ 4, 'UR', 'Uri', 'CH' ]
record [ 5, 'SZ', 'Schwyz', 'CH' ]
record [ 6, 'OW', 'Obwalden', 'CH' ]
record [ 7, 'NW', 'Nidwalden', 'CH' ]
record [ 8, 'GL', 'Glarus', 'CH' ]
record [ 9, 'ZG', 'Zug', 'CH' ]
record [ 10, 'FR', 'Fribourg', 'CH' ]
record [ 11, 'SO', 'Solothurn', 'CH' ]
record [ 12, 'BS', 'Basel-Stadt', 'CH' ]
record [ 13, 'BL', 'Basel-Landschaft', 'CH' ]
record [ 14, 'SH', 'Schaffhausen', 'CH' ]
record [ 15, 'AR', 'Appenzell Ausserrhoden', 'CH' ]
record [ 16, 'AI', 'Appenzell Innerrhoden', 'CH' ]
record [ 17, 'SG', 'St. Gallen', 'CH' ]
record [ 18, 'GR', 'Graub�nden/Grigioni', 'CH' ]
record [ 19, 'AG', 'Aargau', 'CH' ]
record [ 20, 'TG', 'Thurgau', 'CH' ]
record [ 21, 'TI', 'Ticino', 'CH' ]
record [ 22, 'VD', 'Vaud', 'CH' ]
record [ 23, 'VS', 'Valais/Wallis', 'CH' ]
record [ 24, 'NE', 'Neuch�tel', 'CH' ]
record [ 25, 'GE', 'Gen�ve', 'CH' ]
record [ 26, 'JU', 'Jura', 'CH' ]
end
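For what it's worth, the newer read/open API accepts an encoding option (as also used in another issue above); a minimal sketch with illustrative file names:
var shapefile = require('shapefile');

// Ask for UTF-8 decoding of the .dbf explicitly.
shapefile.read('swiss-cantons.shp', 'swiss-cantons.dbf', {encoding: 'utf-8'})
  .then(function(collection) {
    collection.features.forEach(function(feature) {
      console.log(feature.properties.NAME); // e.g. "Zürich" rather than mangled text
    });
  })
  .catch(function(error) { console.error(error.stack); });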
Hi. I need to import shapefiles into my RN (React Native) app and plot the data. I've tried to install this lib and use it, but it complains that the fs module dependency is not found. The fs package does not work on RN (there is a react-native-fs). I'm a little lost with this; if anyone could help me, I would appreciate it.
The reader pattern seems close to the pull stream concept described here:
http://dominictarr.com/post/149248845122/pull-streams-pull-streams-are-a-very-simple
After I checked the code, it seems there is a problem in shp/read.js:
function read() {
  var length = header.getInt32(4, false) * 2 - 4, type = header.getInt32(8, true);
  return length < 0 || (type && type !== that._type) ? skip() : that._source.slice(length).then(function(chunk) {
    return {done: false, value: type ? that._parse(view(concat(array.slice(8), chunk))) : null};
  });
}
The returned object is
{done: …, value: {type: …, coordinates: …}}
whereas it should instead be
{done: …, value: {type: …, geometry: {type: …, coordinates: …}}}
Hi,
I have a problem. I'm building a web page where the user uploads just the .shp file, and I'm using your library to convert this shapefile to GeoJSON.
The problem is that the .shp file contains 2 separate geometries, and the GeoJSON output contains only 1 of the 2 geometries.
Any idea how to resolve this?
Is there a simple way to turn a File (or Blob, I guess) from the browser into something that shapefile.open accepts? Right now I'm using
new Promise((resolve, reject) => {
  let reader = new FileReader();
  reader.onload = evt => resolve(reader.result);
  reader.readAsArrayBuffer(blob);
});
which works, but of course this loses the benefit of working stream-wise. It looks like WHATWG readable-streams support is pretty patchy, and focused on fetch rather than looking at local files.
If there is a better way to open from a local file (I get my File from <input type="file">), maybe an example in the docs would be helpful? If not, maybe it would be worthwhile to add Blob as a supported argument?
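For reference, a minimal sketch of the ArrayBuffer route, under the assumption that shapefile.open accepts an ArrayBuffer or Uint8Array (file-at-once rather than streamed):
// Read the File/Blob fully, then hand the buffer to shapefile.open.
async function openBlob(blob) {
  const buffer = await blob.arrayBuffer(); // modern-browser shorthand for the FileReader dance above
  return shapefile.open(buffer);
}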
This works really well, thank you! It would be helpful to have a way of knowing the total feature count, in order to show percentages to the user when processing files.
I don't know much about how shapefiles work, but ogrinfo can provide this info instantly, so presumably it's stored in metadata and doesn't require manually counting.
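One possible approach, sketched under the assumption that the sibling .shx and .dbf files are present: the .shx index is a 100-byte header followed by one fixed 8-byte entry per record, and the .dbf header stores its record count at byte offset 4, so either gives the total without parsing any features.
var fs = require('fs');

// Record count from the .shx index file size.
function countFromShx(shxPath) {
  return (fs.statSync(shxPath).size - 100) / 8;
}

// Record count from bytes 4-7 of the .dbf header (little-endian uint32).
function countFromDbf(dbfPath) {
  var fd = fs.openSync(dbfPath, 'r');
  var header = Buffer.alloc(8);
  fs.readSync(fd, header, 0, 8, 0);
  fs.closeSync(fd);
  return header.readUInt32LE(4);
}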
I really love shp2json (and avoiding the massive dependency that is GDAL), but the fact it doesn't support reprojection is often a pain. (Particularly when you write a script that assumes that all your shapefiles will be EPSG:4326, then one arrives in a different projection and now you have to switch to OGR2OGR or something.)
Mike addressed the issue, saying " This is just a Shapefile parser, so it wouldn’t be the right place to implement coordinate system transformations."
I would like to argue against this :)
First, it's not just a Shapefile parser: shp2json is clearly a Shapefile-to-GeoJSON converter. And currently, it's producing GeoJSON that doesn't comply with the RFC. If a library or tool produces GeoJSON as output, it should produce that GeoJSON in EPSG:4326 by default (and maybe allow retaining the original projection with some special flag).
Secondly, it definitely seems like the job of this library to parse the .prj file if there is one, at least.
Sure, it's possible to use workarounds to get around this limitation, but they're not convenient, particularly when you're dealing with a Shapefile whose projection you don't know ahead of time. That makes this kind of command line hard to write:
shp2json foo.shp | reproject --to=EPSG:4326 --from=...what?
(Supporting reprojection to arbitrary CRS's would definitely be beyond the scope of the library, otoh.)
Testing for .shp or .dbf at the end isn't a good test for URLs. For example, it means you can't pass a URL to an Observable FileAttachment (because those never have the .shp extension, and adding the .shp extension breaks the URL).
I think we should probably get rid of this magic. It might be okay to replace .shp with .dbf if the dbf argument is undefined, but otherwise I think we should leave the URLs as-is.
When running a batch of shapefiles, I will sometimes get a null geometry, which according to the spec is allowed. In a collection of polygons this ends processing once the null is hit and throws the 'Encountered unknown shape type (0)' message.
Some shapefiles require conversion to EPSG:4326. It would be nice to detect these and convert the coordinates automatically from the input projection.
According to the GeoJSON spec (RFC 7946), outer rings of polygons MUST follow the right-hand rule, i.e., be counter-clockwise, whereas the output I see creates polygons with rings that use clockwise winding.
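A minimal rewinding sketch (not something the library does), using the signed shoelace area to detect ring orientation and reverse rings that do not match RFC 7946:
// Positive signed area here means counter-clockwise (in a y-up plane).
function signedArea(ring) {
  var sum = 0;
  for (var i = 0, n = ring.length - 1; i < n; ++i) {
    sum += (ring[i + 1][0] - ring[i][0]) * (ring[i + 1][1] + ring[i][1]);
  }
  return -sum / 2;
}

// Outer ring (index 0) should be CCW, holes CW, per RFC 7946.
function rewindPolygon(coordinates) {
  return coordinates.map(function(ring, i) {
    var ccw = signedArea(ring) > 0;
    return ccw === (i === 0) ? ring : ring.slice().reverse();
  });
}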
I've read through the command-line cartography posts and am still puzzled by this.
My shapefiles of regions in BC are here https://github.com/Mbrownshoes/shp_example
To convert and simplify, I'm using the following command:
geo2topo \
regions=<(shp2json ABMS_RD_polygon.shp) \
| toposimplify -P 0.1 \
| topoquantize 1e5 \
> bc.json
When I plot with d3 I'm using this code, which works for other topojson files you have posted, such as the one in this example (among others):
https://bl.ocks.org/mbostock/5562380/825c2889f93296bcb78e02a1f85a0981554c1ea7
However, I'm not seeing anything plotted. I do see a path when I inspect.
My resulting topojson file has a bbox as well as a transform at the start of the file:
{"type":"Topology","bbox":[273366.8019999984,359771.84100000095,1870586.801,1735720.873999999],"transform":{"scale":[15.972359713597154,13.759627926279242],"translate":[273366.8019999984,359771.84100000095]}
If I look at my topojson file in mapshaper it looks fine.
I'm puzzled by why this isn't working. Here's the d3 code to plot.
var svg = d3.select("svg"),
    width = +svg.attr("width"),
    height = +svg.attr("height");

// const projection = d3.geoAlbers()
//     .rotate([126, -10])
//     .center([7, 44])
//     .parallels([50, 58])
//     .scale(1970)
//     .translate([960 / 2, 600 / 2]);

const path = d3.geoPath()
    // .projection(projection)

d3.queue()
    .defer(d3.json, "mx.json")
    .await(ready);

function ready(error, mx) {
  if (error) throw error;
  console.log(mx)

  var color = d3.scaleThreshold();

  svg.append("path")
      .datum(topojson.mesh(mx, mx.objects.regions, (a, b) => a !== b))
      .attr("fill", "none")
      .attr("stroke", "black")
      .attr("stroke-linejoin", "round")
      .attr("d", path);
}
Hi,
It is a good library. When I try to read a shapefile that contains multiple and nested polygons, I get the result as a flat GeoJSON list all at one level. How can I identify which one is the hole of which one? And how can I know which ones are within the same block/record?
shp.open(filePath)
  .then(source => source.read()
    .then(function log(result) {
      if (result.done) return;
      console.log(result.value);
      return source.read().then(log);
    }))
  .catch(error => console.error(error.stack));
Results:
[
{ type: 'Feature', ...}
{ type: 'Feature', ...}
...
]
Thanks a lot.
John
Hi! We think we have found an edge case. We have a file with a numeric column "Blend - *" that, when we run it through shapefile, returns all NULL values in the property field instead of the true values. The values can be read properly by other software such as mapshaper or our C++ program, but not by this library.
The shape points get converted correctly, and the property "Blend - *" appears on the result object, just not the associated property values.
Code for testing:
const shapefile = require('shapefile')

async function parseShapefile(absPath) {
  const geoJson = await shapefile.read(absPath)
  if (!geoJson) { return {} }
  // Find numeric columns to use for rate selection
  for (const property of Object.keys(geoJson.features[0].properties)) {
    let numeric = true
    for (const feature of geoJson.features) {
      // Use both global isNaN() and parseFloat() to determine if we have a numeric column.
      // isNaN() catches most non-numeric columns, but not null
      // parseFloat(null) == NaN, but a column value such as "32 UAN" will return a number (32)
      if (isNaN(feature.properties[property]) || Number.isNaN(parseFloat(feature.properties[property]))) {
        numeric = false
      } else {
        console.log('Found Number')
      }
    }
    if (numeric) {
      console.log(`Numeric Column: ${property}`)
    }
  }
}
File in question:
TID_822919_OID_2363808_Raven_Viper_2021-04-26-03-53-54.zip
It looks like trimming white space at the end was intentional; I suggest the following code to trim the null bytes as well.
export default function(value) {
  return value.replace(/\0/g, '').trim() || null;
}
It seems like the current implementation is dependent on node.js. Do you have any thoughts on providing support for a pure client side parser?
I thought about this a tiny bit, and it seems like one could use the new TypedArrays in JavaScript to do this. The DataView object allows one to specify the endianness when reading from the byte array, which is something we need for reading shapefiles.
http://www.khronos.org/registry/typedarray/specs/latest/#8
Perhaps if the parsing of the features were moved into stream-independent functions, we could dynamically support both a stream client for Node and one for the browser.
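For illustration, a tiny sketch of reading the mixed-endian .shp header with a DataView in the browser (field offsets per the ESRI spec):
// The .shp header mixes big-endian and little-endian fields.
function readShpHeader(arrayBuffer) {
  var view = new DataView(arrayBuffer);
  return {
    fileCode: view.getInt32(0, false),    // big-endian, always 9994
    fileLength: view.getInt32(24, false), // big-endian, in 16-bit words
    version: view.getInt32(28, true),     // little-endian, always 1000
    shapeType: view.getInt32(32, true),   // little-endian
    bbox: [
      view.getFloat64(36, true), view.getFloat64(44, true), // xmin, ymin
      view.getFloat64(52, true), view.getFloat64(60, true)  // xmax, ymax
    ]
  };
}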
Or maybe this is out of scope for this project so feel free to close if so.
Thanks.
Would it be possible to have shapefile.reader() support an arbitrary data stream as an argument, instead of only a filename?
For example:
$ shp2json -n example.shp | head -n2
{"type":"Feature","properties":{"FID":0},"geometry":{"type":"Point","coordinates":[1,2]}}
{"type":"Feature","properties":{"FID":1},"geometry":{"type":"Point","coordinates":[3,4]}}
events.js:160
throw er; // Unhandled 'error' event
^
Error: write EPIPE
at exports._errnoException (util.js:1012:11)
at WriteWrap.afterWrite (net.js:793:14)
If stdout throws an EPIPE exception we should just swallow it silently and abort.
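A minimal sketch of that kind of handling (not the tool's current code):
// Swallow EPIPE on stdout (e.g. when piping into `head`) and exit quietly.
process.stdout.on('error', function(error) {
  if (error.code === 'EPIPE') process.exit(0);
  throw error;
});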
Normally this object is ignored because the shp parser ends correctly; but you can notice it if you parse the dbf file separately.
Hey there! I'm wondering if there is a way to pipe the output from shapefile to another transform stream, like:
const readableShape = shapefile.readStream('path/to/file.shp');

readableShape
  .pipe(myTransformStream)
  .pipe(myTransformStream2)
  .pipe(myWritableStream);
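In the meantime, a hypothetical adapter (not part of the library) can wrap the promise-based source in an async generator and expose it as an object-mode readable stream:
const {Readable} = require('stream');
const shapefile = require('shapefile');

// Yield one GeoJSON feature at a time from the promise-based source.
async function* features(path) {
  const source = await shapefile.open(path);
  let result;
  while (!(result = await source.read()).done) yield result.value;
}

Readable.from(features('path/to/file.shp')) // Node 12+
  .pipe(myTransformStream)
  .pipe(myTransformStream2)
  .pipe(myWritableStream);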
I ran into the error below when running npm test:
> shapefile@… test /Users/john6251/github/shapefile
> tape 'test/**/*-test.js'
/Users/john6251/github/shapefile/dbf/index.js:1
(function (exports, require, module, __filename, __dirname) { import slice from "slice-source";
^^^^^^
SyntaxError: Unexpected reserved word
tape doesn't appear to be happy being pointed straight at the ES6 source (at least on my box), but since I don't have any trouble running d3 tests locally that follow the same pattern, I'm stumped as to what I could have misconfigured.
When I'm converting a shapefile that has a null value in its attribute table, I'm seeing that as a 0 in the resulting GeoJSON. Here are the files and steps to replicate: https://github.com/mhkeller/shapefile-bug