sbstp / attohttpc
Rust lightweight HTTP 1.1 client
Home Page: https://docs.rs/attohttpc/
License: Mozilla Public License 2.0
attohttpc will use an unbounded amount of memory if the server sends the same header over and over with different values. This can be used to cause a denial of service via memory exhaustion.
You can reproduce the issue by running the following shell script on Linux and connecting via attohttpc to localhost:8080
#!/bin/bash
(
echo -e "HTTP/1.1 200 OK\r"
i=0
while true; do
echo -e "Header: foobar$i\r"
((i=i+1))
done
) | nc -l localhost 8080
Tested using this code for attohttpc.
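A common mitigation, independent of attohttpc's internals, is to cap the total number of header bytes accepted before parsing. A minimal std-only sketch of the idea (the function name and cap value are hypothetical, not attohttpc code):

```rust
use std::io::{BufRead, Error, ErrorKind};

/// Read HTTP header lines from `reader`, failing once the total size
/// exceeds `max_bytes` instead of growing without bound.
fn read_headers_capped<R: BufRead>(reader: &mut R, max_bytes: usize) -> std::io::Result<Vec<String>> {
    let mut headers = Vec::new();
    let mut total = 0usize;
    loop {
        let mut line = String::new();
        let n = reader.read_line(&mut line)?;
        total += n;
        if total > max_bytes {
            return Err(Error::new(ErrorKind::InvalidData, "headers too large"));
        }
        let trimmed = line.trim_end();
        if n == 0 || trimmed.is_empty() {
            return Ok(headers); // blank line (or EOF) ends the header section
        }
        headers.push(trimmed.to_string());
    }
}

fn main() {
    // Simulate a malicious server that never stops sending headers:
    let endless: String = (0..10_000).map(|i| format!("Header: foobar{}\r\n", i)).collect();
    let mut cursor = std::io::Cursor::new(endless);
    let res = read_headers_capped(&mut cursor, 8192);
    assert!(res.is_err()); // rejected instead of exhausting memory
    println!("oversized header section rejected");
}
```

With a cap in place, the endless-header script above produces a clean error instead of unbounded allocation.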
When querying some websites, such as http://amvnews.ru/, attohttpc fails with the following error:
Io Error: invalid gzip header
21 websites out of the top million from Feb 3 Tranco list are affected.
Tested using this code. Test tool output from all affected websites: atto-invalid-gzip-header.tar.gz
I have briefly mentioned this in #84, but it appears to be a distinct issue and it's probably better to track it separately.
For development purposes it can be useful to ignore certificate requirements (ie, expired certificates, mismatching hostnames, etc.)
It would be useful if the RequestBuilder had an option to disable certificate verification, something like danger_allow_invalid_certificates().
I am trying to get the redirected location from a request that returns a 302 status code with a Location header. Using response.headers().get("location") returns None.
Proxy support plan:
On some websites, e.g. http://finprison.net, attohttpc fails with the following error:
Io Error: unexpected end of file
Curl also behaves weirdly for this particular website, but Firefox works fine, so it's probably a weird edge case.
13 websites out of the top million from Feb 3 Tranco list are affected.
Tested using this code. Test tool output from all affected websites: atto-unexpected-eof.tar.gz
On some websites, e.g. http://tfd.org.tw, attohttpc fails with the following error:
InvalidResponse: invalid status code
Firefox and curl work fine.
15 websites out of the top million from Feb 3 Tranco list are affected.
Tested using this code. Test tool output from all affected websites: atto-invalid-status-code.tar.gz
Some important types are missing a Debug implementation. Debug is considered essential and can easily be derived.
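Deriving it is typically a one-line change. An illustrative type (not one from attohttpc):

```rust
// Debug can be derived as long as all fields implement it.
#[derive(Debug)]
struct ResponseInfo {
    status: u16,
    reason: String,
}

fn main() {
    let info = ResponseInfo { status: 200, reason: "OK".to_string() };
    // `{:?}` now works, which also makes the type usable with dbg!, assert_eq!, etc.
    println!("{:?}", info);
}
```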
Hi.
Do you have plans to allow specifying certificates (.pem, .pfx, .p12, etc.) for client authentication?
I made a small change in streams.rs (line 51) just to test a connection which requires a PKCS12 certificate:
...
#[cfg(feature = "tls")]
fn connect_tls(
host: &str,
port: u16,
connect_timeout: Duration,
read_timeout: Duration,
) -> Result<TlsStream<TcpStream>> {
use native_tls::Identity;
let mut builder = TlsConnector::builder();
let buf = std::fs::read("/home/user/certificate.pfx")?;
let pkcs12 = Identity::from_pkcs12(&buf, "123456").unwrap();
builder.identity(pkcs12);
builder.danger_accept_invalid_certs(true);
let connector = builder.build()?;
let stream = BaseStream::connect_tcp(host, port, connect_timeout, read_timeout)?;
let tls_stream = match connector.connect(host, stream) {
Ok(stream) => stream,
Err(HandshakeError::Failure(err)) => return Err(err.into()),
Err(HandshakeError::WouldBlock(_)) => panic!("socket configured in non-blocking mode"),
};
Ok(tls_stream)
}
...
and it worked fine.
(The danger_accept_invalid_certs(true) is related to #38.)
Disabling certificate verification entirely (#38) is sometimes a path you don't want to take, even for development purposes. A much safer alternative is to trust an additional root certificate. It would be nice to implement this option in attohttpc, which should be easy since native-tls already implements the add_root_certificate method.
Please note that rustls users already have this option in attohttpc through the client_config method, since the ClientConfig object has a RootCertStore field.
When the server sends a very large amount of headers, attohttpc panics with the following message:
thread 'main' panicked at 'requested capacity too large', /home/shnatsel/.cargo/registry/src/github.com-1ecc6299db9ec823/http-0.2.3/src/header/map.rs:1550:9
Backtrace:
0: std::panicking::begin_panic
1: http::header::map::HeaderMap<T>::grow
2: http::header::map::HeaderMap<T>::append
3: attohttpc::parsing::response::parse_response
4: attohttpc::request::PreparedRequest<B>::send
5: attohttpc::request::builder::RequestBuilder<B>::send
6: attohttpc_smoke_test::main
You can reproduce the issue by running the following shell script on Linux and connecting via attohttpc to localhost:8080
#!/bin/bash
(
echo -e "HTTP/1.1 200 OK\r"
i=0
while true; do
echo -e "Header$i: foobar\r"
((i=i+1))
done
) | nc -l localhost 8080
Tested using this code for attohttpc.
Currently danger_accept_invalid_certs is not available for tls-rustls. I would suggest adding a helper function to easily disable certificate validation for this feature set.
I think it would be nice to have cookie support.
Some websites, such as hajime.us, fail to load using attohttpc: Io Error: corrupt deflate stream. They load fine using Firefox and the curl command-line tool.
Tested using this code. Test tool output from all affected websites: attohttpc-deflate-corrupt-stream.tar.gz
40 websites out of the top million from Feb 3 Tranco list are affected.
I suspect this is an issue with the underlying DEFLATE implementation, but assistance in isolating the failure (e.g. dumping the DEFLATE stream so I could report a bug against miniz_oxide) would be appreciated.
Specifying a timeout for a request appears to silently truncate the response body if it is still being received when the timeout expires.
Example:
http://httpbin.org/drip will return 10 bytes over 2 seconds:
$ time curl "http://httpbin.org/drip"; echo
**********
real 0m2.005s
With a 1 second timeout, curl returns an error:
$ time curl -m 1 "http://httpbin.org/drip"; echo
curl: (28) Operation timed out after 1000 milliseconds with 4 out of 10 bytes received
****
real 0m1.008s
This is an attohttpc program that I expected to do the same thing:
use std::time::Duration;
use attohttpc::Result;
fn main() -> Result {
let resp = attohttpc::get("http://httpbin.org/drip")
.timeout(Duration::from_secs(1))
.send()?;
println!("Status: {:?}", resp.status());
println!("Body:\n{}", resp.text()?);
Ok(())
}
But instead of returning an error, the result is successful, and text is truncated:
$ time cargo run --example drip
Status: 200
Body:
*****
real 0m1.178s
I was just recently investigating an issue caused by https://en.wikipedia.org/wiki/TCP_delayed_acknowledgment and it occurred to me to investigate latency optimization in the project I use attohttpc in.
I think right now the library doesn't reuse the connection at all. Should there be a way to do it in the future or is it already out of scope of this library?
Since there's no connection reuse, is delayed ACK etc. a problem at all? Would it make sense for attohttpc to set (or allow setting) nodelay + use a large buffer and flush it at the end?
https://doc.rust-lang.org/std/net/struct.TcpStream.html#method.set_nodelay
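For reference, setting the flag with std alone looks like this (loopback-only sketch, just to show the mechanics):

```rust
use std::net::{TcpListener, TcpStream};

fn main() -> std::io::Result<()> {
    // Local listener just so the connect below succeeds.
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?;

    let stream = TcpStream::connect(addr)?;
    // Disable Nagle's algorithm so small writes are sent immediately
    // instead of waiting to be coalesced with later ones.
    stream.set_nodelay(true)?;
    assert!(stream.nodelay()?);
    println!("TCP_NODELAY enabled on connection to {}", stream.peer_addr()?);
    Ok(())
}
```

Whether nodelay alone helps here depends on the write pattern; buffering the full request and flushing once, as suggested above, avoids the small-write problem in the first place.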
I started working on a web crawler to test the library with multiple servers and multiple websites and I've found a few issues.
For example this page gets a connection reset error. It seems like the page wants an Accept: */* header; otherwise it doesn't return anything.
I used netcat to compare with what requests (the python library) is doing.
GET / HTTP/1.1
Host: localhost:9999
User-Agent: python-requests/2.21.0
Accept-Encoding: gzip, deflate
Accept: */*
Connection: keep-alive
Seems like we're missing the Accept and User-Agent headers.
I will add the other issues once I figure them out.
We're trying to convert our code in lemmy to use attohttpc (we were using isahc previously), but when doing a lot of concurrent attohttpc gets and posts (in actix threads), we're getting a lot of WouldBlock errors:
Internal Server Error: Error(Io(Os { code: 11, kind: WouldBlock, message: "Resource temporarily unavailable" })), in apub::fetcher::fetch_remote_object()
We also tried disabling the tls (since that's the only instance of wouldblock we could find), but it didn't help.
For some reason isahc doesn't have these issues.
Related issue : LemmyNet/lemmy#840
I'd like to propose methods to be able to read fields from RequestBuilder before it is prepared (or sent). These would mirror the methods available in PreparedRequest:
fn method(&self) -> &Method;
fn headers(&self) -> &HeaderMap;
fn body(&self) -> &[u8];
(and while these names don't clash with RequestBuilder's current API, I'd personally prefix them as get_method(), get_headers(), and get_body())
The goal of these is to allow users to build their own extension traits over RequestBuilder. For example, I'm working on an interface that requires signing each request with a cryptographic key, and adding that signature to the request's headers. The most convenient way would be to do something like:
trait RequestBuilderExt {
    fn sign_message(self) -> Self;
}

impl RequestBuilderExt for RequestBuilder {
    fn sign_message(self) -> Self {
        // Sign the message using some combination of the
        // request's method, url, and body
        let signature = ..;
        // Add the signature header to the request;
        // header() consumes the builder and returns it
        self.header("ACCESS-SIGNATURE", signature)
    }
}
which would let my users simply call .sign_message() on a RequestBuilder prior to sending it.
As it stands right now, I can't read existing fields from a RequestBuilder, and I can't mutate a PreparedRequest.
Would these new APIs be welcome to attohttpc?
(and, thanks for writing attohttpc, of all http clients out there it has my favorite API)
On some websites, e.g. http://hk-bbc.com, attohttpc fails with the following error:
Http Error: invalid HTTP header name
Firefox and curl work fine.
96 websites out of the top million from Feb 3 Tranco list are affected.
Tested using this code. Test tool output from all affected websites: atto-invalid-header-name.tar.gz
I got the following panic on 87 out of 1,000,000 invocations:
thread 'main' panicked at 'socket configured in non-blocking mode', /home/sdavydov/attohttpc/src/streams.rs:56:51
stack backtrace:
0: backtrace::backtrace::libunwind::trace
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/libunwind.rs:88
1: backtrace::backtrace::trace_unsynchronized
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/mod.rs:66
2: std::sys_common::backtrace::_print_fmt
at src/libstd/sys_common/backtrace.rs:77
3: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt
at src/libstd/sys_common/backtrace.rs:59
4: core::fmt::write
at src/libcore/fmt/mod.rs:1057
5: std::io::Write::write_fmt
at src/libstd/io/mod.rs:1426
6: std::sys_common::backtrace::_print
at src/libstd/sys_common/backtrace.rs:62
7: std::sys_common::backtrace::print
at src/libstd/sys_common/backtrace.rs:49
8: std::panicking::default_hook::{{closure}}
at src/libstd/panicking.rs:195
9: std::panicking::default_hook
at src/libstd/panicking.rs:215
10: std::panicking::rust_panic_with_hook
at src/libstd/panicking.rs:472
11: std::panicking::begin_panic
12: attohttpc::streams::BaseStream::connect
13: attohttpc::request::RequestBuilder<B>::send
14: attohttpc_test::main
15: std::rt::lang_start::{{closure}}
16: std::rt::lang_start_internal::{{closure}}
at src/libstd/rt.rs:52
17: std::panicking::try::do_call
at src/libstd/panicking.rs:296
18: __rust_maybe_catch_panic
at src/libpanic_unwind/lib.rs:79
19: std::panicking::try
at src/libstd/panicking.rs:272
20: std::panic::catch_unwind
at src/libstd/panic.rs:394
21: std::rt::lang_start_internal
at src/libstd/rt.rs:51
22: std::rt::lang_start
23: __libc_start_main
24: _start
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
Code:
use std::env;
use std::time::Duration;
fn main() -> Result<(), Box<dyn std::error::Error>> {
let url = format!("http://{}", env::args().skip(1).next().expect("No URL provided"));
println!("GET {}", url);
const TIMEOUT: Duration = Duration::from_secs(5);
let res = attohttpc::get(url)
.connect_timeout(TIMEOUT)
.read_timeout(TIMEOUT)
.send();
if res.is_err() {
println!("Library error: {:?}", res);
}
println!("\nDone.");
if res.is_err() {
Err(Box::new(std::io::Error::new(std::io::ErrorKind::Other, "Lib error")))
} else {
Ok(())
}
}
Invoked with
cut -d ',' -f 3 majestic-million-31dec2019.csv | parallel -j 30 /path/to/binary/with/the/above/code
majestic-million.csv obtained here.
Tested on Ubuntu 18.04, attohttpc commit aeb2bd2
It doesn't look like sessions support POST with JSON bodies, which appears to be implemented for regular POST requests without sessions. The following code gives the compile error: no method named json found for struct RequestBuilder in the current scope
let mut sess = attohttpc::Session::new();
let request = match method {
HTTP::GET => sess.get(endpoint),
HTTP::POST => sess.post(endpoint),
HTTP::DELETE => sess.delete(endpoint),
};
let res = request
.json(payload)?
.send()?;
It seems that PreparedRequest.send() is the only public way to do a submission.
But, it does too much for me: I have specific (mbedtls) needs that I need to configure, and then I'd like to just use the Read/Write traits on it, with attohttpc doing the work of creating and parsing requests.
(for instance, in some cases, I can accept a 301 redirect, but it can't change the authority unless it's from a CA in a trusted list)
First thought: just make parsing::parse_request public so I can call it directly; a simple change to lib.rs to add "pub".
Second thought: refactor send() so that I can provide the stream directly.
My first attempt to do this resulted in lifetime issues that I haven't solved yet.
attohttpc is now two versions behind the latest version of rustls (0.18 vs. 0.20). It would be nice to have a new release that depends on the latest version.
I haven't looked at how much effort an upgrade would take.
I have code that uses the Read trait to stream the response data, but in order to get that from this library I had to jump through Request::split: request.split().2. Is there a reason that a simple Request::reader() isn't exposed?
When a Response is received, it can be checked for success using the is_success() method in order to decide on further processing. It may be very helpful to provide a method that converts a Response into a Result<Response> based on the StatusCode.
Other HTTP client libraries provide similar facilities and they are very handy. For example, Python requests has raise_for_status and reqwest has error_for_status.
What do you think?
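A possible shape for such a method, reduced to plain status codes to keep it self-contained (the function name mirrors reqwest's; this is a sketch, not attohttpc API):

```rust
/// Map a status code to a Result: client (4xx) and server (5xx)
/// errors become Err, everything else passes through.
fn error_for_status(status: u16) -> Result<u16, String> {
    if (400..600).contains(&status) {
        Err(format!("HTTP error status: {}", status))
    } else {
        Ok(status)
    }
}

fn main() {
    assert_eq!(error_for_status(200), Ok(200));
    assert!(error_for_status(404).is_err());
    assert!(error_for_status(503).is_err());
    println!("client and server errors mapped to Err");
}
```

In attohttpc the real version would presumably consume the Response and return it on success, so calls chain naturally after send().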
As suggested in the smoke testing blog post, we could offer to use the rustls TLS backend instead of native-tls with a feature flag.
The crate name is very hard to remember.
It also doesn't look like a name of a high-level HTTP client: if I saw it on crates.io I'd expect it to be some low-level implementation detail, not something I can use directly. For reference, other high-level clients are called something like "reqwest", "isahc", "ureq".
It might be a good idea to use a more memorable name for this crate, like "hytt" or "ht2p".
attohttpc currently does not allow specifying a timeout for the entire request (i.e. until the request body is finished transferring), which means an established connection will hang indefinitely if the remote host stops replying. This leads to a resource leak, which may cause denial of service. This post explains the problem in detail: https://medium.com/@nate510/don-t-use-go-s-default-http-client-4804cb19f779
While it's not necessary to set such a timeout by default (most other HTTP clients don't), it must be possible to configure it for attohttpc to be usable for e.g. building reliable services that query a remote API over HTTP, or for defending against DoS attacks where the remote host sends 1 byte per second, keeping the connection open indefinitely.
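One way to implement such a total deadline on top of per-read timeouts is to shrink the socket's read timeout before every read; a std-only sketch (the function name is hypothetical):

```rust
use std::io::{Error, ErrorKind, Read};
use std::net::TcpStream;
use std::time::{Duration, Instant};

/// Read the whole body, but fail once `deadline` total has elapsed,
/// even if the peer keeps trickling bytes to reset a naive per-read timer.
fn read_all_with_deadline(stream: &mut TcpStream, deadline: Duration) -> std::io::Result<Vec<u8>> {
    let start = Instant::now();
    let mut body = Vec::new();
    let mut buf = [0u8; 4096];
    loop {
        let remaining = deadline
            .checked_sub(start.elapsed())
            .ok_or_else(|| Error::new(ErrorKind::TimedOut, "request deadline exceeded"))?;
        // The per-read timeout is whatever is left of the total budget.
        stream.set_read_timeout(Some(remaining))?;
        match stream.read(&mut buf) {
            Ok(0) => return Ok(body), // EOF: body complete
            Ok(n) => body.extend_from_slice(&buf[..n]),
            Err(e) if e.kind() == ErrorKind::WouldBlock || e.kind() == ErrorKind::TimedOut => {
                return Err(Error::new(ErrorKind::TimedOut, "request deadline exceeded"));
            }
            Err(e) => return Err(e),
        }
    }
}

fn main() -> std::io::Result<()> {
    // A "server" that accepts the connection but never sends anything.
    let listener = std::net::TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?;
    let handle = std::thread::spawn(move || {
        let (_conn, _) = listener.accept().unwrap();
        std::thread::sleep(Duration::from_millis(500)); // hold the connection open
    });
    let mut stream = TcpStream::connect(addr)?;
    let err = read_all_with_deadline(&mut stream, Duration::from_millis(100)).unwrap_err();
    assert_eq!(err.kind(), ErrorKind::TimedOut);
    handle.join().unwrap();
    println!("gave up after the total deadline, as intended");
    Ok(())
}
```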
Fetching google.com without enabling the charsets feature fails with the error: Io Error: stream did not contain valid UTF-8. This is strange because this works fine in e.g. ureq without charsets support and without the encoding_rs dependency being present in the tree at all.
Code used for tests: https://github.com/Shnatsel/rust-http-clients-smoke-test
In the repository both clients have encoding support enabled; I've manually disabled it prior to this test.
I noticed while working with attohttpc that when I set the level of my log-compatible logger to debug and send messages using attohttpc, the library's debug logs clutter my own application's logs and there is no easy way to filter them out.
If a target were added to all log macro calls, one could filter out those targets when implementing one's own logger.
When turning on rustls instead of tls-rustls, I always get an InvalidBaseUrl error. I realize that rustls is the name of the optional dependency. It would be good to abort compilation in this situation with cfg! and std::compile_error!.
On some websites, e.g. http://futureuae.com, attohttpc fails with the following error:
InvalidResponse: missing or invalid location header
Firefox and curl work fine.
96 websites out of the top million from Feb 3 Tranco list are affected.
Tested using this code. Test tool output from all affected websites: atto-invalid-location-header.tar.gz
Initially suggested by @adamreichold here.
Generally, allow people to hook into the logic of the library to customize the behavior. A few places where this might be useful:
While switching my code from IPv4 only to mixed mode as well as IPv6 only mode, I encountered a strange bug in attohttpc.
At first, I thought it would be as easy as replacing strings such as "http://127.0.0.1:3000/fancy_rest" with "http://[::1]:3000/fancy_rest". The latter works in my browser and in curl. Even the url crate can parse it correctly.
But when I use attohttpc, I get the following error on Linux:
Error(Io(Custom { kind: Other, error: "failed to lookup address information: Name or service not known" }))
On other systems it might also be displayed like the following:
Error(Io(Custom { kind: Other, error: "failed to lookup address information: Temporary failure in name resolution" }))
It is probably trying to interpret [::1] as a hostname instead of an IpAddr.
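The bracketed URL form has to be unwrapped before it can be fed to the address parser; a std-only sketch of the distinction (the helper name is hypothetical):

```rust
use std::net::{IpAddr, Ipv6Addr};

/// Parse a host that may be a bracketed IPv6 literal, a bare IP, or a name.
fn host_as_ip(host: &str) -> Option<IpAddr> {
    // "[::1]" is how IPv6 literals appear inside URLs; the brackets
    // must be stripped before parsing the address itself.
    let bare = host.strip_prefix('[').and_then(|h| h.strip_suffix(']')).unwrap_or(host);
    bare.parse().ok()
}

fn main() {
    // The bracketed form is not a valid IpAddr on its own...
    assert!("[::1]".parse::<IpAddr>().is_err());
    // ...but unwrapping the brackets recovers the literal.
    assert_eq!(host_as_ip("[::1]"), Some(IpAddr::V6(Ipv6Addr::LOCALHOST)));
    assert_eq!(host_as_ip("127.0.0.1"), Some(IpAddr::V4([127, 0, 0, 1].into())));
    assert_eq!(host_as_ip("example.com"), None); // a name; needs DNS resolution
    println!("bracketed IPv6 literals handled");
}
```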
Unfortunately, attohttpc is not yet available on the Rust playground; otherwise, I would have shown a minimal example there.
And totally unrelated: thank you for providing a pretty neat, slim and efficient http client library without async+await :D
Is there anything like this to get each chunk separately?
At the moment when using the tls-rustls back end, it's checking that every host is a valid DNS name, but IP addresses are not valid DNS names and thus it fails here:
attohttpc/src/tls/rustls_impl.rs
Line 63 in 147f7ae
Presumably we need to skip DNS name validation in this case?
On 16 websites of the top million, attohttpc with a 10-second connect timeout and a 30-second full timeout does not complete the request in 60 seconds.
Tested using this code. Test tool output from all affected websites: attohttpc-hangs.tar.gz
ureq has the same issue, and the author has noticed that some of the affected websites have very long redirect chains. It is possible that the full request timeout is reset on every HTTP redirection.
When querying some websites, such as http://movie-japan.com, attohttpc fails with the following error:
Io Error: InvalidResponse: invalid chunk size
176 websites out of the top million from Feb 3 Tranco list are affected.
Tested using this code. Test tool output from all affected websites: atto-invalid-chunk-size.tar.gz
On some websites, e.g. http://opentrainer.ru, attohttpc fails with the following error:
InvalidResponse: invalid header
Firefox and curl work fine.
32 websites out of the top million from Feb 3 Tranco list are affected.
Tested using this code. Test tool output from all affected websites: atto-invalid-header.tar.gz
It'd be nice if there were a way to send a request body without needing to buffer the entire thing in memory first.
When parsing a 304 Not Modified response, attohttpc returns the error InvalidResponse(LocationHeader).
Version:
attohttpc = { version = "0.10.1" }
Code:
fn main() -> Result<(), attohttpc::Error> {
let resp = attohttpc::get("http://httpbin.org/status/304").send()?;
println!("{}", resp.text()?);
Ok(())
}
Output:
Error: Error(InvalidResponse(LocationHeader))
When the charsets feature is enabled, Response.text panics in this program:
fn main() -> Result<(), attohttpc::Error> {
let resp = attohttpc::get("https://rust-lang.org/").send()?;
println!("{}", resp.text()?); // panic here
Ok(())
}
The panic does not occur if the charsets feature is disabled, or if Response.text_utf8 is used instead.
Version:
attohttpc = { version = "0.8.0", features = ["charsets"] }
Backtrace:
thread 'main' panicked at 'slice index starts at 12288 but ends at 8192', src/libcore/slice/mod.rs:2670:5
stack backtrace:
0: backtrace::backtrace::libunwind::trace
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/libunwind.rs:88
1: backtrace::backtrace::trace_unsynchronized
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/mod.rs:66
2: std::sys_common::backtrace::_print_fmt
at src/libstd/sys_common/backtrace.rs:77
3: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt
at src/libstd/sys_common/backtrace.rs:61
4: core::fmt::write
at src/libcore/fmt/mod.rs:1028
5: std::io::Write::write_fmt
at src/libstd/io/mod.rs:1412
6: std::sys_common::backtrace::_print
at src/libstd/sys_common/backtrace.rs:65
7: std::sys_common::backtrace::print
at src/libstd/sys_common/backtrace.rs:50
8: std::panicking::default_hook::{{closure}}
at src/libstd/panicking.rs:188
9: std::panicking::default_hook
at src/libstd/panicking.rs:205
10: std::panicking::rust_panic_with_hook
at src/libstd/panicking.rs:464
11: std::panicking::continue_panic_fmt
at src/libstd/panicking.rs:373
12: rust_begin_unwind
at src/libstd/panicking.rs:302
13: core::panicking::panic_fmt
at src/libcore/panicking.rs:139
14: core::slice::slice_index_order_fail
at src/libcore/slice/mod.rs:2670
15: <core::ops::range::Range<usize> as core::slice::SliceIndex<[T]>>::index_mut
at /rustc/73528e339aae0f17a15ffa49a8ac608f50c6cf14/src/libcore/slice/mod.rs:2857
16: <core::ops::range::RangeFrom<usize> as core::slice::SliceIndex<[T]>>::index_mut
at /rustc/73528e339aae0f17a15ffa49a8ac608f50c6cf14/src/libcore/slice/mod.rs:2933
17: core::slice::<impl core::ops::index::IndexMut<I> for [T]>::index_mut
at /rustc/73528e339aae0f17a15ffa49a8ac608f50c6cf14/src/libcore/slice/mod.rs:2657
18: <alloc::vec::Vec<T> as core::ops::index::IndexMut<I>>::index_mut
at /rustc/73528e339aae0f17a15ffa49a8ac608f50c6cf14/src/liballoc/vec.rs:1873
19: std::io::read_to_end_with_reservation
at /rustc/73528e339aae0f17a15ffa49a8ac608f50c6cf14/src/libstd/io/mod.rs:389
20: std::io::read_to_end
at /rustc/73528e339aae0f17a15ffa49a8ac608f50c6cf14/src/libstd/io/mod.rs:356
21: std::io::Read::read_to_string::{{closure}}
at /rustc/73528e339aae0f17a15ffa49a8ac608f50c6cf14/src/libstd/io/mod.rs:713
22: std::io::append_to_string
at /rustc/73528e339aae0f17a15ffa49a8ac608f50c6cf14/src/libstd/io/mod.rs:333
23: std::io::Read::read_to_string
at /rustc/73528e339aae0f17a15ffa49a8ac608f50c6cf14/src/libstd/io/mod.rs:713
24: attohttpc::parsing::response_reader::ResponseReader::text_with
at /home/tom/.cargo/registry/src/github.com-1ecc6299db9ec823/attohttpc-0.8.0/src/parsing/response_reader.rs:129
25: attohttpc::parsing::response_reader::ResponseReader::text
at /home/tom/.cargo/registry/src/github.com-1ecc6299db9ec823/attohttpc-0.8.0/src/parsing/response_reader.rs:117
26: attohttpc::parsing::response::Response::text
at /home/tom/.cargo/registry/src/github.com-1ecc6299db9ec823/attohttpc-0.8.0/src/parsing/response.rs:143
27: attohttpc_example::main
at src/main.rs:3
28: std::rt::lang_start::{{closure}}
at /rustc/73528e339aae0f17a15ffa49a8ac608f50c6cf14/src/libstd/rt.rs:61
29: std::rt::lang_start_internal::{{closure}}
at src/libstd/rt.rs:48
30: std::panicking::try::do_call
at src/libstd/panicking.rs:287
31: __rust_maybe_catch_panic
at src/libpanic_unwind/lib.rs:78
32: std::panicking::try
at src/libstd/panicking.rs:265
33: std::panic::catch_unwind
at src/libstd/panic.rs:396
34: std::rt::lang_start_internal
at src/libstd/rt.rs:47
35: std::rt::lang_start
at /rustc/73528e339aae0f17a15ffa49a8ac608f50c6cf14/src/libstd/rt.rs:61
36: main
37: __libc_start_main
38: _start
https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/300 This multiple-choice status code cannot be followed automatically.