
Comments (25)

dot-asm avatar dot-asm commented on June 16, 2024

We proceed in #185. Thanks again!

from blst.

dot-asm avatar dot-asm commented on June 16, 2024

In order to maintain backward compatibility implied by semver, std in 0.3.11 is an internal feature and a hard assumption is made that an "unknown" OS, such as x86_64-fortanix-unknown-sgx, doesn't provide an std environment. Well, apparently fortanix does. To determine the best course of action, do you know if it supports multiple threads?


DragonDev1906 avatar DragonDev1906 commented on June 16, 2024

It does support multiple threads, but there might be some differences in how they work or which functions are available in std. I've not done anything with multiple threads in x86_64-fortanix-unknown-sgx, so I can't say much about that.


dot-asm avatar dot-asm commented on June 16, 2024

Could you verify the following? In your ethereum-consensus clone, create a .cargo/config.toml file with the following lines:

[patch.crates-io]
blst = { git = "https://github.com/dot-asm/blst", branch="unknown-std" }

and execute your test. And I mean an actual test, not just a build-sanity cargo check. (I ask because I don't have a system with a working Fortanix EDP.)


DragonDev1906 avatar DragonDev1906 commented on June 16, 2024

That solved the compilation problem, but when trying to run it in SGX I get the following error (probably related to the C code or how it is linked).

ERROR: Unsupported dynamic entry: .init functions
ERROR: while running "ftxsgx-elf2sgxs" "target/x86_64-fortanix-unknown-sgx/release/examples/bls" "--heap-size" "33554432" "--ssaframesize" "1" "--stack-size" "131072" "--threads" "1" "--debug" got exit status: 1

Code used: https://github.com/ralexstokes/ethereum-consensus/blob/main/ethereum-consensus/examples/bls.rs
Command used: cargo +nightly run --example bls --target x86_64-fortanix-unknown-sgx --no-default-features -F serde --release

I've also tried it with the following minimal binary, with the same error message, so it is not related to ethereum-consensus or other dependencies:

Cargo.toml:

[package]
name = "blst-sgx-test"
version = "0.1.0"
edition = "2021"

[dependencies]
blst = { path = "blst/bindings/rust", version = "0.3" }

src/main.rs:

use blst::min_pk::SecretKey;

fn main() {
    let ikm = [0u8; 32];
    let _sk = SecretKey::key_gen(ikm.as_slice(), &[]).unwrap();

    println!("Done");
}

Edit: Disabling default features, enabling portable or enabling no-threads did not help either.


dot-asm avatar dot-asm commented on June 16, 2024

Unsupported dynamic entry: .init functions

Ouch! cargo update and try again.


dot-asm avatar dot-asm commented on June 16, 2024

cargo update

Provided that you used the [patch.crates-io]. If you cloned unknown-std manually, refresh your copy:-)


DragonDev1906 avatar DragonDev1906 commented on June 16, 2024

Now it works, but only if the enclave is configured to have at least 2 threads (unless configured otherwise, enclaves only have a single thread).

[package]
name = "blst-sgx-test"
version = "0.1.0"
edition = "2021"

[package.metadata.fortanix-sgx]
threads=2

[dependencies]
blst = { version = "0.3.6" }
rand = "0.8.4"

When configured with only one thread I get the following error when trying to verify a signature:

Error while executing SGX enclave.
Enclave panicked: thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Os { code: 11, kind: WouldBlock, message: "operation would block" }', /home/jens/.cargo/registry/src/github.com-1ecc6299db9ec823/threadpool-1.8.1/src/lib.rs:777:10
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

ERROR: while running "ftxsgx-runner" "target/x86_64-fortanix-unknown-sgx/debug/blst-sgx-test.sgxs" got exit status: 255


dot-asm avatar dot-asm commented on June 16, 2024

message: "operation would block"

Well, it originates in threadpool, so I'd be inclined to blame the threadpool:-) But did it work earlier? With 0.3.10, that is? Or did you have to engage the no-threads feature all along? If it worked, can you tell if it worked with a specific threadpool version? Is it possible that threadpool fails to notice how many cores the enclave gets? After all, blst does take note of whether multiple cores are available, and shouldn't take the blocking path [in case only one core is available]...

For reference: it's generally beneficial to use the parallelized interfaces, because even single-signature verification is parallelizable.


DragonDev1906 avatar DragonDev1906 commented on June 16, 2024

so I'd be inclined to blame the threadpool:-)

Yes :)

But did it work earlier? With 0.3.10 that is?

I'm not quite sure what you mean: as far as I can tell, the version in Cargo.toml doesn't matter when using [patch.crates-io] with the unknown-std branch. Previously it didn't even get to this point due to the .init problem. I've switched from using a local copy to the .cargo/config.toml override (I wanted to try it out); maybe that's where the version change came from.

Or did you have to engage the no-threads feature all along?

I had already forgotten about that feature. It works if I set the number of threads to 2 or more, or if I use the no-threads feature, so I'd say it works as expected (either blst or threadpool doesn't see that only one thread is available, for some reason, but that's not a problem since multi-threading can easily be disabled).
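For reference, opting into the single-threaded build is just the feature flag discussed in this thread; a minimal Cargo.toml sketch (the version number is illustrative):

```toml
[dependencies]
# "no-threads" disables blst's internal multi-threading, so nothing in
# the enclave tries to spawn worker threads at runtime.
blst = { version = "0.3", features = ["no-threads"] }
```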

If it worked, can you tell if it worked with a specific threadpool version?

I don't think I ever tried a different version, unless you changed the threadpool version in the unknown-std branch.

Is it possible that threadpool fails to note how many cores the enclave gets?

Quite likely; I don't know. They probably check how many cores the CPU has (which is only one on the machine I'm testing on, making this situation even weirder) and use that to estimate how many threads to start. Applications running normally on Linux can start as many threads as they want, but in SGX there are some extra steps involved in creating new threads, so enclaves are limited to a fixed maximum number of threads.

because even single-signature verification is parallelizeable.

Interesting, I didn't know that.


dot-asm avatar dot-asm commented on June 16, 2024

But did it work earlier? With 0.3.10 that is?

I'm not quite sure what you mean:

I mean: did it work 2 weeks ago, and if so, how exactly? It's assumed that the difference between 2 weeks ago and now is insignificant with respect to the problem at hand. The assumption can be wrong, of course, but we have to explore the options:-)


DragonDev1906 avatar DragonDev1906 commented on June 16, 2024

I mean did it work 2 weeks ago

I see. I've not used blst before; I was looking at the options for BLS signature implementations and implementations of the Ethereum consensus spec, mainly to see which libraries work in SGX.

Therefore I don't know if it previously worked; this was the first/only version I tried.
I've just tested the following versions, all of which have the .init problem:

  • 0.3.1
  • 0.3.10
  • 0.3.11


dot-asm avatar dot-asm commented on June 16, 2024

all of which have the .init problem.

But 0.3.11 is the first one that introduces the .init segment. Anyway, "I've not used blst before" effectively answers the question. It's more likely that it [multi-threading with a single core allocated to the enclave] didn't work all along:-) I'll see what can be done and propose possible solutions here for you to test. Though probably not today... Meanwhile, you can use the no-threads feature for exploration purposes :-)


dot-asm avatar dot-asm commented on June 16, 2024

cargo update and try with a single core and without no-threads. At least we'll know if the fortanix threadpool correctly detects the number of assigned cores. (Next suggestion definitely not today:-) Cheers.


DragonDev1906 avatar DragonDev1906 commented on June 16, 2024

But 0.3.11 is the first one that introduces the .init segment.

Ah, that was my bad. I had specified the version as blst = { version = "0.3.9" } instead of blst = { version = "=0.3.9" }, so it still took version 0.3.11. Here is the behavior of some recent versions (all tested with one thread, and without the no-threads feature):

  • 0.3.1: operation would block
  • 0.3.9: operation would block
  • 0.3.10: operation would block
  • 0.3.11: Unsupported dynamic entry: .init functions
  • unknown-std branch: operation would block

So the .init problem really was just with version 0.3.11.
Using the no-threads feature also works in older versions (at least those that already had it).


cargo update and try with single core and without no-threads

Same behavior as last time:

Error while executing SGX enclave.
Enclave panicked: thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Os { code: 11, kind: WouldBlock, message: "operation would block" }', /home/jens/.cargo/registry/src/github.com-1ecc6299db9ec823/threadpool-1.8.1/src/lib.rs:777:10
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

ERROR: while running "ftxsgx-runner" "target/x86_64-fortanix-unknown-sgx/debug/blst-sgx-test.sgxs" got exit status: 255

.cargo/config.toml

[patch.crates-io]
blst = { git = "https://github.com/dot-asm/blst", branch="unknown-std" }

[target.x86_64-fortanix-unknown-sgx]
runner='ftxsgx-runner-cargo'

Cargo.toml

[package]
name = "blst-sgx-test"
version = "0.1.0"
edition = "2021"

[package.metadata.fortanix-sgx]
threads=1

[dependencies]
blst = { version = "0.3.11" }
rand = "0.8.4"


dot-asm avatar dot-asm commented on June 16, 2024

cargo update and try with single core and without no-threads

Same behavior as last time:

Darn it! All right, time to look at threadpool... The panic line points at lib.rs:777, and this appears to be a function that spawns worker threads, so it ought to panic upon pool instantiation... Hmm... But could you try the following first (to ensure that we don't hit a wall later on)? The error likely indicates that enclave threads are non-preemptive. Do you have a test that verifies a significant number of signatures? Would it work with two threads allocated to the enclave? blst's cargo test does this, so if it works, that would be sufficient. The idea is to ensure that there are no more "would block" panics...


dot-asm avatar dot-asm commented on June 16, 2024

Hmm... threadpool uses num_cpus, and the latter unconditionally says there is just one CPU on "fringe" OSes, such as the one in question. This means that even if you assign 2 cores to the enclave, it [the threadpool] will think there is only one CPU available. Incidentally, this should still improve the performance of some operations, namely those that use even the "main" thread: the "main" thread will be executing on one enclave thread while the [single-thread] threadpool executes on the second. This happens at least in the signature verification subroutine. As you might imagine, assigning more than 2 cores won't do anything.

With this in mind, you can drop the previous request to perform some tests; it's more or less obvious that there won't be opportunities to block. But if you choose to perform those tests anyway, it would be interesting to confirm that [signature verification] performance actually improves with two threads. [Just in case: it won't double, because it's only partially parallelizable.] The thing is, if performance doesn't improve for whatever reason, then there is no point in looking into it further other than to coerce no-threads. Which I'm leaning toward anyway... The alternative appears just too fragile to maintain:-(


dot-asm avatar dot-asm commented on June 16, 2024

I'm leaning toward [coercing no-threads]

This naturally doesn't preclude the possibility of revising it at a later point. The ideal course of action would be for fortanix to provide a working threadpool, as opposed to grappling with the stock one. Collecting the performance data would serve as a motivating factor;-)

On a more general note: as far as I can tell, when targeting SGX, rustc gets pretty aggressive with counter-measures against micro-architectural attacks by default. This doesn't extend to the C part, not by default, let alone the assembly. This effectively puts blst in a "privileged" position; it's bound to win all the benchmarks, presumably with an impressive margin, because the said counter-measures are pretty expensive. Either way, this is not necessarily the trade-off you (or rather SGX users) would aim for:-) You can control the C compilation by setting $CC to a sufficiently recent clang and $CFLAGS to match the rustc code-generation settings, but this won't do anything to the assembly. It's now possible to do #ifdef-ing in assembly, and it would be easy to replace ret-s with instruction sequences with lfence in the middle. The question is whether that would be sufficient. One would have to make some assumptions and strategically sprinkle lfence-s around, maybe even add additional assembly subroutines... But that should be a separate issue... [If we choose to pursue this avenue.]


DragonDev1906 avatar DragonDev1906 commented on June 16, 2024

Here are some benchmark results (all run with version 0.3.11). Keep in mind that (for simplicity) I'm always verifying 1000*num_threads_used signatures, so these benchmarks don't all have the same amount of computation to do.

It's also only a single measurement per configuration, and measuring time in SGX is a bit more involved than on normal hosts (it requires an ECall, and thus basically another context switch, before we end up in normal user-land to make a syscall to get the time, if it is implemented that way), so take these numbers with a big grain of salt.

| Threads available | Threads used (including main thread) | Features | Time taken | Time per 1000 signatures |
|---|---|---|---|---|
| 1 | 1 | no-threads | 1.2941s | 1.2941s |
| 1 | 1 | - | - | operation would block |
| 2 | 1 | no-threads | 1.3070s | 1.3070s |
| 2 | 1 | - | 1.3504s | 1.3504s |
| 2 | 2 | no-threads | 2.5912s | 1.2956s |
| 2 | 2 | - | - | operation would block |
| 3 | 1 | no-threads | 1.3030s | 1.3030s |
| 3 | 1 | - | 1.3503s | 1.3503s |
| 3 | 2 | no-threads | 2.5921s | 1.2960s |
| 3 | 2 | - | 2.6956s | 1.3478s |
| 3 | 3 | no-threads | 3.8843s | 1.2947s |
| 3 | 3 | - | - | Once instance has previously been poisoned |
| 10 | 1 | no-threads | 1.3015s | 1.3015s |
| 1 | 1 | - | 1.3561s | 1.3561s |
| 10 | 9 | no-threads | 11.6725s | 1.2969s |
| 10 | 9 | - | 12.0236s | 1.3359s |
| 10 | 10 | no-threads | 13.0048s | 1.3005s |
| 10 | 10 | - | - | operation would block |
| 10 | 20 | no-threads | - | operation would block |
| 10 | 20 | - | - | operation would block |

For reference, here are some measurements outside of SGX (running under normal Linux):

| Threads | Time taken | Time per 1000 signatures |
|---|---|---|
| 1 | 1.2811s | 1.2811s |
| 2 | 2.5647s | 1.28235s |
| 10 | 12.8313s | 1.2831s |

There is apparently a hard limit on the number of threads that can exist at the same time, which makes anything without a pre-determined (compile-time-known) bound on its thread count rather unreliable. Hence I'm tempted to always use no-threads in SGX, unless there actually is an upper bound on how many threads threadpool creates. It also probably depends quite a bit on the workload, as the communication between threads might be more involved than it usually is.

The time (per 1k signatures) is most likely practically constant because we're CPU-bound and I only have a single physical core available in the VM. Hence I cannot really measure how it works with 2 physical cores. It might also be that something else limits how well this can be executed in parallel (we might be limited by ECalls: calls between user-land and the SGX enclave).

An interesting fact is that no-threads is (at least on this single-core CPU/configuration: an Azure VM with a single core, Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz) faster than if I specify no features. On normal Linux they're quite neck-and-neck (probably only measurement error), but in SGX it seems like no-threads is faster, regardless of how many threads I start in the application.

Interestingly, I've noticed that thread creation behaves differently depending on whether I use the no-threads feature. Without the feature it tends to start a bunch of threads in the beginning and actually executes them at the same time, whereas with the no-threads feature I tend to get a single thread that finishes before the others start (it's almost always this way). Though that's of course down to so many different factors that I wouldn't try to draw any conclusions from it.

// With no-threads
Hello, world!
Done
Hello, world!
Hello, world!
Hello, world!
Hello, world!
Done
Done
Done
Done

// Without no-threads
Hello, world!
Hello, world!
Hello, world!
Hello, world!
Hello, world!
Done
Done
Done
Done
Done

Here is a new error I encountered while trying different configurations. For some reason it only happened in this single configuration, and only in SGX, but it happens every single time.

Since you probably want to investigate this error further (unless it's already known): I've managed to reliably reproduce this bug in version 0.3.10 and the unknown-std branch, in both cases only without the no-threads feature and only when using 3/3 threads (including the main thread) in the application. I don't know why, but it looks like we don't get to the line where blst/threadpool tries to start a new thread, and instead we get this error:

Hello, world!
Hello, world!
Attaching debugger
Error while executing SGX enclave.
Enclave panicked: thread 'main' panicked at 'Once instance has previously been poisoned', /home/jens/.cargo/registry/src/github.com-1ecc6299db9ec823/blst-0.3.10/src/lib.rs:35:14

ERROR: while running "ftxsgx-runner" "target/x86_64-fortanix-unknown-sgx/release/perf-measure.sgxs" "--" "3" got exit status: 255

For completeness, here is the code I was measuring with:

use rand::thread_rng;
use rand::RngCore;
use blst::min_pk::SecretKey;
use std::thread;
use std::time::Instant;
use std::env;

const BLS_DST: &[u8] = b"BLS_SIG_BLS12381G2_XMD:SHA-256_SSWU_RO_POP_";

fn main() {
    let args: Vec<String> = env::args().collect();
    let threads: usize = args[1].parse().unwrap();
    if threads == 0 {
        println!("Invalid arg");
        return
    }

    let start = Instant::now();

    let handles: Vec<_> = (1..threads).map(|_|
        thread::spawn(|| {
            do_stuff();
        })
    ).collect();
    do_stuff();
    for handle in handles {
        handle.join().unwrap();
    }

    let elapsed = start.elapsed();
    println!("Elapsed: {:.4?}", elapsed);
}
fn do_stuff() {
    println!("Hello, world!");

    let mut rng = thread_rng();
    let mut ikm = [0u8; 32];
    rng.try_fill_bytes(&mut ikm).unwrap();
    let sk = SecretKey::key_gen(ikm.as_slice(), &[]).unwrap();
    // let sk = SecretKey::random(&mut rng).unwrap();
    // let pk = sk.public_key();
    let pk = sk.sk_to_pk();
    // dbg!(&pk);

    let msg = b"Hello World";
    let sig = sk.sign(msg, BLS_DST, &[]);
    // dbg!(&sig);

    for _ in 0..1000 {
        let _res = sig.verify(true, msg, BLS_DST, &[], &pk, true);
        // dbg!(&res);
    }

    println!("Done");
}

Edit: This was a lot more insightful than I initially expected.


dot-asm avatar dot-asm commented on June 16, 2024

Hmmm... Here is the difference I'm talking about. For your snippet, compare results on Linux for cargo run --release -- 1 and taskset 1 cargo run --release -- 1. You should see that the second command shows worse results, because pinning to a single core effectively suppresses the parallelization in sig.verify. The expectation was that the same command with a no-threads setup on SGX would be as slow as the latter, while a working with-threads setup on SGX would be as fast as the former. And we don't see either of the differences. Not even on Linux. What's up with that?

BTW, what is the expected application for SGX? Wouldn't performing secret-key operations be the one? Almost exclusively? In other words, we probably shouldn't be dying on the parallelization hill:-) But don't get me wrong, it's a meaningful exercise (at least for me) to figure things out, so thanks!


dot-asm avatar dot-asm commented on June 16, 2024

Since you probably want to further investigate this error (unless it's already known): I've managed to reliably reproduce this bug in version 0.3.10 and the unknown-std branch, in both cases only without the no-threads feature and only when using 3/3 threads (including main thread) in the application. I don't know why, but it looks like we don't get to line where blst/threadpool tries to start a new thread and instead we get this error:

I'm inclined to write it off as fortanix's problem. std::sync::Once is supposed to serialize invoking threads, and "poisoning" seems to mean that the first [and only] invocation crashed. I wouldn't be surprised if it turns out to be a "would block" in disguise, which appears to be a "hard" limitation of the fortanix EDP. Well, it's possible to arrange a call for an application to make prior to spawning its own threads, as a way to ensure that there is no race to the Once instance in question. But since the idea is to coerce no-threads for now, it would be an unnecessary complication.


DragonDev1906 avatar DragonDev1906 commented on June 16, 2024

For your snippet, compare results on Linux for cargo run --release -- 1 and taskset 1 cargo run --release -- 1

On the single-core system

# Without features
> cargo run --bin perf-measure --release -- 1
Elapsed: 1.2873s
> taskset 1 cargo run --bin perf-measure --release -- 1
Elapsed: 1.2934s

# With "no-threads"
> cargo run --bin perf-measure --release -- 1
Elapsed: 1.2750s
> taskset 1 cargo run --bin perf-measure --release -- 1
Elapsed: 1.2755s

On a multi-core system:

# Without features
> cargo run --bin perf-measure --release -- 1
Elapsed: 901.5476ms
> taskset 1 cargo run --bin perf-measure --release -- 1
Elapsed: 1.0096s

# With "no-threads"
> cargo run --bin perf-measure --release -- 1
Elapsed: 1.0329s
> taskset 1 cargo run --bin perf-measure --release -- 1
Elapsed: 1.0011s

If I read that correctly, it only makes a significant difference if there are multiple physical cores available (or perhaps with Hyperthreading) and if no features are selected. Which makes sense to me.

And the expectation was that the same command with no-threads setup on SGX would be as slow as the latter, while a working with-threads setup on SGX would be as fast as the former. And we don't see any of the differences. Not even with Linux. What's up with that?

I guess that's because the Linux machine I was testing on only had a single physical core (the same one I used for testing SGX). We might see a difference if we had more than 1 physical core, but any kind of parallelization might suffer in SGX, as context switching likely has more overhead than on normal Linux (I've never tested that).

BTW, what is the expected application for SGX? Wouldn't performing secret key operations be the one? Almost exclusively? In other words we probably shouldn't be dying on the parallelization hill:-) But don't get me wrong, it's a meaningful exercise (at least for me) to figure things out, so thanks!

Primarily signature verification: checking the Ethereum 2.0 Beacon chain signatures, or at least those of the Sync Committee. But we might use it for signing, too, at some point, so secret-key operations won't be the primary focus for now. Most likely we won't go with multiple threads anyway, unless absolutely necessary, so I fully agree with you. :)

I wouldn't be surprised if it turns out to be a "would block" in disguise

Could be. Does the code run inside std::sync::Once spawn threads if the no-threads feature is disabled? If yes, that would explain it.

But since the idea is to coerce no-threads for now, it would be an unnecessary complication.

👍


dot-asm avatar dot-asm commented on June 16, 2024

On the single-core system

Where does one find single-core systems nowadays?

901.5476ms vs. 1.0096s

Hmm... The improvement should be ~30%. ~10% sounds like you ended up on the same hyperthreaded core.

it only makes a significant difference if there are multiple physical cores available

Of course! I simply couldn't imagine that you would be exercising a single-core system:-)

what is the expected application for SGX?

Primarily signature verification

What's the point of executing it in SGX? I mean, it's all public, while SGX is a tool for concealing things. Besides, as already mentioned, it comes with a hefty performance penalty. Even if there is no significant difference now, the correct course of future action is to align blst with common SGX practices. I'm talking about -mlvi-hardening (the default in x86_64-fortanix-unknown-sgx), and not only passing it to clang, but also modifying the assembly accordingly...


DragonDev1906 avatar DragonDev1906 commented on June 16, 2024

Where does one find single-core systems nowadays?

The cheapest Azure VM that supports SGX. Technically multi-core, but the VM only has access to a single core.

What's the point of executing it in SGX? I mean it's all public, while SGX is a tool for concealing things.

We have some secret data in SGX, mainly a secret key for signing, although with a different signature scheme as of now. In addition, the signing (and thus the handling of the secret key) will likely happen in a separate enclave, and might in the future even be calculated by multiple enclaves together (multi-party computation or threshold signatures), such that we don't rely on a single enclave (perhaps not even just on SGX) to handle the secret key, as protection against the worst-case scenario of SGX leaking data.

But this enclave still requires knowledge of the public on-chain data, as it interacts with the chain while processing other transactions and/or data that is not available on-chain. Simplest use case: someone deposits 1 ETH in a smart contract; the SGX enclave sees that information and, based on it, signs a message (in our case this is used as proof that the enclave has seen the deposit, and it can be used to withdraw the deposit on-chain should the SGX enclave be stopped by the (untrusted/semi-trusted) host). To know what happened on-chain without having to trust someone outside the enclave, we need to check whether blocks from Ethereum are valid; thus we require signature verification.


dot-asm avatar dot-asm commented on June 16, 2024

I see, thanks! I've actually already added -mlvi-hardening to the blst build script; check it out (in the unknown-std branch). And I'd suggest you create a separate issue requesting assembly hardening. Well, I can create one and tag you, so the race is on:-)

