wasmi-labs / wasmi

WebAssembly (Wasm) interpreter.

Home Page: https://wasmi-labs.github.io/wasmi/

License: Apache License 2.0

WebAssembly 1.43% Rust 98.54% Shell 0.03%
wasm rust interpreter webassembly vm runtime

wasmi's Introduction


Wasmi - WebAssembly (Wasm) Interpreter

Wasmi is an efficient and lightweight WebAssembly interpreter with a focus on constrained and embedded systems.

Version 0.31.0 has been audited by SRLabs.

Announcement: Transfer of Ownership

As of 2024-02-01, the original owner and maintainer of the Wasmi project, Parity Technologies, has officially transferred ownership of the project to me, Robin Freyler. Read more about this transfer here.

Distinct Features

The following list highlights some of Wasmi's distinct features.

  • Simple, correct and deterministic execution of WebAssembly.
  • Low-overhead and cross-platform WebAssembly runtime for embedded environments.
  • JIT bomb resisting translation.
  • Loosely mirrors the Wasmtime API.
  • 100% WebAssembly spec testsuite compliance.
  • Built-in support for fuel metering (see the sketch below).
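
To illustrate fuel metering, here is a minimal sketch. It assumes the 0.31-era API (the fuel-related method names have changed across wasmi releases) and the wat crate for parsing the text-format module; treat it as a sketch, not the authoritative example.

use wasmi::{Config, Engine, Linker, Module, Store};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Enable fuel metering in the engine configuration.
    let mut config = Config::default();
    config.consume_fuel(true);
    let engine = Engine::new(&config);

    // A guest that loops forever; fuel metering traps it eventually.
    let wasm = wat::parse_str(r#"(module (func (export "spin") (loop (br 0))))"#)?;
    let module = Module::new(&engine, &wasm[..])?;

    let mut store = Store::new(&engine, ());
    store.add_fuel(10_000)?; // hypothetical budget; method name from the 0.31-era API
    let linker = <Linker<()>>::new(&engine);
    let instance = linker
        .instantiate(&mut store, &module)?
        .start(&mut store)?;
    let spin = instance.get_typed_func::<(), ()>(&store, "spin")?;

    // Runs out of fuel and returns a trap instead of hanging forever.
    assert!(spin.call(&mut store, ()).is_err());
    Ok(())
}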

WebAssembly Proposals

The new Wasmi engine supports a variety of WebAssembly proposals and will support even more of them in the future.

WebAssembly Proposal      Status                                 Comment
mutable-global            Since version 0.14.0.
saturating-float-to-int   Since version 0.14.0.
sign-extension            Since version 0.14.0.
multi-value               Since version 0.14.0.
bulk-memory               Since version 0.24.0.                  (#628)
reference-types           Since version 0.24.0.                  (#635)
simd                      Unlikely to be supported.
tail-calls                Since version 0.28.0.                  (#683)
extended-const            Since version 0.29.0.                  (#707)
function-references       📅 Planned but not yet implemented.    (#774)
gc                        📅 Planned but not yet implemented.    (#775)
multi-memory              📅 Planned but not yet implemented.    (#776)
threads                   📅 Planned but not yet implemented.    (#777)
relaxed-simd              Unlikely to be supported since simd is unlikely to be supported.
component-model           📅 Planned but not yet implemented.    (#897)
WASI                      👨‍🔬 Experimental support via the wasmi_wasi crate or the Wasmi CLI application.

Usage

As CLI Application

Install the newest Wasmi CLI version:

cargo install wasmi_cli

Run wasm32-unknown-unknown or wasm32-wasi Wasm binaries:

wasmi_cli <WASM_FILE> --invoke <FUNC_NAME> [<FUNC_ARGS>]*
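
For example (file and export names here are hypothetical):

wasmi_cli add.wasm --invoke add 1 2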

As Rust Library

Refer to the Wasmi crate docs to learn how to use the Wasmi crate as a library.
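
A minimal embedding sketch in the Wasmtime-style API mentioned above (names follow the 0.31-era release and may differ in newer versions; the wat crate is assumed for parsing the text format):

use wasmi::{Engine, Linker, Module, Store};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let engine = Engine::default();

    // A tiny module in the Wasm text format, parsed with the `wat` crate.
    let wasm = wat::parse_str(
        r#"(module
            (func (export "add") (param i32 i32) (result i32)
                local.get 0
                local.get 1
                i32.add))"#,
    )?;
    let module = Module::new(&engine, &wasm[..])?;

    // `()` is the host state; real embeddings typically store something useful here.
    let mut store = Store::new(&engine, ());
    let linker = <Linker<()>>::new(&engine);
    let instance = linker
        .instantiate(&mut store, &module)?
        .start(&mut store)?;

    let add = instance.get_typed_func::<(i32, i32), i32>(&store, "add")?;
    assert_eq!(add.call(&mut store, (1, 2))?, 3);
    Ok(())
}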

Development

Build & Test

Clone the Wasmi repository and build using cargo:

git clone https://github.com/wasmi-labs/wasmi.git --recursive
cd wasmi
cargo build
cargo test

Benchmarks

To benchmark Wasmi, use the following command:

cargo bench

Use the translate, instantiate, execute, or overhead filters to run only the benchmarks that test the performance of Wasm translation, instantiation, execution, or miscellaneous overhead, respectively, e.g. cargo bench execute.

We maintain a timeline for benchmarks of every commit to master that can be viewed here.

Supported Platforms

Wasmi supports a wide variety of architectures and platforms.

  • For more details, see this list of supported platforms for Rust.
  • Note: Wasmi can be used in no_std embedded environments; it does not require the standard library (std).
  • Only some platforms are checked in CI and guaranteed to be fully working by the Wasmi maintainers.

Production Builds

To get the most performance out of Wasmi, we highly recommend compiling the Wasmi crate with the following Cargo profile:

[profile.release]
lto = "fat"
codegen-units = 1

When compiling for the WebAssembly target, we highly recommend post-optimizing Wasmi with Binaryen's wasm-opt tool: in our experiments this yielded 80-100% performance improvements when executed under Wasmtime, as well as slightly smaller Wasm binaries.
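
For example (the file names are hypothetical):

wasm-opt -O3 wasmi.wasm -o wasmi.opt.wasm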

License

Licensed under either of

  • Apache License, Version 2.0
  • MIT license

at your option.

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

wasmi's People

Contributors

aldaronlau, arkpar, athei, barafael, bddap, berrysoft, chevdor, dependabot[bot], eira-fransham, elichai, ithinuel, kpp, leoyvens, lygstate, nikvolf, oluwamuyiwa, pepyakin, rcny, reuvenpo, robbepop, sergejparity, sorpaas, taegyunkim, tbu-, thabokani, therdel, tjpalmer, tomaka, willglynn, yjhmelody


wasmi's Issues

Module validation exceeds stack limit

I've decided to try out this interpreter, but just loading my module makes it fail on the stack limit being exceeded:

let module = deserialize_file("livesplit_core.wasm").unwrap();
let module = Module::from_parity_wasm_module(module).unwrap();
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Validation("Function #1204 validation error: Stack: exceeded stack limit 1024")', libcore/result.rs:945:5

Decrease pointer-chasing/improve cache efficiency

There are two problems, the way I see it:

  1. Starting or resuming (after a nested call) a function call requires 4 derefs - instructions are a Vec in an Rc in an Rc in a Vec. This can blow the cache unless we're extremely lucky with where data gets allocated. Essentially, this means every function call requires up to 8 derefs which may be (and probably will be) on different cache lines. If we implemented TCO then tail calls would only require 4, but that's still bad.
  2. The instructions are allocated in separate blocks, meaning more cache incoherency when changing function contexts (again, on calls and returns).

Since optimisers avoid function calls anyway and inline when possible (so the resultant wasm that we're executing should have minimal function calls) the impact of this is somewhat mitigated, but it's something that we can fix, so why not.

Reconsider the limits

We need to reconsider the limits (particularly the maximal value and frame stack heights).
Ideally we should provide a way for a user to change these limits.

Also, we might want to synchronize with limits.h.

Structured error types

Many applications of the interpreter can handle some errors in a meaningful way (memory access violation, extern signature mismatch, etc.), so it makes sense to encode them as something other than a heap-allocated string.
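
A hypothetical sketch of what such a structured error type could look like (variant and field names are illustrative, not an actual wasmi API):

/// Illustrative only: errors as data instead of a heap-allocated String.
#[derive(Debug)]
pub enum InterpreterError {
    /// Out-of-bounds linear memory access.
    MemoryAccessViolation { offset: u32, len: u32 },
    /// An extern's signature did not match what the module imports.
    ExternSignatureMismatch { import: String, expected: String, found: String },
    /// Catch-all for cases without a dedicated variant yet.
    Other(String),
}

Callers can then match on the variant instead of parsing an error message.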

Optimization: Use compact encoding of bytecode

Depends on #98

For now, every instruction is represented by an enum value. This means each instruction occupies the size of the tag plus the size of the variant with the largest payload (probably BrTable), padded to proper alignment. Currently each instruction occupies about 24 bytes, even though most instructions logically need only 1 byte.

Fortunately, we don't need to address instructions by an index and can iterate them sequentially, and branches can specify the exact byte offset for their target.

Shrinking the size of an instruction will allow us to use the cache more efficiently and will also greatly reduce memory overhead.
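
A sketch of the direction (opcode values and helper names are made up for illustration): a flat byte stream with 1-byte opcodes, immediates encoded inline, and branch targets expressed as byte offsets.

// Illustrative only: a flat byte stream with 1-byte opcodes and inline
// immediates; branch targets become absolute byte offsets into the stream.
const OP_I32_CONST: u8 = 0x01; // opcode values are arbitrary here
const OP_BR: u8 = 0x02;

fn emit_i32_const(code: &mut Vec<u8>, value: i32) {
    code.push(OP_I32_CONST);
    code.extend_from_slice(&value.to_le_bytes()); // 5 bytes total, not 24
}

fn emit_br(code: &mut Vec<u8>, target_byte_offset: u32) {
    code.push(OP_BR);
    code.extend_from_slice(&target_byte_offset.to_le_bytes());
}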

High level API (and/or wasm_bindgen CLI integration)

Ideally I'd like it to be as seamless to host a wasm module in Rust as it is to host it in JavaScript. wasm_bindgen creates wrappers that let us do things like easily invoke functions that accept strings, etc. Would it be possible to do something like this for Rust so that the interop is smoother? Apologies if something like this exists and I'm just not finding it

Optimization: use unions for representing RuntimeValue

Depends on #98

Currently, a RuntimeValue is represented by a rust enum.

As a refresher: a Rust enum requires space for the variant tag plus the size of the payload of the largest variant, which also must be properly aligned. Thus, a RuntimeValue takes 8 bytes for the payload (for 64-bit wide values) and another 8 bytes for the tag due to alignment (on x86_64).

We can shave off 8 bytes by removing the tag. We can achieve this by using C-like unions to represent runtime values internally. This is possible because, after validation, it is statically guaranteed that each operation will be used with operands of the appropriate types (i.e., f32.* operators will always be used with f32 operands).

Shrinking RuntimeValue may potentially improve cache efficiency for the operand stack and remove branching to check operand types.
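
A minimal sketch of the union representation (type and field names are made up for illustration):

/// Sketch: an untagged union carrying a runtime value. Safe to use internally
/// because Wasm validation statically guarantees the type of every operand.
#[derive(Copy, Clone)]
union UntypedValue {
    i32: i32,
    i64: i64,
    f32: f32,
    f64: f64,
}

fn main() {
    let v = UntypedValue { i32: -1 };
    // Reading a union field is `unsafe`: the tag that would normally check
    // this at runtime is exactly what we are removing.
    let x = unsafe { v.i32 };
    assert_eq!(x, -1);
}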

interpreter pauses

hello,

I'm currently developing serverless-wasm and I'm wondering how I could implement two features:

  • putting a hard limit on the run time for a function (real time, or number of opcodes)
  • supporting asynchronous networking in host functions. Something like, WASM side calls tcp_connect, host creates the connection, stops the interpreter right there, and resumes once the connection is established

From what I understand, I could proceed like this (for the async part):

  • have some host function return a custom trap to say "WouldBlock" and registers that we're waiting for a specific socket
  • write another version of Interpreter::start_execution that would store the function context when receiving that trap from Interpreter::run_interpreter_loop instead of dropping it
  • once the code calling the interpreter knows the connection is established (or we got an error), modify the function stack to push the return value
  • resume executing from there

It looks like it would be doable, and could be used to implement breakpoints, too.

The other feature, limiting the run time of a function, would probably be easier to implement, with the loop regularly checking that we do not exceed some limit passed as an argument (see the sketch below).
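
A minimal sketch of that idea, with the interpreter loop abstracted as a step closure (names are made up; Wasmi's built-in fuel metering is the production version of this):

/// Decrement a budget before each interpreter step and trap on exhaustion.
fn run_with_budget(mut budget: u64, mut step: impl FnMut() -> bool) -> Result<(), &'static str> {
    while step() {
        budget = budget.checked_sub(1).ok_or("instruction budget exhausted")?;
    }
    Ok(())
}

fn main() {
    let mut steps = 0u64;
    // A "program" that would run for a million steps; the budget stops it early.
    assert!(run_with_budget(1_000, || { steps += 1; steps < 1_000_000 }).is_err());
}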

Do you think those features would be useful in wasmi?
The serverless-wasm project is at a very early stage, and I admit the readme is not too serious, but I really want to get it to a state where it's nice to use, and I need at least the async networking feature to get there.

Support returning strings

I'm trying to write a function that returns a string, but right now I'm stuck. I wrote a function that I'm compiling to wasm:

#[no_mangle]
pub extern fn test3() -> *mut c_char {
    CString::new("ohai!").unwrap()
        .into_raw()
}

I'm calling it with wasmi like this:

let result = instance.invoke_export(
    "test3",
    &[],
    &mut exports,
).expect("failed to execute export");
println!("wasm returned: {:?}", result);

This returns:

wasm returned: Some(I32(1114120))

But it doesn't seem like there's a way to access the actual bytes. I can imagine some incredibly complex hacks to fit strings through an interface that only allows i32 by writing my own toy memory allocator, but I'd rather not do that.

Probably related to #88
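
The usual recipe is the one hinted at above: treat the returned i32 as an offset into the module's exported linear memory and read the bytes from there. A version-agnostic sketch (how you obtain the memory's byte slice depends on the wasmi API in use):

/// Recover the NUL-terminated string behind the returned i32 pointer.
fn read_cstring(mem: &[u8], ptr: u32) -> Option<String> {
    let start = ptr as usize;
    let len = mem.get(start..)?.iter().position(|&b| b == 0)?;
    String::from_utf8(mem[start..start + len].to_vec()).ok()
}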

Remove `Box<[Target]>` from `Instruction`

Currently copying a Vec<Instruction> is very expensive. Although most variants are Copy-able, BrTable contains a Box<[Target]>, which means that copying an instruction always requires checking the variant and you can only copy one at a time in serial. We should instead use an encoding like so:

// Note the addition of `Copy`
#[derive(Debug, Copy, Clone, PartialEq)]
enum Instruction {
    // ...
    BrTable { count: u32 },
    BrTableTarget(Target),
    // ...
}

This would mean that we can memcpy a vector of Instructions, much faster. It also reduces pointer chasing, see #136.

Finally, this is good preparation work for #100, since it means that decoding doesn't mean allocating a Box<[Target]>.

LittleEndianConvert is not exported

Due to a bug in Rust's module system, MemoryInstance::get_value is bounded by T: LittleEndianConvert but LittleEndianConvert is not exported.

Reduce binary size of compiled wasmi

I'm interested in fitting wasmi into small embedded applications, with footprints in the 32k-64k range. Currently, aggressively optimizing wasmi generates binaries in the 300-400k range. Would you be interested in taking patches that further reduce the size of the wasmi crate?

Access memory used by module

Whether a module exports, imports, or declares its own (unexported) memory, there should be a way to request the memory a module instance is using while running.

Update examples

At the moment, the examples are just copied from the parity-wasm repo. However, the comments are stale, and I think we can provide better examples.

Optimization: Value handling operations

Related to #99
Depends on #98

For now, functions like into_little_endian do an allocation. This is very unfortunate for functions that are called upon every access to Wasm memory.

At the very least, we can change the definition from

pub trait LittleEndianConvert where Self: Sized {
    fn into_little_endian(self) -> Vec<u8>;
    fn from_little_endian(buffer: &[u8]) -> Result<Self, Error>;
}

to something like this

pub trait LittleEndianConvert where Self: Sized {
    fn into_little_endian(&self, out: &mut [u8]); // Will panic if `out` is not of the appropriate size
    fn from_little_endian(buffer: &[u8]) -> Result<Self, Error>;
}

This will avoid allocations in the into_little_endian case.
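
A self-contained sketch of the allocation-free version for i32 (the Error type is replaced by a stand-in InvalidSize; to_le_bytes/from_le_bytes do the conversion without allocating):

struct InvalidSize;

trait LittleEndianConvert: Sized {
    fn into_little_endian(&self, out: &mut [u8]); // panics if `out` has the wrong size
    fn from_little_endian(buffer: &[u8]) -> Result<Self, InvalidSize>;
}

impl LittleEndianConvert for i32 {
    fn into_little_endian(&self, out: &mut [u8]) {
        // copy_from_slice panics on length mismatch, matching the trait contract.
        out.copy_from_slice(&self.to_le_bytes());
    }
    fn from_little_endian(buffer: &[u8]) -> Result<Self, InvalidSize> {
        let bytes: [u8; 4] = buffer.try_into().map_err(|_| InvalidSize)?;
        Ok(i32::from_le_bytes(bytes))
    }
}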

API parity with browser/other engines

To achieve parity with alternate implementations of larger projects that happen to use wasmi, the API surface of wasmi must be as close to browser engines' as possible.

This includes (so far) the following APIs, which should be behind a feature flag:

  • stop/resume
  • memory used tracking (#153)

Benching system

To address the lack of benchmarks, we might want to use special Wasm binaries which utilize a specific benching API.

It should consist of:

  1. start benchmark (extern "C" fn start_bench(name_ptr: *const u8, name_len: u32))
  2. start iteration (extern "C" fn start_iter())
  3. end iteration (extern "C" fn end_iter())
  4. blackbox/observe (extern "C" fn bb_observe(ptr: *mut u8, len: u32))
    (can be replaced by test::black_box on nightly)

Each start_iter should be followed by an end_iter sequentially, so there are no nested or overlapping iterations (a higher-level utility library should enforce this via closures/ownership).

bb_observe is used to prevent compiler optimisations from eliminating the benchmarked work.
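
A hypothetical guest-side sketch of how those imports would be used (the host is expected to provide them at instantiation):

extern "C" {
    fn start_bench(name_ptr: *const u8, name_len: u32);
    fn start_iter();
    fn end_iter();
    fn bb_observe(ptr: *mut u8, len: u32);
}

fn bench_sum() {
    let name = "sum";
    unsafe { start_bench(name.as_ptr(), name.len() as u32) };
    for _ in 0..100 {
        unsafe { start_iter() };
        let mut result: u64 = (0u64..1_000).sum();
        unsafe { end_iter() };
        // Observe the result so the computation cannot be optimised away.
        unsafe { bb_observe(&mut result as *mut u64 as *mut u8, 8) };
    }
}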

Fuzzer: validation error: type mismatch in i64.load16_s, expected [i32] but got [i64]


AGFzbQEAAAABJAdgAn9/AGACf34AYAF/AX9gAX8BfmABfgF+YAF9AX1gAXwBfAMYFwAAAQICAwIC
AgQEBAQEBQYCAgQEBAUGBQMBAAEH4QERDGkzMl9sb2FkMTZfcwAGDGkzMl9sb2FkMTZfdQAHCGkz
Ml9sb2FkAAgMaTY0X2xvYWQxNl9zAAkMaTY0X2xvYWQxNl91AAoMaTY0X2xvYWQzMl9zAAsMaTY0
X2xvYWQzMl91AAwIaTY0X2xvYWQADQhmMzJfbG9hZAAOCGY2NF9sb2FkAA8LaTMyX3N0b3JlMTYA
EAlpMzJfc3RvcmUAEQtpNjRfc3RvcmUxNgASC2k2NF9zdG9yZTMyABMJaTY0X3N0b3JlABQJZjMy
X3N0b3JlABUJZjY0X3N0VXJlABYK9gIXFgAgACABOgAAIABBAWogAUEIdjoAAAsUACAAIAEQACAA
QQJqIAFBEHYQAAsWACAAIAGnEAEgAEEEaiABQiCIpxABCxMAIAAtAAAgAEEBai0AAEEIdHILEQAg
ABADIABBAmoQA0EQdHILEwAgABAErSAAQQRqEAStQiCGhAsNAEEAIAAQAEEALgEACw0AQQAgABAA
QQAvAQALDQBBACAAEAFBACgCAAsOAAAAAAAAAAANADIBAAsOAEEAIACnEABBADMBAAsOAEEAIACn
EAFBADQCAAsOAEEAIACnEAFBADUCAAsNAEEAIAAQAkEAKQMACw4AQQAgALwQAUEAKgIACw4AQQAg
AL0QAkEAKwMACw0AQQAgADsBAEEAEAMLDQBBACAANgIAQQAQBAsOAEEAIAA9AQBBABADrQsOAEEA
IAA+AgBBABAErQsNAEEAIAA3AwBBABAFCw4AQQAgADgCAEEAEAS+Cw4AQQAgADkDAEEAEAW/Cw==

wasmi: ok
wabt: type mismatch in i64.load16_s, expected [i32] but got [i64]

Fuzzer: invalid import signature index

AGFzbQEAAAABJAhgAX8AYAF+AGABfQBgAXwAYAF/AGACf30AYAJ8fABgAX4BfgLZAhAIc3BlY3Rl
c3QJcHJpbnRfaTMyAAAIc3BlY3Rlc3QJcHJpbnRfaTMyAAAIc3BlY3Rlc3QJcHJpbnRfZjMyAAII
c3BlY3Rlc3QJcHJpbnRfZjY0AAMIc3BlY3Rlc3QNcHJpbnRfaTMyX2YzMgAFCHNwZWN0ZXN0DXBy
aW50X2Y2NF9mNjQABghzcGVjdGVzdAlwcmludF9pMzIAAAhzcGVjdGVzdAlwcmludF9mNjQAAwR0
ZXN0DWZ1bmMtaTY0LT5pNjQABwhzcGVjdGVzdAlwcmludF9pMzIAAAhzcGVjdGVzdAlwcmludF9p
MzIAAAhzcGVjdGVzdAlwcmludF9pMzIAAAhzcGVjdGVzdAlwcmludF9pMzIAAAhzcGVjdGVzdAlw
cmludF9pMzIAAAhzcGVjdGVzdAlwcmludF9pMzIABAhzcGVjdGVzdAlwcmludF9pMzIADAMDAgAB
BAUBcAECAgczCAJwMQAJAnAyAAoCcDMACwJwNAALAnA1AAwCcDYADQdwcmludDMyABAHcHJpbnQ2
NAARCQgBAEEACwIBAwpgAiwBAX0gALIhASAAEAAgAEEBakMAAChCEAQgABABIAAQBiABEAIgAEEA
EQAACzEBAXwgABAIuSEBIAFEAAAAAAAA8D+gRAAAAAAAgEpAEAUgARADIAEQByABQQERAwAL

wasmi: successful validation
wabt: 000018a: error: invalid import signature index

How to Run the code in vscode

How do I run the wasmi example (tictoctoe.rs) code?
I ran cargo test and everything is fine. Then I used cargo run to execute the .rs code, but it does not show me any result.

What is the command to run the code?

Why need MemoryInstance::copy and MemoryInstance::copy_nonoverlapping

Hi there,

wasmi seems pretty interesting, and I'm looking into its implementation. I found that MemoryInstance::copy and MemoryInstance::copy_nonoverlapping are used nowhere and can be removed along with their unit tests; all tests still pass. My question is: are they essential to this project? Could I remove them? Thanks!

Add origin information about Wasm traps

Hi,

Is there any way to debug where a trap happened? At least the function would be good; file+line and/or a traceback would be even better.

I'm using Rust's wasm32-unknown-unknown for generating the wasm code itself (and the trap is a panic). Maybe that's something to tweak on the compiler side?

Optional argument to resume_execution allows value stack underflow.

The interpreter assumes this invariant:

An interpreted function will not pop too many or too few values off the value stack.

Since pop is in a hot code path, the above invariant allows for optimizations.

If, however, we trust the user to push a runtime value onto the stack and they don't, then when we pop it the stack ends up one shorter than expected. Near the end of execution, the value stack underflows.

Enter func::resume_execution

func::resume_execution looks like this:

pub fn resume_execution<'externals, E: Externals + 'externals>(
    &mut self,
    return_val: Option<RuntimeValue>,
    externals: &'externals mut E,
) -> Result<Option<RuntimeValue>, ResumableError> { ... }

That return_val argument gets passed to Interpreter::resume_execution.
When return_val is None, Interpreter::resume_execution ignores it.
When return_val is Some, it gets pushed to the value stack.

  • If the user passes Some(val) and the host doesn't pop, the value stack becomes too tall.
  • Conversely, passing None when the host expects an argument 1) causes the host to pop some random value, and 2) makes the stack one shorter than expected.

Suggested fix

Either get rid of the return_val argument, or make it mandatory.

move validation to parity-wasm

I'm a user of parity-wasm but not of wasmi. When I load wasm from an untrusted source, or I manipulate a wasm AST, I'd like to ensure it is valid. There is a validator in https://github.com/paritytech/wasmi/blob/master/src/validation/mod.rs#L168 that I would like to use without adding a dependency on wasmi. Are you open to moving that code to the parity-wasm crate, along with the ModuleContext machinery required to make it work? I'm sorry if this questions a design decision you already made - it's OK if you don't feel it's appropriate to move this code.

Allocate all instructions in single block

My idea is to use a wrapper around a buffer with shared ownership, so each Instructions type has a pointer to a section of the buffer but the full buffer doesn't get dropped until all owners are dropped (similar to how Bytes works). Probably we would have to preallocate a buffer big enough to hold all instructions and then have some kind of fn extend(iter: impl IntoIterator<Item = Instruction>) -> Option<BufferHandle> (straw man naming, of course) that locks the shared buffer, writes the elements and then returns a handle that spans the newly-added elements. This avoids dealing with lifetimes and makes Module::drop much faster, at the cost of possibly making allocating a new module slower (we can either generate instructions in serial or generate them in parallel but use intermediate buffers that must then be copied from).
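
A minimal sketch of the handle type (Instruction is reduced to a stand-in element type, and the locking extend step is omitted):

use std::sync::Arc;

/// All function bodies live in one shared allocation; each handle is a range.
#[derive(Clone)]
struct BufferHandle {
    buf: Arc<[u32]>, // stand-in for Arc<[Instruction]>
    start: usize,
    len: usize,
}

impl BufferHandle {
    fn instructions(&self) -> &[u32] {
        &self.buf[self.start..self.start + self.len]
    }
}

fn main() {
    let buf: Arc<[u32]> = Arc::from(vec![1, 2, 3, 4, 5]);
    let f0 = BufferHandle { buf: buf.clone(), start: 0, len: 2 };
    let f1 = BufferHandle { buf, start: 2, len: 3 };
    assert_eq!(f0.instructions(), &[1, 2]);
    assert_eq!(f1.instructions(), &[3, 4, 5]);
}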

Fix spec testsuite

The current testsuite is actually far from current. I tried to upgrade the spec suite and it appears that wasmi couldn't pass it (although I'm not sure why; maybe it is a fault in decoding in parity-wasm).

Add negative tests for host integration

For example, test what if:

  • ImportResolver returned a signature that is incompatible with the requested one,
  • Externals::invoke_index returned a value when no value is expected by the signature, and vice versa

Example showing shared linear memory use

Can you show an example of having the host read/write data to a linear memory that was imported by the wasm module and not contained within?

As an example, this demo shows how a string round-trip through linear memory is done when the host is JavaScript. I would love to be able to do the same thing with Rust as my host, and I'm looking for an example of how to accomplish this in wasmi, but I'm not finding a clear direction in the docs.
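
For reference, the host-side round trip looks roughly like this. This is a sketch against the modern, wasmtime-style wasmi API; method names may differ in the wasmi version this issue was filed against (which exposed the same idea through MemoryRef).

use wasmi::{AsContextMut, Instance};

/// Round-trip bytes through a module's linear memory from the host.
fn roundtrip(instance: &Instance, mut store: impl AsContextMut) {
    let memory = instance
        .get_export(&store, "memory")
        .and_then(|ext| ext.into_memory())
        .expect("the module exports its linear memory");
    memory.write(&mut store, 0, b"hello").unwrap(); // host -> guest
    let mut buf = [0u8; 5];
    memory.read(&store, 0, &mut buf).unwrap();      // guest -> host
    assert_eq!(&buf, b"hello");
}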

Fuzzer: validation of br_table types

AGFzbQEAAAABBAFgAAADAgEAChYBFAADQAJ9QwAAAABBAA4BAAELGgsL

wabt:

BeginModule(version: 1)
  BeginTypeSection(4)
    OnTypeCount(1)
    OnType(index: 0, params: [], results: [])
  EndTypeSection
  BeginFunctionSection(2)
    OnFunctionCount(1)
    OnFunction(index: 0, sig_index: 0)
  EndFunctionSection
  BeginCodeSection(22)
    OnFunctionBodyCount(1)
    BeginFunctionBody(0)
    OnLocalDeclCount(0)
    OnLoopExpr(sig: [])
    OnBlockExpr(sig: [f32])
    OnF32ConstExpr(0 (0x040))
    OnI32ConstExpr(0 (0x0))
    OnBrTableExpr(num_targets: 1, depths: [0], default: 1)
    OnEndExpr
    OnDropExpr
    OnEndExpr
    EndFunctionBody(0)
  EndCodeSection
EndModule
test.wasm:0000026: error: br_table labels have inconsistent types: expected f32, got void

spec:

test.wasm:0x24-0x25: invalid module: type mismatch: operator requires [] but stack has [f32]

https://github.com/pepyakin/wasmi/blob/73c1451a842718ca8120f008561a33b4d1aaec1d/src/validation/func.rs#L497-L514

A new release?

Is there anything that blocks a new release now?

In particular I'm interested in #41

mmap impl for MemoryInstance

When the heap grows from, say, 1GB to 3GB, in a naive vector implementation of MemoryInstance we need to reallocate the vector, copy 1GB worth of data into it, and then zero-fill the newly allocated 2GB.

On 64-bit machines (and maybe in some cases on 32-bit ones) we might allocate virtual memory up to the memory instance's limit, and then just move the heap_end pointer upon a call to grow, thus avoiding memory copying. Maybe it is possible to do the zero-filling lazily, i.e. upon first access to a page; however, I'm not sure how to do this in a robust and portable manner.
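
A safe-Rust sketch of the reserve-up-front idea (Vec::with_capacity stands in for reserving virtual memory; a real implementation would reserve address space via mmap/VirtualAlloc and rely on lazy page commit):

/// Capacity is reserved up to the declared maximum, so grow never copies.
struct LinearMemory {
    bytes: Vec<u8>,
    max: usize,
}

impl LinearMemory {
    fn new(initial: usize, max: usize) -> Self {
        let mut bytes = Vec::with_capacity(max);
        bytes.resize(initial, 0);
        Self { bytes, max }
    }

    fn grow(&mut self, additional: usize) -> Result<(), &'static str> {
        let new_len = self.bytes.len() + additional;
        if new_len > self.max {
            return Err("memory limit exceeded");
        }
        self.bytes.resize(new_len, 0); // no reallocation: capacity == max
        Ok(())
    }
}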

f32/64.neg/abs may be wrong

It seems like it delegates the implementations to Rust's f32/f64::neg/abs, which might be wrong: according to https://github.com/sunfishcode/wasm-reference-manual/blob/master/WebAssembly.md#floating-point-negate, those instructions are supposed to be bitwise instructions that preserve the bits, so they can't be implemented as subtraction. However, f32/f64::neg generates the following LLVM IR:

define float @example::foo(float %x) unnamed_addr #0 {
  %0 = fsub float -0.000000e+00, %x
  ret float %0
}
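
For reference, a sign-bit-flipping implementation that preserves NaN payloads, as the spec requires (the constant is the f32 sign-bit mask):

/// Wasm f32.neg: flip only the sign bit instead of doing a float subtraction.
fn f32_neg(x: f32) -> f32 {
    f32::from_bits(x.to_bits() ^ 0x8000_0000)
}

fn main() {
    // Negating a NaN must keep its payload bits intact.
    let nan = f32::from_bits(0x7fc0_1234);
    assert_eq!(f32_neg(nan).to_bits(), 0xffc0_1234);
    assert_eq!(f32_neg(1.5), -1.5);
}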
