Comments (63)

yaahc avatar yaahc commented on July 16, 2024 13

Would putting this crate in the advisory database prevent someone from forking and renaming it and then using that fork? If not it seems like this must be fixed in cargo geiger rather than via an advisory.

from advisory-db.

myrrlyn avatar myrrlyn commented on July 16, 2024 11

If we're going to debate the virtue of a source file, then I'd like to respectfully rejoin that, for example, I am the author of a crate with a ridiculously high unsafe metric despite actually only having a select few parts of the codebase that are actually unsafe. The #[safe] annotation allows authors and readers alike to distinguish between "this performs an action that, while safe, is not registered as such with the compiler", and "this is unsafe and warrants significant attention".

This crate provides an opportunity to decrease the background-noise present and increase the signaling capacity of the unsafe keyword, so that rg unsafe becomes actually useful.

from advisory-db.

najamelan avatar najamelan commented on July 16, 2024 11

@myrrlyn I'm sorry we disagree, but as a reviewer "these uses of unsafe don't merit review" is one of the assumptions you definitely want to double check, so you want to find all occurrences and not only the ones the author thought were maybe dangerous.

from advisory-db.

Manishearth avatar Manishearth commented on July 16, 2024 11

I don't think "this circumvents cargo-geiger" is enough justification. cargo-geiger is a separate tool, and one with a flawed metric that this crate neatly illustrates. If cargo-geiger users care about this crate being caught, cargo-geiger should handle that, not the rust security advisory database.

I don't quite agree with @myrrlyn -- I don't think one should use this crate in production code -- however, his underlying point about degrees of unsafe is one I strongly agree with. cargo-geiger does nothing to help there. Perhaps that's not the point of the tool, but it's just one of the problems with treating the count of unsafe as a security metric. At best it gives a rough filtering to areas of code you might want to look at closer.

And this advisory database is about security.

from advisory-db.

CAD97 avatar CAD97 commented on July 16, 2024 9

If plutonium deserves an advisory, does my rc-borrow? It's not directly trying to hide its internal unsafe usage, but because basically the entire API is emitted from a macro, cargo-geiger shows it as not containing any unsafe.
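
To make the point about macro-emitted APIs concrete, here is a minimal sketch (the macro and function names are made up, not rc-borrow's actual code): the only unsafe block lives inside a macro definition's token tree, so a scanner that does not expand macros, the limitation being described here, counts no unsafe in this source.

macro_rules! impl_get_checked {
    ($name:ident, $ty:ty) => {
        pub fn $name(slice: &[$ty], index: usize) -> &$ty {
            assert!(index < slice.len());
            // SAFETY: the bound was just checked above.
            unsafe { slice.get_unchecked(index) }
        }
    };
}

impl_get_checked!(first_byte_at, u8);

fn main() {
    let data = [1u8, 2, 3];
    // The expanded function uses unsafe internally, yet a scan of the
    // unexpanded source sees zero unsafe expressions.
    println!("{}", first_byte_at(&data, 1));
}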

Or what about proc-ceasar? The point of that crate is to make your source code completely unreadable by applying the Caesar cipher to it. That both hides unsafe usage from cargo-geiger and makes the source code illegible to a human; it's thus strictly worse than plutonium.

It's not a security db's job to pass judgement on whether a library is "good." Both plutonium and proc-ceasar are demonstrations of how proc-macros can be used to obfuscate source code from naive static analysis. They both do exactly what they advertise. There's nothing malicious nor exploitable about the crates themselves.

You can argue that maybe they shouldn't have been published on crates. But that's not the security advisory db's point to argue.

It's also worth mentioning that this advisory won't help prevent any malicious actors from using this technique. It's trivial to reimplement an unsafe hiding macro in your own crate, and any malicious actor is going to do that before pulling in a crate whose express purpose is replacing the unsafe keyword.

If anything, this advisory makes it easier to find out about this trick, and accomplishes the exact opposite of its intent of avoiding hiding of unsafe from cargo-geiger.

from advisory-db.

myrrlyn avatar myrrlyn commented on July 16, 2024 8

Co-signed. As written, the crate is a modification to the visual representation of Rust source code; it does not represent or use a vulnerability in either the compiler’s intake parser or output generator. The only change it makes is in the reader-facing representation of a narrow part of syntax.

plutonium represents a point of discussion about the philosophical approach that we as authors and viewers of Rust syntax take with regard to a class of operation. It does not represent any flaw in either the production or execution of a program, and as such, does not qualify as a vulnerability.

from advisory-db.

tarcieri avatar tarcieri commented on July 16, 2024 8

The crate isn't unmaintained, though. By all appearances it implements its intended functionality perfectly, the only problem is that some people don't like that functionality.

Informational advisories are modeled as an enum:

https://github.com/RustSec/rustsec-crate/blob/ee8b2e5/src/advisory/informational.rs#L10-L21

pub enum Informational {
    /// Security notices for a crate which are published on <https://rustsec.org>
    /// but don't represent a vulnerability in a crate itself.
    Notice,

    /// Crate is unmaintained / abandoned
    Unmaintained,

    /// Other types of informational advisories: left open-ended to add
    /// more of them in the future.
    Other(String),
}

I think we could probably use the catch-all "notice" category for this purpose. This surfaces as a warning in cargo audit.
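
As a rough, self-contained sketch of how that could surface downstream (this is not cargo-audit's actual code; the enum is simply re-declared locally), a consumer might branch on the informational category to decide between a warning and a hard failure:

#[allow(dead_code)]
pub enum Informational {
    Notice,
    Unmaintained,
    Other(String),
}

/// Returns true when a report should fail the audit, false when it should only warn.
fn is_hard_failure(informational: Option<&Informational>) -> bool {
    match informational {
        // Informational advisories (notice, unmaintained, ...) only warn.
        Some(_) => false,
        // A plain advisory with no informational tag is a vulnerability.
        None => true,
    }
}

fn main() {
    assert!(!is_hard_failure(Some(&Informational::Notice)));
    assert!(is_hard_failure(None));
}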

from advisory-db.

CAD97 avatar CAD97 commented on July 16, 2024 7

@najamelan if you care about trusting the packages you're running, you should be using cargo-crev and reviewing the code you're downloading. Libraries definitely do not need unsafe to do malicious things, and cargo-geiger should not in any way be a substitute for actual code review.

I suppose you should just go ahead and ban proc-caesar as it also allows writing unsafe code without writing the unsafe keyword, and rc-borrow because it shows up as "clean" in cargo-geiger due to having all of the API produced by a macro.

unsafe is useful as a hint of where to pay specific attention to memory safety, but its absence is nowhere near a license to skip reviewing unsafe-free crates.

You opted in to whatever your dependency is doing when you trusted them to provide a library without doing thorough code review yourself.

It really does sound like what you need is cargo-crev. It even provides the geiger numbers as a guideline towards the most important review targets.

from advisory-db.

CAD97 avatar CAD97 commented on July 16, 2024 7

I mostly just hope that the RustSec DB can stick to mostly objective advisories. crev's web-of-trust model is built to handle subjective review and trust; the central authority that rustsec uses is best when the central authority has some amount of (perceived) objectivity.

from advisory-db.

alex avatar alex commented on July 16, 2024 6

this is code that lets you write memory-unsafe code without the unsafe keyword. That's unsoundness. When it happens by accident it's a security issue; when it happens on purpose, it still is.

from advisory-db.

RalfJung avatar RalfJung commented on July 16, 2024 6

@myrrlyn

No, it is not unsoundness. Unsoundness is specifically a term for the emission of a class of incorrect executable instructions, or a class of semantic input that causes the compiler to construct an incorrect view of the abstract program. It does not apply to the dialect of syntax that viewers observe

You are wrong here. Unsoundness has nothing to do with executable instructions. It has to do with whether safe code can cause UB (and UB, too, has nothing to do with executing instructions unless you mean "executed on the Rust Abstract Machine"). See for example this definition.

Whether or not plutonium is unsound depends on whether the following is considered "safe code":

#[safe]
fn scary() {
    // code
}

Given that unsafe is the signalling keyword for unsafe code in Rust, and given that that keyword does not appear here -- to the contrary, its negation does -- I think one can very reasonably say that plutonium is unsound. Sure, it is "just" introducing a different keyword as alternative to unsafe, but since unsoundness is all about that keyword, I think this is enough.

from advisory-db.

RalfJung avatar RalfJung commented on July 16, 2024 6

It's second-order unsound because it is behaving as advertised, but the advertised behavior is bad.

That's like saying a buffer overflow is not a bug when the crate says that it is intentional. I do not buy that argument. Sure this is a social system, but it risks becoming meaningless if we do not try to uphold some objective standard. "Soundness" is the objective standard on which the entire Rust ecosystem rests. The social part is the one where we all agree (or not) that soundness is important, and unsoundness a bug.

"(Un)sound" is an objective technical criterion, on which crate author intentions have no influence. (The only part I think that is subjective here is how to mark a macro as being "unsafe to use"; I do not know if "it should have unsafe in its name" is the consensus opinion.) Plutonium is unsound. Documentation can point out that it is deliberately unsound, but no amount of documentation can change the fact that it is unsound. Thus, is should have an advisory filed with whatever treatment we give to soundness bugs.

We could decide that unsoundness is not a problem when it is intentional. That is how I understand your argument. I strongly disagree with that.

I'd be way more comfortable with this if we distinguished between "concrete vulns" and "unsound", but we don't, so I'm wary of expanding the definition.

I'd say we should make that distinction. If we did, would you object to an "unsound" advisory being filed for plutonium? (EDIT: ah, looks like you do not. We remain in very strong disagreement then.)

from advisory-db.

RalfJung avatar RalfJung commented on July 16, 2024 5

I do wish we distinguished between what you're calling concrete vulns vs unsound, but I think plutonium is a part of a third category that's an additional layer of meta.

I disagree, I don't think such a layer even exists -- every "second-order unsound" library is also "first-order unsound".

Plutonium is "first-order" unsound, as demonstrated by this "safe" program that is causing UB:

#[safe]
fn nasty() {
    *(std::ptr::null_mut::<i32>()) = 0;
}

fn main() { nasty() }

from advisory-db.

najamelan avatar najamelan commented on July 16, 2024 5

That would be my preference, and I feel it's common sense in a safety focused language, but that is not what happens (not today at least), and not what this advisory was about. It also doesn't mean that that needs to happen for all categories of advisories. What I take away from this discussion is that there is demand for:

  • more distinction in different types of advisories
  • formal description of what goes in each category
  • better configuration options about what output you want to see

A possible implementation for cargo publish could be to require a flag in order to publish to crates if a dependency contains an exploitable vulnerability, while only printing the output from cargo-audit to the console for warnings.

from advisory-db.

najamelan avatar najamelan commented on July 16, 2024 4

I'm fine with this being a warning. I just feel this should never slip unnoticed into a dependency graph, and the crate was deliberately designed to exploit a limitation of cargo-geiger so people wouldn't notice they have unsafe in certain dependencies.

It is further problematic as grepping for the string unsafe is a common review technique disabled by this crate, which warrants a warning in my opinion.

I can only conclude that the intent and purpose of this crate is to make reviewing for memory safety harder, which is not something I think should be on crates. If people want to write unsafe code without using the keyword at home, I couldn't care less, but this invites anyone not wanting hassle for using unsafe to just add it as a dep in Cargo.toml, and I'd rather we don't make that easier and less transparent.

from advisory-db.

CAD97 avatar CAD97 commented on July 16, 2024 4

@najamelan have you misunderstood what plutonium does? All it does is change how you write the unsafe keyword, from

fn scary() {
    unsafe {
        // code
    }
}

to

#[safe]
fn scary() {
    // code
}

and as such, unsafety is still opt in. Its existence changes nothing about Rust's unsafe model, it just introduces a different way of writing code that adheres to it.


As for having it be a "this crate shouldn't be used in production code" warning: for one, that's a value judgement that a centralized DB shouldn't be in charge of. (Instead, maybe use a crev review of "don't use," which is a level that they do offer.) But more importantly, how do you define an actual cutoff point for what crates deserve that kind of advisory? There's thousands of toy libraries that shouldn't be used in production, do they all deserve an advisory?


And I must stress once again: plutonium is not doing anything sneaky. All it is doing is providing a macro that inserts an unsafe block around the body of the function. It's trivial for me to write a macro_rules macro pro_gamer_move! that does the exact same thing and which impedes cargo-geiger in the exact same fashion.
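
For illustration, one way to write that hypothetical pro_gamer_move! macro (a sketch, applied here to an expression) shows how little it takes:

macro_rules! pro_gamer_move {
    ($($tt:tt)*) => { unsafe { $($tt)* } };
}

fn main() {
    let bytes = [10u8, 20, 30];
    // A raw pointer is dereferenced without the word `unsafe` appearing at
    // the call site -- the same effect plutonium's attribute has.
    let first = pro_gamer_move!(*bytes.as_ptr());
    assert_eq!(first, 10);
}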

plutonium does not deserve a security notice. The domain of "bad style" crate advisory is for review, and cargo-crev handles it nicely.


Perhaps cargo-deny should also offer functionality to let you blacklist crates yourself.

from advisory-db.

myrrlyn avatar myrrlyn commented on July 16, 2024 3

No, it is not unsoundness. Unsoundness is specifically a term for the emission of a class of incorrect executable instructions, or a class of semantic input that causes the compiler to construct an incorrect view of the abstract program. It does not apply to the dialect of syntax that viewers observe.

from advisory-db.

RalfJung avatar RalfJung commented on July 16, 2024 3

@CAD97

If plutonium deserves an advisory, does my rc-borrow? It's not directly trying to hide its internal unsafe usage, but because basically the entire API is emitted from a macro, cargo-geiger shows it as not containing any unsafe.

Is it possible to cause UB using that macro? If yes, does the macro name contain "unsafe"? If no, then yes I'd say this is unsound.

I've seen such unsafe-to-use macros before, which internally use unsafe { .. }, and the usual convention seems to be to make unsafe part of the macro name to mark that.
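
As a sketch of that naming convention (a hypothetical macro, not taken from any particular crate), the word unsafe in the macro's name keeps the obligation greppable even though the call site contains no unsafe block:

#[macro_export]
macro_rules! unsafe_assume_init {
    ($maybe:expr) => {
        // The caller must guarantee `$maybe` is initialized, exactly as with
        // `MaybeUninit::assume_init` itself; the macro name signals that.
        unsafe { $maybe.assume_init() }
    };
}

fn main() {
    let mut slot = std::mem::MaybeUninit::<u32>::uninit();
    slot.write(5);
    // Greppable: the obligation is visible in the macro name.
    let value = unsafe_assume_init!(slot);
    assert_eq!(value, 5);
}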

from advisory-db.

CAD97 avatar CAD97 commented on July 16, 2024 3

@RalfJung, here's a thought experiment:

I think unsafe fn and unsafe { } are different enough that they should have different keywords; the former expresses a requirement on the user to remain safe, and is potentially unsound to call; the latter is an upholding of requirements, and is never unsound to execute. (Assume I've put things in micro modules to eliminate safe code that can break safety invariants.)

Believing this strongly, I publish a new crate, trusted, which provides a single macro:

#[macro_export]
macro_rules! trusted {
    {$($tt:tt)*} => {unsafe{$($tt)*}};
}

with the intent of using this macro instead of unsafe blocks for when the unsafety is properly and fully contained within the block.

Is this an unsound crate? Does it deserve an advisory, informational or not? I'd argue that it adds an important distinction between encapsulated unsafe and "leaky unsafe" that also relies on safe code outside the block for correctness.

I ask this because plutonium::safe is the exact same API, just applied as an attribute to a function rather than a block-style macro. I seem to recall safety-dance discussing potentially providing a version of #[safe] that takes a review identifier and a hash of the code, and prevents compiling the code if it changes from the reviewed version. It'd not be incorrect if that attribute also made the body of the function an unsafe block, as it, by existing, points to a review showing the annotated function safe.

I'd find it unfortunate if we said macros were sound or not based on whether their name contains the string unsafe.

If a macro is intended to always be safe to use, and in some cases allows you to execute unsafe code without upholding the preconditions, then that would be a soundness issue.

If a macro is intended to be unsafe to use, even if it doesn't contain the string unsafe in its name, I don't think its existence is unsound. You can argue that it's a bad crate, sure, but not that it deserves to be linted out of existence.

It's sort of blunt to put it that way, but it comes down to the fact that you need to read the docs for the macros you're using, because they can do anything. (Including, but not limited to, stealing your Bitcoin wallet at compile time.)

I'm in full agreement with @Manishearth here; you can use plutonium to write unsound, potentially vulnerable code, but plutonium itself is not unsound nor potentially vulnerable.


This is definitely on the edge. proc-ceasar is, honestly, probably also on the edge. The main worry here is, imho, that accepting these "maybe" cases into the central authority decreases trust in said central authority.

It would definitely benefit RustSec if it could provide objective rules for when something definitely does deserve an advisory, and some objective rules for things that definitely don't deserve an advisory and should be delegated to more opinionated services like crev, geiger, and/or deny. Even if the middle ground is fuzzy, it's beneficial to have clearly defined areas and guidelines that can be used to make decisions for the middle area.

I would personally put the useful "yes this is advisory worthy" main goalposts at

  • "major:" libraries that, through the exclusive use of safe APIs, can perform unsound operations (exhibit undefined behavior).
    • As the language does not provide for unsafe macros, macros are considered safe by default, but may be considered unsafe to use based on their documentation.
  • "minor:" libraries that fail to uphold documented postconditions in a way that isn't major (defined above), such as panicking when documented not to.

Just a side note: to those saying crev is "worse" than RustSec for this kind of thing because it doesn't have a central authority, for one that's sort of the point, that it doesn't need one. But more importantly, nothing prevents the use of a crev proof repository as a central authority.

And yes, crev struggles to bootstrap itself to being useful, due in part to adoption. But if people don't adopt crev because it doesn't have adoption, it'll never get that adoption in the first place.

from advisory-db.

Manishearth avatar Manishearth commented on July 16, 2024 3

I also do think the database should try and distinguish between "vulnerability" and "can accidentally let you write UB" (i.e. unsound), because the social processes around fixing the two are different -- specifically the latter may not require action on part of the end user, but might on part of intermediate crates. (End users still need to know, and perhaps pressure their upstreams to verify things, or audit them themselves)

from advisory-db.

anderejd avatar anderejd commented on July 16, 2024 2

Hi everyone, the cargo-geiger guy here. Interesting discussion.

In my opinion this issue has very little or nothing to do with cargo-geiger and is only related to the intention of the crate in question. The intention is to subvert the expectations of developers and to sneak code past code review.

That being said, if anyone has the time to add support for macro expansion to cargo-geiger that would be very nice!

from advisory-db.

najamelan avatar najamelan commented on July 16, 2024 2

It also sneaks past grepping for "unsafe" which I think is commonly used for finding low hanging fruit in review.

I think the point is that reviewing every version of everything in your dependency graph is often impossible from a resource point of view, so we use heuristics by sheer necessity. In that light, anything whose only purpose is hampering non-exhaustive review is a very big sign of "possible danger" or "possible bad intent", and I feel it would be worth having a centralized warning for that which everyone can benefit from. I don't really understand why that is considered so opinionated.

I definitely don't review every new version of everything when I run cargo update, but if this were to appear in my dep graph, I would definitely take the time to have a closer look. How anyone wanting to be responsible for what they publish on crates could not want to be alerted if this crept in is beyond me.

Individualized tools like cargo-crev or cargo-deny aren't much help for benefiting from the review work of others. Centralized tools have a much higher practical effect. I don't think the focus on theory is very helpful. In theory cargo-crev could be a good tool for this, in practice it's not. In theory this might or might not violate the soundness rules of safe rust. For practical purposes that's not really the point.

I think the discussion in this issue brings up a few fundamental questions:

  1. How do we move towards a practically safer ecosystem, together?
  2. Why do people feel like warning about this is opinionated rather than common sense?
  3. Is the problem rather which tool is used for what, than the underlying issue? If so, what other tools could effectively help here, as opposed to theoretically?
  4. Isn't there other low hanging fruit, like wouldn't it be nice if cargo publish required passing an extra flag when you try to publish to crates with known vulnerabilities in your deps? Obviously can only work for people who have cargo audit installed, but wouldn't it make sense to ship that by default one day?

from advisory-db.

Manishearth avatar Manishearth commented on July 16, 2024 2

is a very big sign of "possible danger" or "possible bad intent"

Yes, precisely, this is a "security fishiness" metric, NOT a vulnerability. I don't dispute that this crate in a depgraph is fishy, I dispute that it is a security issue meriting addition to this database. It's up to tools like crev and geiger to handle this.

Why do people feel like warning about this is opinionated rather than common sense?

Warn about it all you want, just not here, the security vulnerability database.

We do need tools to help us evaluate and review the libraries

Sure. These tools exist, they're cargo-crev and cargo-geiger. This is not a security vulnerability and should not be a part of cargo audit's database, which needs to have a higher standard if we wish to have people actually use it to watch for security issues and not be worried about being swamped with other things. The fact that this was merged certainly reduced my confidence in this database.

from advisory-db.

Manishearth avatar Manishearth commented on July 16, 2024 2

It's second-order unsound because it is behaving as advertised, but the advertised behavior is bad.

With first-order soundness bugs, I may accidentally use the crate in my crate in a way that I expect to work but is actually broken under the hood, and I need to be notified so I can audit my crate and upgrade if necessary (ideally I should always upgrade, but it's not always possible, especially if a patch doesn't exist yet).

There's no "oops that's actually broken" moment here. Whoever adds plutonium to their crate knows exactly what they're doing. It's their downstream that need to worry about it being introduced in their dep graph, which is what makes it second order, to me.

This is a social system, the intent matters; you have an additional layer of intentional abuse involved before this crate becomes a vulnerability.

I'd be way more comfortable with this if we distinguished between "concrete vulns" and "unsound", but we don't, so I'm wary of expanding the definition.

from advisory-db.

CAD97 avatar CAD97 commented on July 16, 2024 2

I'm not entirely sure I get what you mean here, but you can have pub fn, no mention of unsafe on your API that inside the function body uses unsafe {}. That is, unsafe is not contagious.

And that's what the #[safe] attribute in plutonium is, once you strip away the documentation's presentation of "make everything less safe" anyway.

It is just a transform from #[safe] fn foo() { /* code */ } to fn foo() {unsafe{ /* code */ }}. Inside the function annotated by #[plutonium::safe], you're not writing Safe Rust, you're writing Unsafe Rust. If you accept that #[plutonium::safe] is an "unsafe macro" (which the language has no definition of, so falls entirely to documentation), then there is nothing wrong with this.

To be clear, I know the difference between unsafe blocks and functions. That's why I think that you can argue for my thought experiment trusted! as being a meaningful distinction. #[plutonium::safe] is then the exact API, just as an attribute rather than a block macro.

So let me ask you directly, @najamelan:

Say I (or @myrrlyn, who has expressed that using this sort of macro would make his crate maintenance easier by separating "kinds" of unsafe) release the trusted crate, whose entire body I've inlined below.

/// Alias for an `unsafe` block.
///
/// This should be used to mark `unsafe` blocks that are not in any way dependent
/// on external safe code for correctness. That is, any unsafe preconditions should be
/// upheld solely by code within this block and safety invariants of the code it calls.
#[macro_export]
macro_rules! trusted {
    {$($tt:tt)*} => {unsafe{$($tt)*}};
}

Would you see it fit to submit a security advisory for that crate? After all, I can write

#[macro_use]
extern crate trusted;

fn main() {
    trusted! {
        *std::ptr::null::<u8>();
    }
}

and cause UB without the string unsafe anywhere in my code.

If you would, I strongly disagree with you. trusted! is explicitly a different way of spelling unsafe that communicates a different intent than just an unsafe block.

If you wouldn't, what makes this any different than #[plutonium::safe]?

from advisory-db.

najamelan avatar najamelan commented on July 16, 2024 2

Would you see it fit to submit a security advisory for that crate?

Yes

It is just a transform from #[safe] fn foo() { /* code */ } to fn foo() {unsafe{ /* code */ }}

I think I really don't understand what makes the former desirable over the latter. E.g. what problem are you trying to solve? If you want to distinguish between "different kinds of unsafe" in your crate, just put a keyword of your choice in a comment; then you can even search through your codebase for it, no? No unsoundness, no lying to reviewers, ...

from advisory-db.

CAD97 avatar CAD97 commented on July 16, 2024 2

@najamelan and I don't see the purpose of requiring unsafe code to use the unsafe keyword when this is the reason that Rust has its macro system. Nothing is unsound about trusted!, unless you have a view of soundness so draconian that the Rust language itself is unsound for allowing you to cause UB in unsafe blocks. How is writing trusted! different from writing unsafe in terms of API soundness?

I personally agree that it's probably not worth it to use trusted! rather than unsafe. But I don't think the answer is to uniformly declare trusted! as evil and attempt to lint it out of existence. (And to be clear: that is what a RustSec vulnerability advisory is (perceived as): an attempt to lint (the unfixed versions of) a crate out of use.)

In any case, I'm not going to argue this point any further one way or another. I agree that you probably shouldn't be using plutonium, after all!

What I disagree with is trying to ban plutonium or any other crate that tries to offer a different way to scope unsafe. Experimentation is a good thing. I see linting plutonium away as roughly equivalent to linting fehler away: many people would prefer that both of these crates aren't used. But the macro system exists specifically so that people can create micro scoped dialects that express their problem domain better than macroless Rust.

from advisory-db.

tarcieri avatar tarcieri commented on July 16, 2024 1

@najamelan

I am fine with this being a notice or a warning, but I do find it problematic if that is hidden by default. For one, I have been using and thus to some extent relying on cargo-audit for a while, but I didn't know that I wasn't seeing all categories of warnings.

For what it's worth, there aren't presently any advisories in the database using the informational = "notice" setting, so you haven't been missing anything.

Agreed the docs could be better, also there are some unmerged PRs which make the warning functionality for informational advisories more flexible.

@yaahc

Would putting this crate in the advisory database prevent someone from forking and renaming it and then using that fork? If not it seems like this must be fixed in cargo geiger rather than via an advisory.

This is true of any crate an advisory is filed against.

The best we can do is handle crates on a case-by-case basis.

@CAD97

As for having it be a "this crate shouldn't be used in production code" warning: for one, that's a value judgement that a centralized DB shouldn't be in charge of.

I would be perfectly fine with removing the verbiage to that effect from the advisory.

Would you mind opening a PR?

from advisory-db.

CAD97 avatar CAD97 commented on July 16, 2024 1

I made the wording more objective in #279.

I still believe that the advisory should be marked obsolete and the description point people to use cargo-crev for this kind of "bad code judgement," but don't have the bandwidth to write the wording currently. The description should probably also note how the advisory got in when it shouldn't (being more a value judgement than an objective advisory, even with my objectivity PR).

from advisory-db.

najamelan avatar najamelan commented on July 16, 2024 1

and as such, unsafety is still opt in

It is if you use the crate, not if it creeps into your dependency graph.

Just to be clear. I just reported this to cargo-geiger, because I felt people should know about it, where it was suggested this should be dealt with by the advisory. At cargo-deny they also think this is something that should be handled by the advisory, so I filed a PR here.

Given that this seems to be an uphill battle, just know that it doesn't matter that much to me. Cargo geiger will get fixed, and I have it hardcoded in my cargo-deny configuration. We can agree to disagree, but every time you do a cargo test and cargo has internet access, you are running random untrusted code on your dev machine. That's the state of affairs today.

Probably some companies do development uniquely in containers/VMs, but there probably aren't many, given the level of friction it adds. And there is no reason to believe Rust developers pick better passwords than npm developers. Seeing plutonium just reminds me of how dire the situation is, and warning people about it is a drop in the ocean, but it's a drop. I'd just hope that collectively we find ways to get closer to improving on this one step at a time.

from advisory-db.

tarcieri avatar tarcieri commented on July 16, 2024 1

@CAD97 cargo-crev is an interesting tool, but it has comparatively lower adoption/reach, and while you may happen to disagree with it, it seems people are also interested in using the RustSec Database for this purpose. I’d also like to point out these tools aren’t a dichotomy: I’d suggest using both.

from advisory-db.

Manishearth avatar Manishearth commented on July 16, 2024 1

If we did, would you object to an "unsound" advisory being filed for plutonium? (EDIT: ah, looks like you do not. We remain in very strong disagreement then.)

If we did, I would still dislike it, but I would not be bothered to actually object.

That's like saying a buffer overflow is not a bug when the crate says that it is intentional.

A buffer overflow is different, though. If a crate intentionally causes a buffer overflow, it should be marked as a soundness vuln. This is one level removed, this crate enables others to intentionally escape unsafe. People could, for example, use it the way @CAD97 (and earlier, @myrrlyn) describe, as an "alternate unsafe block" with different semantics and it would be totally safe. Hell, we've had multiple discussions in the past whether or not unsafe should have been called assert_safe, safe, trustme, trusted, or another one of many words.

Intent matters because the people writing intentional bugs do not need to be warned, their downstream does. If someone uses plutonium for bad, they should absolutely have their crate placed in this db.

(I actually really want the converse of this crate, a macro that converts fn foo(args) {body} to unsafe fn foo(args){ fn foo_inner(args){body}; foo_inner() } so that we can experiment with rust-lang/rfcs#2585 )
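
A minimal macro_rules sketch of that converse transform (hypothetical name, handling only a single simple fn signature) could look like the following: the body goes into an inner safe fn, so it gains no unsafe privileges, but callers still have to write unsafe.

macro_rules! unsafe_to_call {
    (fn $name:ident($($arg:ident: $ty:ty),*) -> $ret:ty $body:block) => {
        unsafe fn $name($($arg: $ty),*) -> $ret {
            // The body is compiled as ordinary safe code inside `inner`.
            fn inner($($arg: $ty),*) -> $ret $body
            inner($($arg),*)
        }
    };
}

unsafe_to_call! {
    fn add_one(x: i32) -> i32 {
        x + 1
    }
}

fn main() {
    // Calling requires `unsafe`, even though the body is plain safe code.
    let y = unsafe { add_one(41) };
    assert_eq!(y, 42);
}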

from advisory-db.

najamelan avatar najamelan commented on July 16, 2024 1

attempt to lint it out of existence ... trying to ban plutonium ... I see linting plutonium away

I apologize if the wording in the advisory that it shouldn't be used in production was too strong and too opinionated, and I have no problem with that being changed, but other than that, I really don't see where you are coming from.

Nobody is trying to ban anything or lint it out of existence. Only make sure that people are conscious that something really fishy slipped into their dependency tree and that they are ok with that. If there are many like you, it wouldn't be linted out of existence at all. It does sound a bit like the argument that GMO usage shouldn't be labelled in ingredient lists because people won't want to buy it.

from advisory-db.

Manishearth avatar Manishearth commented on July 16, 2024 1

That analogy is facile: When you go to the store and buy GMO produce, does the cashier give you a stern look and say "are you SURE you want this"? Labeling something does not make a value judgement, warning on it does. This is why crev and geiger are the right places for this, crev is equipped to handle "this crate does not gel with my subjective values". Crev is the counterpart of the GMO label, not this database.

from advisory-db.

Manishearth avatar Manishearth commented on July 16, 2024 1

Sure, sounds good

from advisory-db.

bjorn3 avatar bjorn3 commented on July 16, 2024 1

A buffer overflow is different, though. If a crate intentionally causes a buffer overflow, it should be marked as a soundness vuln. This is one level removed, this crate enables others to intentionally escape unsafe.

fn call_asm_bytes(code: &[u8]) {
    unsafe {
        // Mark the caller-supplied bytes as executable...
        region::protect(code.as_ptr(), code.len(), region::Protection::ReadWriteExecute).unwrap();
        // ...then reinterpret the slice's data pointer as a function pointer and jump into the bytes.
        (*(&code as *const _ as *const fn()))();
    }
}

also lets you intentionally escape unsafe code. It doesn't by itself cause a buffer overflow. Instead you have to pass it asm bytes that perform a buffer overflow. However it is unsound, as it doesn't verify the asm bytes for memory safety. This is not much different from other crates that allow you to intentionally escape unsafe.

from advisory-db.

workingjubilee avatar workingjubilee commented on July 16, 2024 1

Just like "GMO" is in actuality a very specifically legislated definition that has gaping holes you could slip a three-eyed fish through, and so people do in fact eat what they would probably label "GMO food" if its production was described to them (whether or not that is a naive decision on their part), it is not the case that people have a choice about whether or not their code includes unsafe in Rust. I assert this because it is common knowledge that the core and standard libraries rely on unsafe code and so it is challenging to write any practical program without eventually invoking an unsafe function (somewhere, deeply buried in the compiler). If the utility of such warnings as unsafe are that we can verify the code and establish for ourselves that the safe interfaces to unsafe functions are indeed safe, and the primary purpose of plutonium is to circumvent cargo geiger, then plutonium already existed even before it was written, as even if you listed off all the locations to me, it is beyond me to verify the entirety of rustc and, having experimented, it appears cargo geiger does not even attempt to trace my crate's choices with use std::.

Sure, people say that open source makes bugs easier to notice, but I sure haven't checked if all 2701 contributors to rustc are in fact not secretly all alter egos creatively devised by Ken Thompson, here to make us really reflect on trusting trust. It would have been really clever of Ken Thompson to find a way to download his mind into my brain if so, but it would hardly be the most surprising discovery of the year, all things considered. I mean... 2020.

from advisory-db.

CAD97 avatar CAD97 commented on July 16, 2024 1

Yes, I fully agree that a publicly-exposed safe fn that performs unsafe operations without internally guaranteeing that their preconditions are met is worthy of an advisory that the function is unsound. The language has a way to mark preconditions of a fn necessary for soundness: unsafe.

The language provides no such mechanism for macros.

Tangent on the safety of macros

I fully agree that the default for macros is to be safe, and if at all possible, a macro that is not safe to call should require using unsafe to call it by calling some unsafe fn internally without an unsafe block internal to the macro. But I also strongly believe that because the language provides no mechanism for actually marking a macro as unsafe to call, that property can only be determined from documentation. If a macro does not have a # Safety header in its documentation, I would still be on board with filing a soundness advisory against it. To be honest, I think the only reasonable macro that allows writing arbitrary unsafe code inside it is a macro whose entire purpose is to be an alias for unsafe.
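
As a sketch of that documentation-based marking (a hypothetical macro; the language itself attaches no meaning to this), an unsafe-to-use macro can at least carry a # Safety section mirroring the convention for unsafe fn:

/// Reads the value behind `ptr` without any checks.
///
/// # Safety
///
/// `ptr` must be non-null, properly aligned, and point to a valid,
/// initialized value of the expected type.
#[macro_export]
macro_rules! read_raw {
    ($ptr:expr) => {
        unsafe { ::std::ptr::read($ptr) }
    };
}

fn main() {
    let x = 42u64;
    // Upholding the documented contract: the pointer comes from a live value.
    let y: u64 = read_raw!(&x as *const u64);
    assert_eq!(y, 42);
}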

Unsafe code is tricky, and doing any macro source transforms on unsafe code is scary.

from advisory-db.

tarcieri avatar tarcieri commented on July 16, 2024

It’s definitely an interesting and unusual case.

It might make sense to create a new class of informational advisories for cases like this, as those are the existing mechanism for publishing advisories which aren’t specifically an advisory of a security vulnerability (presently they are used exclusively for tracking unmaintained crates). Informational advisories are surfaced as warnings instead of vulnerabilities.

from advisory-db.

Shnatsel avatar Shnatsel commented on July 16, 2024

I see it was filed as a regular security advisory rather than a special "informational" advisory type we use for e.g. unmaintained crates (example). @najamelan how do you feel about downgrading the advisory to a warning? This will still allow using it in cargo-deny, which I believe was the original motivation for the advisory.

from advisory-db.

hdevalence avatar hdevalence commented on July 16, 2024

The crate isn't unmaintained, though. By all appearances it implements its intended functionality perfectly, the only problem is that some people don't like that functionality.

from advisory-db.

andrew-d avatar andrew-d commented on July 16, 2024

FWIW, my vote is that some sort of notification is correct here. To draw a comparison: if a crate bundled and made use of a shared object that contained a vulnerability, I would want to know about that, even if all the Rust code in the crate was safe/sound since the expected behavior ("this crate is safe and behaves as expected") is violated. Similarly, plutonium subverts the expectations and security guarantees of a developer that transitively depends on it, in a way that could result in unsafety or unsoundness, and that meets the bar for a reportable vulnerability to me.

from advisory-db.

hdevalence avatar hdevalence commented on July 16, 2024

@najamelan

I can only conclude that the intent and purpose of this crate is to make reviewing for memory safety harder, which is not something I think should be on crates.

The thing is, security advisories are not about what you think should be on crates.io.

There is no security issue here. There should not be a security advisory, informational or otherwise. It devalues the meaning of this database.

Let's be clear: we're talking about a crate with 0 reverse dependencies and (as of this writing) 102 downloads by anyone, ever. The only reason there's an "advisory" is because someone saw it mentioned on Reddit and decided they didn't like it.

from advisory-db.

sunjay avatar sunjay commented on July 16, 2024

If cargo-geiger users care about this crate being caught, cargo-geiger should handle that, not the rust security advisory database.

+1 on this and the rest of Manish's comment as well. I think this particular sentence should be the main focus of this issue. The security advisory should be removed and a check for this should be added to cargo-geiger if that is deemed necessary.

from advisory-db.

tarcieri avatar tarcieri commented on July 16, 2024

It seems we're pretty polarized on this issue. I see both sides of it as well: it's definitely true this is not a vulnerability in and of itself, however it'd be nice if, somewhere, there existed a list of crates which circumvent static analysis around safety.

There's a case to be made that RustSec isn't the place for it, however in developing the informational advisory feature originally to handle unmaintained crates, my observation was the advisory format is great for cataloging this information.

Furthermore, it's been designed to be quite flexible: the alerting behavior for informational crates can be configured on a category-by-category basis. As it were, if we were to catalogue this advisory as informational, by default it wouldn't be surfaced by cargo audit at all unless the user explicitly opted into doing so via configuration/arguments.

If we do want to go down this road, I think it might make sense to define a new category of informational advisories for this purpose, although I'm a bit at a loss as to what to call it.

Here's what I'd propose as a path forward:

  • Mark RUSTSEC-2020-0011 as an informational = "notice" advisory as I suggested earlier, which would make it invisible to anyone who does not explicitly opt into warnings for notice advisories
  • If people feel particularly strongly about "yanking" this advisory from the database entirely, please open a PR that adds obsolete = true to it (or update #276 to do this), which will flag it as ignored/yanked by the client (by policy we don't delete advisories once they've been assigned an ID, ala crates.io's yank feature not actually "deleting" the crate). Once that happens we can continue the discussion there.

from advisory-db.

najamelan avatar najamelan commented on July 16, 2024

I am fine with this being a notice or a warning, but I do find it problematic if that is hidden by default. For one, I have been using and thus to some extent relying on cargo-audit for a while, but I didn't know that I wasn't seeing all categories of warnings. And I can't find anything on how to configure that in the README. If it were up to me, cargo publish would refuse to publish crates with dependencies which have advisories without requiring a flag, like --allow-dirty does for a dirty working directory.

Running cargo help audit reveals:

    -L, --log-level <log-level>
            The log level for messages [default: warn]

Running cargo audit --log-level notice gives:

error: unrecognized option `--log-level`

cargo-audit 0.11.2
Tony Arcieri <[email protected]>
Audit Cargo.lock for crates with security vulnerabilities

FLAGS:
    -c, --config CONFIG       path to configuration file
    -h, --help                print help message
    -v, --verbose             be verbose

It doesn't work, but there is now a mention of a configuration file, hmm... Where is the documentation for how to use the configuration file?

I think the point is that people should consent to something like this getting into their dependency graph. Unsafety should be opt-in, not opt-out. Currently you already need to know about, and not forget to run, tools like cargo-audit before every release, but if you then also have to know about mysterious configuration settings, it's quite self-defeating.

I am fine with notices and warnings returning 0 exit status for CI and such, but for the 0-2 lines of output that an up to date crate should have, is it really that bad to just show everything by default?

Would putting this crate in the advisory database prevent someone from forking and renaming it and then using that fork?

No it wouldn't. But if they add it to crates, anyone who notices can file an advisory for it. And if not they need to use a git dep, something that is visible during compilation and can be more consistently detected by cargo deny. For sure there are many ways to do malicious things that are hard to detect right now. The ecosystem isn't very secure. And it's not really feasible to manually verify every version of every dependency, especially as the ecosystem is young and many crates update often. That's where automated tools come in to alleviate the problem somewhat.

If not it seems like this must be fixed in cargo geiger

People are working on that.

from advisory-db.

tarcieri avatar tarcieri commented on July 16, 2024

I marked RUSTSEC-2020-0011 as informational = "notice" in #278

from advisory-db.

Manishearth avatar Manishearth commented on July 16, 2024

I'd be surprised if a weird annotation would sneak past code review. The intention is to sneak past automated tools like geiger, making it a geiger issue

from advisory-db.

anderejd avatar anderejd commented on July 16, 2024

@Manishearth Okay, you think this technique to sneak code past code review would be ineffective, but does that matter? Is it not the intention to subvert code review (and code review tools), rather than its approximate effectiveness, that matters?

Is it realistic to think that we as programmers read every line of code of every dependency of every dependency and fully understand every piece? I would say no, that is for most open source projects at least unrealistic. We do need tools to help us evaluate and review the libraries, be it a simple text search or something more.

The crate in question is a security problem since it aims to make it harder to rely on one of Rust's main selling points over C/C++, that we can search for the "unsafe" keyword during code review.

from advisory-db.

najamelan avatar najamelan commented on July 16, 2024

I don't dispute that this crate in a depgraph is fishy

I'm relieved to see we agree on that.

Yes, precisely, this is a "security fishiness" metric, NOT a vulnerability.

Nor are unmaintained crates, but maybe the whole concept of the advisory can be re-examined. I think you're always entitled to question existing conventions.

It's up to tools like crev

I don't think crev is a workable solution at the moment. If you want I can explain in detail why. Maybe something else could be, but right now the advisory seems the most appropriate.

and geiger

Won't stop it from getting round grep "unsafe"

be worried about being swamped with other things

I'm sorry, I would like to take you seriously, but really? You will only get swamped by it if you actually have it in your crate graph. That being said, any crate you run cargo audit on should probably have at most 0 serious issues and 2-3 low-risk issues like fishy and unmaintained crates. I sure think it's desirable that people can opt out of seeing notices, warnings and maybe certain categories in those, or even individual ones by ignoring certain advisory numbers in a config file. I don't think that works today, but that definitely is possible to implement and seems to be intended.

Is that what you call being swamped?

from advisory-db.

RalfJung avatar RalfJung commented on July 16, 2024

It also sneaks past grepping for "unsafe" which I think is commonly used for finding low hanging fruit in review.

Note that the convention I mentioned of using unsafe in macro names when they can introduce UB into otherwise safe code would fix this as well.

So, as far as I am concerned, plutonium would become sound if it renamed its #[safe] attribute to #[unsafe].

Yes, precisely, this is a "security fishiness" metric, NOT a vulnerability.

AFAIK there is precedent for reporting unsound code as a vulnerability here. I do not think that that is per se unreasonable. And by that standard, if we agree that plutonium is unsound, it makes sense to have an advisory.

I think it could make sense to have "unsound" as a separate severity/category for advisories, to distinguish it from more "concrete" vulnerabilities. If we go with that, plutonium would be in the "unsound" category. For example, some parts of this advisory are similarly just about "it's unsound" -- the crate itself never "drops uninitialized memory", but if client code causes panics at the wrong time (of which we do not have an example, but we also did not look for one), then there may be UB.

from advisory-db.

Manishearth avatar Manishearth commented on July 16, 2024

I'm sorry, I would like to take you seriously, but really?

I'm talking about the general lowering of standards I perceive here, not just this example. I do not want us to lower the standards of a security vulnerability database.


@RalfJung I covered some of this in this comment

In Rust there are two kinds of vulnerabilities around unsafe: "this API has parameter choices that are undefined behavior/bad memory access" and "this API can be used in such a way in downstream code that leads to it invoking undefined behavior or bad memory access" (in C and C++ the latter is basically always true, vulnerabilities are more of the former kind). Essentially, the difference between "an attacker can exploit this" and "downstream can exploit (or accidentally trip) this". withoutboats goes into some detail on this here

It's already unfortunate we conflate the two. The first is a matter of "everyone should patch asap", the second is a matter of "downstream code should be audited for problematic use, and should trigger the first kind of vuln on those" (though ideally everyone patches anyway). Ideally as a community we have ways to talk about these differently because the implications, severities, and social dynamics are different. We don't, but that's not the problem at hand.

Plutonium is not in either category, it is in the category of "this API can be used in such a way that it can create APIs that can be used in such a way that downstream might have UB". This is another level of meta, and to me it's very subjective if that class should be considered as a class of vulnerabilities. It's only being considered as a member of the "downstream can exploit this" class because it's actually "downstream can exploit this to circumvent cargo-geiger", which is a very different statement and not indicative of a vulnerability.

I do wish we distinguished between what you're calling concrete vulns vs unsound, but I think plutonium is a part of a third category that's an additional layer of meta.

from advisory-db.

tarcieri avatar tarcieri commented on July 16, 2024

I'd be way more comfortable with this if we distinguished between "concrete vulns" and "unsound", but we don't, so I'm wary of expanding the definition.

This is exactly how the informational advisory feature works. It introduces a second class of advisories which are explicitly distinct from advisories for security vulnerabilities.

How informational advisories are surfaced to the user is configurable. Right now only informational advisories for unmaintained crates are surfaced, and are surfaced as warnings instead of as explicit "vulnerabilities".

The only other class right now is a catch-all notice class (which despite its name is not surfaced to the user at all, although perhaps it should). The enum used for informational advisories is explicitly open-ended by design though: we could add a unsound categorization, and decide on how to surface those.

Notably, cargo deny also consumes the RustSec Advisory DB, and could potentially error on unsound crates if configured to do so.

from advisory-db.

Manishearth avatar Manishearth commented on July 16, 2024

This is exactly how the informational advisory feature works.

I actually think the "unsound" class is typically more severe than "unmaintained". Not in this case, but in general.

In this case it requires another layer of intent so I still feel it's a different class and should not be the subject of this db.

from advisory-db.

CAD97 avatar CAD97 commented on July 16, 2024

A final small wonder: is this (at least in part) due to documentation? If plutonium were more diplomatic in its documentation, would that change your opinion of its worthiness?

I will fully admit that the documentation of plutonium is less than diplomatic; its expressed purpose is to "make everything less safe".

But I think the RustSec central authority should at least attempt to avoid making calls based on the diplomacy of documentation. There is, imho, nothing inherently wrong with using macros to create a dialect of Rust where unsafe is spelled differently.

from advisory-db.

najamelan avatar najamelan commented on July 16, 2024

I think unsafe fn and unsafe { } are different enough that they should have different keywords; the former expresses a requirement on the user to remain safe, and is potentially unsound to call; the latter is an upholding of requirements, and is never unsound to execute. (Assume I've put things in micro modules to eliminate safe code that can break safety invariants.)

  • writing unsafe {} means I'm the one upholding requirements.
  • writing pub unsafe fn means I expose a function for which neither the compiler nor I can uphold the requirements, so the user needs to use unsafe {} as a speed bump and to indicate to others (reviewers) that there is a risk of the calling author using the API without upholding those requirements.

For more, see https://doc.rust-lang.org/nomicon/safe-unsafe-meaning.html. From this link:

The need for all of this separation boils down a single fundamental property of Safe Rust, the soundness property:

No matter what, Safe Rust can't cause Undefined Behavior.

So whatever I throw in the macro from plutonium can never cause any unsoundness, because the compiler has my back, right? Unfortunately @RalfJung has already demonstrated that that is not the case.

By not requiring you to use the keyword unsafe, the author of plutonium guarantees that their implementation takes over from rustc in guaranteeing that whatever you put in it will be checked for safety. That is, unless we question a very well-ingrained contract in the Rust language and community.

Since that is obviously not the case, by using the macro, you say to any onlookers, "nothing to see here, rustc has your back, move on". Which is lying, because you just turned off rustc without wanting us to know that you did. That alone doesn't inspire trust if you ask me.

The difference between unsafe fn and unsafe {} is that in the former, the caller's code needs to be audited. In the latter, if this happens in a pub fn, then whatever the caller does, as long as they don't use unsafe, can never introduce unsoundness. It is the code that implements the function that needs to be more carefully scrutinized for errors. So if you want your users to not have to worry about making mistakes, provide them with a safe API. No need for a macro for that. Just don't try to hide the fact that you are playing with fire inside their house.

All in all, the two are conceptually very similar. In unsafe {}, you are the caller, in pub unsafe fn you delegate responsibility to a downstream caller. That's all. We mainly want to know about the former. The latter still means that the documentation you wrote about how to use it correctly might be wrong and still allows creating unsoundness even if users follow the docs correctly, so you still have some responsibility and you can still make things blow up, but less likely than in the former. And by definition it's obvious that something unsafe is happening, so it is less dangerous in that sense. Users will often check the code as well as the docs, because it's obvious they have some responsibility if something goes wrong.
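
To restate that split with code (a hedged sketch with made-up names, not from any crate): the unsafe fn pushes the obligation onto every call site, while the safe wrapper discharges it internally, so only its own body needs that extra scrutiny.

/// # Safety
/// `index` must be strictly less than `bytes.len()`.
pub unsafe fn byte_at_unchecked(bytes: &[u8], index: usize) -> u8 {
    // The caller promised the bound holds; every call site must be audited.
    unsafe { *bytes.get_unchecked(index) }
}

pub fn first_byte(bytes: &[u8]) -> u8 {
    assert!(!bytes.is_empty());
    // SAFETY: the assert above guarantees index 0 is in bounds, so the
    // obligation never reaches our callers.
    unsafe { byte_at_unchecked(bytes, 0) }
}

fn main() {
    let data = [7u8, 8, 9];
    assert_eq!(first_byte(&data), 7);
}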

I seem to recall safety-dance discussing potentially providing a version of #[safe] that takes a review identifier and a hash of the code, and prevents compiling the code if it changes from the reviewed. It'd not be incorrect if that attribute also made the body of the function an unsafe block, as it, by existing, points to a review showing the annotated function safe.

I'm not entirely sure I get what you mean here, but you can have pub fn, no mention of unsafe on your API that inside the function body uses unsafe {}. That is, unsafe is not contagious.

As for reviews, just put a comment on it with a link to the review, or have the actual review text and name of the reviewer in the comment. Of course, if you later change the code, the review is no longer valid and I imagine that's what the discussion was about. No need to remove the unsafe keyword though.

from advisory-db.

tarcieri avatar tarcieri commented on July 16, 2024

@RalfJung

We could decide that unsoundness is not a problem when it is intentional. That is how I understand your argument. I strongly disagree with that.

This seems to be the core argument here. I am inclined to agree with you, with the caveat that deliberately unsound APIs do not contain a “vulnerability” per se (but are still fundamentally “unsound” from an objective, technical perspective).

I feel there is some baggage from the way this particular advisory was filed; it'd be nice to separate discussion of the above topic from that, if anyone wants to open a new issue for it (not to mention this issue is closed).

from advisory-db.

tarcieri avatar tarcieri commented on July 16, 2024

@Manishearth what about allowing cargo deny to use existing RustSec infrastructure to manage (optional) blacklists?

(Perhaps I should ask them about that idea before suggesting it up front, but)

from advisory-db.

Manishearth avatar Manishearth commented on July 16, 2024

Like, third party blacklist databases? Sure. I don't think it belongs here, though

from advisory-db.

tarcieri avatar tarcieri commented on July 16, 2024

More like a collection of crates which deliberately expose unsound behavior as @RalfJung described here:

#275 (comment)

...but rather than exposing that information through cargo audit, deliberately expose it through cargo deny instead, allowing it to use the existing RustSec infrastructure which it already integrates with.

from advisory-db.

najamelan avatar najamelan commented on July 16, 2024

That analogy is facile: When you go to the store and buy GMO produce, does the cashier give you a stern look and say "are you SURE you want this"?

If that cashier is called cargo-audit and is there because this shop advertises that they have a health and environmental expert at their customers' disposal, and I specifically ask them if there is anything I might want to know about this product in those regards, they'd better.

Unfortunately that is not the analogy. It's more like I go and eat at an organic restaurant, because you know, unlike some C-burger restaurants, organic restaurants are not supposed to allow unsoundness in safe food. It so happens that I am one of those customers who always has their FDA expert and a mobile lab with them when they go out eating, so I ask my pocket expert to test the food.

The reasoning @CAD97 brings forth here is that my FDA expert should not tell me they detected GMOs in the organic food, because if they did, GMOs would be linted out of existence. Which is kind of saying "people really shouldn't have the choice, even if they go to great lengths specifically to find out about this, because I know that they will reject it and my business model only works if I can force it onto people."

I don't say that was their intention, I'm only saying the logic really resembles it.

from advisory-db.

CAD97 avatar CAD97 commented on July 16, 2024

@najamelan the fact that you've argued for cargo publish to require the dirty flag if cargo audit is not clean damages your argument that this would be purely opt-in to pedantic checking.

from advisory-db.

RalfJung avatar RalfJung commented on July 16, 2024

@Manishearth

(I actually really want the converse of this crate, a macro that converts fn foo(args) {body} to unsafe fn foo(args){ fn foo_inner(args){body}; foo_inner() } so that we can experiment with rust-lang/rfcs#2585 )

You mean like https://crates.io/crates/unsafe_fn ? :D

@CAD97 Those are some good arguments. Indeed I already pointed out the weak spot in my own argument being the definition of soundness of a macro.

Re trusted!, part of me feels like it should be called unsafe_trusted!. But maybe that's just me desperately clinging to an objective soundness condition for macros? I feel like we ought to have a systematic way across the ecosystem to figure out if calling a macro could accidentally cause UB, to use @Manishearth's formulation. But maybe that's hopeless?

So, given there seems to be rough consensus that tracking soundness violations in the DB would make sense but should be a distinct category, is macro soundness really the main remaining point of contention here? As in, @CAD97 @Manishearth would you agree that a crate with a public function like this should get an advisory filed (in the soundness category) and no amount of docs on the crate's side can make this not an unsoundness?

from advisory-db.
