
Introduction

This repository hosts the URL Standard.

Code of conduct

We are committed to providing a friendly, safe, and welcoming environment for all. Please read and respect the Code of Conduct.

Contribution opportunities

Folks notice minor and larger issues with the URL Standard all the time and we'd love your help fixing those. Pull requests for typographical and grammar errors are also most welcome.

Issues labeled "good first issue" are a good place to get a taste for editing the URL Standard. Note that we don't assign issues, and there's no need to ask about availability either; just submit a pull request.

If you are thinking of suggesting a new feature, read through the FAQ and Working Mode documents to get yourself familiarized with the process.

We'd be happy to help you with all of this on Chat.

Pull requests

In short, change url.bs and submit your patch, with a good commit message.

Please add your name to the Acknowledgments section in your first pull request, even for trivial fixes. The names are sorted lexicographically.

To ensure your patch meets all the necessary requirements, please also see the Contributor Guidelines. Editors of the URL Standard are expected to follow the Maintainer Guidelines.

Tests

Tests are an essential part of the standardization process and will need to be created or adjusted as changes to the standard are made. Tests for the URL Standard can be found in the url/ directory of web-platform-tests/wpt.

A dashboard showing the tests running against browser engines can be seen at wpt.fyi/results/url.

Building "locally"

For quick local iteration, run make; this will use a web service to build the standard, so that you don't have to install anything. See more in the Contributor Guidelines.

Formatting

Use a column width of 100 characters.

Do not use newlines inside "inline" elements, even if that means exceeding the column width requirement.

<p>The
<dfn method for=DOMTokenList lt=remove(tokens)|remove()><code>remove(<var>tokens</var>&hellip;)</code></dfn>
method, when invoked, must run these steps:

is okay and

<p>The <dfn method for=DOMTokenList
lt=remove(tokens)|remove()><code>remove(<var>tokens</var>&hellip;)</code></dfn> method, when
invoked, must run these steps:

is not.

Using newlines between "inline" element tag names and their content is also forbidden. (This actually alters the content, by adding spaces.) That is

<a>token</a>

is fine and

<a>token
</a>

is not.

An <li> element always has a <p> element inside it, unless it's a child of <ul class=brief>.

If a "block" element contains a single "block" element, do not put it on a newline.

Do not indent for anything except a new "block" element. For instance

 <li><p>For each <var>token</var> in <var>tokens</var>, in given order, that is not in
 <a>tokens</a>, append <var>token</var> to <a>tokens</a>.

is not indented, but

<ol>
 <li>
  <p>For each <var>token</var> in <var>tokens</var>, run these substeps:

  <ol>
   <li><p>If <var>token</var> is the empty string, <a>throw</a> a {{SyntaxError}} exception.

is.

End tags may be included (if done consistently) and attributes may be quoted (using double quotes), though the prevalent theme is to omit end tags and not quote attributes (unless they contain a space).

Place one newline between paragraphs (including list elements). Place three newlines before <h2>, and two newlines before other headings. This does not apply when a nested heading follows the parent heading.

<ul>
 <li><p>Do not place a newline above.

 <li><p>Place a newline above.
</ul>

<p>Place a newline above.


<h3>Place two newlines above.</h3>

<h4>Placing one newline is OK here.</h4>


<h4>Place two newlines above.</h4>

Use camel-case for variable names and "spaced" names for definitions, algorithms, etc.

<p>A <a for=/>request</a> has an associated
<dfn export for=request id=concept-request-redirect-mode>redirect mode</dfn>,...
<p>Let <var>redirectMode</var> be <var>request</var>'s <a for=request>redirect mode</a>.

Implementations

A complete JavaScript implementation of the standard can be found at jsdom/whatwg-url. This implementation is kept synchronized with the standard and tests.

A complete C++ implementation of the standard can be found at ada-url/ada. This implementation is kept synchronized with the standard and tests, and is currently used in Node.js.

The Live URL Viewer lets you manually test-parse any URL, comparing your browser's URL parser to that of jsdom/whatwg-url.


Issues

Consider railroad diagrams in syntax sections to aid understanding

I've tried to explore railroad diagrams to assist the prose that is already present, but I haven't really figured out how to make them work well without a bunch of surrounding prose. E.g., defining the contents of a host can be done as such:

<pre class=railroad>
Choice:
  N: domain
  N: IPv4 address
  Sequence:
    T: [
    N: IPv6 address
    T: ]
</pre>

However, it's not clear from this diagram alone that these items together represent a host. The non-terminals also cannot contain links.

IPv4 addresses are even harder:

<pre class=railroad>
Choice:
  N: 0-9
  Sequence:
    N: 1-9
    N: 0-9
  Sequence:
    T: 1
    N: 0-9
    N: 0-9
  Sequence:
    T: 2
    N: 0-4
    N: 0-9
  Sequence:
    T: 25
    N: 0-5
</pre>

<pre class=railroad>
Sequence:
  N: decimal byte
  T: .
  N: decimal byte
  T: .
  N: decimal byte
  T: .
  N: decimal byte
</pre>

It's not clear the first railroad is "decimal byte", it's not clear the second railroad is "IPv4 address", and they don't link to each other.

I think I was hoping for something closer to ABNF, but without the ABNF implications that you can then stick that into some parser generator.

origin tuple doesn't serialize host

While retrieving the origin with the attribute getter, no step in the specification(s) says to serialize the host (as is done when executing the host attribute getter). I think that should actually be specified in the origin section.

This leads to the domain-to-unicode algorithm receiving an IPv4 address directly as a 32bit identifier (which doesn't handle that case either).

Clarification on URL.host setter's behavior

Raised at https://code.google.com/p/chromium/issues/detail?id=551901

It's confusing that the host getter gives us:

  • "host:port" (for non-default port)
  • "host" (for default port)

while the host setter handles input without port as a host-only modification operation.

The spec should include some notes to prevent misuse.

For the host setter, for example ... hard to phrase correctly ...:


Note: Unlike the host getter, a value without a port does not imply the default port. That is, the host setter leaves url's port unchanged if the given value does not contain a port.


URLUtils is wrong for Location object

The Location object probably requires special treatment, since its setters do not actually change the underlying URL directly. They just cause navigation. And the last navigation seems to win, too:

location.pathname = "x"
location.protocol = "https"

Navigates to a URL with a new scheme, not a new path.

Remove URL.domainToASCII and URL.domainToUnicode

They are still not implemented and it's no longer clear to me this is the best API. In particular:

  1. We might want an API around hosts in general. E.g., new URLHost(...) (the URL prefix for disambiguation).
  2. Converting an entire URL to Unicode might be something we want to cover too.

For both of these, overloading toString() to take an argument about how to serialize seems somewhat compelling, given the precedent in JavaScript. Though need to check with @domenic.

File state => Otherwise is written really confusingly

https://url.spec.whatwg.org/#file-state => Otherwise => step 1

From what I gathered: we have 3 conditions, they are all ORed, so:
Making one case "remaining consists of one code point" and another case "remaining’s second code point is not one of "/", "", "?", and "#"" is unnecessary. The latter will never be true without the former also being true, so we don't have to check for it explicitly, which leads me to believe that something about these conditions is wrong.

username/password are encoded differently at different parts of the spec

The authority state mentions that username and password should be encoded with the default encode set.

The "set the username" and "set the password" steps further down (which are referenced by the attribute setters) mention the specific encode sets dedicated to encoding username and password.

Is it expected that an implementation only appends to a temporary username/password buffer and then calls set the username / set the password separately, or is this a bug in the spec?

Redesign URLUtils

Otherwise they do not operate with the latest base URL in mind.

At which point we might want to consider a slightly different design for these that does not involve an internal url. 😟

When to generate unique identifiers?

In some cases, https://url.spec.whatwg.org/#origin says to "return a new globally unique identifier". The "new" part seems to indicate that a different identifier should be returned every time the origin of a given URL is obtained. As a result, some URLs are not same-origin as themselves, which seems undesirable.

Should unique identifiers be generated when a URL is "instantiated" instead? (That concept would need to be properly defined.)

Grammar specification for URLs

Any thoughts on providing a formal grammar? The current spec defines URLs in terms of algorithms - while that's helpful to quickly write an implementation in an arbitrary language, it makes it tough to write code generators that will update parsers as the spec evolves.

username/password are not inherited from base URL

The test cases and the spec have different behaviour here, I think...

new URL("/some/path", "http://user@example.org/smth").href === "http://user@example.org/some/path" according to tests, but an implementation according to spec leads to "http://example.org/some/path".

Fixes are needed (at least) in the RELATIVE state and the RELATIVE_SLASH state. Looks like this in code. Example failures from the tests here.

Formatting conventions

The formatting conventions used in this spec seem to be different (and slightly less informative) than those of some of the other specs; for example (taking WHATWG HTML as gospel):

  • % or 0x22 vs. 0x2F (ASCII %)
  • "%" vs. U+0025 PERCENT SIGN (%)

Would you be open to pull requests changing the former to the latter?

| in windows file paths needs to be handled specially

Web Platform Tests contains the test Parsing: <//C|/foo/bar> against <file:///tmp/mock/path>

  1. file host state has "C|" in its buffer, current pointer points to "/" => set state to path state
  2. path state receives buffer, and pointer pointing to f => appends f, o, o
  3. path state's | to : conversion isn't invoked anymore, since c is never "/" and then buffer never exactly matches the drive letter definition anymore (buffer instead equals C|foo)

I think there should be a "decrement pointer by 1" in the file host state before redirecting to path state, so path state tries to match windows drive letter immediately.

Reject extra leading zeroes in IPv4 addresses?

IPv4-in-IPv6 does it. The step “Otherwise, if value is 0, parse error, return failure.” rejects addresses like ::01.0.0.0.

Should IPv4 do the same? A single leading zero there indicates an octal component, but the input could have more: 000010.0.0.0. Or there could be leading zeroes after the hexadecimal indicator: 0x00010.0.0.0.
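For context (my examples, not from the issue): the IPv4 parser treats a leading "0" as an octal prefix and "0x" as a hexadecimal prefix, and the serializer always emits plain dotted-decimal, so these alternate forms are visible through the API:

```javascript
// 016 octal is 14 decimal; 0x10 hex is 16 decimal.
console.log(new URL("http://016.0.0.1/").hostname);  // "14.0.0.1"
console.log(new URL("http://0x10.0.0.1/").hostname); // "16.0.0.1"
```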

Parsing an empty host

If I’m reading the parsing algorithm correctly, parsing some-non-special-scheme:/// causes the host parser to be called with an empty string. Since domain to ASCII sets VerifyDnsLength to false, asciiDomain is also empty (not failure) and IPv4 parsing is called with an empty string. There, parts is first a list of one empty string, then an empty list:

Let parts be input split on ".".
If the last item in parts is the empty string, set syntaxViolationFlag and remove the last item from parts.

Later, numbers stays empty and some of the steps are nonsensical when they talk about the last item of numbers.

So:

  • The IPv4 parser should probably return failure for the empty string as a first step
  • This would make the host of this example URL an empty domain. Should it be null instead?

Changes in the base are not reflected often enough

I may have missed a crucial spec, but I cannot find any part of either the HTML or URL standards that explains the behavior shown in this jsbin:

console.log(document.querySelector("a").href); // http://example.com/foo.html
document.querySelector("base").href = "http://otherexample.org";
console.log(document.querySelector("a").href); // http://otherexample.org/foo.html

HTML's set the frozen base URL does not update a elements (or other URLUtils instances).

URL's href getter (and other getters) just returns the input, without first doing set the input or any other type of reparse.

So as far as I can tell from reading the specs, updating a <base> element (or otherwise updating the document's base URL, if it's possible to do so) will not update URLUtils instances. But browsers definitely do this, at least for <a>.

I'm unsure where the appropriate fix is. In particular HTML could certainly update all the <a>s, but could it properly update all the URLUtils in play? Either URLUtils needs some way of not only getting the base, but knowing when it is changed (in which case all the base-setters need to push such a notification to all relevant URLUtils); or, URLUtils needs to re-consult get-the-base before every getter invocation. I think.

The text of this standard appears vulnerable to mismatching other standards

RFC 3986 suggests to rely only on the smallest possible set of reserved characters that is necessary to split the URL into 5 components (Section 5.2.1 Pre-parse the Base URI). Assuming that the RFC implied left-to-right parsing, that would mean encoding only the terminator expected by the parser in each component. The query component has the hash mark as its terminator.

The RFC goes as far as to recommend keeping raw as many characters as possible in section 3.4 Query:

as [..] one frequently used [query] value is a reference to another URI, it is sometimes better for usability to avoid percent-encoding those characters.

On the other hand, the following part of the RFC implies encoding of many characters.

   pchar         = unreserved / pct-encoded / sub-delims / ":" / "@"
   query         = *( pchar / "/" / "?" )
   pct-encoded   = "%" HEXDIG HEXDIG

   unreserved    = ALPHA / DIGIT / "-" / "." / "_" / "~"
   reserved      = gen-delims / sub-delims
   gen-delims    = ":" / "/" / "?" / "#" / "[" / "]" / "@"
   sub-delims    = "!" / "$" / "&" / "'" / "(" / ")"
                 / "*" / "+" / "," / ";" / "="

https://www.ietf.org/rfc/rfc3986.txt

  • (a) Encode special characters sub-delims, :, @, /, ? in (name value) pairs when generating the query component, allowing their use as delimiters in the resulting query string. (I found only a special interpretation of one sub-delim = as the name=value separator in the RFC. The RFC remains silent about the role of other special characters, even &, as delimiters of the resulting query string).
  • (b) Encode characters not allowed in the resulting query string at all. (Despite this, browsers emit raw back-ticks et al in their HTTP requests as mentioned in a Mozilla bug referenced from #17).

More to that, Appendix C Delimiting a URI in Context seems to imply that double quotes 22", whitespace 20SP, hyphens 2D- and angle brackets 3C<, 3E> need encoding when the URLs are further submerged into a context of a text message directed at a human reader. It would be nice to remain strict about the parsers that seem external to the URL parser and let additional encoders protect against specific external parsers. On the other hand, not every message reader applies a parser to line breaks, so protecting the Appendix characters using the percent encoder for own hyphens seems a reasonable option when splitting the URLs with hyphens on line breaks. The RFC requirement mentioned in (b) above already protects double quotes 22", whitespace 20SP and angle brackets 3C<, 3E> with the percent-encoding algorithm.

So far I see the following algorithms for encoding and decoding (name value) pairs as satisfying the RFC's musts and following its shoulds. I guess this should agree with https://github.com/tkem/uritools. (The RFC did not mention the vestige of isindex HTML tag submitting a request with words separated by the plus characters: the plus character in the query part of the URL decodes to the space character).

GetURLFromClient(network) -> URL
  (Because the network accepts only byte arrays, we 
  receive URLs as byte arrays).

SendHTTPGet(URL, network) -> response
  (Because the network accepts only byte arrays, we
  send URLs as byte arrays).

GetURLStringFromUserOrPage(browser) -> unicodeURL
  (Because input and rendering interacts with humans, URL 
  parsers should accept strings with some special characters
  and Unicode characters that can be percent-encoded 
  without sacrificing the parsing of the URL's structure.
  For this, parsers may allow a mix of UTF-8 byte arrays and 
  UTF-16 code units when parsing percent-encoded strings).

RenderURLStringForUserOrPage(unicodeURL, browser) 
    --> browserShowingURLString
 (Because rendering URL strings in special characters and
 Unicode improves their 
 interpretation by humans, we may need to show some 
 pct-encoded special and non-ASCII characters as raw special 
 and Unicode).

URLParser(URL) -> (scheme authority path query fragment):
  Split the string URL based on the structure:
      scheme ":" hier-part [ "?" query ] [ "#" fragment ]
  (The parser will split hier-part into authority and path 
  expecting an optional leading double-slash and a slash 
  indicating the beginning of the path).
  ==> query should hide its own ASCII hex 23#. The 
  encoder will provide that.

QueryParser(query, delimiters="&") -> *(name value)
  (a) Expect query to comply with the spec (no reasoning
  except protecting against the fragment 23# search).
    query = * (ALPHA / DIGIT / pct-encoded 
          / one-of 2D- 2E. 5F_ 7E~ 21! 24$ 26& 27' 28( 
                       29) 2A* 2B+ 2C, 3B; 3D= 3A: 40@ 2F/ 
                       3F?)
  ==> query must hide the following characters found 
  in *(name value):
    ASCII 00-1F 20SP 22" 23# 25% 2B+ 3C< 3E>
    5B[ 5C\ 5D] 5E^ 60` 7B{ 7C| 7D} 7FDEL
    non-ASCII.

  (b) Split the string querySpec expecting a separator 26& 
  into *nameValuePlus elements.  To comply with an earlier 
  HTML4 suggestion on avoiding confusion in developers, 
  optionally allow other delimiters from sub-delims such as 3B; 
  2C, 21! 24$ 27' 28( 29) 2A*, as well as special pchars 
  3A: 40@ and special query characters 2F/ 3F?, if they 
  reside in the delimiters argument.

    http://www.w3.org/TR/html4/appendix/notes.html#h-B.2.2
    http://stackoverflow.com/a/7287629/80772

  Split *nameValuePlus elements into pairs *(namePlus, valuePlus) 
  using the sub-delim 3D=.

  ==> names and values must protect own 26& 3D= 3B; 2C, 
  21! 24$ 27' 28( 29) 2A* 3A: 40@ 2F/ 3F?.

  (c) Always decode the "+" vestige to 20SP, resulting in
  *(namePct, valuePct) pairs.  (The inverse encoding of
  20SP may be done with either percent- or plus-encoding,
  and the latter appears more clear).

  (d) Decode percent-encoded UTF-8 in *(namePct, valuePct) 
  pairs to UTF-16 code units, resulting in *(name value) pairs.

==>
EncodeQuery(*(name value), vestigeSep=true) --> query
  For each character in each element of each pair *(name value):

  i. Encode the character if it falls into one of the following categories,
    using percent-encoding unless mentioned otherwise:
    26& 3D= (sub-delims from QueryParser.b to satisfy the standard
      query composer and name, value separator)

    3B; 2C, 21! 24$ 27' 28( 29) 2A* 3A: 40@  2F/ 3F? (sub-delims 
      and special characters from QueryParser.b to parse results of 
      unusual query composers; what's optional in the composer 
      becomes mandatory in the parser)

    00-1F, 
      20SP to "+" (instead of the percent-encoding) if vestigeSep (from 
      QueryParser.c).
      20SP if not vestigeSep
      22" 23# 25% 2B+ 3C< 3E> 5B[ 5C\ 5D] 5E^ 60` 7B{ 7C| 7D} 
      7FDEL (as not allowed by QueryParser.a)

    non-ASCII (using their UTF-8 presentation; from QueryParser.d)

  ii.  Add any other character unmodified.

  query = "&".join(["%s=%s" % (name, value) for (name, value) in args])

EncodeURL(scheme, authority, path, query, fragment, delimiters="") -> URL
  URL = authority + path
  if scheme: URL = scheme + ":" +  URL
  if query: URL += "?" + query 
  if fragment: URL += "#" + fragment
  if delimiters:
    For each character in URL:
      Percent-encode the character if it is found in delimiters.
      (Appendix C appears to demand protection of 2D- at line 
      breaks in the message context because the context lacks
      a standard URL delimiter parser.  Protecting quotes 27' in 
      HTML attributes containing URLs using this function does 
      not suffice because browsers will attempt to interpret 
      ampersands 26& that separate the query elements, 
        http://www.w3.org/TR/html5/syntax.html#before-attribute-value-state
      An HTML attribute encoder will protect any attribute value, 
      including URLs).

IPv4 number parser doesn't handle zeros well

If the IPv4 number parser is given just a "0" (like when parsing the IP 127.0.0.1):

  1. Set R to 10
  2. input has at least 1 code point, so: remove 1 code point from input, set R to 8

input is now empty, so number parsing input later on should lead to an error since there's no number to parse.

I quickfixed with jsdom/whatwg-url@590d1fa, by just re-interpreting "at least" to mean "greater than" 😝
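The re-interpretation above is the behavior that makes sense: a bare "0" should stay a decimal zero, and the octal branch should fire only when something follows the leading "0". A minimal sketch of the number parser with that reading (function name and structure are mine, not the spec's):

```javascript
// Sketch of the IPv4 number parser: a leading "0" selects octal only when
// the input has MORE than one code point, so "0" alone parses as decimal 0.
function parseIPv4Number(input) {
  if (input === "") return null; // failure: nothing to parse
  let R = 10;
  if (/^0[xX]/.test(input)) {
    input = input.slice(2);
    R = 16;
  } else if (input.length >= 2 && input[0] === "0") {
    input = input.slice(1);
    R = 8;
  }
  if (input === "") return 0; // "0x" on its own is zero
  const valid = { 8: /^[0-7]+$/, 10: /^[0-9]+$/, 16: /^[0-9a-fA-F]+$/ }[R];
  if (!valid.test(input)) return null; // failure: digit not valid in radix R
  return parseInt(input, R);
}

console.log(parseIPv4Number("0"));    // 0 — no longer an empty-input dead end
console.log(parseIPv4Number("010"));  // 8
console.log(parseIPv4Number("0x7f")); // 127
```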

Good defaults on URL()

Let’s not make the addEventListener() mistake again with the pointless 3rd argument. Web APIs need reasonable defaults.
Most uses of relative URLs in the URL() constructor would be against the document URL [citation needed :P]
We should define the 2nd argument to be location by default in contexts where that is available. This can be changed without breaking backwards compat.

IPv4 serializer is backward

Test case: serialize 3232235521. Should give 192.168.0.1. Current spec gives 1.0.168.192.

It should basically be "prepend" in all cases "append" is used in the algorithm.

Credit to @Sebmaster for implementation that uncovered the problem; me for debugging :)
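A sketch of the serializer with the direction corrected (prepending each byte instead of appending, so the most significant byte ends up first; the function name is mine):

```javascript
// Serialize a 32-bit IPv4 address to dotted-decimal. Each loop iteration
// peels off the LOW byte, so it must be prepended to the output.
function serializeIPv4(address) {
  let output = "";
  let n = address;
  for (let i = 1; i <= 4; i++) {
    output = String(n % 256) + output; // prepend, not append
    if (i !== 4) output = "." + output;
    n = Math.floor(n / 256);
  }
  return output;
}

console.log(serializeIPv4(3232235521)); // "192.168.0.1"
```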

Consider URLUtils.prototype.toString({as:"unicode"})

As a way to get a URL without percent-encoding. Garbage for non-utf-8 stuff, of course. Probably need to not unescape certain bytes so the specifics might rely on figuring out what normalization means, if anything.

Creation of URLSearchParams from Object/Map

Currently URLSearchParams cannot be directly created from an Object holding keys and values.

Combining Object.keys, Array.reduce, and append feels like boilerplate given that creating query strings from objects is a common pattern in popular JS libraries, e.g. jQuery, superagent, Node's url.format.

I suggest accepting an Object, Map (or anything that is iterable and has string keys) in the constructor and the set method.

i.e.

new URLSearchParams({foo:"bar", baz:"quz"});

or

const tmp = new URLSearchParams();
tmp.set({foo:"bar", baz:"quz"});

could be equivalent to:

const tmp = new URLSearchParams();
tmp.append("baz", "quz");
tmp.append("foo", "bar");

I suggest always sorting the keys, since Object and Map don't guarantee any particular order, but a consistent order is very desirable for improved cacheability of resources.

toString method in API

Is it considered to be added to this spec? Chrome (+Polymer's polyfill) and Firefox seem to implement it.

Is an URL’s path a list of strings or a single string?

A URL’s path is a list of zero or more ASCII strings holding data, usually identifying a location in hierarchical form. It is initially the empty list.

Sounds good.

(Just to name the things in the list, this could be "[…] a list of zero or more <a>path components</a> holding […] A <dfn>path component</dfn> is an ASCII string.")

An absolute URL must be a scheme, followed by ":", followed by either a scheme-relative URL, or if URL is not special, a path, optionally followed by "?" and a query.

Here, it looks like a path is a single string that is concatenated with other strings. "a path" here probably should be something like "a path as components separated with /." Also, should there be an initial / before the first component?

A scheme-relative URL must be "//", followed by a host, optionally followed by ":" and a port, optionally followed by a path that starts with "/".

Same here. Are components separated by /? What does it mean for a list of string to start with "/", is that the value of the first component?

A path must be zero or more URL units, excluding "?".

URL units being code points, this sounds like a path is a single string.

Set url’s object to a structured clone of the entry in the blob URL store corresponding to the first string in url’s path. [HTML]

… and a list of strings again. (Same in various places in the parser.)
