aarondandy / wecantspell.hunspell

A port of Hunspell v1 for .NET and .NET Standard

Home Page: https://www.nuget.org/packages/WeCantSpell.Hunspell/

License: Other

C# 99.12% HTML 0.88%
hunspell dotnet spellcheck spell-check spell port

wecantspell.hunspell's Introduction

WeCantSpell: Hunspell

A port of Hunspell for .NET.


Download and install with NuGet: WeCantSpell.Hunspell


Features

  • Reads Hunspell DIC and AFF file formats
  • Supports checking and suggesting words
  • No unmanaged dependencies and mostly "safe" code
  • Can be queried concurrently
  • Confusing LGPL, GPL, MPL tri-license
  • Compatible with .NET, .NET Core, and .NET Framework
  • Uses .NET to handle most culture, encoding, and text concerns

License

"It's complicated"

Read the license: LICENSE

This library was ported from the original Hunspell source and as a result is licensed under their MPL, LGPL, and GPL tri-license. Read the LICENSE file to be sure you can use this library.

Quick Start Example

using WeCantSpell.Hunspell;

var dictionary = WordList.CreateFromFiles(@"English (British).dic");
bool notOk = dictionary.Check("Color");        // false with a British dictionary
var suggestions = dictionary.Suggest("Color"); // e.g. "Colour"
bool ok = dictionary.Check("Colour");          // true

Performance

"Good enough"

This port will likely perform more slowly than the original binaries and NHunspell, but it should be acceptable. It is worth considering that while NHunspell is faster, it hasn't been updated in a long while and may be missing important fixes and changes.

Benchmark  .NET 8       .NET 4.8      NHunspell
Check      🐢 7,376 μs   🐌 19,496 μs   🐇 6,324 μs
Suggest    🐇 367 ms     🐢 758 ms     🐌 1,904 ms

Note: Measurements taken on an AMD 5800H.

Specialized Examples

Construct from a list:

var words = "The quick brown fox jumps over the lazy dog".Split(' ');
var dictionary = WordList.CreateFromWords(words);
bool notOk = dictionary.Check("teh");

Construct from streams:

using var dictionaryStream = File.OpenRead(@"English (British).dic");
using var affixStream = File.OpenRead(@"English (British).aff");
var dictionary = WordList.CreateFromStreams(dictionaryStream, affixStream);
bool notOk = dictionary.Check("teh");

Encoding Issues

The .NET Framework contains many encodings that can be handy when opening dictionary or affix files that do not use a UTF-8 encoding or were incorrectly given a UTF BOM. On the full framework this works out fine, but on .NET Core or .NET Standard those encodings may be missing. If you suspect an issue when loading dictionary and affix files, check the dictionary.Affix.Warnings collection for a failure to parse the encoding specified in the file, such as "Failed to get encoding: ISO-8859-15" or "Failed to parse line: SET ISO-8859-15". To enable these encodings, reference the System.Text.Encoding.CodePages package and then call Encoding.RegisterProvider(CodePagesEncodingProvider.Instance) to register them before loading files.

using System.Text;
using WeCantSpell.Hunspell;

class Program
{
    static Program() => Encoding.RegisterProvider(CodePagesEncodingProvider.Instance);

    static void Main(string[] args)
    {
        var dictionary = WordList.CreateFromFiles(@"encoding.dic");
        bool notOk = dictionary.Check("teh");
        var warnings = dictionary.Affix.Warnings;
    }
}

Development

This port wouldn't have been feasible for me to produce or maintain without the live testing functionality in NCrunch. Getting near-instant feedback from tests saved me from so many typos, porting bugs, and even bugs from upstream. I was very relieved to see that NCrunch survived the release of "Live Unit Testing" in Visual Studio. If you want to try live testing but have been dissatisfied with the native implementation in Visual Studio, please give NCrunch a try. Without NCrunch I would likely stop maintaining this port; it really is that critical to my workflow.

I initially started this port so I could revive my old C# spell check tool, but I ended up so distracted and burnt out from the port that I never got around to writing the Roslyn analyzer. Eventually, Visual Studio got its own spell checker and VS Code has a plethora of them too, so I doubt I will be developing such an analyzer in the future. Some others have taken up that task, so give them a look.

For details on contributing, see the contributing document. Check the hunspell-origin submodule to see how up to date this library is compared with the source.

wecantspell.hunspell's People

Contributors

aarondandy, dandybrandy


wecantspell.hunspell's Issues

Libraries shouldn't know about filesystems (or web clients, or...)

(For all I know, this is from hunspell proper, so maybe this feedback is in the wrong place.)

tl;dr- Libraries should be as pure as possible, because purity maximizes flexibility, composability, testability, and greatly decreases the maintenance burden on the library author.

I would suggest an API like:

// proposed signature: HunspellDictionary(ISet<string> dictionaryWords, string affixContent)
var checker = new HunspellDictionary(dictionaryWords, affixContent);
bool notOk = checker.Check("teh");
var suggestions = checker.Suggest("teh");
bool ok = checker.Check("the");

As it is now, this library thinks in terms of filesystems for affix and dictionary files. But what if the data source is a database? Or a web service endpoint? This is especially relevant for people who might consume this in an ASP.NET Core web application.

  • Throw a useful parse exception if the application developer gives you bad input.
  • Leave the culture handling to the framework + application context
  • Reuse ISet.Comparer for GetHashCode and Equals implementations where it makes sense to do so

A bunch of stuff goes away afterwards:

  • CulturedStringComparer
  • Utf16StringLineReader
  • HunspellLineReaderExtensions
  • IHunspellLineReader
  • DynamicEncodingLineReader
  • StaticEncodingLineReader
  • Most of HunspellDictionary
  • All System.IO dependencies
  • AffixReader gets considerably simpler
  • Probably a bunch of other stuff

This sets you up to delete (or delegate to the framework) a bunch of other stuff:

  • CharacterSet -> HashSet<char>
  • ArrayWrapper, ArrayComparer -> use HashSet<T>
  • Deduper
  • EncodingEx

And avoids bugs around:

  • Byte-order markers (BOM)
  • Endian-ness

By maintaining purity, all of your operations are CPU-bound, so the need for async disappears. (Or rather, it shifts to the application developer who may want to run it on a threadpool thread, but that would be their choice to make.)

Try using a trie or sorted lookup datastructure to improve performance

During the initial development of this port, a regular old dictionary of type Dictionary<string, T> was used to store various words, affixes, and their associated details. This really helped speed up the development of the code initially and even has some performance strengths in specific cases.

It is clear to me now that the choice of Dictionary<,> was probably wrong for a lot of things, as it is not a very good data structure when you want multiple related results for a query. When profiled, this is where the code spends most of its time.

I would like to see how the use of a trie or a sorted collection impacts performance in the library. Looking back at the Hunspell source and some e-mails discussing design I think the original source uses something like a sorted linked list as the main storage for root words.

The following locations in the code may benefit from swapping out and utilizing a better datastructure:

GetMatchingAffixes

The methods SuffixCollection.GetMatchingAffixes and PrefixCollection.GetMatchingAffixes both look for affixes that begin or "end" with a certain string of text. This part of the code could probably benefit greatly from having a list of Affix<> that can be indexed by word instead of having all AffixEntries confusingly nested into groups. It may also create a case for the internal Affix<> type to be converted to a reference type. The GetMatchingWithDotAffixes method is also related but is often a cold path, so optimization there should be focused on code size.
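To illustrate the indexed approach (a sketch in Python for brevity; PrefixTrie and the entry strings are hypothetical, not the library's types), prefix affixes can be stored in a trie keyed by their append text, so all candidates for a word are found by walking its leading characters instead of scanning every group:

```python
class PrefixTrie:
    """Index prefix entries by their appended text for fast lookup."""

    def __init__(self):
        self.children = {}   # char -> PrefixTrie
        self.entries = []    # entries whose append text ends at this node

    def add(self, append_text, entry):
        node = self
        for ch in append_text:
            node = node.children.setdefault(ch, PrefixTrie())
        node.entries.append(entry)

    def matching(self, word):
        """Yield every entry whose append text is a prefix of `word`."""
        node = self
        yield from node.entries  # zero-length prefixes, if any
        for ch in word:
            node = node.children.get(ch)
            if node is None:
                return
            yield from node.entries

trie = PrefixTrie()
trie.add("un", "PFX un")
trie.add("under", "PFX under")
trie.add("re", "PFX re")
print(list(trie.matching("understand")))  # ['PFX un', 'PFX under']
```

A suffix-side index would do the same walk over the reversed word.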

FindLargestMatchingConversion

The method MultiReplacementTable.FindLargestMatchingConversion, while small, may make many calls to a dictionary, as it is a loop that is itself called from within a loop. Its responsibility is to find the longest matching entry for a substring.
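One way to bound the probes (a sketch in Python; the function and table names are illustrative, not the library's API) is to try candidate substrings from longest to shortest, capped by the longest key in the table, so the first hit is guaranteed to be the largest match:

```python
def find_largest_matching_conversion(table, text, start):
    """Return the longest key of `table` matching `text` at `start`, or None.

    Trying lengths from longest to shortest means at most max-key-length
    lookups per position, rather than a scan over every entry.
    """
    max_len = max(map(len, table), default=0)
    longest = min(max_len, len(text) - start)
    for length in range(longest, 0, -1):
        candidate = text[start:start + length]
        if candidate in table:
            return candidate
    return None

iconv = {"a": "A", "ab": "AB", "abc": "ABC"}
print(find_largest_matching_conversion(iconv, "abx", 0))  # ab
```

A trie over the keys would remove even the repeated substring slicing.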

PatternSet Check

The method PatternSet.Check searches for a pattern entry whose text value is a subset of another.

WordList

While I'm not sure that a trie or even a sorted list would have an impact on the WordList.EntriesByRoot collection, sorting the entries may have the beneficial effect of keeping related roots near each other in memory.
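For comparison, a sorted-array layout (sketched in Python; SortedWordIndex is a hypothetical name, not a library type) gives both binary-search lookup and the locality described above, since roots sharing a prefix occupy one contiguous slice:

```python
import bisect

class SortedWordIndex:
    """Store root words in one sorted list; lookups use binary search,
    and related roots sit next to each other in memory."""

    def __init__(self, words):
        self.roots = sorted(set(words))

    def __contains__(self, word):
        i = bisect.bisect_left(self.roots, word)
        return i < len(self.roots) and self.roots[i] == word

    def with_prefix(self, prefix):
        """All roots sharing `prefix`, found as one contiguous slice."""
        lo = bisect.bisect_left(self.roots, prefix)
        hi = bisect.bisect_left(self.roots, prefix + "\uffff")
        return self.roots[lo:hi]

index = SortedWordIndex(["jump", "jumped", "jumper", "lazy", "dog"])
print("jump" in index)            # True
print(index.with_prefix("jump"))  # ['jump', 'jumped', 'jumper']
```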

Support for UWP

I'm trying to use this package in a UWP app, but it works only when building the project in Debug mode.
When building in Release mode the build fails with the following error:

ILT0038: 'WeCantSpell.Hunspell.WordEntryDetail' is a value type with a default constructor. Value types with default constructors are not currently supported. Consider using an explicit initialization function instead.

I believe that to resolve this error the default empty constructor in WordEntryDetail needs to be removed.

[Q] Add custom words to loaded dictionary?

Hello,

I didn't find any ticket or test about adding custom words to a loaded dictionary. Is it possible?

Or does one need to re-create a new dictionary by merging?

Thanks,
Hervé

Thank you!

Do you have an easy way to tip/donate? Our company found this solution incredibly helpful 👍

Thanks for taking the time to create and maintain it ^_^

Provide simple construction API

If users want to construct a dictionary from another source or an affix from another source it should be much easier to do so. The API should be simplified to a point where one method call can be used to construct a dictionary from a set of words efficiently. There should also be some documentation to show this. From #5

Parsing text for individual words

This is more of a question, but I'd like to use this in a project I'm working on. From what I can tell WordList.Check is designed to check single words.

Are there any recommendations on a tool I can use to break sentences up into words that should be checked? A naive way would be to just use string.Split(), but I'd like to see if there's a tool that can automatically handle numbers, currency, and sentence punctuation. I've been looking at some NLP tools but am wondering if you've used anything in particular.
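Short of a full NLP tokenizer, a small regular expression already handles most of this (a sketch in Python; the pattern is an illustrative starting point, not a recommendation from the library):

```python
import re

# Pull out word-like tokens (letters, with internal apostrophes or
# hyphens), skipping numbers, currency amounts, and punctuation.
WORD_RE = re.compile(r"[A-Za-z]+(?:['’-][A-Za-z]+)*")

def words_to_check(text):
    return WORD_RE.findall(text)

print(words_to_check("I paid $5.99 for Bob's well-known book!"))
# ['I', 'paid', 'for', "Bob's", 'well-known', 'book']
```

Each token can then be passed to Check individually; a .NET translation would use Regex.Matches with the same pattern.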

Optimize using ref

See if the new ref features in C# 7 can help with performance and reduce copies in the code.

First algorithm fails on E5-26xx

We are using your tool at my company and testing automatically on a pool of computers.
We've found that the first search algorithm always fails on some computers, making our tests flaky. At first we thought it might be related to workload on those machines, but tinkering with the time limit parameters didn't change a thing. In the end, we realized it always fails on the same machines and works fine on the rest.
The only point in common we've found is that those machines all have Intel processors from the E5-26xx family (2640, 2650, 2680).

Can you shed some light about this issue, or share a thought?
Thanks.

How to ignore punctuation symbols

Hello. I have a text: "Hello, my name is Bob. How do you do?"
I use Split(' ') and call Check for each word. But how can I ignore the comma, dot, and question mark in the Check method?

Does the library have options for this, or must I use a regex?
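The library checks single words, so one simple approach is to trim punctuation from each token before calling Check (sketched here in Python; the same idea maps to String.Trim with a punctuation character set in C#):

```python
import string

def clean_tokens(text):
    """Split on whitespace, then trim leading/trailing punctuation,
    so 'Hello,' and 'do?' become 'Hello' and 'do'."""
    tokens = (t.strip(string.punctuation) for t in text.split())
    return [t for t in tokens if t]

print(clean_tokens("Hello, my name is Bob. How do you do?"))
# ['Hello', 'my', 'name', 'is', 'Bob', 'How', 'do', 'you', 'do']
```

Note this deliberately keeps internal punctuation, so contractions like "don't" survive intact.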

Some suggestions have incorrect spelling

I've found a few cases where a misspelled word is suggested. This issue does not occur in the native C++ version.

But if I take a word like abjurers and put a typo in so that it is now abmurers, one of the suggested words is abjureers, which is not an English word. Here are a few more examples:

Word       Misspelled  Bad Suggestion
epoxied    epooied     poiseed
jewelries  ewelries    jewelrys
squabbles  suabbles    squabblees

Occasional System.IndexOutOfRangeException for Suggest

Hello,

I'm using library for spellcheck and suggestions. Nothing fancy in initialization:

hunspell = WordList.CreateFromStreams(dictionaryStream, affixStream);
hunspell.Suggest(word);

It's running on net6.0: <TargetFrameworks>netstandard2.0;net6.0</TargetFrameworks>
Here are my affix file and dictionary file en.zip (pretty sure I've got it from open sources, so it's safe to share)

Sometimes I get exception: System.IndexOutOfRangeException when trying to call .Suggest()
I can't reproduce locally, for given inputs and outputs that produce exception. This is happened in production/stage system over multiple days. For past 14 days and 400k calls, it happened 7 times so far.

Here is a stack trace that I received:

[14:53:56.828](154)  Exception: System.IndexOutOfRangeException: Index was outside the bounds of the array.
   at WeCantSpell.Hunspell.WordList.QuerySuggest.LeftCommonSubstring(String s1, String s2)
   at WeCantSpell.Hunspell.WordList.QuerySuggest.NGramSuggest(List`1 wlst, String word, CapitalizationType capType)
   at WeCantSpell.Hunspell.WordList.QuerySuggest.Suggest(String word)

It happened for following suggestion requests: nth, gua, gua, colo, leav, Jira, o

I've just updated to version 3.1.1 to see if it will help, and I have high hopes about 4.0. I'll report later with my findings

Rename project

Because NetCore is not really a good library target anymore and because NetStandard may itself be a moving target (who knows?) I am going to rename this repository. The whole reason I started work on this port was to support another spelling related project I am working on. Because of this I think I will just turn that name (WeCantSpell) into an umbrella project containing both this Hunspell port as well as the tool that would consume it. Naming is hard, and ... whatever I'm tired of trying to pick a good one :) Nothing with respect to functionality will change but people referencing the pre-release packages will have to swap out some namespaces and a package.

Future target frameworks

Because of the large gap between .NET 4.5 and .NET 6, I'm not sure yet if I want to continue supporting .NET versions that are out of support or that may soon be out of support. I'm doing some quick analysis of which versions should be supported by exploring GitHub and NuGet to see what is out there. Based on that, and my tolerance for adding in shims, I think I will be able to determine a new list of target frameworks for a future 4.x release containing possibly breaking changes. If anybody out there has any feedback and is actually paying attention to this project, please let me know which framework versions you are still targeting.

Suggest algorithm optimization: Levenshtein distance

Because this is a port of the original library, I don't want to get too creative with the algorithms. For suggest, especially, these brute forcing nested loops can really hurt as the x64 compilers don't put the same amount of up-front effort into optimizations. Maybe something like Levenshtein distance, which I don't even know if I spelled correctly, could be of use without changing the results?

If something can be improved there, the benefits could impact #33, #40, and #43.

Learning resources from https://github.com/Turnerj/Quickenshtein#learning-resources
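For reference, the classic dynamic-programming form of Levenshtein distance looks like this (a Python sketch, independent of the port's internals):

```python
def levenshtein(a, b):
    """Edit distance computed one row at a time (O(len(a)*len(b)) time,
    O(min(len(a), len(b))) space)."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

print(levenshtein("abjurers", "abmurers"))  # 1
print(levenshtein("kitten", "sitting"))     # 3
```

A cheap distance like this could act as a pre-filter to cut candidates before the more expensive suggestion scoring runs, without changing which results survive.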

Dictionaries are a bad choice for a dictionary

It seems that while Dictionary works, it is not a very performant tool for this dictionary. Explore other options for both building a word list and later searching it.

[Breaking] Remove 3.5 target and add netstandard 2.0

I am looking at a new major version bump where I plan to remove .NET 3.5 support and add in net standard 2.0 support. Net45 and netstandard1.3 support will remain. I am curious if anybody would be impacted by these changes. I think I am going to do it anyway, but it helps me understand better if it is worth spending energy on fixes on a legacy v2 branch.

WordList.Query.CompoundCheckWordSearch cleanup

I really dislike the methods related to WordList.Query.CompoundCheckWordSearch and would like to see them cleaned up while preserving the existing performance numbers. The method used to be much larger, but warm methods were split up for performance reasons to reduce conditional checks and branching. The negative impact of this refactoring is that there are now multiple methods dangling out there. I would really like to see something that reduces the amount of code used to solve this problem, improving readability while maintaining the current performance. If no good solution can be found, refactoring these methods into a new private type may be beneficial.

  • CompoundCheckWordSearch
  • CompoundCheckWordSearchMultiDetailWithWords
  • CompoundCheckWordSearchMultiDetailScpdFlags
  • CompoundCheckWordSearchMultiDetail
  • CompoundCheckWordSearchCompoundOnlyDetailScpd
  • CompoundCheckWordSearchCompoundOnlyDetail

NGram performance

The method WordList.QuerySuggest.NGram, through the two methods it calls, NGramWeightedSearch and NGramNonWeightedSearch, does a series of brute-force substring checks that are pretty expensive. These two methods could likely benefit from a better algorithm for these contains checks, even if extra allocations may be required. This will hopefully have a positive impact on the suggest performance, which is pretty bad at the moment.
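The brute-force comparison boils down to measuring n-gram overlap between two words. A minimal sketch (in Python; set intersection here is a simplified stand-in for Hunspell's weighted occurrence counting) shows why precomputing n-gram sets per candidate could pay off:

```python
def ngrams(word, n):
    """The set of distinct substrings of `word` with length n."""
    return {word[i:i + n] for i in range(len(word) - n + 1)}

def ngram_similarity(a, b, max_n=3):
    """Count shared n-grams for n = 1..max_n; higher means more alike."""
    score = 0
    for n in range(1, max_n + 1):
        score += len(ngrams(a, n) & ngrams(b, n))
    return score

# "colour" shares far more n-grams with "color" than "doctor" does.
print(ngram_similarity("color", "colour"))  # 9
print(ngram_similarity("color", "doctor"))  # 4
```

Caching the n-gram sets of dictionary words up front would turn each per-candidate substring scan into a set intersection.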

Get a contributing file setup

Need to get a contributing file set up; a nice .md file documenting how things in the port map to origin would help too.

  • establish a style, document it, explain how strict/loose it is :deal-with-it:
  • need to start using PRs myself, get a quick blurb about that
  • some design docs would be good
  • get tabs and formatting and all that into editorconfig and link to it as a source of truth for style

Pipes, can they help?

The file loading performance is pretty terrible, maybe pipelines will help a bit? Then again, maybe not...

[Breaking] Clean up public surface

There are some public methods that are either terrible or not very useful and should be removed. While it's for the best, this would be a breaking change and would need to be planned carefully.

Areas for improvement: Infrastructure

  • ArrayComparer<>: I would like to see this removed as there aren't many usages of it. The string path does some weird boxing that could be avoided with specialized methods, instead of generic. The GetHashCode could use the new HashCode where available.
  • ArrayEx: It would be nice to get rid of this class. If most usages of ArrayEx.Empty are for ArrayWrapper, maybe things can just move there instead
  • ArrayWrapper<>: Is this just an immutable array?
  • Deduper: can the new HashSet.TryGetValue be used instead of Dictionary?
  • EnumEx: Is this still needed? If so, should it use unchecked?
    • This is probably needed until netcore 2.1
  • FileStreamEx: revisit the defaults for FileStream constructors and make sure this is still useful
  • MyIsAlpha vs CharIsNotNeutral: why the difference between 127 and 128?
    • This is just how things are in origin 🤷‍♂️
    • I did spot a mistake while investigating though! #54
  • ReDecodeConvertedStringAsUtf8 uses unsafe code!
    • Still need this for netstandard2.0
  • Unused variable in SetCurrent:
  • IncrementalWordList.CheckIfNotNull: why might a word entry be null? If intentional, that should definitely be marked as nullable.
  • MemoryEx.Replace: changes to this or its usage could reduce allocations, maybe
  • OperationTimeLimiter: For a number of reasons, this should be replaced by some other tool
  • ReferenceHelpers.Swap: I think the tuple syntax is better for this now
  • SimulatedCString: This just needs to be looked at again. If anything can be done to remove or simplify it, that would be great.
  • StringBuilderEx: contains some unsafe code, can it be removed?
    • These are still needed for netstandard2.0
  • StringDeduper: can this be merged with the normal Deduper? Can this also benefit from HashSet.TryGetValue?
    • I'm just going to remove it: #57

Strong-Naming The Library

Hello,
I like this library; it is very useful. But one small thing to ask, if I may: could the library be signed (strong-named)?

Thanks.

Fix up the perf tests

After updating NBench the tests are failing to load NHunspell to run the performance comparisons. Also, while I appreciate what NBench does, I want to try out BenchmarkDotNet and see if it is easier to run repeatedly and whether it can deliver consistent numbers.

[Breaking] Reduce NuGet targets

Some of the build targets may be redundant. Will have to do some experimentation with different platforms to see what builds NuGet selects for them:

  • Unity
  • core 1.0, 1.1, 2.0
  • wp
  • net45, net451, net46, net461, net462
  • uwp
  • anything else I can think up

The 451 build should probably target 45, and the 461 build may not even need to exist. The netstandard1.1 build may be redundant with the PCL build. The PCL build may be harder to build in the future though... I'm not sure if people would even use the PCL or netstandard1.1 builds in the future.

Any suggestion on how to use this library for real-time word suggestions?

I'm trying to use WeCantSpell to create an autocorrect feature for a project I'm working on. I call WordList.Suggest every time a letter is added to or removed from the word I'm writing, but the results are generated very slowly; the more letters I feed into the method, the more time it takes to compute a result. Is there any way I can make it run faster? Is there a way to limit the number of words retrieved as suggestions?

Areas for improvement: Affix

  • Can changes to the AffixLineRegex be improved? Does "compiling" help or hurt? Can the new source generator stuff be applied to older versions of .NET somehow?
    • this is now for loops and spans, and IMO easier to maintain than that regex
  • Affix<>.Create isn't really helping much I don't think
    • This was all heavily refactored around arrays again
  • AffixConfig.Options and flags: Definitely a possible foot-gun, but I can see the performance benefits it may offer.
    • Going to leave this alone, I think the impact on performance is important enough to keep it as is
  • AffixReader.TryParseCommandKind: I wonder if a delegate dictionary would have the same performance but make the code simpler.
    • I made a sorted list that seems to work well, and also works without needing a string allocation too
  • AffixReader: I wonder if any inspiration can be found in the STJ reader, as a ref struct? Probably not, but worth a look maybe.
    • This was refactored to use the new LineReader, which I think should be a big enough improvement for now
  • FlagSet feels a bit like a value type, like ImmutableArray, maybe. Also, it contains unsafe code.
    • This was heavily refactored
  • FlagValue.ParseFlagsInOrder does not handle a case for FlagMode.Uni . Is that OK? Either way, it should probably be more explicit.
    • This is no longer an issue after the refactoring
  • HunspellLineReaderExtensions: async code should consider using IAsyncEnumerable to reduce allocations
    • This no longer applies. It could use an IAsyncEnumerable but it likely isn't worth it
  • MorphologicalTags: not sure I really like those public static MUTABLE fields, maybe they should become properties if performance allows
  • SpecialFlags: I'm not really feeling these public static fields
  • ⛔ LineReader: should maybe indicate if the stream is owned and to be disposed by the reader
    • Can't do this with async methods
  • StaticEncodingLineReader: does this extra layer slow things down much?
    • Obsolete
  • StringValueLineReader: can this move to be more internal and would it work better as a mutable struct without boxing(interfaces)?
    • Obsolete

Infix support for Kurdish language

I am trying to make a spell checker for Kurdish. The problem is, Kurdish relies a lot on infixes (mostly because of clitic pronouns). I'd appreciate it if you provide any guidance on what's the best approach for a language like that.

If Nuspell supports Infixes

That's great news, I'd rather create a Nuspell dictionary than a custom library on my own.

If Nuspell doesn't support Infixes and there aren't any reasonable ways to work around that limitation

I have noticed that Hunspell uses very little memory and is quite fast. So if I want to create a custom library for Kurdish, I want to know which algorithms Hunspell uses.

Here is a general idea of what I am trying to accomplish:

Consider this word: Bexshin (Forgiving). It can come in these forms:

  • Bimbexshe => [You] Forgive me
  • Bitbexshm => [I] Forgive you
  • Biyanbexshe => [You] Forgive them
  • And many more!

Instead of a list of words, we can have a list of patterns like so: Bi{pronoun}bexsh{pronoun}

More Examples:

Eat (Dexo{pronoun})

  • I eat => Dexom
  • We eat => Dexoyn
  • They eat => Dexon

Can be represented as:

Work (Kar{pronoun}dekrid)

  • I worked => Karmdekird
  • He Worked => Karรฎdekird
  • They worked => Karyandekird

So I need an algorithm to very quickly tell me what are the closest matching patterns, and then I can expand only those patterns and based on the Levenshtein distance to the input word give back a list of suggestions.

I know that I can read the source code, and I will. But it'd make my job much easier if you gave me a few leads on which algorithms can be useful based on your experience.
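As one possible starting point (a Python sketch; the pronoun forms are hypothetical placeholders, and difflib's similarity ratio stands in for Levenshtein ranking), the pattern idea above can be expanded and ranked like this:

```python
import difflib
from itertools import product

PRONOUNS = ["m", "t", "yan"]  # hypothetical clitic pronoun forms

def expand(pattern):
    """Expand every {pronoun} slot in a pattern into concrete words."""
    parts = pattern.split("{pronoun}")
    slots = len(parts) - 1
    for combo in product(PRONOUNS, repeat=slots):
        word = parts[0]
        for fill, part in zip(combo, parts[1:]):
            word += fill + part
        yield word

def closest(pattern, typed, k=3):
    """Rank the expanded forms by string similarity to the typed word."""
    forms = sorted(set(expand(pattern)))
    return difflib.get_close_matches(typed, forms, n=k, cutoff=0)

print(closest("Bi{pronoun}bexsh{pronoun}", "Bimbexshe"))
```

For large pattern sets, expanding lazily and pre-filtering candidates by shared stem (here "bexsh") before scoring keeps this fast, which is roughly what affix-based checkers do with prefix/suffix stripping.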

Suggest() method result inconsistent

Hello

Here is my code (it's pretty basic):

var dictionaryFr = WordList.CreateFromFiles(ressources + "\\fr-toutesvariantes.dic", ressources + "\\fr-toutesvariantes.aff");

for (int i = 0; i < 10; i++)
{
    List<string> suggestList = dictionaryFr.Suggest("Systemes").ToList();
    System.Diagnostics.Debug.WriteLine(suggestList.Count);
}

And this is what I'm getting :

0
3
3
3
3
0
3
0
3
3

So sometimes I get suggestions, sometimes I don't 😢. Please help!

I'm using version 3.0.1 of the NuGet package, and my code runs on .NET Framework 4.5.

Areas for improvement: Word List

  • WordEntry & WordEntryDetails .GetHashCode: maybe this would be better off using the new HashCode?
  • WordList.NGramRestrictedFlags: what is this for?
    • Should at least be useful to hold the data now: #56
  • WordList.QuerySuggest: this timer code needs to be removed and replaced with a different solution
  • WordList.Query.consts: There are a bunch of constants that maybe should be handed over to callers to control. For example, max suggestions could be lowered for some queries to improve performance for some cases. This will be closed out by #67

Phonet performance (AU and ZA)

The method WordList.QuerySuggest.Phonet has two nested loops that query for entries from a PhoneTable that have rules starting with a given character. Indexing these entries in the table by the first rule character may have a positive impact on performance.
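The proposed index is essentially a one-time bucketing of rules by first character (sketched in Python; the rule strings are placeholders, not real PhoneTable entries):

```python
from collections import defaultdict

def index_rules(rules):
    """Bucket phone-table rules by their first character, so the inner
    loop only scans rules that can possibly match at a given position."""
    by_first = defaultdict(list)
    for rule in rules:
        by_first[rule[0]].append(rule)
    return by_first

rules = ["PH", "P", "GHT", "G", "KN"]
index = index_rules(rules)
print(index["P"])  # ['PH', 'P']
print(index["K"])  # ['KN']
```

Built once when the table loads, this turns the per-character search from O(rules) to O(rules starting with that character).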

Project icon

Need to get an icon for the project so it can be easier to identify and stand out.
