
Comments (2)

phurwicz commented on May 22, 2024

Thank you and I love this feedback! Would you mind helping me understand the suggestion better?

Previously I could think of two ways of drawing decision boundaries:

  • (A) a direct way like in human-learn where the classifier literally follows the polygon (or any shape) you draw;
  • (B) an indirect way, as currently in hover, where you draw annotations and fit a custom-architecture classifier to them. Specifically, the active_learning recipe tries to learn the decision boundary implied by the “train” set in an iterative “draw-and-retrain” process.
    • What I like about this is that one can make annotations from different views and easily combine them. The “manifold trajectory” slider of the active_learning recipe tries to interpolate between the input manifold and output manifold, giving multiple views to exploit.
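The draw-and-retrain loop in (B) can be sketched roughly as follows. This is a generic illustration with scikit-learn, not hover’s actual API; the toy 2-D pool, the random “selection” standing in for a lasso/polygon draw, and the classifier choice are all made up for the example:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy 2-D pool of unlabeled points (stand-in for vectorized documents).
pool = rng.normal(size=(300, 2))
true_labels = (pool[:, 0] + pool[:, 1] > 0).astype(int)  # hidden ground truth

clf = LogisticRegression()
labeled_idx: list[int] = []

# Each round mimics one draw-and-annotate pass: the user selects a region,
# labels it, and the classifier is retrained on everything labeled so far,
# so the decision boundary is refined iteratively.
for round_ in range(3):
    batch = rng.choice(len(pool), size=30, replace=False)  # pretend selection
    labeled_idx = sorted(set(labeled_idx) | set(batch.tolist()))
    clf.fit(pool[labeled_idx], true_labels[labeled_idx])
    print(f"round {round_}: {len(labeled_idx)} labeled, "
          f"accuracy {clf.score(pool, true_labels):.2f}")
```

In hover the “selection” step is the interactive plot rather than random sampling, and the retrained model’s predictions feed back into the next view.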

Just to be sure, my point of reference is the latest version of hover (0.5.0). Let me know whether you are suggesting (A) or something else :)


phurwicz commented on May 22, 2024

I think the logical extension to a tool like this is letting someone define their own decision boundary for a supervised model (this is sometimes called "machine teaching" rather than machine learning). Defining their own decision boundary should leave them with a supervised classifier and the ability to visualize how that classifier operates (and, ideally, let an expert human "tune" it). Note that this is different from the current "select aspects of the dataset by drawing" functionality built in.

Now that I think more about it, hover.recipes.active_learning achieves “machine teaching” through hover.core.neural.VectorNet, where one can attach “any” neural network (subject to matching dimensions with the vectorizer) after the vectorizer function.

So when starting from scratch, one can use active_learning to draw decision boundaries through annotations and (re)train.

When working with an existing model that may not be a VectorNet, I suggest first deciding which layers of the model to freeze and which to tune. You can then convert it to a VectorNet by wrapping the frozen part in the vectorizer component and putting the tunable part in the neural net component.

  • Speaking of this, it’s worth considering utility methods for converting VectorNet from/to “pure” PyTorch when applicable (i.e., when the vectorizer is essentially a preprocessing function followed by the forward() of some nn.Module).
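One way the freeze-and-split conversion could look in plain PyTorch is sketched below. This is an illustration of the idea, not hover’s actual conversion API; the stand-in model and the layer split are chosen arbitrarily for the example:

```python
import torch
import torch.nn as nn

# A stand-in "existing" model: feature extractor followed by a classifier head.
model = nn.Sequential(
    nn.Linear(16, 8), nn.ReLU(),  # layers we choose to freeze
    nn.Linear(8, 3),              # layer we keep tunable
)

frozen, head = model[:2], model[2:]
for p in frozen.parameters():
    p.requires_grad = False  # the frozen part plays the vectorizer role

def vectorizer(x: torch.Tensor) -> torch.Tensor:
    """Frozen preprocessor: maps raw input to a fixed embedding."""
    with torch.no_grad():
        return frozen(x)

# Only the head's parameters get trained, matching the VectorNet split:
# vectorizer (frozen) -> neural net component (tunable).
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)

x = torch.randn(4, 16)
logits = head(vectorizer(x))
```

Going the other direction (VectorNet back to “pure” PyTorch) would amount to re-composing the two parts into one module when the vectorizer is itself differentiable.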

Does this seem on the right track?

