
Comments (5)

MaartenGr commented on August 25, 2024

Thank you for your kind words, greatly appreciate it!

It definitely sounds like a useful feature to implement. Most likely, this will be a separate function for extracting the indices, as I want to avoid returning too much information in the default implementation. I am not sure how much time this will take, but I'll take a look.

For now, you can extract the indices using something like this:

from sklearn.feature_extraction.text import CountVectorizer

# Fit the vectorizer on the same document that was passed to KeyBERT
cv = CountVectorizer().fit([doc])
tokenizer = cv.build_tokenizer()
tokens = tokenizer(doc)

# Pre-compute the lowercased keywords so the lookup is cheap per token
keyword_set = {word.lower() for word, _ in keywords}
indices = [index for index, token in enumerate(tokens) if token.lower() in keyword_set]

The most important thing here is to make sure that the input to the CountVectorizer is exactly the same as the one you used in KeyBERT; otherwise the token positions will not line up.
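
For example, a minimal end-to-end sketch (the document text here is made up; `keywords` comes from KeyBERT's standard extract_keywords):

from keybert import KeyBERT

doc = "Supervised learning is the machine learning task of learning a function."
kw_model = KeyBERT()
keywords = kw_model.extract_keywords(doc)  # list of (keyword, score) tuples

# `doc` and `keywords` can now be passed to the snippet above to get `indices`.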

pahalie commented on August 25, 2024

Hey! I'm wondering how to do this for bigrams correctly?

MaartenGr commented on August 25, 2024

That is actually quite difficult to do. For example, when you remove stop words, the resulting bigrams do not take into account that there was a stop word in the original text. Take the text "The learning of machines." A bigram might be learning machines, since we removed the stop word "of". However, that also means the bigram learning machines does not appear anywhere in the original text, as it originally contained a stop word. Thus, doing this properly for n-grams larger than 1 requires a lot of checks.
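
To make the mismatch concrete, here is a minimal sketch using scikit-learn's analyzer on the example above:

from sklearn.feature_extraction.text import CountVectorizer

doc = "The learning of machines."

# The analyzer removes stop words *before* forming n-grams ...
analyzer = CountVectorizer(ngram_range=(2, 2), stop_words="english").build_analyzer()
print(analyzer(doc))  # ['learning machines']

# ... but the raw token sequence still contains the stop word, so the
# bigram never occurs as consecutive tokens in the original text.
tokenizer = CountVectorizer().build_tokenizer()
print([t.lower() for t in tokenizer(doc)])  # ['the', 'learning', 'of', 'machines']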

fortyfourforty commented on August 25, 2024

I'm also interested in getting the index range of the extracted keywords.
Let's say we don't remove stop words, so the extracted keywords appear verbatim in the document.
How can I get the start and end index of each extracted keyword, for any n-gram size (1, 2, 3, or more)?

MaartenGr commented on August 25, 2024

@fortyfourforty You would have to adapt the code I wrote above to check every token within an n-gram. That way, you can check whether all tokens within a keyword match a consecutive set of tokens within the document. In other words, tokenize both the keywords and the document and then match them.
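
A minimal sketch of that matching, assuming stop words were not removed so every keyword appears token-for-token in the document (the function name and return format here are made up for illustration):

from sklearn.feature_extraction.text import CountVectorizer

def keyword_spans(doc, keywords):
    """Return (keyword, start, end) token index ranges for each keyword occurrence."""
    tokenizer = CountVectorizer().build_tokenizer()
    tokens = [t.lower() for t in tokenizer(doc)]
    spans = []
    for word, _ in keywords:
        kw_tokens = [t.lower() for t in tokenizer(word)]
        n = len(kw_tokens)
        # Slide a window of length n over the document tokens and record matches
        for i in range(len(tokens) - n + 1):
            if tokens[i:i + n] == kw_tokens:
                spans.append((word, i, i + n - 1))
    return spans

For example, keyword_spans(doc, kw_model.extract_keywords(doc, keyphrase_ngram_range=(1, 3))) would return the token-level start and end positions of every uni-, bi-, and trigram keyword.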
