Comments (6)
The issue you're seeing here is caused by the collapse duplicates transformer, which is included in the set of recommended transformers. After this transformer is applied, the original input `666` becomes `6`, which no longer matches the pattern `666`; there's a warning to this effect in the documentation of the transformer:
> This transformer should be used with caution, as while it can make certain patterns match text that wouldn't have been matched before, it can also go the other way. For example, the pattern `hello` clearly matches `hello`, but with this transformer, by default, `hello` would become `helo`, which does not match. In this case, the `customThresholds` option can be used to allow two `l`s in a row, making it leave `hello` unchanged.
Thus, to avoid this behavior, you can either do as suggested above and add a custom threshold so that `666` is not collapsed to `6`, or disable the transformer entirely.
As the behavior is documented I'm inclined to mark this as not a bug, though I absolutely see how the library behaves unintuitively in this case and would be open to considering suggestions for improvement.
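To make the behavior concrete, here's a minimal self-contained sketch of the collapse-duplicates idea (this is an illustration, not Obscenity's actual implementation): runs of the same character are trimmed down to a per-character threshold.

```js
// Sketch of the collapse-duplicates idea (not Obscenity's real code):
// runs of the same character are trimmed to a per-character threshold.
function collapseDuplicates(input, defaultThreshold = 1, customThresholds = new Map()) {
  let out = "";
  let runChar = "";
  let runLength = 0;
  for (const ch of input) {
    runLength = ch === runChar ? runLength + 1 : 1;
    runChar = ch;
    const threshold = customThresholds.get(ch) ?? defaultThreshold;
    if (runLength <= threshold) out += ch; // keep only the first `threshold` chars of a run
  }
  return out;
}

collapseDuplicates("666");   //=> "6"    — no longer matches the pattern 666
collapseDuplicates("hello"); //=> "helo" — the example from the docs
collapseDuplicates("hello", 1, new Map([["l", 2]])); //=> "hello" — two l's now allowed
collapseDuplicates("666", 1, new Map([["6", 3]]));   //=> "666"   — matches again
```

With a custom threshold of 3 for `6`, the input survives intact, which is exactly the `customThresholds` escape hatch the docs describe.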
from obscenity.
This one is really quite unfortunate. The issue is again due to one of the default transformers, which skips non-alphabetic characters; this allows patterns such as `foo` to match `f_o_o`. Hence `8562` is transformed to the empty string, which matches nothing.
While this is not a bug per se, I think this is extremely unintuitive behavior, and I'm inclined to remove this transformer from the set of recommended ones for English text, or perhaps even remove the notion of "recommended" transformers in general. The original intent was to provide something that works nicely out of the box, but as your report shows, once anything custom is added it's easy for the behavior to become confusing. WDYT?
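For anyone following along, the effect is easy to see with a self-contained sketch of what a skip-non-alphabetic transform does (again, an illustration rather than Obscenity's actual code):

```js
// Sketch: dropping every non-alphabetic character lets patterns "see through"
// separators, but it also erases digit-only input entirely.
function skipNonAlphabetic(input) {
  return input.replace(/[^a-zA-Z]/g, "");
}

skipNonAlphabetic("f_o_o"); //=> "foo" — so the pattern foo matches
skipNonAlphabetic("8562");  //=> ""    — nothing left for any pattern to match
```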
Also, on an unrelated note — would you be willing to comment on which parts of the Obscenity API you use in your project? In particular, I'm curious as to whether you use any of the more complex features (custom transformers, censors, etc.). The reason I'm asking is that, now that I have a bit more time to work on open source, I'm thinking about cleaning up the API a little and releasing a v1.0.0, but if users are tied to the existing API I'll hold off on breaking changes.
Thanks for the response, that's helpful!
Back to the issue: I anticipate that I'll have time to work on releasing a fix next weekend, but for now you should be able to work around it in your application by passing a custom set of transformers instead of `englishRecommendedTransformers`.
Thanks for responding! That makes sense for `666`, but I think this also fails for any number — for example, `8562`.
To add a bit more context to this, I'm using this in a live chat filter for my site. The chat exists on a live-stream page where the streamer can set which words they'd like to ban. This came up because a streamer had set the different parts of their address as "banned words" to avoid being doxxed in real time. So, for example, "8562" is banned:
```js
const customDataset = new DataSet();
customDataset.addPhrase((phrase) => {
  return phrase
    .setMetadata({ originalWord: "8562" })
    .addPattern(parseRawPattern("8562"))
    .addPattern(pattern`|8562|`);
});

const matcher = new RegExpMatcher({
  ...customDataset.build(),
  ...englishRecommendedTransformers,
});

matcher.getAllMatches("8562").length; //=> 0
```
Though, this may just be because I have my patterns wrong... You've done a really great job with the docs, so I may just be missing something.
I think your idea makes sense.
As for the API, I don't get too crazy with it. There's probably a lot more I should be doing with it that I'm not currently.
For the most part, the code I pasted above is essentially it. We have two separate functions; the first one covers our default banned words:
```js
import {
  DataSet,
  RegExpMatcher,
  englishRecommendedTransformers,
  pattern,
} from "obscenity";

const joystickDataset = new DataSet()
  .addPhrase((phrase) => {
    // they generally follow this pattern
    return phrase
      .setMetadata({ originalWord: 'badword' })
      .addPattern(pattern`badword`)
      .addWhitelistedTerm('...'); // some closely related word
  })
  .addPhrase((phrase) => {
    return phrase
      .setMetadata({ originalWord: 'otherword' })
      .addPattern(pattern`oth[e]rword`)
      .addPattern(pattern`|oth|`)
      .addPattern(pattern`|word|`)
      .addPattern(pattern`?therword`);
  });

const matcher = new RegExpMatcher({
  ...joystickDataset.build(),
  ...englishRecommendedTransformers,
});

export default {
  methods: {
    isTextInViolation(text) {
      return matcher.getAllMatches(text).length > 0;
    },
  },
};
```
Then we have the second function, where a streamer can specify which words or phrases they want banned:
```js
bannedChatWordsDataset() {
  const customDataset = new DataSet();
  this.streamer.bannedChatWords.forEach((item) => {
    const word = item.toLowerCase();
    customDataset.addPhrase((phrase) => {
      return phrase
        .setMetadata({ originalWord: word })
        .addPattern(parseRawPattern(word))
        .addPattern(pattern`|${word}`)
        .addPattern(pattern`${word}|`);
    });
  });
  return customDataset;
},
bannedChatWordsMatcher() {
  return new RegExpMatcher({
    ...this.bannedChatWordsDataset.build(),
    ...englishRecommendedTransformers,
  });
},
isMessageInViolationOfBannedChatWords(message) {
  return this.bannedChatWordsMatcher.getAllMatches(message).length > 0;
},
```
We do still have a few bugs around this implementation. I believe there are some words where you can't say just the word, but if you use it in a sentence, it goes through... I'll have to come back to that portion later.
Stoked to hear you'll have more time on this though! It's a great project ❤️