Comments (8)
@themeo Thanks for reporting. Please provide some example files or links to them, so that I can reproduce this issue. I will take a look at it within a couple of days.
from cermine.
Under this link is one PDF and a corresponding .cermxml file: https://app.sugarsync.com/iris/wf/D9346138_08836977_601970
I traced just one citation: ref17, corresponding to (Gathercole, Frankish, Pickering, & Peaker, 1999). It is possible that many other citations are mis-assigned. This is a representative situation for many of my PDFs.
Below is a list of all mentions of ref17 (with the corresponding line numbers in the .cermxml):
155: correctly detected ref17, incorrectly ref28
199: incorrect detection of ref17 and one other (3 papers cited, 5 references assigned)
258: incorrect detection of ref17 and two others (2 papers cited, 5 references assigned)
308: incorrect detection of ref17 and one other (2 papers cited, 4 references assigned)
563: correctly detected ref17, incorrectly ref28
720: incorrect detection of ref17 and one other (2 papers cited, 4 references assigned)
Hope this helps!
Thanks. In general this style of citing papers (by author names and years) is the most problematic one, due to matching issues and possible ambiguities.
Based on a manual inspection I did, the original precision in this paper was 72%. This is the fraction of reported citation-reference pairs that are indeed correct, and it seems to be the most informative measure in this case. I was able to increase the precision to nearly 90% (commit 2df02a8). Increasing it further is more complicated and might result in correct extractions missing from the output, so I would leave it as is for now.
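For concreteness, the precision figure used here can be sketched as follows. This is not CERMINE's actual evaluation code; the citation-reference pairs below are made up purely for illustration:

```python
# Precision over citation-reference pairs: the fraction of pairs
# reported by the extractor that also appear in the gold standard.
# The pairs below are hypothetical, for illustration only.

extracted = {("cite1", "ref17"), ("cite1", "ref28"), ("cite2", "ref03")}
gold      = {("cite1", "ref17"), ("cite2", "ref03")}

true_positives = extracted & gold
precision = len(true_positives) / len(extracted)
print(round(precision, 2))  # 2 of 3 reported pairs are correct -> 0.67
```

With 2 correct pairs out of 3 reported, precision is about 0.67; the 72% and 90% figures above are this same ratio computed over the whole paper.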
Please test it against your cases and let me know what you think.
Thanks, this indeed improved the matching accuracy.
Regarding this citation style and what can be improved: this is APA style, one of the most popular styles in scientific journals, and it is supposed to be unambiguous. Going back to the examples I provided, CERMINE still isn't sure whether:
(Gathercole, Frankish, Pickering, & Peaker, 1999) (ref17)
stands for this reference or for:
(Masoura & Gathercole, 1999) (ref28)
So in this case it ignores the order of the authors, which disambiguates the two citations.
It also fails to detect any reference in:
(e.g., Masoura & Gathercole, 1999, 2005)
In this latter case I suspect the problem might be that the two references are separated with a comma rather than a semicolon, which admittedly is an exception to the rule.
I suspect these issues are solvable, but they probably require going beyond regexes and thus carry an extra time cost. Given the popularity of this style, though, I think the effort might be worthwhile.
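To illustrate what a regex-based approach can and cannot capture, here is a rough sketch (not CERMINE's parser) of a pattern for APA-style parenthetical citations that tolerates several comma-separated years after the author list, covering both examples above. Real text has many more variants (page numbers, "et al.", year suffixes like "1999a") than this handles:

```python
import re

# Illustrative only: matches "(Authors, YYYY)" and "(e.g., Authors, YYYY, YYYY)".
APA_CITATION = re.compile(
    r"\(\s*(?:e\.g\.,\s*)?"                            # optional "e.g.," prefix
    r"(?P<authors>[A-Z].+?)"                           # author list (lazy match)
    r",\s*(?P<years>\d{4}[a-z]?(?:,\s*\d{4}[a-z]?)*)"  # one or more years
    r"\s*\)"
)

examples = [
    "(Gathercole, Frankish, Pickering, & Peaker, 1999)",
    "(e.g., Masoura & Gathercole, 1999, 2005)",
]
for s in examples:
    m = APA_CITATION.search(s)
    print(m.group("authors"), "->", m.group("years"))
```

This finds the spans, but note it does nothing to resolve which reference entry each author-year pair belongs to; that matching step is where the ambiguities discussed above arise.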
In any case, thanks for this amazing library!
Just to add to my previous comment: at least in my usage of CERMINE, both false positives and misses are bad (I extract phrases used to reference specific papers), so measuring accuracy with something akin to F1 would be a more appropriate index of usefulness in my application of CERMINE.
About the format: it is commonly used in the social sciences and less popular in other disciplines. In general, I feel it is a more human- than machine-readable format, and it is definitely more problematic for automated parsing than other styles. One of the problems is matching author names, where you run into issues with encoding, transliteration, accents, etc. There are also a lot of variations of a citation (letters attached to the years for disambiguation, full or partial author lists, the use of "et al.", the ways several citations are combined together). It is not a trivial task to take all possible variants into account, and even then I am sure some things would be missed.
BTW, a much easier style is, for example, IEEE, which uses numbers as identifiers of references within papers. It is unambiguous by design and in practice, plus it is much easier to parse numbers than author names.
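The contrast is easy to see in code. A sketch (illustrative, not CERMINE's implementation) of matching IEEE-style numeric citations, which needs none of the author-name handling discussed above:

```python
import re

# IEEE-style citations are bracketed numbers, e.g. "[17]" or "[3, 5]".
# Each number maps directly and unambiguously to a reference-list entry.
NUMERIC_CITATION = re.compile(r"\[(\d+(?:\s*,\s*\d+)*)\]")

text = "as shown in [17] and earlier work [3, 5]"
print(NUMERIC_CITATION.findall(text))  # ['17', '3, 5']
```

No disambiguation step is needed afterwards: "17" is ref17 by construction, whereas an author-year string still has to be matched against the reference list.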
About the evaluation: I chose to calculate precision mainly because you originally reported precision-related issues (incorrectly assigned references). Also, after inspecting the example file, it seemed to me that precision was the biggest problem and the biggest area for improvement. I did not calculate recall; it seemed high to me, much higher than precision.
Improving the accuracy further at this point would require a different, more sophisticated approach. It would also require gathering a larger, representative set of papers and annotating the citations within them, so that the solution and its further variants could be evaluated against it. This way we could be sure that, along with improving precision, we do not decrease recall significantly. If you are interested in this task, maybe you could prepare such a set?
I'm afraid I'm too busy to do that. But I can jot down instances of incorrect assignment as I use CERMINE-provided data, which could then be used as test cases. Please let me know if that would be helpful for you.
I'm afraid it wouldn't be enough, and it would still require a lot of manual work on my side. I also have too much on my plate right now. I will close this issue; if there is time in the future, I'll reopen it.