Comments (18)
More relevant comments available in issue #15 (now closed).
from dannet.
See also #16 (connotations) which pertains to sentiment analysis work done by Sussi. These relations have also been left out for now.
from dannet.
Bart has created an SQL dump for the data that Sussi has produced.
I might be able to create an in-memory SQLite db, import the data into that, and then extract the needed table(s) using JDBC. Some more information: https://grishaev.me/en/clj-sqlite/
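A minimal sketch of that idea using Python's built-in sqlite3 (the mechanics would be the same via JDBC from Clojure). Note that a MySQL dump is not directly valid SQLite, so the dump's statements would need some translation first; the schema below is invented for illustration, and only the table name comes from the actual data.

```python
import sqlite3

# Create an in-memory database and load it with statements that would,
# in practice, be extracted (and translated) from the SQL dump.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl_synset (id INTEGER PRIMARY KEY, label TEXT)")
con.execute("INSERT INTO tbl_synset VALUES (1, 'example synset')")

# Extract the needed table.
rows = con.execute("SELECT id, label FROM tbl_synset").fetchall()
print(rows)  # [(1, 'example synset')]
```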
from dannet.
Currently attempting detective work on the SQL dump using a dockerized MySQL db:

```shell
# create and populate the database
docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:latest
docker exec -i some-mysql sh -c 'exec mysql -uroot -p"my-secret-pw"' < /Users/rqf595/Downloads/wordnetloom-wordnet.sql

# connect to the container and the database inside it
docker exec -it some-mysql /bin/bash
mysql -p wordnet
```
from dannet.
After running some SQL queries in the `mysql` shell, it appears that the following tables are the only relevant ones:

- tbl_synset
- tbl_synset_attributes
- tbl_synset_relation
- ... and possibly tbl_relation_type

The actual synsets are linked using makeshift binary IDs generated by the software that Sussi used to create them. The table tbl_synset_attributes includes two columns that bear witness to other ID types. However, the ILI (Interlingual Index) ID isn't relevant unless we have links to DanNet from this ID.
from dannet.
Importing the Open English WordNet presents an interesting challenge as the dataset resource has a relation to every entry it encompasses: http://localhost:3456/dannet/external?subject=%3Chttps%3A%2F%2Fen-word.net%2F%3E
Another challenge is the fact that the dataset is quite minimal and doesn't have labels for any resources. The only label-like relation is for the canonicalForm.
from dannet.
Apparently, the original links to the Princeton WordNet are not included in the WordNetLoom data, so they will need to be imported via the old DanNet data and converted to Open English WordNet IDs.
https://github.com/globalwordnet/cili
from dannet.
Having looked more thoroughly into the two different types of IDs in the old link data, e.g.

```
production%1:23:00::
bundle%1:06:00::
equipment%1:06:00::
```

vs. the more familiar `ENG20-07523126-n` (which seems to be the ID type used in the cili repo), I have had some difficulty understanding how to get from the former IDs to ones that are mapped. The unfamiliar IDs are seemingly based on the complex, makeshift database of the WordNet project and refer to lemmas present in multiple different files (mapped to integers) across the different WordNet releases. I haven't been able to find a translation table anywhere.
from dannet.
John McCrae was very helpful and wrote me the following guide:

> Hi Simon,
>
> These are sense keys, that are used to indicate the word in its synset (i.e., there is one sense key for each member of a synset). They are supposed to be more stable than the synset identifiers (but they aren't) and are preferred by the Princeton team. The full description of them is here: https://wordnet.princeton.edu/documentation/senseidx5wn
>
> They are quite tricky to calculate; OEWN has a whole script for doing it here: https://github.com/globalwordnet/english-wordnet/blob/main/scripts/sense_keys.py
>
> You can find all the sense keys and the relevant synsets in the `src` data for OEWN in the `entries-*.yaml` files, such as: https://github.com/globalwordnet/english-wordnet/blob/main/src/yaml/entries-x.yaml
>
> For Princeton WordNet releases, they are normally in a file called `sense.index`.
>
> Regards,
> John
I think the last link (the directory of YAML files) is what I need. I'll parse them all and build a mapping from these sense IDs to the OEWN synsets.
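A minimal sketch of that mapping step, assuming the entries-*.yaml files parse to roughly `lemma → part of speech → {"sense": [{"id": …, "synset": …}]}`; the sample data below is fabricated (only the sense key comes from the old link data), and the real files may well contain additional keys.

```python
def sense_key_to_synset(entries):
    """Build a sense-key -> OEWN-synset mapping from parsed entries data.

    Assumes the (simplified) structure lemma -> POS -> {"sense": [...]};
    any other keys in the data are ignored.
    """
    mapping = {}
    for lemma, by_pos in entries.items():
        for pos, body in by_pos.items():
            for sense in body.get("sense", []):
                mapping[sense["id"]] = sense["synset"]
    return mapping

# Fabricated sample mimicking the assumed file structure.
sample = {"bundle": {"n": {"sense": [{"id": "bundle%1:06:00::",
                                      "synset": "02924229-n"}]}}}
print(sense_key_to_synset(sample))  # {'bundle%1:06:00::': '02924229-n'}
```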
from dannet.
I have mapped the eq_synonym relations with sense keys in the 5000 old links, but not the remaining 123, as the GWA schema had no equivalent relations. I still need to map the wn20 IDs. Eventually, I should also link directly to the ILI instead, using the existing links in the OEWN.
As an aside, I think a companion dataset containing labels for the OEWN would be very valuable, since the OEWN dataset currently doesn't contain any labels. This dataset can be generated based on the lexical forms present in the dataset (= lemmas).
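A sketch of how such a label dataset could be generated: emit one rdfs:label triple per resource from its canonical written form. The IRIs below are placeholders for illustration, not the actual OEWN ones.

```python
RDFS_LABEL = "http://www.w3.org/2000/01/rdf-schema#label"

def label_triples(canonical_forms):
    """Emit N-Triples rdfs:label lines, given a mapping from resource IRI
    to its canonical written form (lemma)."""
    return [f'<{iri}> <{RDFS_LABEL}> "{lemma}"@en .'
            for iri, lemma in canonical_forms.items()]

# Placeholder IRI for illustration only.
sample = {"https://example.org/oewn/bundle-n": "bundle"}
print("\n".join(label_triples(sample)))
```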
from dannet.
In the process of linking DanNet to the Open English WordNet, I discovered a couple of errors in the OEWN dataset, one critical (ILI linking) and one less so.
from dannet.
Since the CILI resources do not have outgoing relations matching the incoming ones, e.g. `wn:ili`, it really makes a lot of sense to implement #53 to make navigating via the CILI feasible.
from dannet.
Getting ready to release most of the old English links as a sort of preview: 039ecc0
from dannet.
I added the eq_hyperonym and eq_hyponym relations of the links in e88598d and defined a new, complementary inverse property of `wn:ili` to allow easier navigation of the graph.
This illustrates some issues in the existing data, where we have both synonym and hyponym relations to the same English synset, e.g. http://localhost:3456/dannet/data/synset-298
from dannet.
Update on new links
I can log in directly to the WordNetLoom database at wordnetweb01fl.unicph.domain (wordties) and access it as root via the `mysql wordnet` command. From there, I can run queries similar to the ones I ran on the SQL dump. The synset IDs are at least different, though they seem to be regular incrementing integers, which isn't an improvement.
I will have to investigate this database more to see if any data can be recovered.
from dannet.
Some semi-good news, finally: the synset IDs seem to be recoverable, even though it will still require some work.
They are saved in a weird format, e.g. `88830000` for synset 30. The `888` is a prefix, but at first the only way to know whether the remaining zeroes were part of the ID seemed to be to guess based on the position in the table (they appear sequentially, so a preceding `88839000` means the following `88840000` is synset 40).

**UPDATE**: No, they actually all start with `888` and end with `000`, so this is quite easy to convert.

This is how to retrieve the relevant IDs:

```sql
SELECT synset_id FROM synset_attributes WHERE synset_id LIKE '888%000';
```
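The conversion itself is then trivial; a sketch in Python (the function name is mine):

```python
def loom_to_dannet_id(raw):
    """Strip the '888' prefix and '000' suffix from a WordNetLoom-style
    synset ID to recover the original DanNet synset number."""
    s = str(raw)
    if not (s.startswith("888") and s.endswith("000")):
        raise ValueError(f"unexpected ID format: {raw!r}")
    return int(s[3:-3])

print(loom_to_dannet_id("88830000"))  # 30
print(loom_to_dannet_id("88840000"))  # 40
```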
from dannet.
Adapted from https://stackoverflow.com/questions/356578/how-can-i-output-mysql-query-results-in-csv-format:

**synset_relation**: these are the actual relations, it seems. Many of them seem to have been created from the Princeton WordNet to DanNet, which is not ideal.

```sql
SELECT id, child_synset_id, parent_synset_id, synset_relation_type_id
INTO OUTFILE '/var/lib/mysql-files/synset_relation.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM synset_relation;
```

**synset_attributes**: this is the table with links to Princeton and, in the case of the Princeton words, the CILI. In DanNet's case, the synset IDs have to be reconstructed by removing `888` from the beginning and `000` from the end.

```sql
SELECT synset_id, princeton_id, ili_id
INTO OUTFILE '/var/lib/mysql-files/synset_attributes.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM synset_attributes;
```

**application_localised_string**: this seems to contain the relation names.

```sql
SELECT id, value
INTO OUTFILE '/var/lib/mysql-files/application_localised_string.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM application_localised_string;
```
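Reading the exports back is straightforward, since these OUTFILE settings produce standard comma-separated, double-quoted rows that Python's csv defaults handle. A sketch (the row values below are fabricated):

```python
import csv
import io

# Simulated contents of synset_attributes.csv; a real file would be
# opened with open(path, newline="") instead of io.StringIO.
sample = '"88830000","ENG30-00017222-n","i12345"\n'

for synset_id, princeton_id, ili_id in csv.reader(io.StringIO(sample)):
    # Reconstruct the DanNet synset number by stripping the '888'
    # prefix and '000' suffix, as described above.
    dannet_id = int(synset_id[3:-3])
    print(dannet_id, princeton_id, ili_id)  # 30 ENG30-00017222-n i12345
```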
from dannet.
The new English link triples have been successfully imported as of eb47f48, so this is finally done!
It seems like trying to infer every triple now results in `org.apache.jena.shared.JenaException: java.lang.OutOfMemoryError: Java heap space` when creating the complete dataset, so I think I might have to give up on that for now. Perhaps I should make a script that fires up a local DanNet instance that people can query themselves.
from dannet.