medialab / hyphe
Websites crawler with built-in exploration and control web interface
Home Page: http://hyphe.medialab.sciences-po.fr/demo/
License: GNU Affero General Public License v3.0
Any vague idea if or when this will be implemented? Has it been scheduled, or is it still something that will happen sometime in the far future?
When you make a crawl from a URL list (I haven't tested the normal crawl) and there is a dot at the end of a URL (e.g. www.domain.com.), the backend server crashes and the entire import has to be repeated.
This is an important bug because the DMI Link Harvester from time to time leaves you with dots at the end of URLs.
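A minimal defensive sketch (a hypothetical helper, not part of Hyphe) of the kind of sanitization that could be applied to the URL list before import, stripping the trailing dots that currently crash the backend:

def clean_url_list(urls):
    """Strip surrounding whitespace and trailing dots from each URL."""
    cleaned = []
    for url in urls:
        url = url.strip().rstrip(".")
        if url:
            cleaned.append(url)
    return cleaned

print(clean_url_list(["http://www.domain.com. ", "http://example.org/"]))
# -> ['http://www.domain.com', 'http://example.org/']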
If http://test.fr is created before http://www.test.fr, the latter falls naturally into the first's prefix and is therefore never created as a prefix, whereas it is if they are created in the reverse order.
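To make the asymmetry concrete, here is a simplified sketch of Hyphe-style LRU conversion (a toy version for illustration, not the actual hyphe_backend code) showing why www.test.fr falls under the prefix of test.fr:

from urllib.parse import urlparse

def url_to_lru(url):
    """Toy LRU conversion: scheme, then host pieces reversed, then path pieces."""
    parsed = urlparse(url)
    pieces = ["s:" + parsed.scheme]
    pieces += ["h:" + piece for piece in reversed(parsed.hostname.split("."))]
    pieces += ["p:" + piece for piece in parsed.path.split("/") if piece]
    return "|".join(pieces)

print(url_to_lru("http://test.fr"))      # s:http|h:fr|h:test
print(url_to_lru("http://www.test.fr"))  # s:http|h:fr|h:test|h:www
# The second LRU starts with the first, so once test.fr exists as a prefix,
# www.test.fr matches it and is never created as a prefix of its own.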
Possible options:
Hi Medialab,
Running sudo bin/deploy_scrapy_spider.sh to install the latest version of Hyphe gives me the following error:
Copying config.json from root directory to hyphe_backend/crawler for scrapy deployment...
Traceback (most recent call last):
  File "deploy.py", line 18, in <module>
    from hyphe_backend.lib import config_hci
ImportError: No module named hyphe_backend.lib
It appears that a file must be missing.
Best regards
Tobias
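A likely cause is that deploy.py is executed from inside hyphe_backend/crawler, so the repository root (which contains the hyphe_backend package) is not on Python's module path. A possible workaround sketch, assuming that layout (the real fix may be different):

# Hypothetical workaround at the top of deploy.py: add the repository root
# (two directories up, assuming deploy.py sits in hyphe_backend/crawler)
# to sys.path before the failing import.
import os
import sys
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "..")))
from hyphe_backend.lib import config_hci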
Useful things to add:
Some websites act in weird ways; we need to establish policies for these.
For instance:
When sending an empty string as a parameter to store.set_webentity_homepage, the homepage is actually set to "http://". The expected behavior would be to get a null homepage; the feature is to allow removing the homepage this way.
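A minimal sketch of the expected guard (hypothetical code, not the actual store implementation): treat an empty or blank string as a request to remove the homepage rather than storing the bare scheme:

def set_webentity_homepage(webentity, homepage):
    """Set the homepage, or remove it when given an empty/blank string."""
    if not homepage or not homepage.strip():
        webentity["homepage"] = None   # expected: null homepage, not "http://"
    else:
        webentity["homepage"] = homepage
    return webentity

we = {"homepage": "http://example.org/"}
set_webentity_homepage(we, "")
print(we["homepage"])  # None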
I have a case where the right web entity is not fetched. This is my exact case:
The right web entity is not returned; a more generic one is instead.
When a webentity has been crawled and is then merged into another one, its ID no longer corresponds to a webentity in the whole list, which crashes the display of its name in the crawl list (crawl.php).
Either remove those from the crawl list or handle them differently; maybe we need to keep a record mapping merged IDs to the ones they were merged into?
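A sketch of that record-keeping option (a hypothetical data structure, not existing Hyphe code): map each merged ID to the surviving one, so the crawl list can still resolve a displayable web entity:

merged_into = {}  # old webentity ID -> ID of the webentity it was merged into

def resolve_webentity_id(we_id):
    """Follow merge chains until reaching an ID that still exists."""
    while we_id in merged_into:
        we_id = merged_into[we_id]
    return we_id

merged_into["old-we"] = "new-we"
print(resolve_webentity_id("old-we"))  # new-we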
Add --without-erlang to the thrift ./configure invocation (i.e. run ./configure --without-erlang). Otherwise, the compilation won't succeed on Ubuntu and similar systems.
I've had a look at the interface, and to me the n°1 priority for the user interface is the possibility to sort the discovered web entities by decreasing in-degree.
Of course it is possible to export the graph, sort it in Gephi and then get back to Hyphe to crawl the most cited neighbors...
but how much nicer it would be to do it without leaving the interface.
Since the interface (served by Apache) can be loaded while the server is off, it would be nice to have a specific interface reaction instead of an error message (especially since this is not really an error, at least not an unknown one).
The list of webentities is slow, and that comes from the fact that it loads all webentities. It would be more efficient to have a page where web entities can be searched through Lucene. It would also be the occasion to attach features to groups of web entities (group tag, group classify, group merge...).
The part about editing the configuration does not exist yet.
In the "edit a webentity"page, it would make sense within the Content part when clicking on a parent webentity to be offered the same options as when clicking on a subwebentity, therefore adding the option to merge into current webentity would be nice.
It looks like the scrapy spider does not follow such links (example: crawling fing.org returns nothing)
Put the button just below the selected radio button, so that it is clear that we download what is selected.
After resetting, if one clicks on the "network of WEs" or "explore discovered WEs" links, the webpages complain with a popup alert box because there is no data yet; this should probably just be a less invasive message on the pages.
Also, clicking on the blue validate button without having pressed Enter in the input field should validate the operation, not restart it.
It would be nice if the "Hyphe" on the left of the header could link to the home page.
Also the "Webentities" menu could point also to the "Explore discovered entities" page, and the "Crawl" menu to the "crawl list" page.
Need to mark, in the memory structure, elements coming from a specific crawl.
It might be nice to monitor who accesses the crawler interface through a Google Analytics account.
To do that, the best would be to have behavioral tracking in the interface, but with an ID specifiable in the global instance configuration.
I know that it is possible to open the editing window of a web entity to exclude it. Still, since this is an operation that is done frequently and often just by looking at the URL of the site, it would be much easier to do it directly in the list.
This is less of a bug report and more of an attempt to open the discussion.
Currently the limits of a web entity and the limits of its crawl coincide. This is probably a good idea in most cases, but not necessarily in all of them.
Example:
In our cartography of the climate adaptation debate, we have to deal with the website of the Food and Agriculture Organisation. Of course, we don't want to crawl this entire website, because it is too big and only a portion of it directly concerns climate adaptation. In fact, we are lucky, because they have a sub-directory dedicated to climate change (http://www.fao.org/climatechange/). Great! So we only want to crawl this directory.
Still, this does not necessarily imply that we want to limit this entity to this folder. In fact, the FAO is a relatively unitary institution. Someone who wants to cite a FAO study, for example, may as well cite the homepage of the FAO website and not necessarily the pages in the sub-directory.
What this example tries to illustrate is that sometimes we might want to define a larger web entity but only crawl a smaller portion of it (without necessarily reducing the size of the web entity).
Could we think of a way to do this?
In the Startpages column of the crawl-by-list interface, the circled exclamation point icons seem to appear in some cases even when the result is a success.
This happens for instance with the simple sample list below:
http://www.medialab.sciences-po.fr/
http://www.sciencespo.fr/
http://www.regardscitoyens.org/
Also, the description of this page is still gibberish and should probably be updated ;)
When declaring a new crawl in crawl_new.php by inputting a new URL, if we press Enter on the keyboard instead of clicking on the "Declare" button, it does not work and the Chrome console displays:
Uncaught TypeError: Object #<Object> has no method 'get' _page_crawl_new.js:618
(anonymous function) _page_crawl_new.js:618
p.event.dispatch jquery.min.js:2
g.handle.h
Possibility to download a CSV in the list of web entities, in the crawl jobs, and in the classification of discovered web entities.
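A minimal sketch of such an export (with hypothetical field names; the real attributes would come from Hyphe's memory structure):

import csv

def export_webentities_csv(webentities, path):
    """Write a CSV with one row per web entity."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "name", "status", "indegree"])
        for we in webentities:
            writer.writerow([we.get("id"), we.get("name"), we.get("status"), we.get("indegree")])

export_webentities_csv([{"id": "we-1", "name": "Example", "status": "DISCOVERED", "indegree": 12}], "webentities.csv")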
Framebreakers get rid of the iframe preview. We want to avoid that.
Hint: use onbeforeunload?
Reuse hefee's updated version of txjsonrpc, found by @jrault for biblib/reference manager (https://github.com/medialab/reference_manager):
pip install git+https://github.com/hefee/txjsonrpc.git
cf https://github.com/hefee/txjsonrpc/commit/9fb8fdf45f3b8fa827a5d2548d283f178a412bf5
extra resources:
http://www.simple-is-better.org/rpc/jsonrpc.py
https://launchpad.net/txjsonrpc
Could be useful to get multiple statuses at once, all WEs with a namespace tag, etc.
Could also apply to webentitylinks.
It does not make much sense to still be able to cancel a crawl when the crawling and indexing are already over.
Changing the button into a "recrawl" button could be a nice feature.
Hi Medialab,
Congratulations on your new release - Hyphe just seems to get better and better.
Today I experimented for the first time with your bin/install.sh script, in which I encountered two minor problems:
sudo ln -s `pwd`/config/scrapyd.config /etc/scrapyd/conf.d/100-hyphe || exit 1
sudo ln -s `pwd`/hyphe_www_client/_config/apache2.conf /etc/apache2/sites-available/hyphe || exit 1
The script had already been run once, so the files already existed and the script died with an error. In other words, one should add a check to see whether the symbolic link already exists (or use ln -sfn to replace it).
Maybe this could be added as well, or the end text of the install script could simply be changed.
Best regards
Tobias
Using SSL and user/password authentication for the API would better secure the API server.
For instance, if we try to modify the values of the tags of category A or C here, it impacts the tags in category B without affecting the others: http://jiminy.medialab.sciences-po.fr/hyphe-demo/webentity_edit.php#we_id=ed90cc14-4dc1-422d-b388-c1bbfaa38e76
Looking at the code, it looks like the category variable is only grabbed when trying to update the category name, so the last category is always the one being used.
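The described behavior matches a classic late-binding closure bug; here it is illustrated in Python (the frontend code is JavaScript, but the pitfall is the same):

callbacks = []
for category in ["A", "B", "C"]:
    # The lambda captures the variable, not its current value...
    callbacks.append(lambda: print(category))
for callback in callbacks:
    callback()  # prints "C" three times

# Fix: bind the current value explicitly, e.g. via a default argument.
callbacks = [lambda category=category: print(category) for category in ["A", "B", "C"]]
for callback in callbacks:
    callback()  # prints "A", "B", "C"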
Examples (declaring a prefix with a trailing slash makes lookups without the slash or on deeper pages fail, whereas declaring it without the trailing slash works in all cases):
./hyphe_backend/test_client.py store.declare_webentity_by_lru_prefix_as_url http://www.test.fr/test/
./hyphe_backend/test_client.py store.get_webentity_by_url http://www.test.fr/test #FAIL ?
./hyphe_backend/test_client.py store.get_webentity_by_url http://www.test.fr/test/ #OK
./hyphe_backend/test_client.py store.get_webentity_by_url http://www.test.fr/test/a #FAIL
./hyphe_backend/test_client.py store.declare_webentity_by_lru_prefix_as_url http://www.test2.fr/test2
./hyphe_backend/test_client.py store.get_webentity_by_url http://www.test2.fr/test2 #OK
./hyphe_backend/test_client.py store.get_webentity_by_url http://www.test2.fr/test2/ #OK
./hyphe_backend/test_client.py store.get_webentity_by_url http://www.test2.fr/test2/a #OK