neurostuff / neurostore
The NeuroStore/Neurosynth application
Home Page: https://compose.neurosynth.org
Starting from a clean slate, I can follow the directions up to this step before getting the error:
NameError: name 'sqlalchemy_utils' is not defined
It looks like whenever I generate a file within the versions directory, my file is missing the line import sqlalchemy_utils, whereas the file committed here has it. Is there some environment variable I need to define?
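If the project uses Flask-Migrate/Alembic (an assumption on my part), one common fix is to add the import to the revision template so every autogenerated migration includes it. A hypothetical excerpt of migrations/script.py.mako:

```mako
"""${message}

Revision ID: ${up_revision}
"""
from alembic import op
import sqlalchemy as sa
import sqlalchemy_utils  # added so autogenerated revisions can reference it
${imports if imports else ""}
```

This is a sketch of the usual template layout, not the repo's actual file; the key line is the explicit import of sqlalchemy_utils alongside the standard sa import.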
Per today's call, DOIs are the best target for linking coordinates from Neurosynth and collections from NeuroVault, but this hasn't been implemented yet. Would it be feasible to link them automatically?
I am having some trouble keeping the database in a good state (I think). When I run:
python manage.py db migrate
I get output that looks like this:
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/alembic/script/base.py", line 162, in _catch_revision_errors
yield
File "/usr/local/lib/python3.8/site-packages/alembic/script/base.py", line 227, in get_revisions
return self.revision_map.get_revisions(id_)
File "/usr/local/lib/python3.8/site-packages/alembic/script/revision.py", line 319, in get_revisions
return sum([self.get_revisions(id_elem) for id_elem in id_], ())
File "/usr/local/lib/python3.8/site-packages/alembic/script/revision.py", line 319, in <listcomp>
return sum([self.get_revisions(id_elem) for id_elem in id_], ())
File "/usr/local/lib/python3.8/site-packages/alembic/script/revision.py", line 322, in get_revisions
return tuple(
File "/usr/local/lib/python3.8/site-packages/alembic/script/revision.py", line 323, in <genexpr>
self._revision_for_ident(rev_id, branch_label)
File "/usr/local/lib/python3.8/site-packages/alembic/script/revision.py", line 386, in _revision_for_ident
raise ResolutionError(
alembic.script.revision.ResolutionError: No such revision or branch 'e6192a6cd3b4'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "manage.py", line 50, in <module>
manager.run()
File "/usr/local/lib/python3.8/site-packages/flask_script/__init__.py", line 417, in run
result = self.handle(argv[0], argv[1:])
File "/usr/local/lib/python3.8/site-packages/flask_script/__init__.py", line 386, in handle
res = handle(*args, **config)
File "/usr/local/lib/python3.8/site-packages/flask_script/commands.py", line 216, in __call__
return self.run(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/flask_migrate/__init__.py", line 180, in migrate
command.revision(config, message, autogenerate=True, sql=sql,
File "/usr/local/lib/python3.8/site-packages/alembic/command.py", line 214, in revision
script_directory.run_env()
File "/usr/local/lib/python3.8/site-packages/alembic/script/base.py", line 489, in run_env
util.load_python_file(self.dir, "env.py")
File "/usr/local/lib/python3.8/site-packages/alembic/util/pyfiles.py", line 98, in load_python_file
module = load_module_py(module_id, path)
File "/usr/local/lib/python3.8/site-packages/alembic/util/compat.py", line 184, in load_module_py
spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 783, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/migrations/migrations/env.py", line 87, in <module>
run_migrations_online()
File "/migrations/migrations/env.py", line 80, in run_migrations_online
context.run_migrations()
File "<string>", line 8, in run_migrations
File "/usr/local/lib/python3.8/site-packages/alembic/runtime/environment.py", line 846, in run_migrations
self.get_context().run_migrations(**kw)
File "/usr/local/lib/python3.8/site-packages/alembic/runtime/migration.py", line 509, in run_migrations
for step in self._migrations_fn(heads, self):
File "/usr/local/lib/python3.8/site-packages/alembic/command.py", line 190, in retrieve_migrations
revision_context.run_autogenerate(rev, context)
File "/usr/local/lib/python3.8/site-packages/alembic/autogenerate/api.py", line 442, in run_autogenerate
self._run_environment(rev, migration_context, True)
File "/usr/local/lib/python3.8/site-packages/alembic/autogenerate/api.py", line 453, in _run_environment
if set(self.script_directory.get_revisions(rev)) != set(
File "/usr/local/lib/python3.8/site-packages/alembic/script/base.py", line 227, in get_revisions
return self.revision_map.get_revisions(id_)
File "/usr/local/lib/python3.8/contextlib.py", line 131, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/local/lib/python3.8/site-packages/alembic/script/base.py", line 194, in _catch_revision_errors
compat.raise_from_cause(util.CommandError(resolution))
File "/usr/local/lib/python3.8/site-packages/alembic/util/compat.py", line 308, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=exc_value)
File "/usr/local/lib/python3.8/site-packages/alembic/util/compat.py", line 301, in reraise
raise value.with_traceback(tb)
File "/usr/local/lib/python3.8/site-packages/alembic/script/base.py", line 162, in _catch_revision_errors
yield
File "/usr/local/lib/python3.8/site-packages/alembic/script/base.py", line 227, in get_revisions
return self.revision_map.get_revisions(id_)
File "/usr/local/lib/python3.8/site-packages/alembic/script/revision.py", line 319, in get_revisions
return sum([self.get_revisions(id_elem) for id_elem in id_], ())
File "/usr/local/lib/python3.8/site-packages/alembic/script/revision.py", line 319, in <listcomp>
return sum([self.get_revisions(id_elem) for id_elem in id_], ())
File "/usr/local/lib/python3.8/site-packages/alembic/script/revision.py", line 322, in get_revisions
return tuple(
File "/usr/local/lib/python3.8/site-packages/alembic/script/revision.py", line 323, in <genexpr>
self._revision_for_ident(rev_id, branch_label)
File "/usr/local/lib/python3.8/site-packages/alembic/script/revision.py", line 386, in _revision_for_ident
raise ResolutionError(
alembic.util.exc.CommandError: Can't locate revision identified by 'e6192a6cd3b4'
and the only surefire way I have been able to resolve the issue is by removing the volume and starting over from:
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d
Is there a better way I should be trying to correct for these kinds of errors?
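A lighter-weight recovery than wiping the volume might be to clear the stale revision id and re-stamp the database. The service names below are guesses, not confirmed from the repo's docker-compose.yml — adjust them to whatever your compose file defines:

```shell
# The stale revision id lives in the alembic_version table; clear it,
# then stamp the current migration head so future revisions resolve.
docker-compose exec pgsql psql -U postgres -c "DELETE FROM alembic_version;"
docker-compose exec neurostore python manage.py db stamp head
docker-compose exec neurostore python manage.py db migrate
```

This keeps the existing data intact, whereas removing the volume drops everything.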
When a user searches for studies to include in a dataset, there should be some way to track what those search terms were and which results were included from them.
From a meeting with @62442katieb, one suggestion was to allow tracking of the search process that led to the inclusion/exclusion of specific studies.
Reviewers often want to know what the inclusion/exclusion criteria were for a meta-analysis.
This is low priority for now, but an idea for the future.
https://github.com/OpenAPITools/openapi-generator
We can generate a client library for the site from the specification so we don't have to roll our own. I tested using typescript-fetch on the specification in the submodule commit, and it's almost there. Errors generated were:
Errors:
- attribute components.schemas.analysis.items is missing
- attribute components.schemas.study.items is missing
We can generate the library using --skip-validate-spec. I haven't tested importing and using the generated code yet.
This task is to get more familiar with the API, and with React, via the OpenAPI generator: https://github.com/OpenAPITools/openapi-generator
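For reference, generation with validation skipped could look like this (input/output paths are placeholders, not the repo's actual file names):

```shell
# Generate a typescript-fetch client, skipping spec validation to work
# around the missing `items` attributes noted above.
npx @openapitools/openapi-generator-cli generate \
  -i neurostore-openapi.yml \
  -g typescript-fetch \
  -o ./generated-client \
  --skip-validate-spec
```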
total_count would include all clones as well as the parent study; calculate the unique count as well.
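As a sketch of the distinction (the field name source_id is illustrative, not necessarily the actual schema):

```python
# Count studies including clones (total) vs. deduplicated so that a clone
# and its parent count as one study (unique).
studies = [
    {"id": "s1", "source_id": None},  # parent study
    {"id": "s2", "source_id": "s1"},  # clone of s1
    {"id": "s3", "source_id": None},  # independent study
]

total_count = len(studies)
# A clone resolves to its parent's id; a parent resolves to its own id.
unique_count = len({s["source_id"] or s["id"] for s in studies})
```

Here total_count is 3 but unique_count is 2, since s2 is a clone of s1.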
Allow people to log in so they can edit studies and have those studies be linked to them.
conceptual issues:
implementation issues:
The MetaAnalysis model currently does not appear to reference the Dataset model, but in my mind MetaAnalysis should have an explicit relationship with Dataset, since the meta-analysis will perform operations on a dataset object. I'm probably missing something in my thought process.
neurostore/neurostore/models/analysis.py
Lines 8 to 24 in 3684138
Should data in MetaAnalysis have the same content as nimads_data in Dataset?
neurostore/neurostore/models/data.py
Lines 24 to 35 in 3684138
If nimads_data already contains the studies/analyses/points/images necessary for the meta-analysis, does this section contain redundant information?
images = association_proxy("metanalysis_images", "image")
points = association_proxy("metanalysis_points", "point")
image_weights = association_proxy("metanalysis_images", "weight")
point_weights = association_proxy("metanalysis_points", "weight")
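To make the proxies above concrete, here is a minimal runnable sketch of how association_proxy exposes attributes through an association object. The table and column names are simplified stand-ins, not the repo's actual models (I keep the snippet's metanalysis_points spelling for the relationship name):

```python
# Each MetaAnalysisPoint row links a MetaAnalysis to a Point and carries a
# weight; the proxies flatten that indirection into plain lists.
from sqlalchemy import Column, Float, ForeignKey, Integer, create_engine
from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Point(Base):
    __tablename__ = "points"
    id = Column(Integer, primary_key=True)

class MetaAnalysisPoint(Base):
    __tablename__ = "metaanalysis_points"
    metaanalysis_id = Column(Integer, ForeignKey("metaanalyses.id"), primary_key=True)
    point_id = Column(Integer, ForeignKey("points.id"), primary_key=True)
    weight = Column(Float)
    point = relationship("Point")

class MetaAnalysis(Base):
    __tablename__ = "metaanalyses"
    id = Column(Integer, primary_key=True)
    metanalysis_points = relationship("MetaAnalysisPoint")
    # Proxy through the association object, mirroring the snippet above.
    points = association_proxy("metanalysis_points", "point")
    point_weights = association_proxy("metanalysis_points", "weight")

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
with Session(engine) as session:
    ma = MetaAnalysis(metanalysis_points=[MetaAnalysisPoint(point=Point(), weight=0.5)])
    session.add(ma)
    session.commit()
    weights = list(ma.point_weights)  # [0.5]
    num_points = len(list(ma.points))  # 1
```

So ma.point_weights reads the weight column off each association row, while ma.points reads the related Point objects, without callers touching MetaAnalysisPoint directly.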
Related to #56, let's look into creating a fully fledged auth service.
https://supertokens.io/ looks promising
Looks like we're using flask-restful for setting up API routes.
The project is pretty dead and doesn't actually create any swagger docs.
In fact, the author of the project even suggests alternatives, as he himself uses other projects: flask-restful/flask-restful#883
I say we abandon ship now and get on something maintained before we get in too deep. Even just using Flask's regular MethodView would work.
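A quick sketch of what routes could look like with Flask's built-in MethodView instead of flask-restful (route and payload shapes here are illustrative, not the actual neurostore API):

```python
# MethodView dispatches by HTTP verb, much like flask-restful's Resource,
# but ships with Flask itself and needs no extra dependency.
from flask import Flask, jsonify
from flask.views import MethodView

app = Flask(__name__)

class StudiesView(MethodView):
    def get(self, study_id=None):
        if study_id is None:
            return jsonify([{"id": 1, "name": "example study"}])
        return jsonify({"id": study_id})

view = StudiesView.as_view("studies")
app.add_url_rule("/api/studies/", defaults={"study_id": None},
                 view_func=view, methods=["GET"])
app.add_url_rule("/api/studies/<int:study_id>", view_func=view, methods=["GET"])
```

The two add_url_rule calls share one view, so list and detail endpoints live in a single class, which maps fairly directly onto the existing Resource-based layout.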
This will make it easier to have a dropdown menu of clones for a particular parent study
How should the API endpoints on neurostore/neurosynth be organized?
Option 1:
Option 2:
Option 3:
Option 4:
Under this division, Store and Synth would keep separate User tables with joint identities.
Users would go to /auth and get a JWT which both Store and Synth can authenticate.
We can look into making /auth a proper authentication service.
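To illustrate the shared-JWT idea with a pure-stdlib sketch (a real deployment would use a vetted library such as PyJWT, and the secret below is obviously illustrative): /auth signs the token once, and any service holding the shared secret can verify it without a round-trip to /auth.

```python
# Minimal HS256-style signed token: header.payload.signature, each part
# base64url-encoded, with an HMAC-SHA256 signature over the first two.
import base64
import hashlib
import hmac
import json

SECRET = b"shared-between-auth-store-and-synth"  # illustrative only

def _b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_token(payload: dict) -> str:
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64(json.dumps(payload).encode())
    sig = _b64(hmac.new(SECRET, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_token(token: str) -> bool:
    header, body, sig = token.split(".")
    expected = _b64(hmac.new(SECRET, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)
```

The point is architectural: verification is local to each service, so Store and Synth never need to share session state, only the signing key (or, with an asymmetric algorithm, just the auth service's public key).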
change nginx to neurostore_nginx
change pgsql to neurostore_pgsql
This prevents conflicts between all services running concurrently with nginx-proxy.
Should be able to create conditions.
One could tag analyses with certain terms to make them more easily searchable. Tags could also be useful for collections; for example, all the studies automatically ingested from Neurosynth could be tagged with neurosynth.
Since this is a private repo, we'd have to pay, so we won't use it for now, but the yml file is ready.
change /login & /register to /auth
I'm looking into using webargs (based on Marshmallow) for validation, in a way that works with the current abstraction hierarchy.
https://neurostore.org/api/studies/?nested=true should display the analyses as embedded (does not currently).
This should help with adoption, if we focus on having many real-world examples
A minor trip-up for me when trying to run tests was not knowing to look for the password here in the instructions, in order to create a test_db and run the tests.
I followed up to:
docker-compose build
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d
in initializing the backend, but when I visited http://localhost/ I got "webpage not found",
and when I specified port 8000 explicitly, the connection was refused.
I'm trying to use WSL2 on Windows, so something there may be the source of my issues, but I wanted to confirm you can access the webpage on your machine before I go down the WSL2-Windows rabbit hole.
Docker version 19.03.12, build 48a66213fe
In Docker and (later) Travis. No need to worry about other users since it's mostly just us.
Use only SSO from various providers via Flask-Dance.
To communicate between services we have a few options:
For now, let's commit to not having a neurostuff OAuth
Using stoplight.io (our workspace) to create a manual specification of API endpoints.
but link the parent from the clone.
Once you have a first pass of what studies you want to have in your dataset (through search terms?), then you may further filter studies with explicit inclusion/exclusion criteria.
listing all studies does not give you a straightforward answer on the number of studies that are available.
The goal is to generate a typescript api library that works! ref #41
Would a dataset endpoint essentially be a list of studies like /api/studies, but for a specific subset?
What would a simple example of NIMADS data look like?
Something like
{
"name": "my dataset",
"description": "my collection of studies about ice cream",
"publication": null,
"doi": null,
"pmid": null,
"nimads_data": {
"dataset": [
{
"analysis": [
{
"condition": [],
"created_at": "2021-04-13T04:26:25.223113+00:00",
"description": null,
"id": "http://neurostore.org/api/analyses/5xqi2gdvTQbK",
"image": [],
"name": "20356",
"point": [
{
"analysis": "http://neurostore.org/api/analyses/5xqi2gdvTQbK",
"coordinates": [
63.0,
65.0,
1.0
],
"created_at": "2021-04-13T04:26:25.223113+00:00",
"id": "http://neurostore.org/api/points/SwFJeP9xTQnS",
"image": null,
"kind": "unknown",
"label_id": null,
"space": "UNKNOWN",
"value": []
}
],
"study": "http://neurostore.org/api/studies/3vnm4gctCpAH",
"weight": []
}
],
"description": null,
"doi": "10.1016/S0926-6410(97)00020-7",
"id": "http://neurostore.org/api/studies/3vnm4gctCpAH",
"metadata": null,
"name": "Functional magnetic resonance imaging of category-specific cortical activation: evidence for semantic maps.",
"pmid": null,
"publication": null
}
]
},
"user_id": 123,
"user": "james"
}
questions:
- Should nimads_data be in JSON-LD form?
- Would api/studies/<some-study>?nested=true be a good approximation of /api/studies for now?
related to #32
To make an analysis reproducible, you need information about the statistical model, the (MRI) data, and the software used to execute the model on the data. This information is kept in a bundle (represented in JSON), which is proposed to be kept on NeuroStore as additional information attached to a particular analysis... TODO additional explanation
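A hypothetical shape for such a bundle — every field name here is a guess for illustration, not a settled schema:

```json
{
  "model": {
    "type": "glm",
    "contrasts": ["faces > places"]
  },
  "data": {
    "study": "http://neurostore.org/api/studies/3vnm4gctCpAH"
  },
  "software": {
    "name": "some-analysis-package",
    "version": "x.y.z"
  }
}
```

The three top-level keys mirror the three ingredients named above: model, data, and software.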
So we can run queries for the study editor using neurostore.org, but build analyses and view results on neurosynth.org.