prereview / rapid-prereview

An application for rapid, structured reviews of outbreak-related preprints

Home Page: https://outbreaksci.prereview.org/

License: MIT License


rapid-prereview's Introduction

OSrPRE logo

Welcome to Outbreak Science Rapid PREreview!

What is it?

Outbreak Science Rapid PREreview is an application for rapid, structured reviews of outbreak-related preprints. The platform allows any researcher with an ORCID iD to provide a quick high-level evaluation of preprints via a series of questions to assess the originality and soundness of the research findings. Aggregated data from these reviews is visualized to allow readers to identify the most relevant information. This tool has the capacity to be transformative for on-the-ground health workers, researchers, public health agencies, and the public, as it can quickly unlock key scientific information during an outbreak of infectious diseases.

Our team

Outbreak Science Rapid PREreview is a project born from the collaboration of PREreview and Outbreak Science.

PREreview is an open project fiscally sponsored by the non-profit organization Code for Science & Society. PREreview's mission is to increase diversity in the scholarly peer review process by empowering all researchers to engage with preprint reviews.

Outbreak Science is a non-profit organization aimed at advancing the science of outbreak response, in particular by supporting early and open dissemination of data, code, and analytical results.

Funding

This collaborative project is mainly funded by the Wellcome Trust Open Research Fund, but has also received support from the Mozilla Foundation.

Information about this repository

CI: CircleCI · code style: prettier

Outbreak Science Rapid PREreview focuses on providing the best infrastructure to request / provide / analyze feedback (structured reviews) on existing preprints relevant to the outbreak community.

The feedback should be of use to:

  1. the outbreak community (academics)
  2. workers, editors, journalists (visualization etc.)

Outbreak Science Rapid PREreview does not focus on:

  • coordinating research effort / data analysis / calling for research during emergency situations
  • becoming / being a preprint server

Join PREreview Slack Channel

Development

Getting started

Required software

  1. git is used for versioning in this project.

  2. Docker is used to manage services for local development.

This repo also contains configuration files for Visual Studio Code's Remote Containers which reduces the need to manually execute Docker commands; see the Visual Studio Code manual for more information about how to use these.

Creating the environment

  1. docker-compose -f .devcontainer/docker-compose.yml up --build

This command will keep running in the shell to display log output from all services; you can stop the server by typing Control+C.

Running commands in the container

  1. docker-compose -f .devcontainer/docker-compose.yml exec web bash

The source folder will appear in the container as /workspace; change to that directory before running any npm commands. You can edit these files with your preferred editor and the container will stay updated.

Viewing logs

  1. docker-compose -f .devcontainer/docker-compose.yml logs

You can optionally name a service whose logs you want to view; the default is to show logs for all services. Service names are defined in docker-compose.yml and include 'web', 'cache', 'db'.

You should have everything needed to follow the rest of this README.

Dependencies

At the root of this repository run:

npm install

Troubleshooting

If you are having permission issues with npm, check out https://docs.npmjs.com/resolving-eacces-permissions-errors-when-installing-packages-globally

App (web server)

Please note the section above labelled 'Running commands in the container.'

npm run init

to set up the databases.

To seed the app with some data run:

npm run seed

After, run:

npm start

and visit http://127.0.0.1:3000/

If you want to start from an empty state (or reset the DB to an empty state) you can run:

npm run reset

Web extension

Development

To work (or test / demo) the extension you can:

  1. start a dev server (run npm start)
  2. follow the instructions below depending on whether you want to work with Chrome or Firefox.
Chrome
  1. Run npm run extension:watch, which will build and watch the extension in the extension directory. Do not edit the files there or track them in git, with the exception of manifest.json, fonts/, icons/ and popup.html.
  2. Navigate to chrome://extensions/, be sure to toggle "developer mode", click "Load unpacked", and select the extension directory.
Firefox
  1. Run npm run extension:watch-firefox, which will build and watch the extension in the extension directory. Do not edit the files there or track them in git, with the exception of manifest.json, fonts/, icons/ and popup.html.
  2. Navigate to about:debugging, and click on "Load Temporary Add-on" and select the extension/manifest.json file.
Troubleshooting

Never run npm run extension:watch and npm run extension:watch-firefox at the same time as they will overwrite each other. If you did:

  1. kill all the node processes (Ctrl+C in each shell)
  2. run killall node to be sure you no longer have node processes running
  3. restart the web server (npm start) and one of the extension watchers (npm run extension:watch OR npm run extension:watch-firefox)

Production (publish to web stores)

Chrome
  1. Run npm install
  2. Set the version property of the extension/manifest.json file
  3. Run npm run extension:build
  4. Run npm run extension:pack
  5. Upload the created extension.zip file to the Chrome web store
Firefox
  1. Run npm install
  2. Set the version property of the extension/manifest.json file
  3. Run npm run extension:build-firefox
  4. Run npm run extension:pack-firefox
  5. Upload the created extension-firefox.zip file to the Firefox web store

Note: to include the unbundled source code of the extension (required by Mozilla Add-ons) run npm run extension:pack-src and include the following text when you upload the generated extension-src.zip:

The extension is built with webpack (config is webpack-extension.config.js). See more details on the README.md file. The source code is also available on GitHub: https://github.com/prereview/rapid-prereview/

Demoing the platform

This assumes that you have followed the instructions in the rest of this README.

First time

Suggested steps:

  1. Start the local services using docker-compose -f .devcontainer/docker-compose.yml up.
  2. In a shell attached to the 'web' container:
    • run npm run seed to seed the database with sample data, or npm run reset to start from a clean state
    • run npm start to start the web server
  3. In another terminal, run npm run extension:watch and update the extension in your browser (see section above for instructions)
  4. You can now visit http://127.0.0.1:3000/ and give a demo

When you are done with the demo you can use docker-compose -f .devcontainer/docker-compose.yml down to shut down the services.

Updating your local install
  1. cd into this repository
  2. run git fetch followed by git merge origin/master
  3. Connect a shell to the web container and run npm install
  4. Follow the First time instructions (see above)

Storybook (components playground)

If you want to work on components in isolation, run:

npm run storybook

and visit http://127.0.0.1:3030/.

To add stories, add a file that ends with .stories.js in the ./src/components directory.

Tests

Once Cloudant and Redis are running, run:

npm test

Usage stats

Several CouchDB views give access to usage statistics. For instance, logging in to Cloudant and visiting /rapid-prereview-docs/_design/ddoc-docs/_view/byType?group_level=1 will report a breakdown of the counts per type.
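A map/reduce view like byType essentially counts documents grouped by their JSON-LD @type. A plain-JavaScript sketch of the same aggregation (the document shapes are taken from the data model later in this page; the helper itself is illustrative, not project code):

```javascript
// Sketch of the aggregation the byType view reports: count documents
// grouped by their JSON-LD @type.
function countByType(docs) {
  const counts = {};
  for (const doc of docs) {
    const type = doc['@type'] || 'unknown';
    counts[type] = (counts[type] || 0) + 1;
  }
  return counts;
}
```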

Deployments

We use Azure and IBM Cloudant.

Architecture

We use Azure App Service and run 2 apps:

  • rapid-prereview for the web server
  • rapid-prereview-service for a process that takes care of:
    • maintaining our search index (listening to CouchDB changes feed)
    • updating the "trending" score of preprints with reviews or requests for reviews at a regular interval

These 2 apps are run on the same service plan (rapid-prereview-service-plan).

The databases are hosted on IBM Cloudant (CouchDB); there are 3 of them:

  • rapid-prereview-docs (public) storing the roles, reviews and requests for reviews
  • rapid-prereview-users (private) storing all the user data (and the links user <-> role)
  • rapid-prereview-index (private) storing the search index of preprints with reviews or requests for reviews

We use Azure Cache for Redis to store:

  • session data
  • cached data for the payload of the public API.

We use Sendgrid (from the Azure marketplace) for emails.

Process

Cloudant

Be aware that all the following commands source the production environment variables (see the Azure section below for how to get them).

  1. Run npm run cloudant:init to create the databases and push the design documents
  2. Run npm run cloudant:set-security to secure the databases
  3. Run npm run cloudant:get-security to verify the security object

To update the design documents run: npm run cloudant:ddocs.

To seed the production database (for demos only) run: npm run cloudant:seed (!! note that this performs a hard reset and deletes all data in the databases before seeding).

To reset the production database (for demos only) run: npm run cloudant:reset (!! note that this performs a hard reset and deletes all data in the databases).

Azure

Visit https://portal.azure.com/. All the resources we use are defined in the rapid-prereview resource group.

  1. Install Azure CLI (see https://docs.microsoft.com/en-us/cli/azure/?view=azure-cli-latest)
  2. Run az login to login to the CLI
  3. Get the private files not checked in to GitHub: ./get-private.sh (if you later update those files, run ./put-private.sh to upload them back)
  4. Run npm run build (after having run npm install)
  5. Run ./deploy-app.sh to deploy the app and ./deploy-service.sh to deploy the service

To see the logs, run ./log-app.sh or ./log-service.sh. We use pino for logging.

Apps can be restarted with ./restart-app.sh and ./restart-service.sh.

To reset the redis cache run: npm run azure:reset-cache. Be aware that this will source the production environment variables.

To reset all redis data (including sessions) run: npm run azure:reset-redis. Be aware that this will source the production environment variables.

Some basic info about the service health can be found at https://rapid-prereview-service.azurewebsites.net/

Backups

Backups are stored in a blob storage container on Azure.

  1. Install Azure CLI (see https://docs.microsoft.com/en-us/cli/azure/?view=azure-cli-latest)
  2. Run az login to login to the CLI
  3. Run npm run backup. Be aware that this will source the production environment variables.

rapid-prereview's People

Contributors

chaituvr, dasaderi, dwins, georgiamoon, halmos, harumhelmy, jheretic, leonardosfl, majohansson, rudietuesdays, sballesteros, willianveiga


rapid-prereview's Issues

User notifications (email)

Updated per discussion with @dasaderi and @SamanthaHindle on slack

User notifications (via email)

  • when a user review has been moderated (note that this is triggered only after moderators validated user reports and not each time a user reports a potential issue).
  • when a user has been blocked (lose write access to the platform) by a moderator.
  • when a review is added to a preprint for which the user had requested feedback (via a request for reviews).

Implementation

Backend

  • Get a default email address from ORCID (see https://members.orcid.org/api/tutorial/read-orcid-records )
  • Allow user to update that email (or set it in case no email could be found in the public ORCID record)
  • Add logic to verify the email address (only for emails not imported from ORCID). Notifications will only be sent to emails that are verified. Verification will be done by sending an email with a "magic" link to the user. If the user receives the email and clicks on the link the email address will be considered verified.
  • Add ability to set notification preferences (see frontend section below)
  • Send the emails with sendgrid on Azure. Emails will be sent as plain text only (so unstyled).

Frontend

  • UI to display and update the email address (part of the settings). Note that the email address is never displayed publicly and is only used for notification purposes.
  • UI to keep track of the verification status of an email
  • UI for the notification preference setting (YES / NO) controlling whether notifications should be sent when new reviews are added to a preprint for which the user requested feedback

prefix all CSS classes and ID or use a global reset

We need to protect our content script styles from the page's own CSS.
We can either add a prefix to all our classes (probably the simplest solution) or try a global reset, betting on the fact that the extension CSS is injected last (to be verified across browsers).
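The prefixing option can be as simple as routing every class name through one helper. A minimal sketch, where the rpos prefix and the helper name are illustrative, not the project's actual choices:

```javascript
// Hypothetical helper: components build class names through this so every
// generated class carries a project-specific prefix, insulating the
// content script's styles from the host page's CSS.
const PREFIX = 'rpos'; // illustrative prefix

function cls(...names) {
  return names
    .filter(Boolean) // allow conditional classes like cls('shell', isActive && 'active')
    .map(name => `${PREFIX}-${name}`)
    .join(' ');
}
```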

Modify Rapid CoC prior to launch

@dasaderi to modify here in a new coc branch and label changes with "TODO @sballesteros".

  • Specify that content will be moderated after it's posted. Community members will be able to flag potential violations and admins will evaluate and decide if the content needs to be removed and/or the user needs to be blocked.

  • Specify that the site is for outbreak-related content only.

favicons

Probably the same icon as the extension popup icon?

Remaining items to go to production

  • switch to production domain (outbreaksci.prereview.org)
  • switch to official ORCID API instead of our development mock
  • open source the repo (and double check license)
  • port extension to Firefox
  • publish chrome extension to chrome web store
  • publish firefox extension to firefox web store
  • update email addresses to latest
  • update to latest design for trending score display
  • delete the current seed (with fake content / users) and replace with a new one without fake reviews (or start blank or with just request for reviews submitted by us ?)
  • finish admin and moderation features
  • update Code of Conduct to reflect latest moderation proposal (the community helps to flag content)
  • update facet panel and questions to latest ?
  • test on chrome
  • test on mobile chrome
  • test on Firefox
  • test on mobile Firefox
  • test on Safari
  • test on mobile Safari
  • test on Edge

Best starting point(s) to engage users to start writing / reading prereviews in the context of the Rapid PREreview project

Context

We want to assess the best starting point(s) to engage users to start writing / reading prereviews in the context of the Rapid PREreview project (see #4 ).

We currently have 3 (non-mutually-exclusive) approaches in mind:

  1. "Homepage approach": User starts from the PREreview preprint discovery page or a dedicated page of the PREreview and/or Outbreak Science websites (for instance a page allowing the user to select the article to review via a URL, citation, or DOI).
  2. "Extension approach": User starts from a browser extension that the user can download from the PREreview website and/or Outbreak Science website (or browser extension stores).
  3. "Disqus approach": User starts directly from a preprint service website

User behavior

To get started we would like to focus on a high level discussion centered on how users currently:

  1. Discover new preprints (e.g., preprint server homepage? Google Scholar? Twitter?)
  2. Share (and refer to) discovered preprints (e.g., share a link to the PDF? share the preprint's "canonical" homepage / URL / DOI?)
  3. Discover comments and/or reviews made about preprints (e.g., bioRxiv uses Disqus)
  4. Share (and refer to) comments and/or reviews made about preprints (e.g., email notifications from users followed on Disqus? comment / review URL or DOI?)

And if these behaviors are expected to be different in the Rapid PREreview context.

User acquisition effort

We would also like to discuss the user acquisition effort associated with each approach:

  • effort needed to get the user to adopt a new homepage
  • effort needed to get the user to install a web extension
  • effort needed to get preprint services to integrate a Rapid PREreview script (Disqus approach)

Appendix

To provide some common vocabulary for the discussion we will consider several UI patterns:

  • "Shell"
  • "Extension popup icon"
  • "Right panel overlay" (Disqus-style)
  • "System notification"

data for chrome and firefox webstores

(Two screenshots of the web store listing pages, 2019-12-01.)

  • provide info to chrome and firefox stores and resubmit the application (@sballesteros)

Reviewer comments:

Chrome

Your item did not comply with the following section of our Program Policies:
"Spam and Placement in the Store"
Item has a blank description field, or missing icons or screenshots, and appears to be suspicious.

Firefox

Jack Thompson wrote:
Please expand this add-on listing. Describe the purpose and features/interface changes/actions of this add-on in more detail and/or have images demonstrating the features. For suggestions on how to create a better listing see https://developer.mozilla.org/Add-ons/Listing .
Thank you.

Probabilistic facets

Let's consider a preprint with 10 Rapid PREreviews for the question:
"is the data available?" (possible answers being yes, no, and not applicable (n.a.)) - see #6 for the current questions.

In the faceted search we want to be able to filter by preprints with data available but we may not have a definitive answer. We could have the following answers:

  • 7 yes
  • 2 no
  • 1 n.a

Our job is to best represent that and provide the best search interface. For the search interface maybe our facet should be (better terms to be found):

  • likely (> 60% yes)
  • unlikely (< 40% yes)
  • unclear (in between 40% and 60% yes)
    (we could be as granular as required with the percentages and add more levels: e.g., very likely, etc.)

Then our query becomes:
"find all the preprints likely to have the needed data available".

The unclear facet is interesting as it can provide an incentive for reviewers to add reviews to remove uncertainties (or check if the divide is justified).
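A minimal sketch of the bucketing, assuming the percentage is taken over all answers (whether n.a. counts toward the denominator is an open design choice; the function name is illustrative):

```javascript
// Map raw answer counts to a facet bucket using the thresholds above.
// Counting n.a. answers in the denominator is an assumption.
function availabilityFacet({ yes, no, na }) {
  const total = yes + no + na;
  if (total === 0) return 'unclear';
  const ratio = yes / total;
  if (ratio > 0.6) return 'likely';
  if (ratio < 0.4) return 'unlikely';
  return 'unclear';
}
```

With the example above (7 yes, 2 no, 1 n.a.) the ratio is 0.7, so the preprint lands in the likely bucket.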

Data model, API and choice of database

Data model

Everything is in JSON-LD.

Profiles (internal, for operational purposes)

const user = {
  '@context': 'https://rapid.prereview.org',
  '@id': 'user:orcid', // the ORCID of the user
  '@type': 'Person',
  name: 'Romain Gary', // from ORCID (ORCID is source of truth)
  givenName: 'Romain', // from ORCID (ORCID is source of truth)
  familyName: 'Gary', // from ORCID (ORCID is source of truth)
  // etc.

  // The roles (persona). For roles, Rapid PREreview is the source of truth
  // A user can have / manage different persona (roles)
  // Roles can be public or anonynous. Anonymous roles can be
  // made public at any time
  hasRole: [
    // Public role (linked to the ORCID)
    {
      '@id': 'role:romain-gary', // visible display name (user selected and globally unique within Rapid PREreview)
      '@type': 'PublicReviewerRole',
      startDate: '2018-01-01T00:00:00.069Z',
      gravatar: {
        '@type': 'ImageObject',
        contentUrl: 'http://example.com/romain-gary.jpg' // probably base64 encoded to avoid need for blob store
      }
    },
    // Anonymous role (not linked to ORCID or public role)
    {
      '@id': 'role:shatan-bogat', // visible display name (user selected and globally unique within Rapid PREreview)
      '@type': 'AnonymousReviewerRole',
      startDate: '2019-10-02T08:14:50.069Z',
      gravatar: {
        '@type': 'ImageObject',
        contentUrl: 'http://example.com/shatan-bogat.jpg'
      }
    },
    // Anonymous role made public (ended)
    {
      '@id': 'role:emile-ajar', // visible display name (user selected and globally unique within Rapid PREreview)
      '@type': 'AnonymousReviewerRole',
      startDate: '2018-01-01T00:00:00.069Z',
      endDate: '2019-10-02T08:14:50.069Z',
      gravatar: {
        '@type': 'ImageObject',
        contentUrl: 'http://example.com/emile-ajar.jpg'
      }
    }
  ]
};

Note: after discussion with @dasaderi and @majohansson on slack, for now we will follow the approach of PREreview where there is only 1 persona (by default private / anonymous) and the user can only make that persona public later on. Once the persona is public there is no way back to private.

const registerAction = {
  '@context': 'https://rapid.prereview.org',
  '@type': 'RegisterAction',
  agent: 'user:orcid'
};
const createRoleAction = {
  '@context': 'https://rapid.prereview.org',
  '@type': 'CreateRoleAction',
  agent: 'user:orcid',
  object: 'user:orcid',
  payload: {
    '@id': 'role:emile-ajar',
    '@type': 'AnonymousReviewerRole'
  }
};
const updateRoleAction = {
  '@context': 'https://rapid.prereview.org',
  '@type': 'UpdateRoleAction',
  agent: 'user:orcid',
  object: 'role:roleId',
  payload: {
    gravatar: {
      '@type': 'ImageObject',
      contentUrl: 'http://example.com/emile-ajar.jpg'
    }
  }
};
const deanonymizeRoleAction = {
  '@context': 'https://rapid.prereview.org',
  '@type': 'DeanonymizeRoleAction',
  agent: 'user:orcid',
  object: 'role:roleId'
};

Reviews and Request for reviews (public and replicated)

const reviewAction = {
  '@id': 'action:reviewActionId',
  '@type': 'RapidPREreviewAction',
  agent: 'role:roleId', // can be an anonymous role
  actionStatus: 'CompletedActionStatus',
  startTime: '2019-10-02T08:14:50.069Z',
  endTime: '2019-10-02T08:14:50.069Z',
  object: 'preprint:doi',
  resultReview: {
    '@id': 'review:reviewId',
    '@type': 'RapidPREreview',
    dateCreated: '2019-10-02T08:14:50.069Z',
    about: {
      '@type': 'OutbreakScienceEntity',
      name: 'zika'
    },
    reviewAnswer: [
      {
        '@type': 'YesNoAnswer',
        parentItem: {
          '@type': 'YesNoQuestion',
          text: 'Is this a good data model ?',
          suggestedAnswer: [
            {
              '@type': 'Answer',
              text: 'Yes'
            },
            {
              '@type': 'Answer',
              text: 'No'
            },
            {
              '@type': 'Answer',
              text: 'Not applicable'
            },
            {
              '@type': 'Answer',
              text: "I don't know"
            }
          ]
        },
        text: 'Yes'
      },
      {
        '@type': 'Answer',
        parentItem: {
          '@type': 'Question',
          text: 'A question asking for a free text answer'
        },
        text: {
          '@type': 'rdf:HTML',
          '@value': '<p>hello HTML</p>'
        }
      }
    ]
  }
}
const requestReviewAction = {
  '@context': 'https://rapid.prereview.org',
  '@id': 'action:requestReviewActionId',
  '@type': 'RequestForRapidPREreviewAction',
  agent: 'role:roleId', // can be an anonymous role
  actionStatus: 'CompletedActionStatus',
  startTime: '2019-10-02T08:14:50.069Z',
  endTime: '2019-10-02T08:14:50.069Z',
  object: 'preprint:doi'
};
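The aggregated visualizations need answer counts per question. A sketch of how answers could be tallied across action documents shaped like the examples above (the helper name and the handling of HTML free-text answers are assumptions):

```javascript
// Tally answers to one question across RapidPREreviewAction documents.
// HTML free-text answers can't be counted, so they go in a catch-all bucket.
function tallyAnswers(actions, questionText) {
  const counts = {};
  for (const action of actions) {
    const answers = (action.resultReview && action.resultReview.reviewAnswer) || [];
    for (const answer of answers) {
      if (answer.parentItem && answer.parentItem.text === questionText) {
        const value = typeof answer.text === 'string' ? answer.text : '[free text]';
        counts[value] = (counts[value] || 0) + 1;
      }
    }
  }
  return counts;
}
```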

Preprint (internal, for indexing purpose)

const preprint = {
  '@context': 'https://rapid.prereview.org',
  '@id': 'preprint:doi',
  '@type': 'ScholarlyPreprint',
  url: 'https://example.com/homepage',
  identifier: 'doi',
  doi: 'doi',
  name: 'Preprint title',
  preprintServer: {
    '@type': 'PreprintServer',
    name: 'bioRxiv',
    url: 'https://www.biorxiv.org/'
  },
  encoding: [
    {
      '@type': 'MediaObject',
      encodingFormat: 'application/pdf',
      contentUrl: 'http://example.com/preprint.pdf'
    },
    {
      '@type': 'MediaObject',
      encodingFormat: 'text/html',
      contentUrl: 'http://example.com/preprint.html'
    }
  ],
  datePosted: '2019-10-02T08:14:50.069Z',
  dateScoreLastUpdated: '2019-10-02T08:14:50.069Z',
  score: 21, // time-sensitive quantity, so it needs frequent updates (we only keep updating scores greater than a threshold, as score -> 0 as time flows)
  potentialAction: ['action:reviewActionId', 'action:requestReviewActionId']
};

API

POST /action

GET /resolve?id=doi

Used to return metadata about a preprint see #9

GET /action/:actionId

GET /action?q=search

GET /preprint/:preprintId

GET /preprint?q=search

GET /user/:userId

GET /question/:questionId

GET /role/:roleId

Display stats about a role

GET /feed

Changes feed is only for the public data (reviews and request for reviews)

Database

Main goal is to provide a super fast and smooth faceted search experience at the lowest possible operational complexity (and cost).

Candidates:

[extension] scope the reach ui attribute selectors

We overwrite some of them, which could clash if the host page where the content script is injected also uses @reach/ui.
We can use normal classes instead of the attribute selectors.
We would need to exclude those classes from the postcss-plugin-namespace plugin, as some Reach components are rendered in portals that are typically appended to the body, outside the realm of our scoping.

Getting preprint metadata from unique identifier (DOI or OAI)

Context

arXiv preprints (and maybe others) do not have DOIs but instead work with the Open Archives Initiative (see http://www.openarchives.org/).
We need a way to grab the metadata on the fly from a unique identifier entered by the user.
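Distinguishing the two identifier families on the fly might look like this. The regexes below are a sketch covering standard DOIs and post-2007 arXiv IDs; old-style arXiv IDs (e.g. q-bio/0601001) and other edge cases are deliberately ignored:

```javascript
// Rough classifier for user-entered identifiers. DOIs start with a
// "10.<registrant>/" prefix; modern arXiv IDs look like 1910.00274(vN),
// optionally prefixed with "arXiv:".
function identifierType(id) {
  const trimmed = id.trim();
  if (/^10\.\d{4,9}\/\S+$/.test(trimmed)) return 'doi';
  if (/^(arXiv:)?\d{4}\.\d{4,5}(v\d+)?$/i.test(trimmed)) return 'arxiv';
  return null;
}
```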

Getting the metadata

DOIs

We query:

arXiv ID:

See:

Others

Are there other cases that we should consider ? if so what are they?

extension popup

From discussion with @halmos on slack:

[Section when user is on a Reviewable Page]
- X Reviews / X Requests
- Add Review (clicking maximizes the shell + toggles the right tab)
- Add Request (clicking maximizes the shell + toggles the right tab)
----
[Preprints by score section]
- title + number of review / request
(+ pagination to view beyond)
----
[PREreview Global Section]
- PreReview Home
- My Account
- Login/Logout (optional based on auth stuff)

Delete a rapid review (not a CoC violation)

Likely we will have to deal with users who submitted a review by mistake. We as admins need a way to delete content that is not labeled as a CoC violation.

@sballesteros does this require development separate from what we already have?

extension popup icons

(Screenshot of the current toolbar icon, showing a placeholder "R".)

We need 2 icons to replace the R from the screenshot above:

  • one in the default state (the extension is not active) <- when the user navigates pages irrelevant to Rapid.
  • one in the active state <- when the user is on a page that can be reviewed.

In the active state the icon will be complemented by a badge with a count (number of reviews + requests) using the setBadgeText and setBadgeBackgroundColor methods.

See https://developer.chrome.com/extensions/browserAction setIcon section for the details on format / resolution etc.
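setBadgeText takes a string, so the count needs formatting first. A small helper sketch; the 99+ cap is an assumption, chosen because badge space is limited to a few characters:

```javascript
// Format the review + request count for the toolbar badge.
// An empty string clears the badge; capping at 99+ is an assumption.
function badgeText(nReviews, nRequests) {
  const total = nReviews + nRequests;
  if (total === 0) return '';
  return total > 99 ? '99+' : String(total);
}

// Intended use (browser-only, shown as a comment):
// chrome.browserAction.setBadgeText({ text: badgeText(reviews, requests) });
```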

What main "service" does Rapid PREreview provide

!! work in progress (draft)

Rapid PREreview focuses on providing the best infrastructure to request / provide / analyze feedback (structured reviews) on existing preprints relevant to the outbreak community.

The feedback should be of use to:

  1. the outbreak community (academics)
  2. workers, editors, journalists (visualization etc.)

Rapid PREreview does not focus on:

  • coordinating research effort / data analysis / calling for research during emergency situations
  • becoming / being a preprint server

See also:

structured review templates ownership

!! work in progress (draft)

Who can create / update structured review templates ?

  • only outbreak science / Rapid PREreview
  • any users
  • special users

How many structured review templates do we expect ?

Rapid PREreview form

Update: a first round of mockups is available in the designs directory (be sure to download the PDF to view all 3 screens, as the GitHub preview doesn't always seem to render the first one).

Structured fields

| type | required | used in faceted search | label & options |
| --- | --- | --- | --- |
| keywords | no | no | keywords |
| yes/no/n.a/unsure | yes | yes | Do the findings support the conclusion |
| yes/no/unsure | yes | yes | Would I recommend this |
| multi yes/no/n.a/unsure | yes | yes | Relevant to: policy, clinic, ... |
| yes/no/n.a/unsure | yes | yes | Novelty of the findings |
| yes/no/n.a/unsure | yes | yes | Is there a basis for future work |
| multi yes/no/n.a/unsure | yes | yes | Needs specialized attention: stats, methods, model, ethics, data quality |
| multi yes/no/n.a/unsure | yes | yes | Availability: data, code, not applicable |
| yes/no/n.a/unsure | yes | no | Does the paper discuss limitations |

Free form fields

| type | required | label |
| --- | --- | --- |
| textarea (~200 words) | no | Technical comment on methods, data, limitations |
| textarea (~200 words) | no | Editorial comment on novelty, importance, relevance |

Sources:

MVP

Update: a first round of mockups is available in the designs directory (be sure to download the PDF to view all 3 screens, as the GitHub preview doesn't always seem to render the first one).

Note: in what follows all the authentication / user account / profile creation & display is done by PREreview.

In terms of "release strategy" the main idea would be to release this work under a rapid.prereview.org subdomain behind an experimental/beta flag to get user feedback ASAP, iterate on that feedback, and then integrate the good parts into PREreview.

Main page

The goal of the main page is to establish Rapid PREreview as the place where the outbreak science community comes to request, get and provide rapid feedback on their preprints.

The page is centered around:

  • a searchable list of preprints for which users have provided or expressed a desire to get feedback through Rapid PREreviews
  • clear call to actions to:
    • create a new Rapid PREreview
    • get feedback on existing preprint content (request for Rapid PREreviews)
    • install the web extension to do the same thing even more easily (see section further down)

List

To start as simple as possible, the first implementation of the list will sort items by date of last Rapid PREreview created.

Later we could refine that with a score better suited to provide visibility to preprints with a high demand for feedback. Tentative definition:

score = (v+r) / (t+1)^g

where,
v = number of votes of an item
r = number of Rapid PREreviews of an item
t = time since request submission or first Rapid PREreview (unit (hours, days, weeks) to be determined)
g = tuning factor (to be determined)
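The tentative score translates directly into code. The default value of g and the hour-based time unit below are placeholders, since both are explicitly "to be determined":

```javascript
// score = (v + r) / (t + 1)^g
// v: votes, r: Rapid PREreviews, t: time since request or first review,
// g: tuning factor. The default g is a placeholder value.
function score(v, r, t, g = 1.5) {
  return (v + r) / Math.pow(t + 1, g);
}
```

A fresh item (t = 0) keeps its raw v + r score; as t grows the denominator dominates and the score decays toward 0.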

Displayed data / controls for each item:

  • preprint title
  • preprint server
  • preprint DOI
  • number of Rapid PREreviews (and/or reviewer name (or anonymous alias) with link to their PREreview profile)
  • number of upvotes (request for Rapid PREreviews)
  • visualization of the aggregated data collected from the structured reviews
  • date of first & last review made
  • call to action to add a Rapid PREreview
  • call to action to request a Rapid PREreview or express the desire to see more reviews (upvote)

Search options:

  • Facets (filter by):
    • preprints with requests for Rapid PREreviews (votes)
    • preprints with Rapid PREreviews
    • Preprint server
    • Dates of last posted review (last week, last month, etc.)
    • Structured data collected by the Rapid PREreview creation form (see #6 for the list of facets)
  • Full text search:
    • Author name / username / anonymous alias of the Rapid PREreview creator
    • Rapid PREreview textual content (if any)
    • preprint title
  • Other indexes
    • preprint DOI
    • preprint URL

When users perform a search, the end of the list reiterates the calls to action to start a new Rapid PREreview or to request one, in case the desired content wasn't found.

Users opting to view details about an item are taken to the Rapid PREreview display page (see section further down).

Note: this list (and the associated search indexes) is generated only from the data collected during the Rapid PREreview creation process, to minimize complexity and avoid having to maintain a sync engine / index of the content of all the different preprint servers. However, PREreview already provides and maintains such an index, so we could merge with PREreview at a later stage. Conversely, if maintaining that index proves difficult, PREreview could adopt the simpler approach described here.

Create new Rapid PREreview call to action

Users clicking on the call to action are asked to provide a DOI (or URL) of the preprint to review. At that point we try to get as much metadata as we can from this information (see #9) and transition the user to the Rapid PREreview creation and display page (see section further down). As a fallback, if we cannot retrieve metadata about the preprint, the user is asked to enter the DOI and title manually.

(If the user is not logged in, they are sent through the PREreview login workflow and redirected back to the page on login success.)
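As an illustration of the metadata lookup (helper names are hypothetical; the actual resolution strategy is tracked in #9), a DOI could be extracted from the user's input and resolved against a public metadata API such as Crossref:

```javascript
// Extract a bare DOI from user input, which may be a DOI string or a URL
// such as https://doi.org/10.1101/19007971. Names are illustrative.
function parseDoi(input) {
  const match = input.match(/10\.\d{4,9}\/\S+/);
  return match ? match[0] : null;
}

// Build a lookup URL against the public Crossref REST API; the response
// (fetched separately) carries the title, authors, and preprint server.
function crossrefUrl(doi) {
  return `https://api.crossref.org/works/${encodeURIComponent(doi)}`;
}
```

If `parseDoi` returns `null` (or the lookup fails), the user falls back to entering the DOI and title manually, as described above.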

Request for Rapid PREreviews call to action

Logged in users can:

  • add entries by providing a DOI (or URL) using the same workflow as described above
  • upvote existing entries (see score definition above)

Install web extension call to action

(see section further down for description)

Rapid PREreview creation and display page

This page is designed to work as a "fallback" for users who haven't installed, cannot install, or don't want to install the web extension (see dedicated section below). It is nevertheless fully functional (and styled).

The rendering logic depends on the Content Security Policy set by the preprint server hosting the content to review:

  • if framing is allowed, an iframe displaying the preprint content is rendered and complemented by:
    • a panel (or shell) containing the list of existing Rapid PREreviews
    • a shell containing the Rapid PREreview creation form (see #1 for screenshot of a "shell")
  • if framing is not allowed (e.g., Content-Security-Policy: frame-ancestors 'none') then a UI with a full window version of the list of existing Rapid PREreviews and the Rapid PREreview creation form is displayed (and complemented by metadata about the preprint (title etc.)).
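The framing decision can be sketched as follows (a minimal check; a real implementation would also honor the X-Frame-Options header and match our own origin against the frame-ancestors source list):

```javascript
// Decide whether the preprint page can be rendered in an iframe, given the
// value of its Content-Security-Policy response header (or null if absent).
function framingAllowed(cspHeader) {
  if (!cspHeader) return true; // no CSP header: framing is not restricted here
  const directive = cspHeader
    .split(';')
    .map(part => part.trim())
    .find(part => part.startsWith('frame-ancestors'));
  if (!directive) return true; // no frame-ancestors directive
  return !directive.includes("'none'"); // 'none' forbids all framing
}
```

When `framingAllowed` returns `false`, the full-window version of the list and creation form is rendered instead of the iframe.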

The list of existing rapid PREreviews will be complemented with a summary view (interactive data visualization) allowing the user to quickly visualize aggregated results.

In both cases a (permanently dismissible) call to action to install the Rapid PREreview browser extension is displayed.

When the user is not logged in (or is logged in but has already posted a Rapid PREreview), the UI to create new Rapid PREreviews is not displayed (a call to log in is added when the user is not logged in).

Note: this page is relatively similar to the one provided by PREreview, with the main difference that the UI is designed to also work in an extension context where we do not have control over the host page. It will be relatively easy to backport the structured review functionality to PREreview (allowing users to select the Rapid PREreview template from PREreview) or to modify the PREreview page to use a similar overlay UI and leverage the web extension described below.

Web extension

Note: development of the web extension will only start after the other parts of the MVP have been completed.

The goal of the web extension is to bring the features of Rapid PREreview where and when users need them (thereby reducing the effort needed to write or request Rapid PREreviews). The extension is designed so that when users visit a preprint, they are informed (in a non-intrusive way) that they can Rapid PREreview it immediately (without a context switch) or ask for it to be reviewed (see upvotes and scores above).

Popup icon

The popup icon is part of the browser UI and lives next to the address bar, so it is visible at all times.

The icon is "highlighted" (different color / icon etc.) when:

  • user is on a preprint content page that has been reviewed by members of the Rapid PREreview community (in this case the icon is complemented by a badge indicating the number of Rapid PREreviews)
  • user is on a preprint content page that could be reviewed with a Rapid PREreview
  • user is on a preprint content page that is the object of a "request for Rapid PREreview" (in this case the icon is complemented by a badge indicating the number of requests (upvotes))
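A minimal sketch of the badge logic (the helper name is illustrative; the badge itself is set through the Manifest V2 `chrome.browserAction` API):

```javascript
// Compute the badge text shown on the popup icon; counts above 99 are
// capped so the badge stays readable, and 0 clears the badge entirely.
function badgeText(count) {
  if (count === 0) return ''; // an empty string removes the badge
  return count > 99 ? '99+' : String(count);
}

// In the extension's background script (guarded so the helper also runs
// outside a browser context):
if (typeof chrome !== 'undefined' && chrome.browserAction) {
  chrome.browserAction.setBadgeText({ text: badgeText(3) });
}
```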

Popup UI

Clicking on the popup icon opens a popup window/menu (located right below the popup icon) containing (when relevant):

  • An invitation for the user to start writing a Rapid PREreview for the currently visited page, by injecting a new overlay UI into the page (see content script section below)
  • An invitation for the user to request a Rapid PREreview for the currently visited page
  • The list of preprints for which users have requested Rapid PREreviews sorted by score (see definition above)
  • A feedback section to improve the quality of the extension and allow users to easily:
    • report URLs where the popup icon should have activated but didn't
    • report URLs where the popup icon should not have activated but did

Content script

Features injected by the extension into the webpage currently visited by the user.

The content script will share its code with the Rapid PREreview creation and display page (see section above) and inject two UI elements (working as overlays) into the visited page:

Rapid PREreview editor

Rapid PREreview creation form displayed in a shell

Existing Rapid PREreviews panel

Panel (or shell) containing a list of existing Rapid PREreviews for the visited page content

Note: the web extension could easily be generalized to PREreview if proven useful (and we will track usage / adoption data).

See also:
#1, #3, #4, #6


History:

List of preprint servers for which the extension should activate

@dasaderi @majohansson do you have a list of preprint servers that you think will be used for Rapid / Outbreak science?
I think we will add a feedback mechanism in the extension, but to start, I would like to have a hard-coded list of the preprint servers for which the web extension should be active (i.e., allow users to add / request reviews).

To provide more details: we inject the shell on recognized domains using a list like:

  "content_scripts": [
    {
      "matches": ["*://*.arxiv.org/*"],
      "js": ["content-script.js"],
      "css": ["content-script.css"],
      "run_at": "document_end"
    }
  ]

(see https://developer.chrome.com/extensions/content_scripts#declaratively if you are curious)

My guess is that we can start with the 10–20 domains that we (you!) know (arXiv, bioRxiv, etc.), and crowdsource the rest.
If you can add the domains of those preprint servers that would be great! If not, I can do some research.

Add "Give feedback" button to beta

  • @dasaderi to make feedback form to link

  • @sballesteros and @halmos to add "Give feedback" button to platform by December 3rd

  • add a beta label somewhere to make it obvious it's a beta.

Should we make it so that users who try it don't actually end up adding rapid reviews until it's fully launched?

switch icon to force activate extension

Right now the extension is only activated based on a fixed list of preprint servers. That should be fine for most cases; however, if it were to become a problem, we could add a switch toggle to let users force-activate the extension on pages not covered by our list. We would store those extra pages in local storage so the user only has to do that once.
That would allow users to start using the extension immediately without waiting for us to patch the list.
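A sketch of that mechanism (names are illustrative; persistence via the extension's `chrome.storage.local` API is shown as a comment since it only runs inside a browser):

```javascript
// Keep a deduplicated list of origins the user has force-activated, so the
// toggle only has to be flipped once per site.
function addForcedOrigin(origins, url) {
  const origin = new URL(url).origin;
  return origins.includes(origin) ? origins : [...origins, origin];
}

// Persisting the list from the extension would look roughly like:
// chrome.storage.local.get({ forcedOrigins: [] }, ({ forcedOrigins }) => {
//   chrome.storage.local.set({
//     forcedOrigins: addForcedOrigin(forcedOrigins, location.href),
//   });
// });
```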

add production domain to ORCID

callback URL is:
https://outbreaksci.prereview.org/auth/orcid/callback
(in addition to the azure one https://rapid-prereview.azurewebsites.net/auth/orcid/callback)

Admin level features (moderation)

  • Allow admins to retract reviews not respecting the CoC
    This can only happen for the free-form questions. If we detect that a free-form answer violates the CoC, do we keep the review data as part of the summary stats (the yes/no questions can't be in violation by design) or do we discard the full review? Another way to see this is as a decision about the granularity of the review moderation process: do we moderate at the answer level or at the level of the review as a whole?
  • Allow admins to ban personas not respecting the CoC
    Do we keep the review data of banned personas as part of the review summary stats (barplot)?
