
iiif-stories's Issues

Discovery of Images

As a CMS Content Creator,
Who does not know an Image identifier,
I want to search for, browse, or filter Images available on an Image Server
So that I can embed them into my article/page

Generally speaking, this applies to questions that might come up during custom module development in popular frameworks like WordPress or Drupal. It's unclear whether such development would require multiple IIIF-compliant services. For example, one way to filter images for basic browsing would be to use the Presentation API's notion of "Collections". If one wanted to easily browse Images available on an Image API-compliant server, then it seems each Image would be bundled into its own Manifest, and those Manifests would be bundled within Collections. Correct?
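For illustration, a minimal sketch of that Collection-of-Manifests arrangement under the Presentation 2.0 API; the identifiers and labels are hypothetical, and this shows only one possible way to expose a browsable set of images:

{
  "@context": "http://iiif.io/api/presentation/2/context.json",
  "@id": "http://example.org/iiif/collection/images",
  "@type": "sc:Collection",
  "label": "Images available on this server",
  "manifests": [
    {
      "@id": "http://example.org/iiif/image1/manifest",
      "@type": "sc:Manifest",
      "label": "Image 1"
    },
    {
      "@id": "http://example.org/iiif/image2/manifest",
      "@type": "sc:Manifest",
      "label": "Image 2"
    }
  ]
}

A CMS module could page through a collection like this to offer browsing or filtering, though free-text search would still need something beyond the Presentation API.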

download any arbitrary sequence as a PDF

From: Tom Crane [email protected]
Date: March 18, 2015 11:24:33 AM PDT
To: [email protected]
Subject: [IIIF-Discuss] How best to assert presence of PDF version of sequence?
Reply-To: [email protected]

In the Wellcome Library viewer, pretty much any sequence can be downloaded as a PDF.

How should the manifest advertise this? This is likely to be a feature that many viewer apps can implement easily; they just need to present a download link to the end user. The same pattern applies to other representations of the sequence for download (e.g., ePub), and there should be a universal way of doing it that a viewer application can look out for.

Is it a service on the sequence?
Is it a seeAlso?
Is it an annotation on the sequence?

The annotation seems a good fit, as we can easily assert @type and format and have a consistent pattern for different media types; in fact, a viewer can look for media types explicitly in a certain type of annotation to see what's available for download. But the annotation is a bit indirect, included in an annotationList rather than directly in the manifest.

It's not quite the same scenario as using annotations on a canvas for transcription, though not very far from it.

Is anyone already doing this, so we can just use the same vocabulary?
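For reference, a hedged sketch of how a sequence-level download might be advertised with the Presentation API's rendering linking property; the URIs are invented and this is only one of the candidate patterns discussed above, not necessarily what the Wellcome Library settled on:

{
  "@id": "http://example.org/iiif/book1/sequence/normal",
  "@type": "sc:Sequence",
  "label": "Default reading order",
  "rendering": [
    {
      "@id": "http://example.org/iiif/book1/download.pdf",
      "format": "application/pdf",
      "label": "Download as PDF"
    },
    {
      "@id": "http://example.org/iiif/book1/download.epub",
      "format": "application/epub+zip",
      "label": "Download as ePub"
    }
  ]
}

A viewer can scan the rendering entries and surface a download control for whichever formats it recognises, which matches the "look for media types explicitly" idea above.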

Transcription use case: transcription/translation of manuscript

A fairly simple one, using Mirador, to embed transcription and translation (and potentially other layers of annotation, such as commentary on the images). In this case we have transcriptions and translations of a medieval manuscript which we hope to present as a digital edition, but within a general presentation environment (Mirador) that can display images of manuscripts, books, objects, etc. that may normally have no annotations present. As annotations are created, these would be added, but in most cases none would be present.

The star catalogue starting on f.61v of DCL MS Hunter 100 (see http://bit.ly/1LmHVWv) has some transcriptions added for a few folios, and the first section, on the Great Bear, also has a translation from the Latin into English. This is currently done in JSON in a subdirectory of the manifest (for example http://bit.ly/1TLRSy0), which seems to be the lowest-tech way of doing it.

It would be useful if the manifest / browser could work with XPath, so you could have a parallel document such as a TEI transcription of the whole document and pull the relevant bit from it for each zone of text in the image. This would be easier to maintain than hundreds of JSON files, and the TEI source document could be reused in other contexts.
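As a sketch of that idea, a single zone of text could point into one external TEI file through an Open Annotation specific resource and selector, instead of a standalone JSON file per zone; the URIs, the selector value, and XPath/XPointer support on the client side are all assumptions here:

{
  "@context": "http://iiif.io/api/presentation/2/context.json",
  "@id": "http://example.org/iiif/hunter100/anno/f61v-line1",
  "@type": "oa:Annotation",
  "motivation": "sc:painting",
  "resource": {
    "@type": "oa:SpecificResource",
    "full": {
      "@id": "http://example.org/tei/hunter100.xml",
      "format": "application/tei+xml"
    },
    "selector": {
      "@type": "oa:FragmentSelector",
      "value": "xpointer(//zone[@xml:id='f61v-l1'])"
    }
  },
  "on": "http://example.org/iiif/hunter100/canvas/f61v#xywh=350,900,2400,120"
}

The TEI file stays the single source of truth and can be reused elsewhere; the annotation only records which part of it belongs to which region of the canvas.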

Where there are multiple sequences of annotations it would be useful to be able to distinguish them. In the Great Bear example the translation is in the same window as the transcription because the two sets of co-ordinates overlap (as they would with transcription and translation of the same text).

It should be possible to identify separate sequences of annotation and follow them independently and probably switch them on and off. A single image - for example Botticelli's Mystic nativity - could accumulate a huge number of layers of annotation on various themes that would end up concealing the image altogether. A means of filtering most out at any one time would be useful.

The viewer software should be able to recognise when it has loaded images that have annotations. Perhaps a specific JSON value such as "hasAnnotation" should be available in the manifest, or the software could detect the presence of annotations and indicate to the user that annotations are present, so they have the choice to switch viewing on. Otherwise it is a rather random "switch annotations on and see if there is an annotation on the page you are currently viewing" kind of approach.
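For what it's worth, Presentation 2.0 already lets a manifest point to external annotation lists from each canvas via otherContent, and that reference could serve as the "annotations are present" signal suggested above; a minimal sketch with hypothetical URIs:

{
  "@id": "http://example.org/iiif/hunter100/canvas/f61v",
  "@type": "sc:Canvas",
  "label": "f. 61v",
  "height": 5000,
  "width": 3500,
  "otherContent": [
    {
      "@id": "http://example.org/iiif/hunter100/list/f61v",
      "@type": "sc:AnnotationList"
    }
  ]
}

A viewer that checks for otherContent as it loads each canvas can show an "annotations available" control only where a list exists, avoiding the "switch it on and see" problem.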

Parker on the web: full access to images limited to subscribing institutions

Submitted by Ben Albritton, Stanford University Libraries, on 8/6/2014

The "Parker on the Web" use-case - in which full access to image data is available to subscribers and authenticated applications, and size-capped access to the same image data is available to non-subscribers - is as follows:

  • applies to image data only
  • authentication is handled by the app currently - the app has full access to all data, and subscribed users (from approved IP ranges) can access all content in full through the app; non-subscribed (anonymous) users can access a size-capped static image
  • desired state would be:
    • IIIF request from an authorized app or user would return up to the full image, with all parameters enabled (rotate, etc.)
    • IIIF request from a non-authorized app or user would return up to a specified zoom level
  • We specify rights at the object level for each object in the Stanford digital repository. The default "world" rights delivered by the server should be the capped image size; if the app or user is authenticated to the system, more becomes available.
  • If a non-authorized app or user requests something larger than the capped size, it would be nice to return the capped-size image and some sort of error message (e.g., "subscription required for full access to this image"). A sketch of one possible capped info.json follows this list.
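A hedged sketch of what the capped "world" tier might look like in an Image API 2.1 info.json, using the maxWidth/maxHeight profile properties together with a login service; the numbers, URIs, and service details are illustrative assumptions, not a description of the actual Parker implementation:

{
  "@context": "http://iiif.io/api/image/2/context.json",
  "@id": "http://example.org/iiif/parker/page001",
  "protocol": "http://iiif.io/api/image",
  "width": 6000,
  "height": 8000,
  "profile": [
    "http://iiif.io/api/image/2/level1.json",
    { "maxWidth": 1000, "maxHeight": 1000 }
  ],
  "service": {
    "@context": "http://iiif.io/api/auth/1/context.json",
    "@id": "http://example.org/auth/login",
    "profile": "http://iiif.io/api/auth/1/login",
    "label": "Log in for full-resolution access"
  }
}

An authenticated request would receive an info.json without the size cap; an anonymous request above the cap could be answered with the capped image and an explanatory message, as suggested in the last point above.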

Transcription use case: displaying positional information

In many cases for typewritten documents, transcriptions are obtained via OCR and are available in a format that contains positional information along with the extracted text (the format I am most familiar with for this is ALTO).

It would be good to be able to support this or similar formats in order to show transcriptions laid out in a similar manner to the source document (e.g. these screenshots from an item from the Qatar Digital Library at http://www.qdl.qa/en/archive/81055/vdc_100023722174.0x00000b#transcription after clicking the 'Apply Page Layout' button in the Transcription section, compared to the original image):

[Screenshots: QDL transcription panel with page layout applied, compared with the original image]

The other way in which this positional information could be applied would be to provide the transcription as an overlay, or for search-term highlighting, e.g. in this screenshot from the Wellcome Player for http://wellcomelibrary.org/player/b18024130#?asi=0&ai=1&z=0.0588%2C0.5962%2C0.944%2C0.5102 where I have searched for the word "Pacific":

[Screenshot: Wellcome Player search results with the word "Pacific" highlighted]
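A hedged sketch of how a single ALTO text line might be carried as an annotation that keeps its position on the canvas, so a client can either rebuild the page layout or highlight search hits; the coordinates, text, and URIs are invented:

{
  "@context": "http://iiif.io/api/presentation/2/context.json",
  "@id": "http://example.org/iiif/qdl-item/anno/p12-line4",
  "@type": "oa:Annotation",
  "motivation": "sc:painting",
  "resource": {
    "@type": "cnt:ContentAsText",
    "format": "text/plain",
    "chars": "the coast of the Pacific"
  },
  "on": "http://example.org/iiif/qdl-item/canvas/p12#xywh=410,1630,1250,60"
}

One annotation per ALTO TextLine (or per String, for word-level work) preserves the ALTO geometry: a layout view positions each chars value at its xywh region, and search highlighting reuses the same regions.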

Tradamus: create, publish digital critical editions

Public release summer 2015 (June)

Tradamus and IIIF and JSON-LD

This web application creates and manipulates IIIF objects to generate dynamic critical editions. The primary use cases are medieval manuscripts, but any text or text+image "primary source" is supported. The underlying structure of our objects (sf wiki) is maintained in the front end as JSON, is tied to IIIF objects, and is available in JSON-LD (a sketch of a single decision annotation follows the outline below):

Edition
A collection with a label, permissions (of some sort), a creator/owner, and two sequences (outlines and witnesses)
Outlines
A resource with a label, index and two sequences (bounds and decisions)
Bounds
An annotation list of selectors that capture parallel sections of text (for collation)
Decisions
An annotation list of annotations which point to parallel sections of text across multiple witnesses and impose a reading
Witnesses
A set of manifests of the traditional sort (canvases with images and text transcription), text only (canvases with transcription, but no images), and non-digital placeholders (canvases with nothing but sporadic annotations)
Canvas
In the simplest case, a single manuscript image of one side of a single folio, annotated with a transcribing motivation with character strings, selecting the region they transcribe
In the case of non-digitized manuscripts or imported transcription documents, no images, but a single list of annotations with a transcribing motivation that may transcribe one or more actual pages from a quasi-real physical object
In the least described case, no continuous transcription is available, but one is implied by specific variants in extant collation tables or by user input from an offline witness that is only temporarily accessible or undigitizable
Annotations
Parallels are used internally to select the ranges in the witnesses to which decision annotations are tied
Content annotations are made by users and may be attached to witnesses or assembled edition texts (by way of annotating decisions)
Publications
An internal format (not a IIIF object) to render the publishable version of the edition (especially as it is a series of decisions) into some viewable format such as an HTML template, pdf, etc.
These publications, though not IIIF themselves, utilize templates that are likely to include embedded Mirador-style viewers
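As a rough illustration of the outline above, a single "decision" might be exposed as an annotation that imposes a reading on parallel witness regions; every identifier and property choice below is an assumption for illustration, not the actual Tradamus schema:

{
  "@context": "http://iiif.io/api/presentation/2/context.json",
  "@id": "http://example.org/tradamus/edition/12/decision/340",
  "@type": "oa:Annotation",
  "motivation": "oa:editing",
  "resource": {
    "@type": "cnt:ContentAsText",
    "format": "text/plain",
    "chars": "in principio erat uerbum"
  },
  "on": [
    "http://example.org/tradamus/witness/A/canvas/f2r#xywh=200,340,900,40",
    "http://example.org/tradamus/witness/B/canvas/f5v#xywh=180,1210,880,42"
  ]
}

Content annotations from users could then target the decision's @id, which is how commentary attaches to the assembled edition text rather than to any one witness.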

As a user, I want to be able to browse through the semantic structure of a Newspaper resource, or to query a specific component of a Newspaper.

The degree of digitisation and OCR performed on the data provider's side will determine the granularity of representation that needs to be retained. Digitised newspapers can have an image representation at the title, issue, page, and article level, and a non-OCR text representation at the issue level (e.g. a PDF file), while the full text could be at the page, article, line, or even word level.
The structure of the digitised newspapers should be identifiable from the metadata describing it.

There are two options to address this requirement in the Europeana context:

  • either representing the different levels as concepts, using a controlled vocabulary such as the MARC genre list, the BIBO ontology, or RDA;
  • or representing the different levels as resources, re-using classes from existing standards (via rdf:type).
    The IIIF community can help define a series of resource types. One key aspect of the discussion will be to decide whether these types are aligned with the current resources described by IIIF (Manifest, Sequence, Canvas…) or closer to the semantic structure of newspapers (title, issue, pages…). A sketch of the second option follows this list.
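A hedged sketch of the second option, re-using rdf:type so that IIIF resources also carry newspaper-domain types; the vocabulary URIs here are placeholders rather than a settled choice:

{
  "@context": "http://iiif.io/api/presentation/2/context.json",
  "@id": "http://example.org/iiif/gazette/1914-07-29/manifest",
  "@type": ["sc:Manifest", "http://example.org/newspaper#Issue"],
  "label": "Example Gazette, 29 July 1914",
  "within": {
    "@id": "http://example.org/iiif/gazette/collection",
    "@type": ["sc:Collection", "http://example.org/newspaper#Title"]
  },
  "structures": [
    {
      "@id": "http://example.org/iiif/gazette/1914-07-29/range/article-3",
      "@type": ["sc:Range", "http://example.org/newspaper#Article"],
      "label": "War declared",
      "canvases": [
        "http://example.org/iiif/gazette/1914-07-29/canvas/p1#xywh=0,0,2000,1400"
      ]
    }
  ]
}

The manifest stays valid for a generic IIIF viewer, while a newspaper-aware client can use the extra types to navigate from title to issue to article.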

Transcription use case / features in ConservationSpace R1 / R2

Features in Existing ConservationSpace Image Annotation Tool:

• Ability to draw geometric or free-hand regions, points, and/or text as layers on top of images, in different colors. (This is a basic paint-program toolkit that could be used for every image in the system. Note that the text layer is not the same as an "annotation", which is part of the ontology model, whereas the regions/points/text are merely components of the image. A sketch of one possible region annotation follows the screenshot below.)

• Ability to add textual annotations to every user-specified region/point/text on an image. The annotation "type" (or "category") can be specified by the user, and annotations can also be tagged. Users may use a "Reply" function to create sub-annotations. A basic text editor is integrated with the annotations.

• Highlighting of annotated areas on an image when the textual annotation is opened for reading, and vice versa (highlighting of the textual annotation when the annotated area is selected)

• Ability to zoom in and out of the images, with the user-specified regions, points, and text scaling with the image. Full-screen mode for the image.

• Ability to search for and sort/filter annotations associated with the image

• Ability to update/delete regions, points, text, annotations

[Screenshot: existing ConservationSpace image annotation tool]
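For the drawn regions described above, a hedged sketch of how one free-hand shape plus its textual note might round-trip as an Open Annotation with an SVG selector, which is the pattern Mirador-style tools use; the path, URIs, and wording are invented:

{
  "@context": "http://iiif.io/api/presentation/2/context.json",
  "@id": "http://example.org/cspace/anno/871",
  "@type": "oa:Annotation",
  "motivation": "oa:commenting",
  "resource": {
    "@type": "cnt:ContentAsText",
    "format": "text/plain",
    "chars": "Area of flaking paint; compare the 2013 treatment report."
  },
  "on": {
    "@type": "oa:SpecificResource",
    "full": "http://example.org/cspace/iiif/painting-42/canvas/recto",
    "selector": {
      "@type": "oa:SvgSelector",
      "value": "<svg xmlns='http://www.w3.org/2000/svg'><path d='M100,120 C140,80 220,90 260,150 Z' stroke='red' fill='none'/></svg>"
    }
  }
}

Replies could be modelled as further annotations whose on target is this annotation's @id, and the "type"/"category"/tag fields could travel as additional properties or tagging bodies.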

The components planned for development in Release 2 of ConservationSpace include:

• Image manipulation - The image manipulation functionality is planned to provide image editing techniques and basic image transformations such as cropping, changing brightness and contrast, flipping and rotating, scaling and resizing, and setting the image coordinate scale.

• Image comparison - Image comparison is the capability to compare two or more images side by side and to annotate them and link annotations.

• Image overlay - Image overlay is the functionality of placing images one over another and adjusting their opacity and visibility until the desired effect is reached

As a digital object repository developer, I want to dynamically create object presentation manifests with SPARQL

I am implementing a Fedora 4 repository for manuscripts and museum objects with a data model that will be based on the Presentation 2.0 API. The objective is that having the data structured similarly will facilitate serialization of presentation objects directly from a SPARQL query. For example, a query could construct a presentation with arbitrary criteria, "where <range r> sc:hasCanvases ?canvases", so that only a specific part of the manuscript is presented.

The functional complexity lies in developing SPARQL query methods that can iterate through the RDF structure of a JSON-LD list. For reference, the RDF representation of the JSON-LD list for an IIIF sequence looks like this:
<> sc:hasCanvases _:c002 .
_:c002 rdf:first <http://localhost:8080/fcrepo/rest/edition/00027/canvas/c001> .
_:c002 rdf:rest _:c005 .
_:c005 rdf:first <http://localhost:8080/fcrepo/rest/edition/00027/canvas/c002> .
_:c005 rdf:rest _:c008 .
...
So any JSON-LD list is basically a chain of RDF blank nodes. I am not sure that there is anything that can be done from the IIIF perspective, but thinking forward, interoperability = embracing linked data semantics. Simply parsing the manifest as normal JSON seems to be the current client implementation approach. This may work, but it certainly does not provide the flexibility and interoperability that the API is trying to achieve.
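In the JSON-LD serialisation with the Presentation 2.0 context, that same blank-node chain collapses back into an ordered canvases array, which is what most clients parse directly; a minimal sketch using the same canvas URIs:

{
  "@context": "http://iiif.io/api/presentation/2/context.json",
  "@id": "http://localhost:8080/fcrepo/rest/edition/00027/sequence/normal",
  "@type": "sc:Sequence",
  "canvases": [
    { "@id": "http://localhost:8080/fcrepo/rest/edition/00027/canvas/c001", "@type": "sc:Canvas" },
    { "@id": "http://localhost:8080/fcrepo/rest/edition/00027/canvas/c002", "@type": "sc:Canvas" }
  ]
}

On the query side, a SPARQL 1.1 property path such as rdf:rest*/rdf:first reaches every member of the chain, but result rows are unordered, so reconstructing the sequence order takes extra work; that is the functional complexity referred to above.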

I have actually been able to construct a multiple-canvas, ordered IIIF manifest sequence from a SPARQL query. The output format of the manifest can be viewed here. So I believe that this story is indeed possible. I see the concrete implementation of this as a "manifest generator" service that assembles, executes, and delivers the manifest based on variable criteria.

One title, one day – one extraordinary issue

As an eager reader of newspapers who is moving through several issues of a newspaper from "day to day", I would like to have a distinct notification in case there exists an extraordinary issue of the newspaper (for example, if the king has died) on any given day, so that I am able to view that issue as well, and at the same time be able to choose to continue to "move along the time axis".

Transcription use case: difficult-to-read palimpsest with multiple transcriptions, linked views

My use case is an erased manuscript (palimpsest) that is very difficult to read. One implication is that one should not assume that a text has only one transcription. Multiple transcriptions were attempted in the past and we hope to add new transcriptions based on advanced imaging technology. I would like a transcription window that defaults to the latest editorial consensus transcription, but has a pull-down selector for historical and dissenting transcriptions. Basically, this would be like "image choice" as in Mirador 1.0 and 2.1 (but not 2.0).

I see it as a basic strength of Mirador to offer a variety of layouts and add or remove items as desired. I would like a user to use "Add Item" to add manuscript images, non-textual/paratextual annotations (quire numbers, modern page numbers, holes in the parchment), transcriptions, translations, and commentary (each with choices). That last category, commentary, may be tricky in its vagueness. My users would want to distinguish commentary on the physical features of the scribal artifact and its transcription from commentary on the literary content. Mirador may not be the place for commentary on literary content.

It would be very helpful if the scrolling and panning were linked, such that moving down the page in the image frame correspondingly moved down the page in the transcription frame. I think the most inclusive way to do that would be to identify a span in the physical description, transcription, translation, or commentary as related to a rectangular area on the image canvas. A rectangular area would be a general category that would cover the specific functions required. The basic unit of a description of a physical feature would be an arbitrary rectangle; the basic unit of transcription would be a line in the manuscript; the basic unit of a translation would be a verse (determined by scholarly convention and including several lines). A line may contain the end of one verse and the beginning of another, and a verse may span multiple lines, columns, or pages. It would be acceptable to label the portion of a verse that appears in one column or page with a lowercase letter "a" (e.g., 1a) and its continuation on another column or page with a lowercase letter "b" (e.g. 1b). It would be lovely if moving the transcription frame changed the pan and zoom of the image frame such that the top-most line in the transcription frame caused the corresponding rectangle of the image frame to align to the top and fill the width. Style points if the transcription or paratextual annotation can specify the image choice on the canvas that best illustrates the point (advanced imaging will produce many perfectly aligned [registered] images for each canvas).

I would prefer Mirador perform limited functions well and be interoperable with, but not overlap, other tools. For my use case I would be happy if Mirador offered many options for viewing and few or none for creating annotations, transcriptions, translations, and commentary. For example, I see T-PEN and Tradamus (and my handy text editor) as non-overlapping tools for discrete parts of the workflow. I'm not opposed to creating content in Mirador, but nailing viewing issues like image choice and initial view is a much higher priority.

Translation and commentary annotations

Yale has a need for translation and commentary annotations that will target transcription annotations, as well as (optionally) the canvas on which the text appears. As the translation and transcription may be at different levels (line, sentence, paragraph, page), the translation will need to be able to target multiple transcription annotations. Commentary may need to target multiple transcription and translation annotations, as well as canvases.
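A hedged sketch of a translation annotation that targets two line-level transcription annotations plus the region of the canvas they sit on; the identifiers are hypothetical, and the motivation URI is an invented placeholder precisely because there is no agreed "translating" motivation to point to:

{
  "@context": "http://iiif.io/api/presentation/2/context.json",
  "@id": "http://example.org/yale/anno/translation-88",
  "@type": "oa:Annotation",
  "motivation": "http://example.org/motivations#translating",
  "resource": {
    "@type": "cnt:ContentAsText",
    "format": "text/plain",
    "language": "en",
    "chars": "In the beginning was the word"
  },
  "on": [
    "http://example.org/yale/anno/transcription-301",
    "http://example.org/yale/anno/transcription-302",
    "http://example.org/yale/canvas/f10r#xywh=200,400,1600,120"
  ]
}

Commentary could follow the same pattern, listing transcription annotations, translation annotations, and canvases together in its on array.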

Dynamically generating in-frame scales

Harvard Library: Within IIIF-backed applications like the Mirador viewer, we would like to be able to include dynamically generated scales (rulers) based on two readily available pieces of data: image size (pixel dimensions), and capture resolution (ppi). Both values are automatically collected during the digitization process, and encoded within each of the images we produce.

Note: In order to accurately record the capture resolution value, one must set it (typically) within the image capture software.

[Screenshot: dynamically generated scale (mirador_scale) in Mirador]
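The underlying arithmetic is simply physical extent = pixels divided by capture resolution, so a 4800-pixel-wide image captured at 600 ppi spans 8 inches. A hedged sketch of how the two values might be exposed with the draft IIIF physical dimensions service; the property names follow that draft, and the numbers and URIs are invented:

{
  "@id": "http://example.org/iiif/drawing-17/canvas/recto",
  "@type": "sc:Canvas",
  "height": 6000,
  "width": 4800,
  "service": {
    "@context": "http://iiif.io/api/annex/services/physdim/1/context.json",
    "profile": "http://iiif.io/api/annex/services/physdim",
    "physicalScale": 0.00167,
    "physicalUnits": "in"
  }
}

A viewer multiplies pixel distances by physicalScale to draw the ruler: here 4800 x 0.00167 is about 8 inches across the full width.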

Transcription use case: translations of transcriptions

For the Qatar Digital Library (www.qdl.qa) we have scanned documents in both English and Arabic. Although we do not currently have them, we would like to create translations of transcribed documents. If these were available, it would be good to be able to display a translated document.

Note that this would get even more complicated if combined with the request for overlaid transcriptions in #17 due to the left-to-right and right-to-left reading directions of each language.

Transcription use case: Auto translation of transcription hook up

For some archives, where the target language for the researcher is not available, it would be nice to have support for automatic translation of the existing transcription, via a service such as the Google Translate API, so that it can be viewed in a chosen language.

One title – two issues each day

As an eager reader of newspapers moving from "day to day" through a newspaper that publishes two issues each day, and currently viewing the first issue on a given day, I would like to "move along the time axis" in a way where a click on the "next" button will present issue number two of that day, and a further click on the "next" button will present the first issue of the next day, and so on, so that I am able to view both issues by "moving along the time axis".

Public IIIF Annotation and Object Store

The Center for Digital Humanities at Saint Louis University is planning to implement a public store service for a variety of objects, provided they match a standard format. The goal is to provide a place for easily discoverable linked open data and to encourage adherence to standards through simple APIs usable in a broad range of research projects. Currently there are a couple of handfuls of use cases we are working with over the next few years, including applications in genealogy, paleography, musicology, canon law, poetry, and others (which may be individually outlined here where possible).

At the moment, we are considering the following possible objects to store:

  1. oac:Annotation with anything expanded within, including selectors
  2. rdf:List, ore:Aggregation, sc:AnnotationList, sc:Sequence for groups of things
  3. sc:Canvas for defining something to annotate
  4. sc:Manifest for intentional sequences of sc:Canvas

Beyond accepting these types of objects in any valid form, we may also need some descriptions of types that are not in existing useful vocabularies, for example to define new oa:Motivations. It may also be an opportunity to strongly suggest certain attributes or relationships to help connect data from different areas of research.
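As a concrete example of item 1 in the list above, a hedged sketch of the kind of self-contained annotation such a store might accept and hand back under a resolvable @id; everything shown, including the custom motivation, is illustrative:

{
  "@context": "http://iiif.io/api/presentation/2/context.json",
  "@id": "http://example.org/slu-store/anno/5c2f",
  "@type": "oa:Annotation",
  "motivation": "http://example.org/motivations#paleographic-description",
  "resource": {
    "@type": "cnt:ContentAsText",
    "format": "text/plain",
    "chars": "Caroline minuscule, late ninth-century hand"
  },
  "on": {
    "@type": "oa:SpecificResource",
    "full": "http://example.org/slu-store/canvas/9a41",
    "selector": {
      "@type": "oa:FragmentSelector",
      "value": "xywh=120,80,640,300"
    }
  }
}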

One title, one day – two geo-versions of an issue

As an eager reader of newspapers who is moving through several issues of a newspaper from "day to day", I would like to have a visual notification in case more than one version of the issue has been published, one for a given geographic region and one for another region, so that I am able to view both issues and at the same time be able to choose to continue to "move along the time axis".

Dynamically load annotation lists from third-party annotation servers

Annotations and annotation lists are likely to be stored in locations that are unknown to the manifest provider. There should be a mechanism for discovering and loading relevant annotation lists from annotation servers that are either referenced in the manifest or provided to the viewing software by an end user. For example:

  • a professor might configure her Mirador instance to load any available annotations lists from her institutional annotation storage account; she then accesses manifests from multiple repositories and is able to see the annotations she has made on each of them
  • a repository might find it cumbersome to update each manifest as OCR text is generated; instead they choose to reference the annotation server once in the manifest, and the viewer software queries the server for relevant annotations as it displays each canvas (a sketch of such a response follows this list)
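A hedged sketch of the kind of sc:AnnotationList such a third-party server might return when the viewer asks it for annotations on a particular canvas; the query URL pattern and contents are hypothetical, since this discovery mechanism is exactly what the story asks for:

{
  "@context": "http://iiif.io/api/presentation/2/context.json",
  "@id": "http://annos.example.edu/lists?canvas=http%3A%2F%2Frepo.example.org%2Fiiif%2Fbook1%2Fcanvas%2Fp3",
  "@type": "sc:AnnotationList",
  "resources": [
    {
      "@type": "oa:Annotation",
      "motivation": "oa:commenting",
      "resource": {
        "@type": "cnt:ContentAsText",
        "format": "text/plain",
        "chars": "Marginal note refers to the 1623 edition."
      },
      "on": "http://repo.example.org/iiif/book1/canvas/p3#xywh=1500,200,400,900"
    }
  ]
}

The viewer would issue one such request per canvas (or per manifest) against each configured or referenced annotation server and merge the results into its display.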

Transcription use cases: viewing and consulting transcriptions

I have just a few thoughts regarding the viewing and consultation of transcriptions with images.

When the goal is merely to see/consult the transcription alongside the text, rather than actually produce the transcription, I think the vertical (rather than horizontal) split can be a useful view.

But it should be kept in mind, first, that the size of the manuscript image still needs to be big enough to actually be useful and, second, that we should still be trying to limit the distance the eye has to travel in order to make the connection between the transcription and the corresponding place on the image.

I think about this most especially in the case of a two-column text. If a viewer showed a full-page image of a manuscript and then put the transcription off to the right of that image, this would create a tiresome viewing experience. If the transcription text corresponded to column "A", the eye would have to travel across column "B" in the image to the desired point in column "A". This kind of movement becomes burdensome and it is easy to lose one's way.

Further, it's important to realize how much text is often contained in a single column. Often this can be three or four pages in printed form. Thus, whatever we can do to help link small sections of a transcription to corresponding sections within the image, the better. Likewise, the closer we can visually place those linked components, the better.

Below are a few animated GIFs of different ways I've played around with this.

[GIF: lbp1-ms-text-comparison]

This GIF is just a simple viewer that allows you to place the images of a paragraph from diverse witnesses, or a transcription, next to each other. Notice how the line breaks in the transcription are preserved; this helps a user find the corresponding place in the image.

[GIF: lbp1-column-text-view2]

This viewer is trying to give you a vertical split where the column text is paired next to the column in the manuscript. As you hover over a paragraph, the corresponding section of the manuscript is highlighted. From a user perspective, this makes it really easy to find the section of interest within the manuscript image. Again, adding hard returns in the transcription that correspond to the line breaks in the manuscript witness also helps facilitate this kind of consultation.

[GIF: lbp1-capture-coordinates]

I've also used this column-image-text view to capture coordinates for a given paragraph. In this GIF, I move the gray box over a paragraph unit on the image, then I click "capture coordinates", then I click on the corresponding paragraph, which captures the paragraph ID. This can then be submitted to a database.

[GIF: lbp1-view-images]

This is the new framework I'm working on. Unlike Mirador, the text is the primary focus for me, and I think of the images as a kind of annotation. Still, I try to provide both a horizontal division and a vertical division when viewing text and transcription. The bottom window shows the manuscript image side by side with the diplomatic transcription, but it can also be aligned underneath the corresponding paragraph in the main text view. I'm also working on a side window, which pushes the text over to the right and could then show just a single column of a manuscript image or diplomatic transcription, or comments/annotations, or commentary.

[GIF: lbp2-collation]

It might also be useful to provide the possibility of showing differences between various transcriptions of different witnesses. In this GIF, I am selecting two different diplomatic transcriptions and comparing them side by side with a collation tool. Providing this kind of collation next to an image would allow the user to quickly focus on areas of potential interest or significance in the manuscript image.

Transcription use cases: creating a transcription

I'm working on my own transcriptions this morning, and I thought I would provide some screen shots and commentary of how I do this.

Generally, I always begin by dividing the screen horizontally like so:

[Screenshot: screen divided horizontally between manuscript image and transcription editor]

Then I have to work hard to find the place where I left off. (Because this takes some effort, it's really frustrating to lose your spot.)

[Screenshot: finding the spot in a two-column manuscript]

In this screen shot, I have found my spot. This is a manuscript with two columns and to do my work I need to zoom in almost to the point that a single column takes up the width of the screen. This is one of the reasons that a horizontal split is preferable to a vertical split. There is nothing more annoying than only being able to see half a line and then having to slide the manuscript over to finish a line.

This is also why a manuscript with only a single column can be a real challenge. Here again, it is desirable to be able to fit an entire line within the width of the screen. But now space is at a real premium.

[Screenshot: single-column manuscript zoomed so a full line fits the screen width]

In short, I'm constantly trying to negotiate a size that can be read and that will limit the number of "mouse clicks and swipes".

Once I find my place, as well as the desired zoom level, it is ideal to be able to get the viewer to move down the column as efficiently as possible. This is one place where working with PDFs currently has a real advantage over Mirador. In the Mirador setup, I first need to take my hands off the keyboard, use the mouse to click on the Mirador window, then click again AND HOLD to drag the manuscript up so that I can see the next few lines, and then click the Oxygen window to begin typing again. Inevitably, I sometimes forget to HOLD after I click, or I accidentally two-finger swipe, and then I lose the zoom level I fought so hard to preserve. All of this takes at least three clicks each time I need to adjust the line. And because the zoom level needs to be pretty high, I need to move the screen down frequently.

One thing that might be nice is that every time I hit a hard return or add a line break in my transcription, the image viewer would move down a little bit. This would be pretty cool, saving me some of these adjustments.

In any case, when I have the same setup with a PDF, the number of clicks and the annoyance are much less. The PDF viewer responds to mouse events differently than OSD (OpenSeadragon). A two-finger swipe does not change the zoom, but moves the image. And because the Mac's two-finger swipe works on open windows that are not in focus, I can remain in Oxygen, two-finger adjust the image, and then seamlessly continue typing. This requires no clicks and only one mouse movement. It's much more efficient.

Finally, another reason I think the horizontal split is preferable to the vertical, and should be privileged in the design for actual transcription production, is that this view requires the least amount of eye movement. If the image is on the left and the text is on the right, the lateral eye movement just feels more exhausting than moving your eyes up and down. The horizontal division allows you to place the line of the image very close to the line you are transcribing. This is something you see in the T-PEN transcription tools, which I think might be a little bit of overkill, but which nevertheless significantly reduces the amount of eye movement required.

Yale museums: access thumbnail only unless authorized

Yale's museums limit access to a thumbnail when works are under copyright or donor restriction. Requests should be restricted to a thumbnail of the entire image (e.g., /full/!250,250/), unless the logged-in user has been granted access to the full image for research purposes.

Transcription at varying levels of granularity

Many of the medieval use cases (T-PEN, DM, etc.) address line-by-line transcription. In other cases the user may wish to transcribe at the paragraph or even page level. At Yale, we have cases where scholars will be transcribing printed editions that contain minor textual variations. The transcription will likely take place at the page level, with an existing electronic version of the text being pasted into the transcription annotation and then edited to match the text of the item/image.
