iiif / trc
Technical Review Committee issue review
License: Apache License 2.0
Original Issue
IIIF/api#1786
Pull Request
IIIF/api#1816
Preview
https://preview.iiif.io/api/1786_image_preferredFormats_prop/api/image/3.0/#55-preferred-formats
Summary
The publisher of an image may have one or more preferred formats that they would like to encourage clients to use. The reasons for this preference may be aesthetic or technical. Examples given include:
Proposed Solution
Add a `preferredFormats` property to the Image Information Document (`info.json`). This will be an array of format parameters (e.g., `"preferredFormats": [ "png", "gif" ]`) indicating the publisher's preferred formats for the image.
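As a sketch (the service URI and surrounding values are invented for illustration, not taken from the specification), the property might appear in an `info.json` like this:

```json
{
  "@context": "http://iiif.io/api/image/3/context.json",
  "id": "https://example.org/image-service/page1",
  "type": "ImageService3",
  "protocol": "http://iiif.io/api/image",
  "profile": "level2",
  "width": 6000,
  "height": 4000,
  "preferredFormats": ["png", "gif"]
}
```

A client that would otherwise default to JPEG could consult this array and request a PNG rendering instead.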
The 2.1.1 spec says:
The quality parameter determines whether the image is delivered in color, grayscale or black and white.
For example, `.../full/max/0/gray.jpg` would return a grayscale image.
This can be confusing if the source image is grayscale, or uses a color space but happens to not have any non-gray pixels. Should a server offer the `color` quality? What should it return?
The editors propose that potential ambiguity about whether a server should offer `color`, and what it should do with requests for `color`, could be resolved by using the term `full` to avoid the implication that a `color` quality image should contain non-gray-shade pixels, or use particular color encodings.
Possible qualities become:

- `bitonal`: The image returned is bitonal, where each pixel is either black or white.
- `gray`: The image is returned in grayscale, where each pixel is black, white or any shade of gray in between.
- `full`: The image is returned with as much color information as available.
- `default`: The image is returned using the server's default quality (e.g. full, gray or bitonal) for the image.

A `full` image may contain only gray shades, if that is all that is available: it is legitimate for the server to list `full` in this case even if the response would be identical to that for `gray`. No prescription about the color space to be used in the returned image is implied.
There may be reasons for a server to return something different for `default` than for `full` (IIIF/api#1839 (comment)), so `default` remains, and will continue to be the standard request, as in the current API.
This appears to be quite a significant change, but the usage of explicit `color` is, we think, very low to non-existent.
In 2.x we have the `viewingHint` property, which carries hints as to how the publisher would like the rendering client to process the information in the manifest. There are 7 such hints defined.

In 3.x, the `behavior` (alpha / beta) property replaces `viewingHint`, and there are now 16 hints, more than double the previous number. Several are due to the addition of time-based media (audio and video); others are just new features that have come up.
In trying to rationalize and describe the behaviors, several issues came up. The biggest was how to know when a behavior applied with respect to the structure of the manifest. For example, if the manifest has the `paged` behavior, then the ranges would also have the `paged` behavior. But if a collection had the `auto-advance` behavior, would that mean that every manifest and canvas in the collection also inherited it? Secondly, it was not clear which behaviors could be used together and which were mutually exclusive. And finally, there were some obvious situations where behaviors needed to be valid, but were not.
We defined inheritance rules, based on both the expected interactions and the practicalities of document dereferencing. It was noted that behaviors on Ranges should only be activated when the range is somehow selected, but that the selection mechanism is UI dependent, and thus implementation dependent. The behavior table was changed from an arbitrary order to being grouped by type of behavior, and the disjointness of behaviors is now made explicit in each behavior's description.
Other related solved issues:
None known.
This recipe describes the two book-related behaviors, `continuous` and `individuals`, with two use cases and example manifests showing suitable content that fits in with the recipe. It also briefly mentions the other two image behaviors, `unordered` and `paged`.
We welcome comments on the recipe: as well as voting +1, confused face, or -1, feel free to add comments to this issue. If this issue is approved then the author will take account of the comments before we merge the branch into the master cookbook branch.
If the recipe is rejected by the TRC then we will make the changes requested and resubmit it to a future TRC meeting. If you feel that your comments are substantial enough that the recipe should be looked at again by the TRC after the changes have been made please vote -1 (thumbs down).
Changes to the recipe will only be made after the TRC voting process has concluded.
This recipe introduces multilingual labels into Manifests. It links to the relevant language codes and goes through some of the restrictions and gotchas when working with multiple languages. The example includes multilingual values in the label, metadata, summary and requiredStatement fields.

The original issue covered multilingual as well as multi-value labels, but through discussion it was decided to split the multi-value case out into a separate issue to keep this one more focused.
A question was raised in the issue linked above as to whether the URIs for Creative Commons licenses should be HTTPS or HTTP. They are currently HTTPS URIs, as that is what is listed as the "license deed". Also, all of our recommendations are to use HTTPS whenever possible. However, the description of the licenses says there are three layers -- lawyers, regular people, and machines. As the Presentation API usage is an enumeration for machines, not directly for humans, it should use that layer ... and that layer uses HTTP.
Thus we believe that HTTP is the canonical form of the license URIs for software infrastructures, and as this is primarily a software-driven enumeration of values (the URIs), then the presentation API should require the HTTP form to be published.
However, for presentation to end users, if a client wants to create a link to the license itself, then it SHOULD rewrite the URI to use the HTTPS scheme, as that is the canonical form of the URI for humans (the "license deed").
We have created an issue for this in the Creative Commons GitHub repository and suggested two possible solutions.

If either of these is okay and implemented before the final version of the 3.0 APIs, and Creative Commons thus recommends using HTTPS, then we will instead use the HTTPS URIs, and would re-update our examples.
This recipe introduces the `rendering` property and focuses on making a PDF version of the digital object available. It mentions that renderings are possible on other levels. `rendering` is a very useful property, and this recipe will be built on with a planned 3D rendering recipe in development. Only Mirador is listed as an example, as the UV doesn't work at this time, although it did work with IIIF version 2 rendering.
At issue in IIIF/api#1760 is whether the `logo` property value should be defined as an array that may contain alternative versions of the logo image. Clients would be required to display one of the alternative images, rather than all of them. Use cases that would be supported by allowing multiple logo images include:

In the Alpha draft spec the `logo` value MUST be an array, as suggested here. More recently the Editors updated the Beta draft to allow a maximum of one logo (IIIF/api#1698) - that is, the value MUST be a JSON object. The proposal reverses this structural change.
Alpha draft:
https://iiif.io/api/presentation/3.0/#logo
(Note that the `logo` examples in section 5.2 and Appendix B of the Alpha draft are inconsistent with the requirement to use an array.)
This recipe aims to introduce the `rights` and `requiredStatement` properties to the reader. It builds on top of the basic image recipe and describes what is allowed, with an example manifest. It highlights the relevant parts of the IIIF specification where appropriate.
A +1 means the recipe is OK to go through to the master branch.
A -1 means it is not OK, and a comment in this issue should say what needs to be done.
Original Issue
IIIF/api#1679
Background
The `unordered` behavior indicates that the items "included in resources that have this behavior have no inherent order, and user interfaces should avoid implying an order to the user."
Issue
This behavior was originally valid only for Manifests and Ranges, but there may be cases where the order of Manifests within a Collection is arbitrary and this behavior could be applicable.
Solution
Allow the `unordered` behavior for Collections.
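A minimal sketch of what this would permit (the identifiers and labels are invented for illustration):

```json
{
  "@context": "http://iiif.io/api/presentation/3/context.json",
  "id": "https://example.org/collection/photographs",
  "type": "Collection",
  "label": { "en": ["Photograph collection with no inherent order"] },
  "behavior": ["unordered"],
  "items": [
    { "id": "https://example.org/manifest/photo1", "type": "Manifest" },
    { "id": "https://example.org/manifest/photo2", "type": "Manifest" }
  ]
}
```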
API issue: IIIF/api#1607
It doesn't make sense to have a single exception to the rule that Presentation API resources MUST have an `id` property. Hence, change the `id` requirement for embedded AnnotationPages from SHOULD to MUST, to avoid an exception which would catch implementers out, and also add guidance that URIs might not always be dereferenceable.
This recipe introduces the accompanying canvas. The main presentation is an audio file and the accompanying content is an image. The image is a score and is related to the audio file but during the development of this recipe it was decided to use the audio as the main content to avoid confusion where a user might expect the audio to be in sync with the image. This is of course possible with IIIF but would be a different recipe.
The `accompanyingCanvas` property unfortunately isn't currently supported by any viewer.
This recipe covers the following use cases:
While an audio file of a poetry performance may be divided into a track for each poem, scholars may wish to use annotations to indicate aspects of the performance of a particular poem.
A researcher might want to annotate the following types of information:
Since annotations could be available at the same time the manifest is generated, or might be a separate process that references the item manifest, both scenarios are shown.
There is a third use case where manifests are unaware of annotations on them, but the systems that display the item are aware of the annotations and pull them in, using the target block in the annotation.
API issue: IIIF/api#1633
The Presentation API specification was not clear on whether `multi-part` collections could have sub-collections. Discussion surfaced use cases where we would want `multi-part` collections to have sub-collections (e.g. a journal with Volumes and Issues, hierarchical archives at NLW), and perhaps the understanding that `multi-part` collections might even be more likely to have sub-collections than non-`multi-part` collections. The specification was changed to tighten up the definition of `multi-part` to explicitly mention the possibility of sub-collections: they "consist of multiple Manifests or Collections which together form part of a logical whole or a contiguous set ...".
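A hedged sketch of the journal case (identifiers and labels are invented for illustration):

```json
{
  "@context": "http://iiif.io/api/presentation/3/context.json",
  "id": "https://example.org/collection/journal",
  "type": "Collection",
  "label": { "en": ["Example Journal"] },
  "behavior": ["multi-part"],
  "items": [
    {
      "id": "https://example.org/collection/journal/vol1",
      "type": "Collection",
      "label": { "en": ["Volume 1"] }
    },
    {
      "id": "https://example.org/collection/journal/vol2",
      "type": "Collection",
      "label": { "en": ["Volume 2"] }
    }
  ]
}
```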
Original issue: IIIF/api#1609
Oftentimes IIIF users may want to know that a resource exists before requesting the entire resource, which could be quite large. Using `HEAD` to do this in IIIF could be a recommended pattern. This was originally thought of as applying to the Presentation API, but it was suggested to also apply to the Image API.

Add to the end of Section 6 of the Presentation API something along the lines of: "It is RECOMMENDED that implementations also support HTTP HEAD requests." It is a bit more involved in the Image API, but perhaps add an introduction to Section 7 which suggests HEAD as well as GET (and OPTIONS).
Motivated by discussion of IIIF/api#1759 and described in IIIF/api#1767, the editors propose that we add a design pattern that talks about our goals of internationalization, along the lines of:
Design for Worldwide Use
IIIF specifications encourage internationalization efforts by requiring that text values of properties indicate their language, to support and encourage use around the world. IIIF assumes that text values of properties may have multiple values in multiple languages, rather than this being a special case.
This recipe introduces `placeholderCanvas` with a typical use case for when it might be useful. Unfortunately this feature isn't yet implemented in any viewers, so only the JSON-LD is linked to in the example.
Note also this recipe includes a broken link to Audio Presentation with Accompanying Image, which is a recipe we hope to bring to the next TRC meeting.
The IIIF Text Granularity Technical Specification Group was formed to examine how to express the granularity of text annotations associated with IIIF images, such as the output of OCR or human transcription.
The TSG determined that the need could be satisfied by use of a single property (`textGranularity`) applied to individual annotations. A survey of OCR software resulted in the identification of a number of common text granularity levels, which are also defined in the document.
The TSG determined that a standalone IIIF API would not be required; instead, the community extension mechanism introduced in IIIF Presentation API 3 would be used for this feature. As the document is the result of a formal TSG process, it will be hosted in the IIIF namespace. As a community extension, the document will not be semantically versioned.
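A sketch of how the property might be used on an individual transcription annotation. The identifiers, the `line` granularity value, the annotation body, and the extension context URL shown here are illustrative assumptions, not quoted from the extension document:

```json
{
  "@context": [
    "http://www.w3.org/ns/anno.jsonld",
    "http://iiif.io/api/extension/text-granularity/context.json"
  ],
  "id": "https://example.org/annotation/line-1",
  "type": "Annotation",
  "motivation": "supplementing",
  "textGranularity": "line",
  "body": {
    "type": "TextualBody",
    "value": "First line of the transcribed page",
    "format": "text/plain"
  },
  "target": "https://example.org/canvas/p1#xywh=100,100,1200,60"
}
```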
Original Issue
IIIF/api#1643
Issue
How should clients interpret multiple behaviors (either specified directly or inherited per #13 - an orthogonal issue) that might interact in some way, in order to give a consistent user experience?
Background
Early drafts of Presentation API v3 called out certain cases of possible interaction, such as `repeat` with and without `auto-advance`, but left many cases open to interpretation. There was concern that not specifying how clients should interpret combinations would lead to inconsistent user experiences for a resource viewed in different environments.
The solution has two parts, including making certain behaviors explicitly disjoint (e.g. `paged` and `continuous`). This is testable, and a validator can throw errors if violated.

This recipe is closely related to Recipe 15: Begin playback at a specific point, which was approved at the last TRC meeting. Recipe 15 uses the `start` property to start a video recording at a particular point. This recipe uses the same `start` property with images to skip a blank page at the start of the manifest.
The manifest example is the same as the Book recipe, with the addition of the `start` property. `start` with images is currently only supported in Mirador 3.
Note also this recipe includes a broken link to Multiple Related Images (Book, etc.) which is a recipe that has been approved by the TRC but requires a small amount of work before it is merged.
API Issue:
IIIF/api#1787
The intention of this issue is to stop, and reverse, the intrusion of Presentation API features into the Image API.
It began with a suggestion to add more information to the Image API, specifically the capability to make use of the `provider` data now available in the Presentation API:
https://preview.iiif.io/api/image-prezi-rc2/api/presentation/3.0/#provider
=> IIIF/api#1735
While this makes the two specs more harmonious, it also adds to the content burden of the Image API. It also has practical implications for clients of the Image API - tile renderers like OpenSeadragon and Leaflet. They no longer just have the job of showing image pixels. It makes it harder to build clients of either API if both APIs impose additional content to display, which could in theory be cumulative or conflicting.
The Editors feel that we should resist and reverse this trend, so that an info.json contains only the information required to serve pixels, and essential rights information that must accompany those pixels, but not text or secondary images related to the pixels. That is the job of the Presentation API, and Presentation API clients should be responsible for displaying that type of additional information.
This leaves just `rights` in https://preview.iiif.io/api/image-prezi-rc2/api/image/3.0/#55-rights-related-properties

As a publisher, if you want to have a client display `requiredStatement` and/or `logo`, then make a Presentation API Manifest and include them. You can link to the Presentation API representation from the info.json, using the properties in 5.7 - https://preview.iiif.io/api/image-prezi-rc2/api/image/3.0/#57-linking-properties
While a publisher might object that they really need to include this information in their published Image API endpoints, in practice Image API components/clients like OpenSeadragon simply will not display that information, or would do so in a way that required consideration of styling. It would get messy. It would be harmful to the adoption of IIIF if it became mandatory for Image API components/clients to attempt to render this content. That is the job of the Presentation API.
Original Issue
Background
The `auto-advance` behavior is defined in the Alpha draft as being applied when reaching the end of a Canvas with a duration.
Issue
Ranges may reference parts of Canvases and there are situations in which it may be desirable to have playback auto-advance from one part to the next.
An example provided during our requirements gathering was that of an interview recorded on existing gaps between other recordings spread across a cassette tape. As a content publisher I might wish to define a Range that will auto-advance between the Canvas segments so that users can listen to the interview without manually advancing the playback.
Solution
Allow the `auto-advance` behavior to apply to segments of Canvases within a Range.
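A sketch of the cassette-tape use case, selecting the Canvas segments with media fragment identifiers (all identifiers and times are invented):

```json
{
  "id": "https://example.org/range/interview",
  "type": "Range",
  "label": { "en": ["Interview (recorded in gaps across the tape)"] },
  "behavior": ["auto-advance"],
  "items": [
    { "id": "https://example.org/canvas/side1#t=120,300", "type": "Canvas" },
    { "id": "https://example.org/canvas/side1#t=600,780", "type": "Canvas" },
    { "id": "https://example.org/canvas/side2#t=0,240", "type": "Canvas" }
  ]
}
```

A client playing this Range would advance from one segment to the next without the user manually advancing the playback.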
This is an important foundation recipe that introduces the IIIF Image API into a manifest. As such, the recipe expands on the benefits of linking to an IIIF Image service over using a plain image, as shown in the first recipe. It also covers some performance optimisations and briefly raises some cross-version issues.

As well as this recipe, the pull request also includes changes to the first recipe:
https://preview.iiif.io/cookbook/0005-image-svc-single-image/recipe/0001-mvm-image/
to encourage implementation of an IIIF Image Service rather than a non-IIIF image. The change to the first recipe is in the use case section, and you can see the original here:
https://iiif.io/api/cookbook/recipe/0001-mvm-image/
Original Issue
Pull Request
Background
In the Image API, if an image is unavailable in color because the source is grayscale, there is no way to infer this from the compliance level. The solution is to require that all available qualities, other than default, are always listed in the `extraQualities` property in info.json.
As part of this discussion, it also became clear that, for the sake of consistency and developer happiness, and due to a limited number of use cases, `bitonal` should always be optional. The discussion on the PR makes that rationale clear.
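For a grayscale source, the resulting `info.json` might look like the following sketch (the service URI and dimensions are invented); `color` is simply absent from `extraQualities`, so a client can infer it is unavailable:

```json
{
  "@context": "http://iiif.io/api/image/3/context.json",
  "id": "https://example.org/image-service/grayscale-scan",
  "type": "ImageService3",
  "protocol": "http://iiif.io/api/image",
  "profile": "level2",
  "width": 3000,
  "height": 2000,
  "extraQualities": ["gray", "bitonal"]
}
```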
Issue IIIF/api#1639 contains discussion of how to provide a "label for the organization that has the logo." In the alpha draft, `attribution` was removed in favor of `requiredStatement`. With this change it became unclear what string might serve as a label to associate with the `logo` image.

The proposal, reflected in the beta draft, is to introduce a `provider` property. See https://preview.iiif.io/api/image-prezi-rc2/api/presentation/3.0/#provider for a full description. Each `Agent` contained in the list of providers SHOULD have a `logo` property and MUST have a `label`; the label can be displayed with the logo, resolving the issue that prompted IIIF/api#1639.
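A sketch of the property as it might appear on a Manifest (the organization name, URIs and image are invented for illustration):

```json
"provider": [
  {
    "id": "https://example.org/about",
    "type": "Agent",
    "label": { "en": ["Example Organization"] },
    "logo": [
      {
        "id": "https://example.org/images/logo.png",
        "type": "Image",
        "format": "image/png"
      }
    ]
  }
]
```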
API issue: IIIF/api#1759
In the current beta draft of Presentation 3, the `homepage` property can have at most one value:
https://preview.iiif.io/api/image-prezi-rc2/api/presentation/3.0/#homepage
If a publisher uses different URLs to deliver web pages in different languages, it would not be possible to link a manifest to all of these pages. For example:
https://example.org/objects/3456/en
https://example.org/objects/3456/cy
The proposal is to define `homepage` as an array, similar to other linking properties such as `rendering`, to permit the following:
"homepage": [
{
"id": "https://example.org/objects/3456/en",
"type": "Text",
"format": "text/html"
"language": ["en"]
},
{
"id": "https://example.org/objects/3456/cy",
"type": "Text",
"format": "text/html"
"language": ["cy"]
}
]
In JSON-LD terms, `homepage` becomes a `@set`.
The design patterns for IIIF sometimes lead to conflicting requirements, and the role of the editors and TRC is to try to resolve those conflicts to the best practical solution. This issue is one of those cases. In particular, Internationalization is important, as is Reuse of Standards and Best Practices.
We have taken the reuse of standards very seriously; however, the W3C Web Annotation data model defines the `label` of an AnnotationCollection to be a string, without any language features. This is documented in the current draft, and the label definition collision is discussed explicitly. This conflicts with the first I in IIIF and the world-wide use design principle.

Further, and more problematically, `label` is also used in `requiredStatement` and `metadata`, not just as a human readable string for the AnnotationCollection itself. The definition of `label` as a string would be active for everything that is "below" the AnnotationCollection, such as the content resources being annotated. Thus, the definition would be applied to the `label` field in those properties, resulting in IIIF defined structures being inconsistent and not internationalizable.
If there were a `metadata` property on an Annotation, it would come out looking like:

```json
{
  "type": "Annotation",
  "metadata": [ { "label": "string", "value": { "en": "value here" } } ]
}
```

which is much worse than having a single class (AnnotationCollection) that can't have an internationalized label.
The proposal is that we should enforce the definition of `label` to be as per the IIIF context - a language map allowing ease of internationalization - on all resources in IIIF content. This can be accomplished technically by redefining `label` after importing the Web Annotation context, as contexts are processed sequentially to produce the active context.

The documentation can then use `label` consistently, and section 4.7 can be updated to explain the rationale for overriding the Annotation context.

In defense of the Annotation specification work, language maps in JSON-LD 1.0 are not as functional as they are in 1.1, and would have introduced even worse inconsistencies. Scoped contexts do not exist in 1.0, and there is thus no opportunity to scope the definition of `label` only to its use on an AnnotationCollection. A future W3C Working Group for Annotations could publish a 1.1 context that fixed these issues with very little effort, and could use this decision as grounds for doing so.
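The mechanism can be sketched as follows: because the IIIF context is listed after the Web Annotation context, its language-map definition of `label` wins in the active context. The AnnotationCollection itself is invented for illustration:

```json
{
  "@context": [
    "http://www.w3.org/ns/anno.jsonld",
    "http://iiif.io/api/presentation/3/context.json"
  ],
  "id": "https://example.org/annotations/transcription",
  "type": "AnnotationCollection",
  "label": { "en": ["Diplomatic Transcription"] }
}
```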
This is an example of a simple book modelled as a IIIF manifest. It introduces the reader to a manifest with multiple canvases and gives advice on which properties are important, for example a canvas `label`. It also briefly introduces `behavior`, which is important for the example, but references another recipe to guide readers to more information.
During the development of this recipe there was some discussion on thumbnails and whether to include this in the example. There are a number of ways to do thumbnails in IIIF and support varies between viewers. We decided on balance not to include them in the example but to point strongly to the yet to be written thumbnail recipe. There are two yet to be written recipes on thumbnails which will be important to get completed soon as they will be relied on for future recipes. Issue #16 is for the basic description on how to reference a thumbnail and issue #12 acknowledges the different options and provides advice on how to make the most efficient choice.
API Issues: IIIF/api#1605 and IIIF/api#1615
Previous drafts of the Presentation 3 API introduced a property called `posterCanvas`:
https://iiif.io/api/presentation/3.0/#postercanvas
This is a common AV scenario - a placeholder image to accompany a radio broadcast in a web player, or a still from a video to look at before play commences. It's a canvas rather than a plain image, as:
However, it turns out that these scenarios fall into two camps, and there's not necessarily enough contextual information to know what the publisher intends.
The V&A had a use case for music accompanying a manuscript, but not aligned to any time duration (the manuscript's canvases are 2D, not time-based). Should a client stop playing the posterCanvas audio once the manuscript pages are visible, or does the music play on? Other use cases emerged, all useful, all real things that people want to do, but would be hard for a viewer/player to interpret without additional information.
So (with a little reluctance) we reached the conclusion that two separate properties are required to meet the use cases - `placeholderCanvas` and `accompanyingCanvas` - rather than clarifying the expected client experience with additional behaviors.
These properties are described in the beta draft:
https://preview.iiif.io/api/image-prezi-rc2/api/presentation/3.0/#placeholdercanvas
and
https://preview.iiif.io/api/image-prezi-rc2/api/presentation/3.0/#accompanyingcanvas
See especially this comment: IIIF/api#1605 (comment) for more discussion.
The second part of this issue is simpler. It's a clarification that, although `placeholderCanvas` and `accompanyingCanvas` are valid properties of a Canvas, and the value of these properties is a Canvas, it is not permitted to have either of these properties on a Canvas that is itself a `placeholderCanvas` or `accompanyingCanvas`, to make life easier for clients.
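A sketch of an audio Canvas with a `placeholderCanvas` (identifiers, duration and dimensions are invented); note that, per the clarification above, the nested Canvas may not itself carry either property:

```json
{
  "id": "https://example.org/canvas/broadcast",
  "type": "Canvas",
  "duration": 1800.0,
  "placeholderCanvas": {
    "id": "https://example.org/canvas/broadcast/placeholder",
    "type": "Canvas",
    "width": 1920,
    "height": 1080
  }
}
```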
Original Issue
IIIF/api#1763
Pull Request
IIIF/api#1815
Preview
https://preview.iiif.io/api/1763_anno_page_processing/api/presentation/3.0/#55-annotation-page
Summary
The beta draft contained conflicting statements regarding the processing of Annotation Pages:
Embedded Annotation Pages SHOULD be processed by the client first, before externally referenced pages.
Clients SHOULD process the Annotation Pages and their items in the order given in the Canvas.
Proposed Solution
Change text to:
Clients _SHOULD_ process the Annotation Pages and their items in the order given in the Canvas. Publishers may choose to expedite the processing of embedded Annotation Pages by ordering them before external pages, which will need to be dereferenced by the client.
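Under the proposed wording, a publisher who wants the embedded page processed first simply orders it first. In this sketch (identifiers and annotation content invented), the second page is given by reference only and must be dereferenced by the client:

```json
"annotations": [
  {
    "id": "https://example.org/canvas/p1/page/embedded",
    "type": "AnnotationPage",
    "items": [
      {
        "id": "https://example.org/annotation/1",
        "type": "Annotation",
        "motivation": "commenting",
        "body": { "type": "TextualBody", "value": "An embedded note" },
        "target": "https://example.org/canvas/p1"
      }
    ]
  },
  {
    "id": "https://example.org/annotations/external-page",
    "type": "AnnotationPage"
  }
]
```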
Original Issue
IIIF/api#1680
Issue
Behaviors such as `individuals` and `continuous` are valid at the Collection level, but `paged` is not.
Background
The `paged` behavior could be used to indicate that a Collection should be presented in a page-turning interface if one is available. For example, when a multivolume work is represented as a `paged` `multi-part` Collection, clients might provide a page-turning interface that can easily advance from one Collection member to the next.
Solution
Allow the `paged` behavior on Collections.
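A sketch of the multivolume case (identifiers and labels invented), combining `paged` with `multi-part`:

```json
{
  "id": "https://example.org/collection/multivolume-work",
  "type": "Collection",
  "label": { "en": ["A Work in Two Volumes"] },
  "behavior": ["multi-part", "paged"],
  "items": [
    { "id": "https://example.org/manifest/volume1", "type": "Manifest" },
    { "id": "https://example.org/manifest/volume2", "type": "Manifest" }
  ]
}
```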
The specification draft had become inconsistent. We decided that `type` is required for `thumbnail` (IIIF/api#1147) and indeed for all objects (IIIF/api#1185), but the descriptions of `thumbnail` and `logo` still say it is optional. There should be clarity that `type` is required.

We considered (on IIIF/api#1835) whether to simply omit mention of `type` because it is covered elsewhere, or whether to repeat that it is required. The proposal is to explicitly state that `type` MUST be present.
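With the clarified wording, a `thumbnail` value would always include `type`, as in this invented fragment:

```json
"thumbnail": [
  {
    "id": "https://example.org/images/page1/full/80,100/0/default.jpg",
    "type": "Image",
    "format": "image/jpeg"
  }
]
```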
This recipe shows where HTML tags are allowed in a IIIF Manifest and gives a working Manifest so readers can see how it looks in the Universal Viewer. It also details which HTML tags are allowed and links to the specification if readers want more details.
In version 2.x, feedback about attaching services to the object(s) that the service relates to was that this was sometimes painful to use when the service is shared by many resources, as only the first occurrence of a service was included in full -- other occurrences simply have the URI, and the client is expected to search through the entire document to find the previous reference. This is especially true for Authentication services, as the information often pertains to multiple content resources, and the service description is extensive.
This feedback was taken to the JSON-LD 1.1 Working Group in the W3C, and made it into the specification. The solution adopted is to allow a key at the root of the document tree called `@included`. It can be aliased to other names, in the same way as `@id` and `@type`.
The proposal was discussed by the editors and then approved on the technical community call of 2019-10-23.
Allow services to be recorded in full in a `services` property at the top level of a response (e.g. `Manifest` or `Collection`). This is an alias for `@included`, maintaining the correct semantics. The resource that has the service still requires a pointer to the service, through its URI and class (`id` and `type`), so that the client knows which resources have which services.
This addition improves developer happiness, as the client no longer needs to traverse potentially the entire document structure searching for the original reference to a service. It improves the performance from O(n) [the number of nodes in the document] to O(1) [the two known positions where the service can be referenced from].
Example in the documentation: https://preview.iiif.io/api/1873_services/api/presentation/3.0/#b-example-manifest-response
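The constant-time lookup this enables can be sketched as follows (an illustrative helper, not part of the specification; the service id and type are hypothetical):

```python
def resolve_service(service_ref, document):
    """Resolve a service reference {id, type} on a resource against the
    top-level `services` list of a Manifest or Collection.

    Falls back to the reference itself if no full description exists.
    """
    # Index the top-level services once: O(1) lookups thereafter.
    index = {s.get("id") or s.get("@id"): s
             for s in document.get("services", [])}
    key = service_ref.get("id") or service_ref.get("@id")
    return index.get(key, service_ref)
```

Because the full descriptions live in one known place, a client never has to walk the whole document tree looking for the first occurrence of a shared service.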
Issue
IIIF/api#1741
Pull Request
IIIF/api#1814
(line 236)
Preview
https://preview.iiif.io/api/1741_image_pct_n/api/image/3.0/#47-canonical-uri-syntax
Summary
The question is whether to maintain the alternative upscaling form for `pct:n`. Keeping the `^pct:n` form requires clients to be explicit that upscaling is intended, and the status codes for the `^pct:n` form allow a distinction between upscaling not being supported and other syntax errors.
Resolution
The decision is to keep both forms and to clarify the status codes to be returned for a non-upscaling request that requires upscaling (e.g., `pct:110`), and for a request for upscaling when upscaling is not supported (e.g., `^pct:110` sent to a server that does not support upscaling).
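The two failure modes being distinguished can be sketched as follows (illustrative only; the specific status codes are defined in the linked spec text, so symbolic results are used here instead):

```python
def classify_size_request(size_param, source_w, target_w, upscaling_supported):
    """Distinguish the two error cases for a width-based size request.

    `size_param` is a size value such as "pct:110" or "^pct:110";
    the result strings are placeholders for the spec's status codes.
    """
    upscale_requested = size_param.startswith("^")
    needs_upscaling = target_w > source_w
    if upscale_requested and not upscaling_supported:
        return "upscaling-not-supported"
    if needs_upscaling and not upscale_requested:
        return "upscaling-not-requested"
    return "ok"
```

The point of keeping both forms is visible here: without the `^` prefix, a server cannot tell an intentional upscale apart from a client error.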
This recipe introduces the `start` property with a video file. The video file used is a counting clock, and the intention is that when the video starts playing in a viewer it starts 120 seconds in. Unfortunately this isn't yet implemented in a viewer, but unlike other recipes we decided during the recipe development process to keep the link to the Universal Viewer in the recipe. The reason for this is that it's clear to a reader that the start function isn't working properly, but the video does play, and this shows a recipe reader the behaviour they can expect at this time. For other recipes that aren't supported by a viewer, the results in the viewer would be misleading or confusing.
Note that this recipe uses `start` with time-based media; a recipe is being worked on for an image-based manifest with the `start` property.
We welcome comments on the recipe: as well as voting +1, confused face, or -1, feel free to add comments to this issue. If this issue is approved, the author will take account of the comments before we merge the branch into the master cookbook branch.
If the recipe is rejected by the TRC then we will make the changes requested and resubmit it to a future TRC meeting. If you feel that your comments are substantial enough that the recipe should be looked at again by the TRC after the changes have been made please vote -1 (thumbs down).
Changes to the recipe will only be made after the TRC voting process has concluded.
Note also that this recipe includes a broken link to Load Manifest Beginning with a Specific Canvas, which is a recipe we hope to bring to a TRC meeting soon.
This recipe introduces the thumbnail property for a Manifest. Initially this recipe was going to also cover canvas thumbnails but during discussions it was decided that this should be a separate recipe as there are a few options for linking a thumbnail to a canvas. This recipe shows a manifest with an image with color bars and then a manifest thumbnail of the same image which has been cropped to remove the color bars.
We welcome comments on the recipe: as well as voting +1, confused face, or -1, feel free to add comments to this issue. If this issue is approved, the author will take account of the comments before we merge the branch into the master cookbook branch.
If the recipe is rejected by the TRC then we will make the changes requested and resubmit it to a future TRC meeting. If you feel that your comments are substantial enough that the recipe should be looked at again by the TRC after the changes have been made please vote -1 (thumbs down).
Changes to the recipe will only be made after the TRC voting process has concluded.
Note the recipe includes a link to a Thumbnail Selection Algorithm implementation note which is yet to be written. This Implementation note will focus on canvas thumbnails and discuss the most performant ways to publish thumbnails and also how clients should consume them.
This is the first recipe to come from the Maps community group, and it looks to set a precedent for using GeoJSON-LD within an annotation to link a word on an image to the geographic place France. Although linking IIIF resources to geographic places has always been possible in IIIF, it hasn't been part of the IIIF specifications, so a recipe is a good way to encourage interoperability between different providers.
We welcome comments on the recipe: as well as voting +1, confused face, or -1, feel free to add comments to this issue. If this issue is approved, the author will take account of the comments before we merge the branch into the master cookbook branch.
If the recipe is rejected by the TRC then we will make the changes requested and resubmit it to a future TRC meeting. If you feel that your comments are substantial enough that the recipe should be looked at again by the TRC after the changes have been made please vote -1 (thumbs down).
Changes to the recipe will only be made after the TRC voting process has concluded.
Note the recipe links to a number of related recipes that are yet to be written and these currently show in square brackets in the Related Recipe section.
This pull request includes 3 very closely related recipes:
Table of contents for A/V content
Table of Contents for Multiple A/V files on a Single Canvas
Table of Contents for Multiple A/V files on Multiple Canvases
These three recipes introduce the table of contents for A/V material and also discuss the different ways to represent A/V material, either as one canvas or as multiple canvases. As part of the single/multiple canvases discussion, they note the difference in viewing experience with each (gapless playback for a single canvas).
This recipe is the first to introduce more complex A/V examples and builds on the basic audio and video examples which are already in the master cookbook branch.
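The table-of-contents structure these recipes describe is built from Ranges. A minimal sketch (hypothetical ids; a single A/V Canvas referenced by time fragments) might look like:

```python
# A Range tree acting as a table of contents for one A/V Canvas.
# All URIs here are hypothetical; segments are addressed with #t= fragments.
toc = {
    "id": "https://example.org/range/toc",
    "type": "Range",
    "label": {"en": ["Contents"]},
    "items": [
        {
            "id": "https://example.org/range/1",
            "type": "Range",
            "label": {"en": ["Opening"]},
            "items": [{"type": "Canvas",
                       "id": "https://example.org/canvas/1#t=0,120"}],
        },
        {
            "id": "https://example.org/range/2",
            "type": "Range",
            "label": {"en": ["Main section"]},
            "items": [{"type": "Canvas",
                       "id": "https://example.org/canvas/1#t=120,900"}],
        },
    ],
}
```

In the multiple-canvas variants, the leaf entries would instead point at distinct Canvas ids, which is where the gapless-playback difference comes from.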
A +1 means the recipe is OK to go through to the master branch.
A -1 means it is not OK, and a comment in this issue should say what needs to be done.
This recipe shows how to paint an image that is a different size from the canvas onto the canvas. This might be useful if you have a lower-quality image that you know will be replaced by a higher-quality image, so you create the canvas with the larger dimensions. By doing this, any annotations that are linked to the canvas will still work even if the image is replaced.
This recipe introduces the notion of canvas dimensions being different from image pixels and is a foundation for other recipes that draw different-sized images at different locations within the manifest.
We welcome comments on the recipe: as well as voting +1, confused face, or -1, feel free to add comments to this issue. If this issue is approved, the author will take account of the comments before we merge the branch into the master cookbook branch.
If the recipe is rejected by the TRC then we will make the changes requested and resubmit it to a future TRC meeting. If you feel that your comments are substantial enough that the recipe should be looked at again by the TRC after the changes have been made please vote -1 (thumbs down).
Changes to the recipe will only be made after the TRC voting process has concluded.
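A minimal sketch of the pattern (hypothetical URIs and dimensions): the Canvas is created at the full intended size, while the currently available image is smaller.

```python
# Canvas declared at the final intended dimensions; the painting
# annotation's body is the smaller stand-in image. URIs are hypothetical.
canvas = {
    "id": "https://example.org/canvas/1",
    "type": "Canvas",
    "height": 3000,
    "width": 2000,
    "items": [{
        "type": "AnnotationPage",
        "items": [{
            "type": "Annotation",
            "motivation": "painting",
            "body": {
                "id": "https://example.org/img/low-res.jpg",
                "type": "Image",
                "format": "image/jpeg",
                "height": 1500,   # lower-quality image, half scale
                "width": 1000,
            },
            "target": "https://example.org/canvas/1",
        }],
    }],
}
# Other annotations target canvas coordinates, so they keep working
# when the low-res body is later swapped for a full-size image.
```

Because annotation targets are expressed in canvas coordinates rather than image pixels, swapping the body does not invalidate them.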
In 2.x we have the `viewingHint` property, which carries hints as to how the publisher would like the rendering client to process the information in the manifest.
In 3.x, the `behavior` (alpha/beta) property replaces `viewingHint`, as the hint might not be about "viewing" per se. With A/V, it could be entirely audio. It also lends more weight to the recommendation, as opposed to just a "hint" that could be ignored completely with no undesirable effects.
When implementers came to try to understand the specification, there were questions (as raised in #1612) about when behaviors specified on classes higher in the hierarchy should be interpreted as being in effect for the child classes. For example, if a Manifest is `paged`, does that imply that the Ranges are also `paged`? If a Collection is `unordered`, does that mean that all of its included Manifests are also `unordered`?
After much discussion on community technical calls and in smaller groups, the following inheritance rules were proposed as the simplest set that generated the most intuitive interactions: https://preview.iiif.io/api/image-prezi-rc2/api/presentation/3.0/#behavior
Without these rules, clients could entirely legitimately either inherit unintended behaviors (making all manifests unordered) or not inherit intended behaviors (Ranges not inheriting paged from the manifest). If these inheritance rules are not part of the specification, then two implementations can have entirely different interpretations and be able to claim IIIF compliance. A viewer that shuffles the order of the canvases (pages) in manifests linked from any unordered collection would be a correct implementation, and this is clearly not desirable. Please note that implementation notes or recipes are intended to provide assistance in modeling and understanding, not to provide requirements, and should be able to be ignored completely when implementing.
It was considered too onerous for publishers to be required to be explicit about every behavior on every resource, and thus inheritance rules are required to avoid this situation.
One of the core principles of the version 3 specification is to balance the focus between publishing and consuming implementations; hence the additional clarity around what clients are expected to do, and the precision about the expected values for different properties. This issue is one of the clarifications requested by client developers, and is important for ensuring consistency of the end-user experience between different implementations.
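One way a client might apply such rules can be sketched as follows (illustrative only; the actual inheritance rules are those in the linked draft, represented here by a supplied predicate rather than hard-coded):

```python
def effective_behaviors(resource, parent_behaviors, inherits):
    """Compute the behaviors in effect for a resource.

    `inherits(resource_type, behavior)` is a placeholder predicate
    encoding the spec's inheritance rules. Behaviors declared on the
    resource itself are used as given; otherwise, applicable parent
    behaviors are inherited.
    """
    own = resource.get("behavior", [])
    if own:
        return set(own)
    return {b for b in parent_behaviors
            if inherits(resource["type"], b)}
```

Centralising the rules in one predicate is also how two implementations avoid the divergent-but-compliant interpretations described above.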
This is a test ticket to test how the process would work.
For example we might be evaluating the following ticket:
Should we alias @none to none (or similar)? IIIF/api#1739
In 2.x, we use the verbose pattern in JSON-LD to associate language with a string:
{"@value": "title in english", "@language": "en"}
As described here.
This is confusing, as we have a `value` property and annotation bodies can have a `language` property. It is frustrating for developers, as they have to use the `description['@value']` pattern rather than the easier `description.value` pattern in their code.
In 3.x, to resolve this issue, we have agreed to use the language map feature of JSON-LD, which allows JSON objects where the language is the key and the string is the value, instead of the `@value`/`@language` combinations. This turns the above example into:
{"en": ["title in english"]}
However, a lot of data either does not have a language, or the language is unknown. JSON-LD 1.1 introduces a new keyword, `@none`, for this scenario.
This means that there will be language maps that look like:
{
"en": ["title in english"],
"@none": ["title without a known language"]
}
And developers would still have to use the `['@none']` pattern. As a lot of data does not have explicit languages, the problem is solved in theory, but in practice we're not much better off.
There is an easy solution, thankfully. We can define `none` as an alias for `@none`, and use that instead. `none` will never be a real language, as the codes are all 2 or 3 letters. With this addition, we resolve the problem. The above example would become:
{
"en": ["title in english"],
"none": ["title without a known language"]
}
and `alert(label.none[0])` would work as expected.
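With the alias in place, a display-string helper stays simple. A sketch (illustrative only, not a spec-defined selection algorithm):

```python
def first_label(language_map, preferred="en"):
    """Pick a display string from a IIIF language map, preferring the
    requested language, then `none`, then any available language."""
    for key in (preferred, "none"):
        if key in language_map:
            return language_map[key][0]
    # No preferred or no-language entry: fall back to any language.
    return next(iter(language_map.values()))[0]
```

The same function written against `@none` would force the awkward bracket syntax throughout client codebases, which is exactly the friction the alias removes.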
`none` doesn't collide with other extensions: http://tinyurl.com/yd7cnfe7
IIIF/api#1785 and PR IIIF/api#1799
https://preview.iiif.io/api/issue-1785/api/image/3.0/compliance/#32-size (required for `w,h` in the level 1 column) vs. the existing https://iiif.io/api/image/3.0/compliance/#32-size
In version 2 of the Image API, the `w,` and `,h` forms of the `size` parameter were canonical. In version 3 we have changed to make the form `w,h` canonical (see IIIF/api#1434 and linked issues). In level 0 implementations, servers don't need general support for `w,h` requests, only for values explicitly specified in the `sizes` list. In level 2 implementations, support for arbitrary `w,h` requests has been required in past versions and continues to be required.
The 0.1 alpha draft of Image 3.0 has `w,h` marked optional in the compliance document, which is inconsistent with the canonical syntax.
Per discussion in IIIF/api#1785, the editors propose making support for `w,h` requests mandatory at level 1. This resolves the inconsistency with the canonical syntax. It implies level 1 support for more deforming transformations than just the minimally deforming `w,` and `,h` requests. However, we are not aware of cases where this would make support for level 1 difficult.
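Under the proposal, a level 1 server must honour all three explicit-pixel size forms. A sketch of the check (illustrative; `max` and the `sizes`-list and `pct:n` cases at other levels are deliberately out of scope):

```python
import re

def level1_size_required(size):
    """True if a level 1 server must support this size value under the
    proposed rule: the `w,`, `,h`, and `w,h` forms are all mandatory."""
    # "200,"  -> width only; "200,150" -> width and height; ",150" -> height only
    return re.fullmatch(r"(\d+,\d*|,\d+)", size) is not None
```

The change only adds `w,h` to what level 1 already required, so the regex is the old rule plus one extra accepted shape.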
Original Issue
IIIF/api#1824
Pull Request
IIIF/api#1814
Preview
https://preview.iiif.io/api/1741_image_pct_n/api/image/3.0/#47-canonical-uri-syntax
Summary
The rules for producing a canonical URI do not address the new upscaling forms (`^max`, etc.).
Proposed Solution
Add language to the size section of the Canonical Value table to address upscaling requests.
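One plausible reading of such language, consistent with `w,h` being the canonical size form (this rule is an assumption for illustration, not the proposed spec text):

```python
def canonical_size(source_w, source_h, target_w, target_h):
    """Canonicalize a size as explicit "w,h", retaining the `^` prefix
    when the result exceeds the source dimensions (assumed rule)."""
    upscales = target_w > source_w or target_h > source_h
    prefix = "^" if upscales else ""
    return f"{prefix}{target_w},{target_h}"
```

Keeping the prefix in the canonical form preserves the upscaling intent through caches and redirects that key on the canonical URI.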
The Editors propose the final release of the Image API 3.0 and Presentation API 3.0 specifications. We believe that all issues relevant to the final release have been addressed. The issues closed in the most recent milestone are available at https://github.com/IIIF/api/milestone/23?closed=1
The Editors have requested assistance from the community to track feature implementations as required by our release process (see IIIF/api#1878). We believe that the existing implementations are sufficient to justify a final release of the 3.0 specifications, and that the move to final release status will encourage further implementation work.
The API change logs have been brought up to date and are located at:
This recipe introduces `viewingDirection` and gives examples of `right-to-left` and `top-to-bottom`. The example content has been donated by UCLA and fits with the examples. This recipe also links to viewers that show this viewing hint in action.
A +1 means the recipe is OK to go through to the master branch.
A -1 means it is not OK, and a comment in this issue should say what needs to be done.
This is the first time the Change Discovery specification has gone through the TRC. It is hoped we are nearing a stable version of the specification, and there has already been some implementation experience with early versions of this API.
The Change Discovery API allows machine-to-machine communication to enable harvesting of IIIF content. It is based on the W3C Activity Streams standard.
The Discovery TSG would like the TRC to support the version of the specification linked above. Once this is approved, we will encourage implementation and start moving the specification to a full 1.0 release.
This recipe aims to show a basic Newspaper example using the IIIF Version 3.0 Presentation and Image APIs. It consists of 1 Newspaper title and 2 issues with 2 pages per issue. It contains links to annotations and ALTO for page OCR text. It discusses using `navDate`, the Newspaper hierarchy, and how to link to OCR data in IIIF.
It doesn't cover all of the possibilities with Newspapers; these will be tackled in future recipes. The advanced topics include Newspapers with articles, Text Granularity, and linking to other forms of OCR. This work has been started in IIIF/cookbook-recipes#102 but is not ready for review yet.
A +1 means the recipe is OK to go through to the master branch.
A -1 means that is not OK, and a comment in this issue should say what needs to be done.
We note this is an integration recipe, and it may be OK for it to have snippets, which is different from simple recipes.
Preview - Presentation https://preview.iiif.io/api/image_prezi_rc3/api/presentation/3.0/
Preview - Image https://preview.iiif.io/api/image_prezi_rc3/api/image/3.0/
Presentation API Change Log - https://preview.iiif.io/api/image_prezi_rc3/api/presentation/3.0/change-log/
Image API Change Log (may require further updates) - https://preview.iiif.io/api/image_prezi_rc3/api/image/3.0/change-log/
The Editors propose to publish (i.e., merge to `master`) the `image_prezi_rc3` branch, which contains changes made since the June 2019 Beta release. We believe that all normative changes made since the Beta have already been reviewed and approved by the TRC in accordance with our policy.
The Editors would suggest that this version be labelled a 'release candidate' (or similar). We believe that it is desirable to indicate to the developer community that the specification is stable and unlikely to change significantly prior to final publication.
`image_prezi_rc3` branch.