
3d's People

Contributors

azaroth42, edsilv, juliewinchester, kirschbombe, luguenth, mikeapp, tomcrane, vincentmarchetti


3d's Issues

Create a collection of open-access 3D resources for demo usage

For creating interoperable demos of IIIF 3D progress, we need a collection of open-access (CC0 or other permissively licensed) 3D models. Participants should provide links to possible models for this purpose with a statement of specific licensing (with sourcing, if possible) applied to the models. Once we gather a good collection, we can rehost copies of these models in a central repository for use in demos.

3+ institutions publishing 3D models and willing to participate in 3D TSG to implement experiments

Taken from @azaroth42's IIIF/api#1992, with some updates

Requirement: 3 or more institutions openly publishing 3d models that are willing to participate in a TSG and implement experiments

Rationale: If there's no content, there's no need for a specification. If there are fewer than three institutions actually willing to engage, then there's no community to make that specification or content.

Status: I think we're well past this :)

Update: The current 3D TSG draft charter has a list of 13 institutions supporting the formation of the charter, with specific contacts listed for almost all. We should consult with the pre-TSG working group to determine which of these institutions are currently publishing 3D content and would be willing to implement 3D IIIF experiments.

Create FAQ / Scope for project

We need a place to reference when recurring questions are raised, such as whether we're creating a metadata standard, or using a particular 3D model format.

Annotate 3D boxes based on polygonal selections in 3D models semantically

As a Computational Assyriologist

I want to annotate 3D boxes based on polygonal selections in 3D models (3D polygons) and semantically annotate their contents

So that I can record and indicate interesting paleographic sign variants for paleography research and machine learning tasks

Showcase: https://fcgl.gitlab.io/annotator-showcase/
Data Paper to represent 3D annotation in the W3C Web Annotation Data Model: https://openarchaeologydata.metajnl.com/articles/10.5334/joad.92
Background on the user story: https://cdli.mpiwg-berlin.mpg.de/articles/cdlj/2022-1

If you are interested, I can shortly present the use case in one of your working group meetings.
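
To make the use case concrete, a label annotation on a polygonal 3D selection might look roughly like the sketch below. The selector type, its vertex encoding, and all URLs are illustrative assumptions, not the actual schema from the linked data paper:

```javascript
// Sketch of a W3C Web Annotation targeting a 3D region.
// The "3DPolygonSelector" type and its vertex format are
// hypothetical, invented here for illustration only.
const annotation = {
  "@context": "http://www.w3.org/ns/anno.jsonld",
  id: "https://example.org/anno/1", // hypothetical identifier
  type: "Annotation",
  motivation: "describing",
  body: {
    type: "TextualBody",
    value: "Paleographic sign variant of interest",
  },
  target: {
    source: "https://example.org/models/tablet.glb", // hypothetical model URL
    selector: {
      // Hypothetical selector: a closed polygon on the mesh surface,
      // given as [x, y, z] vertex triples in model coordinates.
      type: "3DPolygonSelector",
      vertices: [
        [0.12, 0.40, 0.05],
        [0.15, 0.42, 0.05],
        [0.14, 0.38, 0.06],
      ],
    },
  },
};

console.log(annotation.target.selector.vertices.length); // 3
```

The Web Annotation Data Model's standard selectors (e.g. SvgSelector, FragmentSelector) are 2D-oriented, which is exactly why a 3D selector vocabulary like the one this use case needs is an open question for the group.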

Create glossary of terms

We need a place to reference when using terminology that is not familiar for people joining the 3D community.

Possible differences in how models are placed in 3D scenes between 3D viewers used in demos

In multiple IIIF 3D TSG meetings, including today's meeting on July 12, 2023, discussion has raised possible differences in how models are placed within a 3D scene's coordinate system between Smithsonian's Voyager and at least some other Three.js-based viewer tools or demos. We should explore this issue further, document the details, and try to resolve it if possible.

Specifically, the issue seems to relate to this viewer demo using Voyager to display multiple models. My understanding is that @edsilv has tried to use the coordinate positions used in this demo within other Three.js-based demo harnesses, with the baseball bat model ending up in a different position compared to the Voyager demo. See the picture included for an example of this.

[Screenshot from July 12, 2023 showing the difference in model placement]

The most explicit documented discussion of this difference was provided by @gjcope in this issue comment. I think we need more detailed documentation of the issue, and hopefully we can then determine whether it can be resolved. I'll also note that, as I understand it, our approach for the IIIF spec manifests is to have "child node" position determined relative to parent node position in some cases, so perhaps the Voyager behavior is the ideal behavior. But I think I need to understand the situation better before commenting further.
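
The two placement interpretations at issue can be sketched in a few lines of plain JavaScript (rotation and scale ignored for simplicity; the function names and coordinate values are illustrative, not from Voyager's or Three.js's actual APIs):

```javascript
// Two ways a viewer might interpret a model's position value.
const add = (a, b) => a.map((v, i) => v + b[i]);

// Interpretation A: position is relative to the parent node, so the
// world position is parent + child (the Voyager-like behavior
// described above).
function worldPositionRelative(parentPos, childPos) {
  return add(parentPos, childPos);
}

// Interpretation B: position is absolute in scene coordinates, so the
// parent node's position is ignored.
function worldPositionAbsolute(_parentPos, childPos) {
  return childPos;
}

const parentNode = [0, 1, 0]; // hypothetical parent node position
const bat = [2, 0, 3];        // hypothetical model's local position

console.log(worldPositionRelative(parentNode, bat)); // [ 2, 1, 3 ]
console.log(worldPositionAbsolute(parentNode, bat)); // [ 2, 0, 3 ]
```

Two viewers applying different interpretations to the same manifest data would render the same model at different world positions, which is consistent with the "bat in a different position" symptom reported here.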

@edsilv : Could you provide a link to a comparative demo implementing the "bat on ground" behavior, and detail the data you're using from the Voyager demo to produce that?
@gjcope : After Ed provides a bit more detail, are you available to help us work through this? We can also talk more during the next IIIF 3D TSG meeting, but hopefully we can make some progress on GitHub in the meantime.

Thanks to both of you!

Load objects with very large textures

As a Curator at a charity with a large tapestry as one of our main visitor attractions

I want to be able to capture the intricate detail of the tapestry surface and display it in a creative 3D online presentation

So that the public can view/study the tapestry to build engagement/attract visitors.

Analyze and describe the requirements around intersections with Authentication and Search APIs for 3D content

Proposed Requirement: A document analyzing and describing the requirements around the intersections with Authentication and Search. Especially pertinent, given the current TSGs for those topics.

Rationale: All of the IIIF APIs should work together when needed. If there are authentication or search requirements that are specific to 3d, now is the time to work on them, not after the current TSGs have concluded.

Status: Unknown

Create 3D viewer demos that support IIIF-ish draft manifests

In #11 (comment), we have two example draft IIIF Presentation API V4 spec manifests that describe 3D resources. In the same way that we have created 3D viewer demos that load label annotations from a common simple JSON manifest, we should create 3D viewer demos that support and use these IIIF-ish 3D manifests to load content.
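
To give a sense of the shape such a manifest might take, here is a rough sketch as a JavaScript object. The context URL, identifiers, and exact nesting are assumptions for illustration only, not the actual draft manifests from #11:

```javascript
// Sketch of an "IIIF-ish" Presentation-style manifest describing a 3D
// scene. All URLs are placeholders; "Scene" as a Canvas-like container
// and "Model" as a painting-annotation body follow the general draft
// direction but are not the exact draft structure.
const manifest = {
  "@context": "https://example.org/iiif/presentation/4/context.json", // placeholder
  id: "https://example.org/manifest/astronaut.json",
  type: "Manifest",
  items: [
    {
      id: "https://example.org/scene/1",
      type: "Scene", // 3D analogue of a Canvas
      items: [
        {
          id: "https://example.org/scene/1/page/1",
          type: "AnnotationPage",
          items: [
            {
              id: "https://example.org/scene/1/anno/1",
              type: "Annotation",
              motivation: "painting",
              body: {
                id: "https://example.org/models/astronaut.glb",
                type: "Model",
                format: "model/gltf-binary",
              },
              target: "https://example.org/scene/1",
            },
          ],
        },
      ],
    },
  ],
};

console.log(manifest.items[0].type); // Scene
```

A viewer demo supporting this would fetch the manifest, walk Scene → AnnotationPage → painting Annotations, and load each Model body into the 3D space.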

Demo harness: 3+ viewers using a common JSON annotation format

This is a task to create a milestone technical set of experiments that relates to user story #14, possibly others.

Acceptance criteria:

  • Establish a common JSON format that specifies two simple label annotations on a 3D model
  • Create a set of 3+ viewer demos with each viewer loading the same 3D model (astronaut GLB) and the 2 label annotations
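
One possible shape for that common file, sketched as a JavaScript object (the field names and coordinate values here are illustrative assumptions, not the format the demos actually settled on — see VIEWER_JSON_DEMOS.md for that):

```javascript
// Hypothetical shared annotation format: one model URL plus a list of
// label annotations, each with a 3D anchor point in model coordinates.
const labelAnnotations = {
  model: "https://example.org/models/astronaut.glb", // placeholder URL
  annotations: [
    { label: "visor", position: { x: 0.0, y: 1.6, z: 0.2 } },
    { label: "glove", position: { x: -0.4, y: 1.0, z: 0.1 } },
  ],
};

// Each viewer demo would load this JSON and, in its own API, attach a
// label at each position.
console.log(labelAnnotations.annotations.map((a) => a.label)); // [ 'visor', 'glove' ]
```

Keeping the format viewer-agnostic (plain labels and coordinates, no viewer-specific options) is what makes the same file loadable by Aleph, Voyager, X3D, and the rest.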

Current progress:

  • Currently, we have 5 working demos (Aleph, Google Model Viewer, Sketchfab, Smithsonian Voyager, and X3D)
  • See VIEWER_JSON_DEMOS.md in the repo for links and further details

Progress needed:

  • X3D demo should load second annotation point and should load from external JSON file
  • Sketchfab demo should be based in code sandbox and should load standard annotations from external JSON file
  • All demos should be made more comparable for future work

After these points are finished, this task can potentially be closed and further work described as a new task (possibly in a GitHub project).

User stories and/or base use cases

Taken from @azaroth42's IIIF/api#1992, with some updates.

Requirement: Documentation enumerating base use cases and relevant user stories for use of 3D in IIIF.

Rationale: IIIF process is based around use cases. We need to know what we're trying to do, before trying to do it :)

Status: Some required use cases are listed at https://github.com/IIIF/iiif-3d-stories/issues. Additionally, the 3D Community Group has previously used a Google Form to collect user stories from the community. A summary of this process and its results was presented as part of the 3D Community Group update at the IIIF Fall Working Meeting 2020, and a log of the responses has been uploaded.

Next Steps:

  • Evaluate the use cases currently in the iiif-3d-stories GitHub to verify all should remain there, and then rephrase remaining use cases as user stories
  • User stories from the Google Form log should be reviewed, and appropriate stories should be contributed to the iiif-3d-stories GitHub
  • At some point the group should determine via community consensus when enough stories have been gathered to draw an "initial scope" for the 3D TSG work

Technology demo of 2 3D models in different formats being rendered in the same space

Taken from @azaroth42's IIIF/api#1992, with some updates

Proposed requirement: Technology demo of two 3d models in different formats being rendered in the same space is available

Rationale: In the same way that we can render jpgs and pngs together, we should be able to render multiple models. Otherwise there's no need for the IIIF level of interoperability - the individual format's viewer is all that's needed, and we cover that already with rendering.

Status: I believe that this is possible today using (for example) the threejs loader paradigm and libraries

Update: While this was definitely always possible using something like three.js and manual model loading, it's worth noting a few different technical proof of concepts created recently that demonstrate multiple assets in different formats being rendered in the same virtual space using IIIF in some capacity.

  • The Infinite Canvas by @edsilv: Combines a 3D asset, an A/V clip, and multiple still 2D images in a single navigable 3D space, all using IIIF manifests for each asset.
  • Mozilla Hubs gallery demonstrating three 3D assets from different sources by @JulieWinchester, @edsilv, and @RonaldHaynes: This was created for the IIIF June 2021 Annual Conference to demonstrate that it is possible to combine 3D assets from different data resources (in this case MorphoSource, Royal Pavilion & Museums Trust, and the British Library). All of these assets are hosted via IIIF, and IIIF was used to locate the assets, but the actual models were downloaded and uploaded to Hubs; the IIIF manifests are not used directly. Currently, two of the asset links seem to be broken. But the space is navigable on flat computer monitors, in mobile AR, or in a VR headset, and multiple users can interact with each other and with the objects in the space.

Next Steps:

  • Fix broken assets in Hubs space.
  • Add an additional 3D asset or two to Infinite Canvas, to go above and beyond in meeting the requirements here.

Analyze and describe requirements around annotation in a 3D environment

Taken from @azaroth42's IIIF/api#1992

Requirement: A document analyzing and describing the requirements around annotation precision in a 3d environment. For example, is a cube enough, or do the use cases require arbitrary volumes?

Rationale: Annotation is a core functionality of IIIF, and there aren't 3D annotation targeting specifications we can simply reference, so knowing the requirements around annotation explicitly is very useful for setting out in the right direction. This is needed for both comment annotations (look at this arm of the statue) and positioning models in the space (this statue goes here, that statue goes there).
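
The "cube versus arbitrary volume" question can be made concrete with two hypothetical target shapes; neither selector below is a specified IIIF construct, and the names and coordinates are invented for illustration:

```javascript
// A cube (axis-aligned box): cheap to author, serialize, and hit-test.
const boxTarget = {
  type: "BoxSelector", // hypothetical
  min: [0.0, 1.2, -0.1],
  max: [0.3, 1.6, 0.2],
};

// An arbitrary volume: a closed mesh (here, a tetrahedron). Far more
// precise, but harder to author, validate, and test containment for.
const volumeTarget = {
  type: "MeshSelector", // hypothetical
  vertices: [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]],
  faces: [[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]],
};

// Containment for a box is a three-comparison check per point...
function insideBox(box, p) {
  return p.every((v, i) => v >= box.min[i] && v <= box.max[i]);
}
// ...whereas containment for an arbitrary mesh volume requires real
// geometry (e.g. ray casting against the faces), omitted here.

console.log(insideBox(boxTarget, [0.1, 1.4, 0.0])); // true
console.log(insideBox(boxTarget, [0.5, 1.4, 0.0])); // false
```

The implementation gap between those two containment checks is essentially the cost the use cases would have to justify before the spec requires arbitrary volumes.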

Status: Unknown

Create a first-pass draft of IIIF Presentation API specifications supporting 3D

This work will involve taking what has been learned so far in TSG meeting conversations and experimental prototyping and using it to put together an early, initial "first-pass" draft of an IIIF Presentation API specification that would support 3D in at least some of the ways described in the core user story issues in this issue tracker.

Lighting user story placeholder

Based on the work from the Basel working meeting on October 26, 2023 (see IIIF/api#2258), we probably want to create a user story for the use of lighting in the Presentation API 3D spec. This issue is a placeholder for a potential user story, and should be discussed during a future 3D TSG meeting.

Aleph viewer demo with astronaut GLB and label annotations

This is needed to complete #17. The acceptance criteria are the same as the criteria from #17 for initial demos; they are copied below.

Acceptance criteria for initial demos:

  • Example demo using a viewer, perhaps in code sandbox, but could use other environments
  • Demo should load the astronaut GLB
  • Demo should create 2 label annotations:
    • Label 'visor' on the astronaut's visor (should add specific coordinates on face mask)
    • Label 'glove' on the astronaut's left glove (should add specific coordinates on left glove)
  • Important: the annotations should be loaded from a JSON-format object, whether provided as its own file (and applied on initial load) or as input from a text field box (and applied dynamically in response to button input)

incorrect placement of backgroundColor in draft manifest

In the draft manifest showing an example of a backgroundColor being defined on the Scene (manifests/1_basic_model_in_scene/model_origin_bgcolor.json), it looks like the backgroundColor value is actually being defined as a property of the Manifest, not the Scene.

If and when this is corrected, I suggest breaking the equality between the red and blue values, for example:

"backgroundColor": "#FF00FE",

This will enable a unit test to detect if an implementation is rearranging the color values.
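
The suggested unit test could be sketched like this; the hex parsing is standard, but the "swapping implementation" is a deliberately faulty stand-in invented for illustration:

```javascript
// Parse a "#RRGGBB" string into numeric channels.
function parseHex(color) {
  return {
    r: parseInt(color.slice(1, 3), 16),
    g: parseInt(color.slice(3, 5), 16),
    b: parseInt(color.slice(5, 7), 16),
  };
}

const expected = parseHex("#FF00FE"); // distinct red (255) and blue (254)

// A hypothetical implementation that swaps the red and blue channels:
const rendered = parseHex("#FE00FF");

// With distinct channel values the swap is caught...
console.log(expected.r === rendered.r); // false
// ...whereas a symmetric value like "#FF00FF" would render identically
// under the swap, hiding the bug.
```

This is exactly why "#FF00FE" is a better test fixture than "#FF00FF": the asymmetry makes channel-order mistakes observable.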

Analyze and describe requirements around intersection of 3D content and non-3D content in the same rendering environment

Taken from @azaroth42's IIIF/api#1992

Proposed Requirement: A document analyzing and describing the requirements around the intersection of 3d and other formats in the same canvas / space. For example, what should happen when you include audio, video or image content along with a 3d model in a 3d space.

Rationale: The existing functionality of the Presentation API should be available, in the same way that adding AV material in v3 can be done in the same environment as the content available in v2.

Status: Unknown
