This repository contains the use cases and proposed API for Bullet Chatting.
We also have several demos:
Other documents are in the bullet-chatting-docs repo.
Bullet Chatting Proposal
Home Page: https://w3c.github.io/danmaku/
License: Other
As I mentioned several times, e.g., during the MEIG F2F meeting at TPAC, I've been wondering about a possible future use case for bullet chatting, such as putting bullet chats on 360° videos and specifying the position/geolocation/orientation of the bullet chats on the 360° screen using WebVMT, etc.
The Independence of space characteristic describes a system of layers that does not seem to appear anywhere else, but that actually seems important for understanding how things get rendered in practice.
Are the following assertions all true?
In particular, is 4 true? I'm asking because the proposed API includes an allowOverlap attribute. Or should 4 rather read "does not overlap unless there are too many comments to render and overlap is allowed"?
If 1-3 are true, I believe it would be useful to introduce the term "layer" more formally in the spec.
Issue by LongTengDao
Jul 18, 2019, 8:33 PM GMT+8
Originally opened as w3c-proposal-incubation/w3c-proposal-incubation.github.io#4
Is this meant to be implemented natively, to improve performance?
Or is it just a public document describing some specific implementation (perhaps a best practice)?
https://github.com/w3c/danmaku/blob/master/TF/Bullet_Chatting_TF_Draft.md
Currently the scope is "TBD". We should make it clear.
The values scroll and reverse should respect the language's script direction. In most LTR languages, comments should scroll as described in the current specification, but the direction should be reversed for RTL languages. The CSS direction property should be used in this context.
Discuss: What should the renderer do if comments use languages with different script directions?
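To make the question concrete, here is a minimal sketch (function and value names are illustrative assumptions, not part of the proposal) of how a renderer could resolve the physical scroll direction from a comment's mode and the container's CSS direction value:

```javascript
// Hypothetical helper: map (mode, CSS direction) to a physical scroll axis.
// Assumes the common convention that `scroll` moves right-to-left in LTR text.
function resolveScrollDirection(mode, cssDirection) {
  const base = cssDirection === "rtl" ? "left-to-right" : "right-to-left";
  const flipped = cssDirection === "rtl" ? "right-to-left" : "left-to-right";
  switch (mode) {
    case "scroll":
      return base;
    case "reverse":
      return flipped;
    default:
      return "static"; // top/bottom modes do not scroll
  }
}
```

Under this sketch, mixed-direction comments in one container would each follow their own script direction only if the renderer consults the per-comment direction rather than the container's.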
From https://lists.w3.org/Archives/Public/public-web-and-tv/2020Jan/0007.html :
In a typical bullet chatting implementation for live streams, are comments distributed to clients in real time, e.g., over a WebSocket connection, and displayed immediately when received, without being synchronized to the media timeline?
As each client may be at a different playback position on the media timeline, is any timing information used to ensure the comments are relevant to each viewer? For example, what happens if I'm watching the live stream but 5 minutes behind the most recent playback position?
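One way to handle the lagging-viewer scenario, sketched with assumed field names (a mediaTime stamp attached by the origin server; nothing here is defined by the proposal), would be to buffer pushed comments and release only those close to the viewer's own playback position:

```javascript
// Hypothetical relevance check for a live-stream comment received over
// WebSocket. `viewerTime` is the viewer's media-timeline position in
// seconds; `comment.mediaTime` is the position at which it was sent.
function shouldDisplayLiveComment(comment, viewerTime, toleranceSeconds = 10) {
  // A viewer 5 minutes behind the live edge would buffer the comment and
  // release it when their own playback reaches comment.mediaTime.
  return Math.abs(comment.mediaTime - viewerTime) <= toleranceSeconds;
}
```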
As the draft describes, there are 4 modes of bulletchat elements. The example only shows how to handle scroll-mode bulletchat elements when allowOverlap is set. May a bulletchat[mode="top"] overlap a bulletchat[mode="scroll"] when allowOverlap is set to true?
It is common for bullet comments to be used on a <video> element, and the user may seek the video to a given timestamp. This requires an API to seek the bullet comments too. Is there any API for this action?
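As a sketch of the missing behaviour (no such API exists in the draft; the field names appearanceTime and duration are assumptions based on the Basic properties section), seeking could be modelled as re-selecting the comments whose display window covers the new position:

```javascript
// Hypothetical seek support: given all comments and a seek target (seconds),
// return the comments that should be on screen at that instant, i.e. those
// whose [appearanceTime, appearanceTime + duration) window contains it.
function commentsVisibleAt(allComments, seekTime) {
  return allComments.filter(c =>
    c.appearanceTime <= seekTime && seekTime < c.appearanceTime + c.duration
  );
}
```

A user agent would then also need to restore each returned comment's partial scroll progress, which this sketch does not cover.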
From https://lists.w3.org/Archives/Public/public-web-and-tv/2020Jan/0007.html :
Accessibility is always an important consideration for any W3C specification work. Have you considered how to make the Bullet Chat comments accessible? This would be an issue with canvas based rendering, for example.
In Uniformity of modes, what is the "survival time" of a mode?
Do you mean to say the duration of a bullet chatting comment?
Or is this talking about the lifetime of the set of layers that have the same mode? If so, is it different from the sum of durations of all bullet chatting comments?
The current specification describes that "the position and order of each Bullet Chatting is fixed each time it is rendered". When allowOverlap is set to false, the position of each bullet chatting comment depends on the previous one's, so the user agent has to calculate the positions of all comments from the very beginning of the list to place any given comment, which may cause performance issues. And if the client tries to insert a comment before the current timestamp, should the user agent repaint all comments currently shown to reflect this change? As far as I know, no website currently renders comments like this. So why is this characteristic listed?
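The sequential dependency described here can be illustrated with a simple lane-allocation sketch (an assumption about typical implementations, not the spec's algorithm): each comment takes the first free lane at its appearance time, so the lane of comment i depends on the lanes of all earlier comments, and a retroactive insertion invalidates every later assignment.

```javascript
// Hypothetical non-overlapping placement. `comments` must be sorted by
// appearanceTime. laneFreeAt[k] holds the media time at which lane k is
// free again; a comment with no free lane is dropped (lane: null).
function assignLanes(comments, laneCount) {
  const laneFreeAt = new Array(laneCount).fill(-Infinity);
  return comments.map(c => {
    const lane = laneFreeAt.findIndex(t => t <= c.appearanceTime);
    if (lane === -1) return { ...c, lane: null }; // dropped: no free lane
    laneFreeAt[lane] = c.appearanceTime + c.duration;
    return { ...c, lane };
  });
}
```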
It feels like renaming these to "scrolling bullet chatting" and "reverse scrolling bullet chatting" would be a bit clearer. (In fact, some places in the body text already use the "reverse scrolling bullet chatting" wording; I suggest unifying the terms.)
Correspondingly, I suggest renaming the mode attribute value reverse
to scroll-reverse
.
I'm still wondering how a possible "pause" mechanism affects rendering of a bullet chatting experience.
According to the spec, one of the characteristics of a bullet chatting experience is "Deterministic of rendering", but if an application supports a "pause" feature for an individual bullet chatting comment, wouldn't the rendering determinism be affected when a user pauses a comment?
In the spec, it may just be a matter of completing the definition, for instance:
<li><dfn>Rendering determinism</dfn>: Provided that the bullet chatting
<a>container</a> and the <a>bullet chatting comment</a> are fixed, and
in the absence of user interaction (e.g. to pause a <a>bullet chatting
comment</a>), then the rendering position, order and timing of <a>bullet
chatting comment</a> are always the same.</li>
(I also suggest renaming the characteristic to "Rendering determinism", which sounds more English to me)
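A minimal model of the concern (an illustrative assumption, not anything defined by the spec) shows why a per-comment pause breaks determinism: the comment's progress becomes a function of when the user paused, not just of the container and the comment data.

```javascript
// Hypothetical per-comment pause model. `progress()` drives positioning,
// so two renders with different pause timings yield different positions.
class PausableComment {
  constructor(duration) {
    this.duration = duration; // seconds
    this.elapsed = 0;         // seconds of animation already played
    this.paused = false;
  }
  tick(dt) {
    if (!this.paused) this.elapsed = Math.min(this.elapsed + dt, this.duration);
  }
  pause() { this.paused = true; }
  resume() { this.paused = false; }
  progress() { return this.elapsed / this.duration; } // 0..1
}
```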
play-state, duration, and delay are three styles listed in the current draft. Why are they styles instead of attributes? play-state looks like HTMLMediaElement.paused to me, which would be an attribute. It is normal that the delay of each comment is different, and using a style for it is confusing.
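To sketch the attribute-based alternative being suggested (hypothetical; the draft specifies these as styles), per-comment state could live on the element/model itself, with a paused property analogous to HTMLMediaElement.paused:

```javascript
// Hypothetical model: delay as a per-comment attribute rather than a shared
// style (since each comment's delay typically differs), and play state
// exposed as a property instead of a play-state style.
class BulletChatModel {
  constructor(text, delay = 0) {
    this.text = text;
    this.delay = delay;   // seconds before this comment appears
    this.paused = false;  // analogous to HTMLMediaElement.paused
  }
  pause() { this.paused = true; }
  play()  { this.paused = false; }
}
```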
It would be helpful for people who are unfamiliar with Danmaku to make an informative reference to a page which explains this; preferably in English.
I found this page helpful: http://danmaku.weebly.com/ but feel free to link to a better one.
I fail to parse the definition of the timeline term. Currently, it is "The on-demand barcorder is a real-time insertion or custom timeline for video playback time, live broadcasts, and other scenarios".
For instance, "barcorder" does not mean anything to me, and if it's a typo for "barcoder", I don't understand what a "barcoder" has to do with bullet chatting.
Do you mean to say that there are two types of bullet chatting comments, those that get rendered as soon as possible without any reference to the media timeline, and those that start at a specific point on the media timeline?
Also, what is the relationship/difference between this and the concepts of "appearance time" and "duration" in Basic properties?
(From https://lists.w3.org/Archives/Public/public-web-and-tv/2020Jan/0000.html)
Do we have use cases that do not involve a web application running in a user agent? If so, we can probably mention them in the use cases document.