
community-protocols's Issues

Context API

The Lit team has been prototyping a Context-like API based on events.

The basic idea is that components that need some contextual data will fire an event to request the data. The event will carry a callback used to pass the data to the requesting component. Components that can provide the data will respond to the event and call the callback with the data. The callback can then trigger arbitrary work in the requesting component, including a re-render.

The details needed for interop are things like the event name, the callback property name and signature, and any keys that are used to identify the context and/or data object.
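A minimal sketch of that shape, purely for illustration (the event name, property names, and callback signature here are placeholders, not the final protocol):

// Illustrative only: these names are not the agreed protocol.
class ContextRequestEvent<T> extends Event {
  constructor(
    public readonly context: unknown,              // key identifying the requested context
    public readonly callback: (value: T) => void,  // a provider passes the data back through this
  ) {
    super('context-request', { bubbles: true, composed: true });
  }
}

class ThemeConsumer extends HTMLElement {
  theme?: string;

  connectedCallback() {
    // Fire an event to request the data; a provider higher in the tree
    // responds by calling the callback, which can trigger a re-render.
    this.dispatchEvent(
      new ContextRequestEvent<string>('theme', (value) => {
        this.theme = value;
      })
    );
  }
}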

[defer-hydration] Requesting hydration of a disconnected component

While hydration usually applies to elements prerendered to the document, there are occasionally use cases where a prerendered element may be hydrated while disconnected from the document. I get that this probably sounds like an oxymoron; however, I can think of five such use cases:

  1. An element is pre-rendered with defer-hydration and does not hydrate. The user then removes this element from the DOM, removes the defer-hydration attribute while disconnected, and then reconnects the component.
  2. An element hydrates not with its own content, but instead from cloning a template stored separately in the document. This allows prerendering a component template once, and then reusing that template for many different instances of the component.
  3. An element is constructed in memory with attributes, properties, and child nodes added to it which can be read during hydration. This effectively allows "client-side props" to be supplied before hydration and used in that process.
    • In particular, I think this is useful for the process of "reflecting JS state back onto the component DOM". Sometimes "hydration" isn't just reading DOM state into JS, but also taking client-side state not known at render time (logged in user for example) and rendering it. This requires a component to receive this state from the rest of the page before it hydrates and if it's done in-memory, then hydrating a disconnected element becomes a problem to solve.
  4. An HTML-fragments approach performs an XHR fetch of HTML content and parses it into disconnected DOM elements. These elements should be hydrated so they will be in a valid state to manipulate even before being appended to the DOM.
  5. An element is pre-rendered inside a <template /> element and then cloned, hydrated with some input props, and then appended to the DOM.

While the use cases are definitely nuanced, I think there's value in a community protocol for web components to expose some kind of functionality to trigger hydration even when they are disconnected from the DOM. Hydration is often critical to initialize a component and make it functional. A counter component which exposes an increment() method can't really be implemented prior to the initial count being hydrated from prerendered HTML. It should be possible to construct a component, set some initial properties, hydrate it, interact with the initialized component (increment the count one extra time for example), and then append it to the DOM. Essentially, I want to be able to write something like:

const fragment = new DOMParser().parseFromString(`
  <my-counter>
    <div>The count for user named <span>-</span> is: <span>5</span>.</div>
  </my-counter>
`.trim(), 'text/html');
const counter = document.adoptNode(fragment.body.firstElementChild);
customElements.upgrade(counter);

// Some client-side props to use during hydration.
counter.user = { name: 'Doug' };

// Component not yet in a valid state, hasn't been hydrated.
// counter.increment();

// Hydrate the initial count.
hydrateTheComponentSomehow(counter); // How should this work???
// DOM now holds `<div>The count for user named <span>Doug</span> is: <span>5</span>.</div>`.

// Component should be in a valid state.
counter.increment();
// DOM now holds `<div>The count for user named <span>Doug</span> is: <span>6</span>.</div>`.

document.body.appendChild(counter); // Display to the user.

(I might be misusing document.adoptNode() and customElements.upgrade(); I find their nuances very confusing, but I don't think that's actually related to this use case.)

A community protocol around triggering hydration for disconnected components would be valuable for libraries and tools which process prerendered HTML in various ways and convert them to hydrated components. From what I've seen, hydration tends to happen when the component is first connected to the document, but this means the component is in an invalid, unhydrated state until it is appended to the document and there is no way around that restriction. It is reasonable for the component to be non-functional when in this invalid state, but that means it can never become valid until it is appended to the document and displayed to the user.

Here are some potential ideas for how this protocol could work:

1. Use the defer-hydration attribute

We already have a defer-hydration attribute proposal in which removing the attribute triggers hydration for components in the document. It seems reasonable that a component could hydrate itself when this attribute is removed, even if the component itself is not connected to a document. This is possible, though a little weird since you have to write:

const counter = document.createElement('my-counter');
counter.setAttribute('defer-hydration', '');
counter.initialValue = 5;
counter.removeAttribute('defer-hydration'); // Trigger hydration here.
counter.increment(); // Increments to `6`.
document.body.appendChild(counter);

It's pretty strange to set the defer-hydration attribute on the custom element just to remove it to trigger hydration. It's doubly confusing that counter is upgraded and its constructor runs during document.createElement(), meaning any component which hydrates from its constructor would break this protocol because defer-hydration cannot be set in time to prevent it. It can also be unintuitive to author a component and expect it to handle the admittedly very specific case of removing defer-hydration while disconnected from the document.

This also raises the question of "should a component hydrate when connected to the document without defer-hydration, or when defer-hydration is removed"? To support disconnected hydration, we need to take the latter approach, while I imagine most custom elements probably go with the former today.
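A sketch of what the component side of that latter approach could look like (hydration internals elided; nothing here is prescriptive):

class MyCounter extends HTMLElement {
  static observedAttributes = ['defer-hydration'];
  #hydrated = false;

  // attributeChangedCallback fires even while the element is disconnected,
  // so removing `defer-hydration` can trigger hydration off-document.
  attributeChangedCallback(name: string, _old: string | null, value: string | null) {
    if (name === 'defer-hydration' && value === null) {
      this.#hydrate();
    }
  }

  connectedCallback() {
    // Only hydrate on connection when hydration wasn't explicitly deferred.
    if (!this.hasAttribute('defer-hydration')) {
      this.#hydrate();
    }
  }

  #hydrate() {
    if (this.#hydrated) return;
    this.#hydrated = true;
    // ...read prerendered DOM and any client-side props into internal state...
  }
}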

2. Define a .hydrate() method

We can define a .hydrate() method which triggers hydration if it has not run already. This gives an opportunity for users to construct an element, modify its attributes, children, and properties arbitrarily, and then trigger hydration when ready to get the component in a valid, usable state before appending to the DOM.

If the component does not have a hydrate() method, it will be observed as undefined, and any caller which uses it can interpret this as meaning the component does not require hydration.

const fragment = new DOMParser().parseFromString(/* ... */);
const counter = document.adoptNode(fragment.body.firstElementChild);
customElements.upgrade(counter);

counter.initialValue = 5;
counter.hydrate?.(); // Trigger hydration if required.

counter.increment(); // Increments to `6`.
document.body.appendChild(counter);

I think this is the most straightforward approach, but I get the concern that this is yet another property to implement for every component.

There is an argument to be made that .hydrate() should optionally return a Promise, so the component can do async work as part of its hydration. I think there's a separate conversation to be had about whether or not hydration is fundamentally synchronous or asynchronous. However, given that the current defer-hydration proposal requires synchronous hydration, I think this should be limited to match.
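For illustration, a component implementing this method might look roughly like the following (the DOM structure matches the earlier counter example; everything else is a sketch, not a spec):

class MyCounter extends HTMLElement {
  initialValue = 0;
  user?: { name: string };
  #count = 0;
  #hydrated = false;

  // Idempotent and synchronous, to match the current defer-hydration proposal.
  hydrate(): void {
    if (this.#hydrated) return;
    this.#hydrated = true;
    const spans = this.querySelectorAll('span');
    if (this.user && spans[0]) spans[0].textContent = this.user.name;
    this.#count = spans[1] ? Number(spans[1].textContent) : this.initialValue;
  }

  increment(): void {
    if (!this.#hydrated) throw new Error('<my-counter> used before hydration');
    this.#count++;
    const spans = this.querySelectorAll('span');
    if (spans[1]) spans[1].textContent = String(this.#count);
  }
}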

3. Hydrate on component upgrade.

For prerendered components in the main document, hydration typically happens in connectedCallback(), which is usually invoked when customElements.define('my-component', MyComponent) is executed (unless the component or its usage specifically opts out of hydration). This is effectively done at the same time as web component upgrade. The component class is defined, all of its instances on the page are upgraded, and then connectedCallback() is invoked, which is the ideal moment for most non-deferred hydration. We could do the same thing for disconnected elements, and use customElements.upgrade() as the trigger for hydration. I see three challenges here:

  1. There is no upgradeCallback() hook, but the constructor function can serve this function, albeit in a confusing way IMHO.
  2. Constructing an element in the same document upgrades it immediately in document.createElement('my-component') before there is any opportunity to provide any properties, children, or attributes.
    • You can work around this by constructing in a different document (document.implementation.createHTMLDocument().createElement('my-component')), but this is very nuanced and likely expensive given that you need to create an entirely new Document just to throw it away.
  3. Setting native JS properties on an un-upgraded component, then triggering upgrade is very likely to delete those properties due to nuances in the spec. This means you either can't use native JS fields in the hydrating custom element, or it needs to be upgraded prior to those properties being set. This also implies that the component must hydrate without any of those properties.

This approach feels completely impractical to me. document.createElement() upgrades too eagerly and the native JS properties nuance forces any properties to be assigned after hydration, which is very limiting and easy to mess up IMHO.

4. Hydrate lazily.

In the original example, you can argue that counter.increment() should just lazily hydrate, since the component must be in a hydrated state in order to increment the current count.

const fragment = new DOMParser().parseFromString(/* ... */);
const counter = document.adoptNode(fragment.body.firstElementChild);
customElements.upgrade(counter);
counter.user = { name: 'Doug' };
// Component not yet in a valid state, hasn't been hydrated.

// Lazily hydrate and increment the count.
counter.increment();

document.body.appendChild(counter); // Display to the user.

I can see the value here since it means you don't necessarily need to think of the component as in a valid or invalid state.

Personally I'm not a fan of this approach because it means that every operation on the component must have knowledge of whether it is in a valid state and automatically correct that. This makes things particularly complicated for component libraries which may want to abstract away hydration timing, yet any user-exposed function needs to check if the component is hydrated, or the library needs to "magic away" that problem. Simple patterns like extending a class which handles hydration become a lot more complicated, since there aren't easy hooks to hydrate automatically.

class MyComponent extends LibraryComponentWithHydration {
  doAThing(): void {
    super.hydrateIfNecessary();

    // ...
  }
}

This approach also means that it is very easy to accidentally trigger a potentially expensive hydration step without realizing it. It is also not possible to pay that cost early (in the counter example, you can't hydrate without also incrementing). The component could expose its own implementation-specific hydrate() method, but if this is not part of the agreed-upon community protocol, then it can't be used in a generic fashion without knowledge of the specific component.

Personally I like approaches 1. or 2. the best since they seem the most feasible and ergonomic. Curious to hear what others think or if anyone else has encountered this particular problem and has any interest in coming up with a solution.

[Proposal] DOM Scope Request Resolution

I've noticed that various capabilities that are being discussed as protocols involve the resolution of requests in line with an element's DOM scope:

The Context Protocol is broadly used in projects I work with (I work at Adobe, so Photoshop, Illustrator, Express, et al), and some version of Slottable Requests appears to be a quality performance win for those same projects. I've also seen, in a similar way, the need for collections of DOM elements to be built based on their shared DOM scope (the main complexity beyond el.querySelectorAll(...) being that the elements live in different DOM trees but the same DOM scope), which, if it were a shared issue, could likely benefit from a shared protocol as well. That suggests there are probably other DOM-scope-relative requests whose development an overarching resolution protocol could benefit. If that were the case, the Context and Slottable Request Protocols, should they at some point be "Accepted" as official protocols, could exist as specializations of this protocol, rather than as similar-but-not-the-same implementations of DOM scope request resolution.

Some topics that could be more thoroughly investigated in the context of an overarching protocol:

  • events vs DOM walking
  • some events vs all events
  • events vs well known method names
  • scalar performance of transport method
  • etc

For now this is more of a stub issue, but in the next few weeks, I plan to dig into this more deeply. Happy to get thoughts on things you'd like to see at that time in the comments below.

[progressive-hydration] conditions for first level hydration timing

First Level hydration

This is a proposal for a "syntax" of conditions to trigger hydration of a component.

It has 3 separate "states":

  • server (only render it server-side => do not ship any js) [probably framework specific]
  • client (do not touch server-side and ship js) [probably framework specific]
  • hydrate (render server-side and at some point do loading! + rendering on client-side)

For server/client there are no additional "options"... but for hydrate, there are multiple modifiers you could combine.

| Mode | Option | Description |
| --- | --- | --- |
| server | | render server-side and do not hydrate (default) |
| client | | do not touch server-side and render client-side |
| hydrate 👇 | | render server-side and at some point do loading + rendering on client-side |
| | onClientLoad [1] | as soon as possible |
| | onClick [1] | as you click on the element |
| | onMedia [2] | as soon as a media query is met |
| | onVisible [2] | as soon as the component + optional padding becomes visible |
| | onHover [2] | as you hover over the element + optional padding (click triggers hover => touchscreens) |
| | onIdle [3] | as soon as there is a free slot in the main thread |
| | onDelay [3] | after x ms |

[1]: global events: implemented via a single global event handler
[2]: element events: every element needs its own event handler
[3]: modifiers: modify how/when the hydration happens AFTER all conditions are met

Hydrate condition combinations

Each of the options can be combined via && or ||.

| Example | Description |
| --- | --- |
| loading="server" | non-interactive components like layouts or graphical components (= default) |
| loading="hydrate" | most components should hydrate as soon as there is a free slot in the main thread |
| loading="hydrate:onIdle" | same as 👆 |
| loading="hydrate:onClientLoad" | above-the-fold element that should become interactive as soon as possible |
| loading="hydrate:onMedia('(max-width: 320px)')" | mobile burger menu that triggers a drawer for navigation (only hydrate on screens smaller than 320px) |
| loading="hydrate:onMedia('(min-width: 640px)') && onClick" | chart that only becomes interactive on desktop after a click |
| loading="hydrate:onMedia('(prefers-reduced-motion: no-preference)') && onClick" | a visual animation that plays on click only if prefers-reduced-motion is not set |
| loading="hydrate:onVisible && onIdle" | heavy chart that becomes interactive when the element becomes visible |
| loading="hydrate:onVisible(100px)" | heavy chart that becomes interactive when the element + 100px padding becomes visible |
| loading="client" | components that do something that cannot be server rendered (for example, needing access to cookies or localStorage) |

Sadly, this does not prevent "useless" combinations like loading="hydrate:onVisible && onClick && onHover".

Inspired by withastro/roadmap#108 and slinkity/slinkity#20

Example of user code

<h1>Rocket Blog</h1>
<inline-notification>Do this</inline-notification>
<!-- 👆 will be only server rendered -->

<my-hero loading="hydrate:onClientLoad">
  Welcome ...
</my-hero>
<!-- 👆 server render + hydrate as soon as possible -->

<my-list loading="hydrate"></my-list>
<!-- 👆 server render + hydrate if main thread is idle -->

<my-chart loading="hydrate:onVisible"></my-chart>
<!-- 👆 server render + hydrate as element becomes visible -->

<my-heavy-chart loading="onVisible || onMedia('(min-width: 768px)')"></my-heavy-chart>
<!-- 👆 server render + hydrate -->
<!-- desktop: hydrate immediately (matches media query) [could add && onIdle] -->
<!-- mobile: hydrate as element becomes visible -->

<my-heavy-graph loading="hydrate:onMedia('(min-width: 768px)') && onVisible || onClick"></my-heavy-graph>
<!-- 👆 server render + hydrate -->
<!-- desktop: hydrate as element becomes visible -->
<!-- mobile: hydrate on click (to save bandwidth) -->

<my-login loading="client"></my-login>
<!-- 👆 only client render -->
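For illustration, a loader implementing a couple of these conditions might look roughly like this; the helper names are made up and not part of the proposal:

// Resolves once the media query matches (onMedia).
function whenMedia(query: string): Promise<void> {
  return new Promise((resolve) => {
    const mql = matchMedia(query);
    if (mql.matches) {
      resolve();
    } else {
      mql.addEventListener('change', (e) => {
        if (e.matches) resolve();
      });
    }
  });
}

// Resolves once the element (plus optional padding) is visible (onVisible).
function whenVisible(el: Element, rootMargin = '0px'): Promise<void> {
  return new Promise((resolve) => {
    const io = new IntersectionObserver((entries) => {
      if (entries.some((entry) => entry.isIntersecting)) {
        io.disconnect();
        resolve();
      }
    }, { rootMargin });
    io.observe(el);
  });
}

// loading="hydrate:onMedia('(min-width: 640px)') && onVisible" could then map to:
async function hydrateWhenReady(el: Element, load: () => Promise<unknown>) {
  await Promise.all([whenMedia('(min-width: 640px)'), whenVisible(el)]);
  await load();                          // e.g. dynamic import of the element definition
  el.removeAttribute('defer-hydration'); // or whatever hydration trigger the framework uses
}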

[context] Do we *need* `multiple` and `dispose()`?

The current context proposal suggests that context requests can include multiple to request that the callback be invoked multiple times as the value changes, and that the callback receives a dispose function the requester can call to communicate that it would like to unsubscribe from the context. This seems like very nuanced semantics for APIs which already exist. Promises, arrays, generators, EventTarget, subscribables, signals, etc. all provide different forms of "update this value over time" with varying semantics between the provider and receiver. As a rough approximation of some of these semantics:

  • Promise - Resolves or rejects at some unspecified time in the future at most once.
  • Array - Provides many values at exactly one time.
  • Generator - Provides many values in a pull-based model.
  • EventTarget - Provides many values in a push-based model with explicit unsubscription.
  • Subscribables - Provides many values in an asynchronous push or pull-based model with explicit unsubscription.
  • Signals - Provides many values in a synchronous pull-based model with implicit unsubscription (I think, I'm not very familiar with signals).

I propose an alternative: Remove dispose and multiple. Instead, do nothing. Let the provided value itself be wrapped in the appropriate container with which to define the usage contract. A single value might be Context<number>, while a value updated over time would be Context<Subscribable<number>>.

This approach provides all the same flexibility without any additional implementation complexity.
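To make that concrete, here is a rough sketch; Subscribable is a stand-in for whatever container the provider picks, and Context<V> follows the single-parameter shorthand above rather than the proposal's exact type:

type Context<V> = unknown & { __context__: V };

interface Subscribable<T> {
  // Returns an unsubscribe function, mirroring common subscribable contracts.
  subscribe(next: (value: T) => void): () => void;
}

// A one-shot value: delivered once, no `multiple` flag, no `dispose` callback.
declare const pageId: Context<number>;

// A value that changes over time: the container itself carries the
// update/unsubscribe semantics, so the protocol doesn't have to.
declare const theme: Context<Subscribable<string>>;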

The one trade-off here which I see is that it requires the provider / context definition to choose the reactivity model of the context value and the context requester just has to make that work. It also means all context receivers need to use the same model (or else subscribe to different contexts from the same provider, which is awkward but workable).

I can see an argument that the provider should always provide updates over time and it's up to requesters to decide if and how much of those updates they want to accept. However, I don't think that argument completely holds, because requestors specifically tell the provider whether they expect updates over time via multiple, and the very nature of dispose() acts as an "unsubscribe", which implies the provider can stop publishing updates to a context if no one is subscribed to it. Both of these indicate that the behavior of the provider is dependent on the options defined by the requestors and their runtime behavior anyway. As a result, I don't think we can look at providers through the lens of "always provide all the context and let requestors deal with it how they choose".

I think it would be better to just let providers choose the reactivity model they are willing to support and let context requestors work with that API. What do others think about this? Am I misunderstanding the purpose of multiple and dispose()?

[Context] add learnings from the front

Some teams (e.g. the Lit team) have been registering learnings around the Context Protocol that are not currently included in the documented version. Let's get those included back via PRs!

Some features I know about:

  • lazy context registration
  • late changing context hosts
  • I'm sure there are more

@justinfagnani can you help get the most up-to-date info on this?

[context] Fully event driven context protocol

Hi, I recently discovered this proposal and it already has quite a nice approach.

However I've found that a few things in current state could potentially be improved by going fully event driven.

Basically, right now only consumers dispatch events to "request" context, which bubble up to providers, and then providers have to use attached callbacks to communicate back with consumers.
This introduces a bunch of problems: providers must store consumers' callbacks at all times, which prevents consumers from being garbage collected and thus forces the invention of another API surface to dispose/disconnect a consumer's callback from a provider (which also opens up room for badly implemented providers that introduce leaks).

Since "request" is already an event - why not making a "response" an event as well (probably "provide" is a better name for event)?

This would require providers to only store consumer "HTMLElement" references which, if held in WeakRefs, would automatically be garbage collected whenever they are removed from the DOM and no other references are held, which in turn would fully eliminate the need for a disposal API.
This would allow resolving #21.

Also the API with "dispose" callback seems quite awkward to work with (second optional argument in a callback, hello nodejs? xD ).
If the "provide" would happen as an event - there is no need at all for any cleanup API as consumer can simply remove event listener and be done with it.
We could even go deeper and define an optional event (like "context-unsubscribe" or "context-remove") to communicate to a provider that consumer is not interested in particular context anymore so provider may optionally do extra cleanup/optimisation.
This would allow to resolve #24.
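Roughly, the provider side I have in mind looks like this (the "context-provide" event name and the shapes here are illustrative, not a spec, and this is not my linked implementation):

class ContextProvideEvent<T> extends Event {
  constructor(public readonly context: unknown, public readonly value: T) {
    super('context-provide');
  }
}

// The provider keeps only weak references to consumer elements, so removed
// consumers are garbage collected without any explicit disposal API.
const consumers = new Set<WeakRef<HTMLElement>>();

function provideTheme(value: string) {
  for (const ref of consumers) {
    const el = ref.deref();
    if (el) {
      el.dispatchEvent(new ContextProvideEvent('theme', value));
    } else {
      consumers.delete(ref); // consumer was collected; drop the stale ref
    }
  }
}

// Consumer-side cleanup is just removing its 'context-provide' listener.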

I've just tried to implement fully event-based context in my project and it works exactly the same as with callbacks in terms of call order and sync guarantees, but it allowed me to reduce the API surface to a bare minimum (just events for context requests and provides).
I also implemented a fancier cleanup with an extra event dispatched when a consumer is no longer interested in a context, to further optimise communication, but it is totally optional and may not be part of this spec.

You may check out my implementation here.

What do you think about fully event based Context API?

Scoped slots API

Web Components with slots let us provide more reusable components and write fewer props for more features.

BUT, when we want some contextual data, we are lost and have to create slots that depend on the initial data.

Vue introduces Scoped Slots, which add the possibility of building slot content using contextual props, aka scoped props.

See more : https://vuejs.org/v2/guide/components-slots.html#Scoped-Slots

Is it possible to improve on that with Web Components?

[context] Some concerns, and an idea of how they could be addressed

This issue is fundamentally similar to #39 in that the author of that issue and I both agree that the protocol should be more event-driven, but we have very different ideas of how to go about it. In order to motivate my idea, I need to outline the problems I think it would solve. Since there could be alternative solutions to those concerns, I thought it apt to make a separate issue that lays out the concerns and then also proposes a solution.

Concerns about the current design

The callback doesn't make sense if the consumer isn't subscribing

If a consumer wants to synchronously request a context value one time, they have to do something like this:

getContextValue(context) {
  let value;

  this.dispatchEvent(
    new ContextEvent(
      context,
      (providedValue) => {
        value = providedValue;
      },
    ),
  );

  return value;
}

We know, with knowledge of the protocol, that this works, given that a provider will immediately call the callback, but it's not obvious from reading the code; it's actually surprising that it works.

I also think it's just awkward. I suggest that the reason it's awkward is because a callback is simply an incorrect representation of the semantics. It's a callback that will be called, synchronously, only once; I don't think it makes sense.

The biggest problem with this, and possibly the biggest problem with the whole design, is that a provider can call a consumer's callback multiple times when the consumer only expects it to be called once. The proposal directly grapples with this, introducing the idea that providers are capable of being 'bad actors' and that consumers can be 'defensive'. I feel strongly that the fact this is even possible is not a necessary trade-off, but a flaw in the design; it doesn't make any sense that this is possible.

Another problem is that it's possible for the provider to cause a memory leak by retaining a consumer's callback when the consumer did not expect it to be retained. The proposal discusses this, of course, but I think it just shouldn't be possible.

The primary advantage I can see of using the callback for this in the current design is that it means the provider provides the value to the consumer in the same way regardless of whether the consumer is subscribing, but I think there's a better solution.

The current approach to unsubscribing is odd

In order to unsubscribe at a time of its choosing, the consumer has to fish the unsubscribe function out of the callback in the same way they have to fish the value out of the callback, which is awkward. I, again, think it's awkward because the consumer unsubscribing in this way is not a good representation of the semantics.

We want to return values to the consumer, but we also want to return an unsubscribe function to the consumer. The callback seems like a convenient way to do this, but in fact, it overloads the callback with responsibility; providing the value and providing the unsubscribe function are distinct responsibilities.

I think that consumers should unsubscribe using an AbortSignal. This would not only be a better representation of the semantics and a better separation of responsibility, but this approach is already in use with EventTarget and it would be good use of built-in APIs, which, after all, is what web components are about.

I note that it would be possible to allow this by having the consumer attach an AbortSignal to the context-request event within the constraints of the current design, but

  • Then there would be three properties that are all tied to subscriptions, the subscribe flag, the callback, and the signal.
  • The provider would be expected to test whether the signal is already aborted before they begin providing the context.

As an aside: does the provider pass the unsubscribe function every time they call the callback, or just the first time? The proposal doesn't actually say. If it has to be passed every time, I think that's undesirable for both the provider and the consumer.
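For reference, the built-in pattern I mean is the one EventTarget already supports; the target and listener below are placeholders:

declare const target: EventTarget;              // placeholder for the provider
declare function onChange(event: Event): void;  // placeholder listener

const controller = new AbortController();

// Subscribing: the consumer hands over a signal instead of fishing an
// unsubscribe function out of a callback.
target.addEventListener('context-change', onChange, { signal: controller.signal });

// Unsubscribing, at a time of the consumer's choosing:
controller.abort(); // the listener is removed automatically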

How does a consumer know whether there exists a provider?

From what I can tell, if a consumer wants to know whether there exists a provider, the consumer needs to do something like this (which I note is basically the same as the code above):

doesProviderExist(context) {
  let providerExists = false;

  this.dispatchEvent(
    new ContextEvent(
      context,
      () => {
        providerExists = true;
      },
    ),
  );

  return providerExists;
}

This is awkward and confusing for the same reasons as above.

The proposal does not address how a consumer can detect whether a provider exists, which, to me, seems extremely important: a consumer requests data but has no idea if they'll ever receive it. Surely the consumer would want to handle the case where they won't.

My idea

I really like the idea of common, community context protocol, and I want this initiative to succeed, but I have serious reservations with it as it is currently specified. Here is my idea for an alternative design:

The provider implements a getContext(context) method that returns the value of the specified context in the provider, and also emits a context-change event (that doesn't bubble) whenever the value of one of the contexts it provides changes.

When a consumer requests a context, they emit a context-request event that bubbles up the DOM in order to 'discover' whether there exists a provider for the specified context. If a provider can provide the specified context, it stops the propagation of the event and sets itself as a property on the event (event.provider = this). This is similar in many ways to the existing context-request event described, but achieves something fundamentally different.

After the consumer emits the event, they can retrieve the provider from the event and handle the case where no provider exists. If a provider exists, they can call provider.getContext(context) to retrieve the value of the context. That's it. If the consumer wants to subscribe to the context, they can call getContext() to get the value of the context at the time of subscription, and then attach a context-change listener to the provider to become aware of any future values. My instinct is that the event should only specify the context that changed, and not its new value, because including the value would give the consumer more than one correct way of retrieving it (getting it from the event or getting it from the provider directly).

I think this is an overall simpler design that also addresses the flaws in the current design. The consumer easily detects whether a provider exists, and can retrieve a context once without any unnecessary complexity or pitfalls. If a consumer optionally wants to subscribe to a context, they can do so by attaching an event listener, without concerns about how the provider will call the listener, whether the provider will retain it, how they will extract values from the callback, or how they will unsubscribe. I believe that this design would address the problems with the proposed solution in #39 while ultimately addressing the concerns originally raised in that issue. It would also comprehensively solve #21, which I feel still has not been genuinely solved.

Funnily enough, the proposal actually mentions a potential "alternate API" in which consumers emit an event to discover providers, but does not contemplate any reason why this might be advantageous other than type-safety concerns, which the proposal (in my opinion) rightly dismisses. From my perspective, this is a little frustrating.

Trade-offs

One trade-off with this design is that the consumer gains a reference to the provider, thereby gaining at least some knowledge of who the provider is. However, I think this design actually decreases coupling between the provider and consumers. In the current design, providers are forced to be aware of consumers and to deal with their callbacks. In my proposed design, providers have literally no knowledge of consumers. In TypeScript, the type of the provider would be represented in such a way that the consumer could not become coupled to the provider.

Another trade-off is that providers would need to specify in their context-change events what context changed, and consumers would need to inspect the event to verify that it's actually relevant to them. In the general case, just as most context-request events are not relevant to any given provider, most context-change events would not be relevant to any given consumer.

Example TypeScript

type ContextKey<Value> = (string | symbol | object) & { __context: Value };
type ContextValue<T extends ContextKey<unknown>> = T extends ContextKey<infer Value> ? Value : never;

type ContextProvider = {
    getContext<T extends ContextKey<unknown>>(context: T): ContextValue<T> | undefined;
    addEventListener(type: "context-change", listener: (this: ContextProvider, ev: ContextChangeEvent) => any, options?: boolean | AddEventListenerOptions): void;
    removeEventListener(type: "context-change", listener: (this: ContextProvider, ev: ContextChangeEvent) => any, options?: boolean | AddEventListenerOptions): void;
};

class ContextRequestEvent<T extends ContextKey<unknown>> extends Event {
    context: T
    provider?: ContextProvider

    constructor(context: T) {
        super("context-request", {
            bubbles: true,
            composed: true,
        });

        this.context = context;
    }
}

class ContextChangeEvent extends Event {
    context: ContextKey<unknown>

    constructor(context: ContextKey<unknown>) {
        super("context-change");
        this.context = context;
    }
}

interface HTMLElementEventMap {
    "context-request": ContextRequestEvent<ContextKey<unknown>>;
}

I've taken some liberties in how I've written this that aren't necessarily important. One issue I have with the code in the proposal is that createContext is a JavaScript function that exists only for TypeScript purposes; it returns a nice generic type, but it doesn't actually do anything. The Context type in the proposal requires us to specify the type of the key even though we don't actually care what its type is; we need to specify the type of the key just so that the key is assignable to the type of the context, e.g. "my-context" as Context<string, number>. If we were to restrict the type of keys ahead of time, even with a very loose restriction like string | symbol | object, specifying the type of the key becomes unnecessary, e.g. "my-context" as ContextKey<number>.
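For illustration, a consumer using this shape might look something like the following (the context key and rendering details are placeholders):

const themeContext = 'theme-context' as ContextKey<string>;

class ThemedElement extends HTMLElement {
  connectedCallback() {
    const request = new ContextRequestEvent(themeContext);
    this.dispatchEvent(request);

    const provider = request.provider;
    if (!provider) {
      // No provider exists; handle that case explicitly.
      return;
    }

    // One-shot read.
    const theme = provider.getContext(themeContext);
    // ...initial render with `theme`...

    // Optional subscription, unsubscribed later via an AbortSignal.
    const controller = new AbortController();
    provider.addEventListener(
      'context-change',
      (event) => {
        if (event.context === themeContext) {
          const next = provider.getContext(themeContext);
          // ...re-render with `next`...
        }
      },
      { signal: controller.signal }
    );
  }
}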

Closing thoughts

I don't know whether any of this alternate design is actually of any use. Given that the proposal is now candidate status (as of, apparently, two weeks ago), it probably can't be redesigned in a meaningful way at this point, which is disappointing. I've only just discovered the WCCG and this proposal recently; if I had found this earlier, I would have participated earlier.

Regardless, I think the issues I've described are of real concern, and I would be keen to try and address these issues in any way that is feasible.

I'm very keen to hear any thoughts anyone has.

[defer-hydration] Controlling hydration with the "defer-hydration" attribute

This proposal allows us to control hydration and hydration ordering by server-rendering a defer-hydration attribute, and removing it on the client when we want to trigger hydration. This decouples element definition ordering from hydration ordering, and allows sections of the page to remain non-hydrated until an outside signal.

We have this implemented in the Lit SSR library.

I put an initial draft of the proposal in #15

[context] `createContext` function recommendation

The current recommendation for createContext has

### `createContext` functions
It is recommended that TypeScript implementations provide a `createContext()` function which is used to create a `Context`. This function can just cast to a `Context`:
```ts
export const createContext = <ValueType>(key: unknown) =>
key as Context<typeof key, ValueType>;
```

This effectively makes typeof key always unknown.

Lit's implementation has

export function createContext<ValueType, K = unknown>(key: K) {
  return key as Context<K, ValueType>;
}

which allows taking a type argument for the context key. In order to take advantage of this, though, both type arguments must be provided. If only ValueType is provided, K defaults to unknown as TypeScript does not do partial inference of type arguments.

Now in practice, I don't think having an explicit KeyType be provided to Context is quite necessary. The extraction of the ValueType does not require it:

```ts
export type ContextType<Key extends Context<unknown, unknown>> =
Key extends Context<unknown, infer ValueType> ? ValueType : never;
```

I'm a bit torn as it feels correct to allow explicitly typing the KeyType as in the Lit implementation, but if it's functionally moot, that creates confusion like lit/lit#4601

One issue with having KeyType be unknown is that it can look confusing. With unknown & {__context: ValueType}, it just looks like {__context: ValueType} in type intellisense.
e.g.

const key = Symbol('foo');
const foo = createContext<{foo: string}>(key);
//    ^? const foo: {
//           __context__: {
//               foo: string;
//           };
//       }

[context] Declarative alternative

I'm pretty surprised to see such an event-based implementation for context.

I'd love to see an alternative proposal offered for context that is declarative in nature. Existing APIs like onchange seem to offer pretty good hooks we could use to do a lot of the change detection. But meanwhile, we could be embedding more of the context on the page itself.

Different topic but for example WICG/webcomponents#1013 is talking about instantiating DOM Templates with microdata: that microdata or something sort of like it feels like what context could or should be.

There are no concrete proposals here. But I see the current shape & see issues like #39 which push to make context ever more ephemeral, with an event-based request-response protocol, and I just wish so much that the declarative DOM could be leveraged to let us not need to add a bunch of new custom protocols. The challenge of context, to me, is embedding data in the DOM & using it: that lets us do most of the work with what we already have, rather than invent so much new stuff.

[progressive hydration] self hydrating custom elements

Overview

As an alternative / complementary approach to #30, I had been thinking about what it could look like if, instead of the framework / runtime being the handler of the hydration syntax / DSL, or acting as an opinionated wrapper around Intersection / Mutation Observers, the element owned that logic itself.

<my-element hydrate="xxx"></my-element>

What if custom elements had the opportunity to self-define their own hydration logic? The premise is that a custom element would define a static __hydrate__ method (or whatever) that could be used to encapsulate its own hydration, loading, etc. logic, and then the SSR framework mechanism (e.g. community protocol) would just need to extract this logic and inject it into the runtime.

Example

Given this sample component

const template = document.createElement('template');

template.innerHTML = `
  <style>
    h6 {
      color: red;
      font-size: 25px;
    }

    h6.hydrated {
      animation-duration: 3s;
      animation-name: slidein;
    }

    @keyframes slidein {
      from {
        margin-left: 100%;
        width: 300%;
      }

      to {
        font-size: 25px;
      }
    }
  </style>

  <h6>This is a test</h6>
`;

class TestComponent extends HTMLElement {
  connectedCallback() {
    if (!this.shadowRoot) {
      this.attachShadow({ mode: 'open' });
      this.shadowRoot.appendChild(template.content.cloneNode(true));
    } else {
      const header = this.shadowRoot.querySelector('h6');

      header.style.color = this.getAttribute('color');
      header.classList.add('hydrated');
    }
  }

  // the fun stuff happens here :)
  static __hydrate__() {
    alert('special __hydrate__ function from TestComponent :)');
    window.addEventListener('load', () => {
      const options = {
        root: null,
        rootMargin: '20px',
        threshold: 1.0
      };

      // Track whether the element definition has already been loaded.
      let initialized = false;

      const callback = (entries, observer) => {
        entries.forEach(entry => {
          if (!initialized && entry.isIntersecting) {
            initialized = true;
            import(new URL('./www/components/test.js', import.meta.url));
          }
        });
      };

      const observer = new IntersectionObserver(callback, options);
      const target = document.querySelector('wcc-test');

      observer.observe(target);
    })
  }
}

export { TestComponent }

customElements.define('wcc-test', TestComponent)

What's nice is that anything could go here since you have full access to the browser, like for IntersectionObserver, MutationObserver, addEventListener, etc. Plus, the runtime overhead is entirely sized by the user, so no extra JS gets shipped except for what the user themselves chooses to include.

So for this scenario, you could just use it as

<wcc-test color="green"></wcc-test>

and in action, it would look like this

wcc-ssr-self-hydration.mov

Observations

So looking at the above recording, we can observe that we get an alert when the hydration logic runs, even though test.js has not loaded. When we scroll down to the intersecting point, test.js loads the custom element, which then initiates the color change and CSS animation.

I think what's neat is that at a top level, you could still set attributes on static HTML, maybe to preload some data or state, if you're already running a single pass over the HTML. So it could make for a really nice combination of techniques and potentially open the door to more complex strategies like partial hydration, or resumability, which is even nicer when you consider that you could include a <script type="application/json"> inside a Shadow DOM... 🤔

Feedback

Some good call outs so far to investigate:

  1. Attach / override a __hydrate__ method to a custom element's base class
  2. Understand the cost of Intersection / Mutation Observers (to help inform best practices, usage recommendations)

IndexedDB Observer / Event protocol

I think the IndexedDB Observers proposal seems quite useful. I think it would provide a strong basis for common state management, at least for some aspects of state, when required. Sharing state across iframes / windows / tabs could also be quite elegant, more so than using postMessage. See this discussion for more details.

Unfortunately, the proposal seems to be stalled, even within Chrome, and doesn't seem to have attracted attention from other browser vendors.

My (not very confident) guess is that "polyfilling" the IndexedDB Observers proposal would be quite difficult, as it requires intercepting a number of native calls.

But let's imagine that everyone using web components used the same library from userland whenever they stored something in IndexedDB. I suspect using the raw IndexedDB APIs is rather rare, and that most access to IndexedDB is done via a library, such as localForage or idb-keyval. Would it not be possible to add an observer API on top of each of these common libraries?

I'm not a big fan of trying to promote that everyone use the same library. I'm thinking it might be possible to add some subscription-type functionality on top of any IndexedDB library, starting with localForage and idb-keyval, but make that publish/subscribe functionality conform to a common API, which would make cross-component state management based on IndexedDB possible, even across components that utilize different libraries.
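As a very rough sketch (not a proposal), the kind of wrapper I'm imagining over idb-keyval might look like this; the function names here are made up:

import { get, set } from 'idb-keyval';

type KeyListener = (value: unknown) => void;
const subscribers = new Map<string, Set<KeyListener>>();

// Subscribe to changes made through this wrapper for a given key.
export function observeKey(key: string, listener: KeyListener): () => void {
  const subs = subscribers.get(key) ?? new Set<KeyListener>();
  subscribers.set(key, subs);
  subs.add(listener);
  return () => subs.delete(listener);
}

// Write through to idb-keyval, then notify subscribers of the new value.
export async function setAndNotify(key: string, value: unknown): Promise<void> {
  await set(key, value);
  subscribers.get(key)?.forEach((listener) => listener(value));
}

// Reads go straight to the underlying library.
export const read = (key: string) => get(key);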

I don't know the APIs well enough to know if this is possible, but I'm first wondering how much interest there would be in something like this? And does this seem feasible?

[pending-task] Should PendingTaskEvent have a `type` field?

There will definitely be different types of tasks, but can we standardize on anything useful?

  • What categories of tasks are there?
  • Would a type field be useful across unknown tasks / consumers?
  • Are there maybe a couple of very broad categories that could be agreed on? Maybe to align with some of the main-thread scheduling work?
  • Should pending-task be limited to a subset of tasks to begin with? (ie, only UI-ready-blocking tasks, not say async rendering tasks?)
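To make the question concrete, here is a purely speculative sketch of one possible shape; the field name and the categories are hypothetical, not part of the proposal:

// Broad, coarse-grained categories a consumer could safely ignore.
type PendingTaskCategory = 'ui-blocking' | 'render' | 'background';

interface PendingTaskEventWithCategory extends Event {
  complete: Promise<void>;
  // Optional, so existing emitters stay conformant without changes.
  taskType?: PendingTaskCategory;
}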

Teleport API

I would like to start a discussion regarding Teleport API aka unified overlay system for custom elements.

Motivation

Some types of components, like full-screen modals, dropdown menus, popups, etc., need an overlay system that helps escape the CSS stacking contexts introduced by other components where they are placed.

One common case is placing such components inside the infinite scrollers like iron-list or vaadin-grid.
See the related issues: PolymerElements/iron-list#242 and vaadin/vaadin-grid#842

Currently pretty much every custom elements library has its own overlay system which means that trying to mix components from different libraries in the same app might introduce compatibility problems.

Prior art

Frameworks

Web components

Note: the following list contains the implementations that I'm aware of, please feel free to add yours in the comments.

Goals

  • Provide a low-level helper that could be used by libraries as well as vanilla custom element authors.
  • Aim for a consistent developer experience and user experience across web components libraries.

[SSR] DOM Shim Specification and / or Polyfill

Overview

Breaking the SSR part off from #35 since there were a couple of thoughts going on there and I thought it might be better to keep them separate.

As "Web Components" itself is an umbrella label for a subset of web standards and APIs native to browsers, it is an exercise left up to developers who want to server-render web components to shim the DOM themselves on the server, typically in a JavaScript (e.g. NodeJS) runtime. It would be nice as a community if we could define what a common set of reasonable Web Components and Web Components adjacent APIs for the server-side would like.

This could just be a documented spec / reference, or even a package that can be distributed on npm for libraries and frameworks to leverage.

Specification

I think the first step would be establishing what we would consider a reasonable set of shims for a server environment.

At the most basic, that would seem to include:

  • window / document
  • customElements.[define|get]
  • HTMLElement
  • addEventListener (no-op?)
  • HTMLTemplateElement
  • attachShadow
  • .[get|set|has]Attribute
  • <template> / DocumentFragment
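As a very rough sketch (not a spec), a baseline shim along those lines might start out like this; real shims cover far more surface area and edge cases:

const g = globalThis as any;

g.HTMLElement ??= class HTMLElement {
  private attrs = new Map<string, string>();
  getAttribute(name: string) { return this.attrs.get(name) ?? null; }
  setAttribute(name: string, value: string) { this.attrs.set(name, String(value)); }
  hasAttribute(name: string) { return this.attrs.has(name); }
  attachShadow(_init: { mode: string }) { return { host: this }; }
  addEventListener() { /* no-op on the server */ }
  removeEventListener() { /* no-op on the server */ }
};

g.customElements ??= {
  registry: new Map<string, unknown>(),
  define(tag: string, ctor: unknown) { this.registry.set(tag, ctor); },
  get(tag: string) { return this.registry.get(tag); },
};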

Shared Polyfill

Much like the @webcomponentsjs family of polyfills, it would be nice for this CG to maintain / contribute this as a library that could be published to npm.

It would be nice if this is something that could be extended from so if libraries and frameworks want to add additional support on top of the baseline, they can offer that.

Prior Art

(please comment below and share others!)


(who knows, maybe we even "upstream" this into the WinterCG spec! 🤩)

Pending State API

Hi @justinfagnani, you could also move the pending state proposal from your private repo here, we use a modified version of your proposal a lot and are quite happy with it.

edit: sorry, i missed the PR :)

[pending-task] Cancellation mechanism

The proposal says:

This proposal does not cover cancelling tasks. Similar to Promises, this proposal assumes that task cancellation is best done by the task initiators with an AbortSignal. Objects being notified of a task shouldn't necessarily be able to cancel it.

We (LWC) were looking over this proposal today, and one observation was that there are, occasionally, cases where the event receiver may want to cancel the promise.

For example, you might imagine a parent component showing a loading spinner representing the loading state of its child components (i.e. to show one spinner instead of multiple). That parent component might have an "X" (or "Cancel") button. When the user clicks the button, all child components should cancel their promises.

If such a scenario is considered out-of-scope for this proposal, then it may be worth thinking about the recommended alternative. E.g.:

  • Should the parent disconnect its children and expect the children to cancel their promises in their disconnectedCallbacks?
  • Should prop-drilling or the Context API be used as a communication mechanism?

Otherwise, maybe there should be an optional abort callback that could be attached to the PendingTaskEvent? Something like:

interface PendingTaskEvent extends Event {
  complete: Promise<void>;
+  abort?: () => void;
}

/cc @caridy @leobalter

[Proposal] Registration API

Hi. I'd like to propose a generic API to register custom elements more easily, and allow customizing the tag name if needed. I got the inspiration from this article from @mayank99.

The code:

class MyComponent extends HTMLElement {
  static tagName = "my-component";

  static register(tagName = this.tagName) {
    customElements.define(this.tagName = tagName, this);
  }
}

This would allow registering the element in different ways:

import MyComponent from "./my-component.js";

MyComponent.register(); // Register as `<my-component>`.

import MyComponent from "./my-component.js";

MyComponent.register("your-component"); // Register as `<your-component>`.

import MyComponent from "./my-component.js";

MyComponent.tagName = "your-component";
MyComponent.register(); // Register as `<your-component>`.

Compatibility and Interop Specification (Versioning WCs)

Overview

As "Web Components" itself is an umbrella label for a subset of web standards, yet also benefits from and is enhanced by many other adjacent web standards, coalescing around a shared understanding of what it means to be a "Web Component" can be a bit challenging from a user (of WCs) perspective. This extends to characteristics of a Web Component like the spec web platform features used, bundling, polyfills, or server rendering support to name a few. Additionally, it is often up to each maintainer / platform / project to try and best explain the journey of developing and distributing Web Components so as to best facilitate that user (developer) journey for their respective use case.

Motivation

For example, for a new major version release of most large and popular open source projects, I can go to a changelog or blog post and typically get a rundown of:

  • Features and capabilities
  • Environment and runtime expectations
  • Potential known issues and limitations

By taking a sample of what's standard in the platform across a representative majority of critical browsers (Chrome / Edge, Firefox, Safari) at a given time, the goal of this proposal would be the (ongoing) drafting of a living specification document that identifies a reasonable "SLA" that community projects can align on, either documented directly in a project's README / website, or through something like the Custom Elements Manifest.

Use Cases

There are a couple of principal use cases that come to mind for who might best be able to take advantage of such a protocol for Web Components.

Library Authors

Library authors want to know what features are "safe" to use or adopt, and want something they can reference via a link so that evaluators or potential consumers can see what features of the web platform they take advantage of, or what degree of polyfills or shims may be expected from the userland implementor. Being able to express this through a link that could provide supporting references and materials to help users achieve the necessary parity to instrument said library would be very useful.

It could also be used to hint at or indicate any sort of SSR compatibility.

(Full Stack) Framework Implementors

For those delivering framework solutions where SSG or SSR comes into play, this would really benefit from aligning on a shared understanding of the WC-related APIs, on top of the runtime at play. I know from observing a handful of various repos on GitHub that using WCs in SSR frameworks not tuned for WCs can often come with unexpected results, so adding a little more standardization on the server side could be really valuable.

In a way, this feels like a natural extension from the ElementRender proposal presented in the SSR issue.

Specification

Similar to how TC39 drafts a new version of the ECMAScript specification each year to set a level of expectation, the Web Components community could draft something similar, helping capture what features or standards have broad enough platform support that they can be "versioned" against.

Documentation

So at pre-defined intervals the governing body would "snap a line" of what is supported by browser vendors at that given time, "tag" that new level of cross-platform support as a new version, and then publish those details. Each new entry published would have a canonical link that could be referenced in the Custom Elements Manifest, thus clearly communicating a level of support and / or compatibility.

| Version | Year | Standards Adopted | Notes |
| --- | --- | --- | --- |
| 2 | TBD | TBD | Evaluating import assertions and constructible stylesheets |
| 1 | 2021 | Custom Elements, HTML Templates, Shadow DOM, ESM | Baseline |

Custom Elements Manifest

Totally bike-shedding on the name here with specificationVersion but an example snippet from a custom-elements-manifest.json would be defined as such, e.g.

{
  "schemaVersion": "1.0.0",
  "specificationVersion": "v2",
  "readme": "README.md",
  "modules": [
    "..."
  ]
}

The canonical link would evaluate to something like http://webcomponents.org/community-specification/v2.html.

Server Side Rendering (SSR)

I see something like this being especially valuable for SSR frameworks, so as to allow each of them to set the level of compatibility with any of these versions that they can support. For users of these frameworks, it would be a very helpful reference for what sort of baseline support to expect as they're picking their frontend libraries / design systems / etc.

For example, thinking of these kinds of APIs (and to what degree of support if applicable) that might be assumed already given a browser context and so would want special attention for SSR:

  • window / document
  • customElements.[define|get]
  • HTMLElement
  • addEventListener (no-op?)
  • HTMLTemplateElement
  • attachShadow
  • .[get|set|has]Attribute
  • <template> / DocumentFragment

On the topic of SSR, I'm not sure if there would be a different version needed for SSR support, or maybe just a "companion" list to supplement the spec? There is also WinterCG, which is curating a "web standards first" runtime for JavaScript in the context of Serverless and Edge functions, so factoring that in could also be useful.

Governance

It would be great to have some governance around this in particular to make sure participation is socialized and to conduct a "roll call" from key contributors when preparing the next version of the specification. This could also maybe even align with other objectives and interest like our reports for TPAC?

Not so much sure on the process just yet, mostly just interested in getting the idea out there for now.


Thoughts / Prompts / Bike Shedding

  1. How often to snap a line (yearly?)
  2. Versioning strategy (semver, yearly, other)
  3. Different specification for Browser vs SSR?
  4. Specification name (for CEM)

[meta] Add guidance on events and interfaces

We would prefer that protocols are consistent when it comes to the use of events and other interfaces. We should provide guidance to encourage this and help reviewers.

I would propose that the guidance include things like:

  • Do not depend on specific implementations as part of a protocol. Define interfaces instead.
  • Define interfaces in TypeScript
  • Prefer Event subclasses over CustomEvent (see the sketch after this list)
  • Publishing an interface-only (.d.ts) package to npm can be useful for interop
  • Publishing interoperability / conformance tests is great!
  • Publishing a reference implementation is great too
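
As a rough illustration of the last few points, a protocol could publish nothing but an interface and an Event subclass like the following (the "example-notify" name and shape are purely illustrative, not an existing protocol):

// Hypothetical "example-notify" protocol, sketched only to illustrate the
// guidance above; none of these names come from a real protocol definition.

// The protocol defines an interface, not a concrete implementation.
export interface ExampleNotification {
  readonly reason: string;
}

// Prefer an Event subclass with typed fields over CustomEvent + detail.
export class ExampleNotifyEvent extends Event implements ExampleNotification {
  constructor(public readonly reason: string) {
    super('example-notify', { bubbles: true, composed: true });
  }
}

// Consumers depend only on the event name and the interface shape, so any
// implementation matching them is interoperable.
document.addEventListener('example-notify', (e) => {
  console.log((e as ExampleNotifyEvent).reason);
});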

[context] improve naming of dispose and multiple arguments

Currently the proposal outlines that the context-request event should carry a payload that can include the optional argument multiple, indicating that the consumer will handle multiple repeated deliveries of the context value. The callback in the payload then receives an optional second argument, dispose, a function which a component can invoke to indicate to its provider that it no longer wishes to receive updates.

I'd like to propose that we rename these two parameters to make them more related to each other:

export type ContextCallback<ValueType> = (
  value: ValueType,
  unsubscribe?: () => void
) => void;

export class ContextEvent<T extends ContextKey<unknown>> extends Event {
  public constructor(
    public readonly context: T,
    public readonly callback: ContextCallback<ContextType<T>>,
    public readonly subscribe?: boolean
  ) {
    super('context-request', {bubbles: true, composed: true});
  }
}

So an event is emitted with an argument in its payload to subscribe to further updates to the context value, and then receives in its callback a second argument to unsubscribe from those updates.

This feels to me like it clarifies the intent and behavior here.
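
For illustration, a consumer using the renamed arguments might look roughly like this (a sketch that assumes the ContextEvent above and a hypothetical themeContext key):

// Sketch only: themeContext is a hypothetical ContextKey<string>.
declare const themeContext: ContextKey<string>;

class ThemedElement extends HTMLElement {
  theme = 'light';
  private unsubscribeTheme?: () => void;

  connectedCallback() {
    this.dispatchEvent(
      new ContextEvent(
        themeContext,
        (value, unsubscribe) => {
          this.theme = value;
          // Keep the unsubscribe function so the subscription can be ended later.
          this.unsubscribeTheme = unsubscribe;
        },
        true // subscribe: we want repeated deliveries of the value
      )
    );
  }

  disconnectedCallback() {
    // Tell the provider we no longer want updates.
    this.unsubscribeTheme?.();
  }
}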

Support versioning of Web Components

This could be the simplest request to fulfill, ever, assuming it makes sense to anyone.

The issue of name clashes between web components has been addressed here, and progress has been frustratingly slow. Here's to hoping it sees some progress soon.

This proposal is highly related to that proposal.

Let me give this proposal a tentative name: Custom Element Weak Map Lookup, or CEWML for short, and I'll refer to Scoped Custom Element Proposal as SCEP.

CEWML is not meant to supplant SCEP, and I think it might continue to be useful even if SCEP were fully implemented by the browsers. However, I haven't wrapped my brain around that proposal enough to know whether that is the case.

But even if it renders this proposal useless, this could, I think, be used in the interim.

The key is that we create a single npm package exporting a single-file JS module, with either this signature:

export const scopedVersions = new WeakMap<ShadowRoot | Document, WeakMap<{new(): HTMLElement}, string>>();

or more simply, this:

export const versions = new WeakMap<{new(): HTMLElement}, string>();

Maybe it includes both?

We version this package 1.0.0 and never, ever modify it. As long as everyone imports it via the same mechanism (import maps or bundling), there shouldn't be an issue of multiple versions of this one- or two-line JS file running around.

Each web component provider could provide its own way of facilitating how to do this registration / lookup.

Here's the approach I'm following, but this proposal in no way requires it. I just think it's helpful to provide a concrete example of how this could work with one implementation.

When I register a web component, say "my-component", I first check if my-component is already defined. If not, great. I happily register it with that name.

But if it is in use, I quietly register it by appending the first number I can find that hasn't been registered: for example my-component-1, or my-component-2, etc. Kind of like starting a web server, and searching for an available port.

I adopt the Polymer convention of adding the "is" static property to my custom element constructors. I set this to the canonical name. But when I find an available name before calling define, I set another static property on the constructor: "isReally".
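
A registration helper along those lines might look roughly like this (a sketch only; the 'cewml' import path and the defineVersioned name are made up for illustration):

// Hypothetical shared 1.0.0 package exposing the `versions` WeakMap.
import { versions } from 'cewml';

export function defineVersioned(ctor: CustomElementConstructor & { is: string }): string {
  let name = ctor.is;
  // Probe for a free tag name, like searching for an available port.
  for (let i = 1; customElements.get(name); i++) {
    name = `${ctor.is}-${i}`;
  }
  customElements.define(name, ctor);
  // Record the name actually used, both on the constructor and in the shared map.
  (ctor as { isReally?: string }).isReally = name;
  versions.set(ctor, name);
  return name;
}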

So now if I generate the html using JavaScript, I can dynamically substitute the name in with this admittedly ugly code:

html`<${myComponentImport.isReally}>...</${myComponentImport.isReally}>`

However, this is a very specific implementation, and other groups may have no interest in adopting this exact approach.

But the key is that each web component provider that wishes to partake in this solution would need to provide some way to associate the import with a specific name guaranteed not to clash with any other import. Really, if everyone could guarantee a unique global name, we wouldn't need the outer ShadowRoot/Document WeakMap key at all. But maybe that's asking too much?

What's always bothered me about this is that it wouldn't work for HTML-first solutions (like server-rendered code). But if there is a common lookup mechanism like the one above, it would be possible to write a little JS for that:

  1. Wrap all web components that may have clashes with other web components inside a template:

<template>
  <my-canonical-name>
    <my-light-children></my-light-children>
  </my-canonical-name>
</template>

Yes, this means the SSR output wouldn't show anything until the name resolution is complete, which isn't ideal, but it's the best I can come up with.

  2. The JavaScript would, after importing all the custom element references, perform the lookup, search for such templates inside its shadow DOM, and rewrite the outer tag name to match the lookup while instantiating the template.

The lookup would look like:

const finalName = scopedVersions.get(shadowDomRoot).get(myComponentClass);

or, if in addition, we insist that this protocol works with some way of avoiding name clashes, just do:

const finalName = versions.get(myComponentClass);

I think the first lookup would be useful for scenarios where the web component provider offers a way of automatically subclassing the base element for each shadow root with a specific name in mind.

I know this proposal isn't quite 100% solid, but I wanted to throw it out there to see if there's something like this we could do.

Hot module replacement API

Currently it seems a fair number of projects are working towards implementing HMR support.

A couple of existing implementations related to webcomponents/ESM:

These are only a couple; there will be more. However, there is no consistent API across them right now.


There are three parts to HMR as far as I can see:

  • Server-side (basically a file watcher which notifies the client when a module changes)
  • Client-side (an API to communicate with these server updates)
  • Framework/library specific (an integration of the client-side API into a specific ecosystem like lit-element)

Server-side

The server-side implementation should be as simple as a web socket service which emits messages of the following types:

  • update - a message specifying that a particular module needs reloading
  • reload - a message specifying that the page must reload as a whole

Client-side

An API should be made available at import.meta.hot which can have methods for the following:

  • Accept updates (notify the server this module can handle updates, via an accept message)
  • Refuse updates (notify the server this module cannot handle updates)
  • Invalidate the current module (if something went wrong, force a full reload)
  • Disposer (handle teardown of the module before a new version is loaded)

The client-side implementation should primarily exist to handle the server-side messages, though it should also emit its own message:

  • accept - a message specifying that the current module supports HMR

Handling of the server-side messages could look like this (a rough client-side sketch follows this list):

  • update - dynamically import the specified module and execute a user-supplied callback for dealing with the update
  • reload - call window.location.reload, I suppose
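
A minimal client-side handler for those two messages might look something like this (a sketch only; the socket endpoint and message shapes are assumptions, not a settled API):

// Sketch of a client runtime reacting to the server's update/reload messages.
const socket = new WebSocket(`ws://${location.host}/__hmr`); // hypothetical endpoint

// Callbacks registered per module URL via import.meta.hot.accept(...).
const acceptCallbacks = new Map<string, (mod: unknown) => void>();

socket.addEventListener('message', async (event) => {
  const message = JSON.parse(event.data);

  switch (message.type) {
    case 'update': {
      // Re-import the changed module with a cache-busting query, then hand
      // the new module body to whatever callback the old module registered.
      const newModule = await import(`${message.url}?t=${Date.now()}`);
      acceptCallbacks.get(message.url)?.(newModule);
      break;
    }
    case 'reload':
      window.location.reload();
      break;
  }
});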

Example implementation

Within the modernweb repo I wrote the following message types:

// emitted by the server
export interface HmrReloadMessage {
  type: 'hmr:reload';
}

// emitted by the server
export interface HmrUpdateMessage {
  type: 'hmr:update';
  url: string;
}

// emitted by the client
export interface HmrAcceptMessage {
  type: 'hmr:accept';
  id: string;
}

Note that the message types are prefixed here because we already had a web socket open and didn't want to open a second one just for this protocol. Though it could be argued a dedicated protocol is better here than a prefixed set of types.

Meanwhile, I used Snowpack as inspiration to write a client API which looks like this:

// Shape of the API at import.meta.hot
interface ImportMetaHot {
  accept(callback: (module: unknown) => void): void;
  accept(deps: string[], callback: (modules: unknown[]) => void): void;
  dispose(callback: () => void): void;
  decline(): void;
  invalidate(): void;
}

However, I'm not such a fan of it, even though I wrote it, since weak naming can quickly cause confusion.

I would suggest something more like:

interface ImportMetaHot {
  acceptCallback(callback: (module: unknown) => void): void;
  acceptCallback(deps: string[], callback: (modules: unknown[]) => void): void;
  disposeCallback(callback: () => void): void;
  decline(): void;
  invalidate(): void;
}

Framework/library specific

For example, the work being done on lit-element around HMR will produce an overridden customElements.define which then understands how to update an element when it is re-defined.

Peter's work in the lit branch has this:

static notifyOnHotModuleReload(tag, newClass)

Which I agree with, though maybe it should be named with a Callback suffix, like connectedCallback and such.

The idea here being that every HMR-compatible web component would have this standard static method, which the library or user must implement.
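
As a rough sketch of how that contract could be wired up (not how the lit branch actually implements it), an HMR runtime might patch define and route re-definitions to the static hook:

// Sketch only: send repeated definitions of a tag to notifyOnHotModuleReload
// instead of letting customElements.define throw.
const originalDefine = customElements.define.bind(customElements);
const definedClasses = new Map<string, CustomElementConstructor>();

customElements.define = (tag: string, ctor: CustomElementConstructor, options?: ElementDefinitionOptions) => {
  const existing = definedClasses.get(tag);
  if (existing) {
    // Hand the new class to the previously registered one, if it opted in.
    (existing as { notifyOnHotModuleReload?: (tag: string, newClass: CustomElementConstructor) => void })
      .notifyOnHotModuleReload?.(tag, ctor);
    return;
  }
  definedClasses.set(tag, ctor);
  originalDefine(tag, ctor, options);
};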

Summary

I think the most important thing to get right here is the client API available at import.meta.hot and the framework/library specific interface.

Proposal: Well-known symbol for DOM shim

Issue #17 seeks to make web components runtime agnostic by not relying on globals. However, this will be difficult to get the community aligned on in practice, as it would require essentially all component authors to start writing components in a different way than they do today.

This proposal instead seeks to lessen the damage that comes with requiring a DOM shim. Using DOM shims has the disadvantage that adding globals such as window and document can interfere with libraries that still use those globals to detect that they are running in a browser context.

globalThis[Symbol.for('wc.defaultView')]

This proposal is to establish a well-known symbol (bikeshed on the name) under which a shim will place browser globals. A shim might look like this:

shim.js

const { window, document, customElements, HTMLElement } = new DOMLibrary();

globalThis[Symbol.for('wc.defaultView')] = {
   window,
   document,
   customElements,
   HTMLElement
};

Then a component library will first look for this symbol to extract the globals it needs:

lit-element.js

const { HTMLElement, customElements, document } = globalThis[Symbol.for('wc.defaultView')] || globalThis;

Falling back means that in a browser context where a shim hasn't run, these globals will be extracted from the window object.

[context] are context objects needed?

Reading the Context proposal, one thing that sticks out to me is that it's a bit unclear what the purpose of the Context object is. It seems like it's used by a provider as a filter for contexts that it can supply. Assuming that's the case, wouldn't it make sense to put the type of context being requested in the event name instead?

i.e., instead of "context-request" it becomes "context-request-theme", and a theme provider listens for that specific event rather than all context events.
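
For comparison, that alternative would look roughly like this (a sketch with hypothetical theme elements; CustomEvent is used here only to keep the example short):

// Sketch of the alternative: a context-specific event name instead of a
// shared "context-request" event filtered by a Context object.
type ThemeCallback = (theme: string) => void;

class ThemeConsumer extends HTMLElement {
  theme = 'light';

  connectedCallback() {
    this.dispatchEvent(
      new CustomEvent<{ callback: ThemeCallback }>('context-request-theme', {
        bubbles: true,
        composed: true,
        detail: { callback: (theme) => { this.theme = theme; } },
      })
    );
  }
}

class ThemeProvider extends HTMLElement {
  currentTheme = 'dark';

  connectedCallback() {
    // The provider only listens for the one context it can supply.
    this.addEventListener('context-request-theme', (e) => {
      (e as CustomEvent<{ callback: ThemeCallback }>).detail.callback(this.currentTheme);
      e.stopPropagation();
    });
  }
}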

[reactivity] Reactivity/Observable/Signals Proposal

Reactivity Protocol Proposal

IMPORTANT: Be sure to scroll down in the discussion for a secondary proposal that I think may be a better approach than this one.

The primary purpose of this proposal is to start the discussion on trying to understand whether a general reactivity protocol is feasible, allowing:

  • Model/Components systems to decouple themselves from view rendering engines and reactivity libraries.
  • View rendering engines to decouple themselves from reactivity libraries.
  • Potentially, future native HTML templates and DOM Parts to be able to rely on minimal APIs, even if the browser doesn't yet ship with an implementation.

Achieving this would enable application developers to:

  • Swap out their view layers without needing to re-write their models.
  • More easily mix multiple view layer technologies together without sync/reliability problems.
  • Choose between multiple reactivity implementations, picking the one that has the best performance characteristics based on their application needs. For example, one engine might be faster for initial view rendering, but another might be faster for view updating. Engines could also be selected based on target device. So, a lower memory engine could be used on mobile devices, for example.

A quick implementation of the ideas in this proposal is available here.

Consumers

There are three different consumers of the protocol: reactivity engines, view engines, and model/application developers. Let's look at the proposal from each of these perspectives, in reverse order.

Model and Application Developers

The primary APIs needed by app developers are those that enable them to create reactive values and models. The protocol provides both a declarative and an imperative way of creating property signals. It also provides low-level APIs for creating custom signals.

Example: Declaring a model with an observable property

import { observable } from "@w3c-protocols/reactivity";

export class Counter {
  @observable accessor count = 0;

  increment() {
    this.count++;
  }

  decrement() {
    this.count--;
  }
}

The observable decorator creates an observable property. The underlying protocol doesn't provide an implementation of the signal infrastructure, just a way for the model/app developer to declare something as reactive. We'll look at how the reactivity engine provides the implementation shortly. There's also an imperative API, which can be used on any object like this:

Example: Using the imperative API to define an observable property

import { Observable } from "@w3c-protocols/reactivity";

Observable.defineProperty(someObject, "someProperty");

Under the hood, both the declarative and the imperative APIs create properties where the getter calls the configured reactivity engine's onAccess() callback and the setter calls the engine's onChange() callback.

The protocol provides a facade to the underlying engine via the Observable.trackAccess() and Observable.trackChange() APIs for consumers that want to create custom signals. Here's how one could create a simple signal on top of the protocol:

Example: Creating a custom signal

import { Observable } from "@w3c-protocols/reactivity";

export function signal(value, name = generateUniqueSignalName()) {
  const getValue = () => {
    Observable.trackAccess(getValue, name);
    return value;
  }

  const setValue = newValue => {
    const oldValue = value;
    value = newValue;
    Observable.trackChange(getValue, name, oldValue, newValue);
  }

  getValue.set = setValue;
  Reflect.defineProperty(getValue, "name", { value: name });

  return getValue;
}

Example: Using a custom signal

const count = signal(0);
console.log('The count is: ' + count());

count.set(3);
console.log('The count is: ' + count());

View Engine Developers

While app developers have a primary use case of creating reactive values, models, and components, view engine developers primarily need to observe these reactive objects, so they can update DOM. The primary APIs being proposed for this are ObjectObserver, PropertyObserver, and ComputedObserver. These are named and their APIs are designed to follow the existing patterns put in place by MutationObserver, ResizeObserver, and IntersectionObserver. A view engine that wants to observe a binding and then update DOM would use the API like this:

Example: A view engine updating the DOM whenever a binding changes

import { ComputedObserver } from "@w3c-protocols/reactivity";

const updateDOM = () => element.innerText = counter.count;
const observer = new ComputedObserver(o => o.observe(updateDOM));
observer.observe(updateDOM);

In fact, you may recognize this as the effect pattern, provided by various libraries, which could generally be implemented on top of the protocol like this:

Example: Implementing an effect helper on top of the protocol

function effect(func: Function) {
  const observer = new ComputedObserver(o => o.observe(func));
  observer.observe(func);
  return observer;
}

Example: Using an effect helper to update the DOM

effect(() => element.innerText = counter.count);

Each of the *Observer classes takes a Subscriber in its constructor, just like the standard MutationObserver, ResizeObserver, and IntersectionObserver. Following the same pattern, they each also have observe(...) and disconnect() methods. The implementation of each of these is provided by the underlying reactivity engine.

Reactivity Engine Developers

A reactivity engine must implement the following interface:

interface ReactivityEngine {
  onAccess(target: object, propertyKey: string | symbol): void;
  onChange(target: object, propertyKey: string | symbol, oldValue: any, newValue: any): void;
  createComputedObserver(subscriber: Subscriber): ComputedObserver;
  createPropertyObserver(subscriber: Subscriber): PropertyObserver;
  createObjectObserver(subscriber: Subscriber): ObjectObserver;
}

The app developer can then plug in the reactivity engine of their choice, with the following code:

Example: Configuring a reactivity engine

import { ReactivityEngine } from "@w3c-protocols/reactivity";

// Install any engine that implements the interface.
ReactivityEngine.install(myFavoriteReactivityEngine);

NOTE: By default, the protocol library provides a noop implementation, so all reactive models will function properly without reactivity enabled.

Here is a brief explanation of the interface methods:

  • onAccess(...) - The protocol will call this whenever an observable value is accessed, allowing the underlying implementation to track the access. This is invoked from the getter of a protocol-defined property. Custom signal implementations can also directly invoke this via Observable.trackAccess(...).
  • onChange(...) - The protocol will call this whenever an observable value changes, allowing the underlying implementation to respond to the change. This is invoked from the setter of a protocol-defined property. Custom signal implementations can also directly invoke this via Observable.trackChange(...).
  • createComputedObserver(...) - The protocol calls this whenever new ComputedObserver() runs so that the implementation can provide its own computed observation mechanism.
  • createPropertyObserver(...) - The protocol calls this whenever new PropertyObserver() runs so that the implementation can provide its own property observation mechanism.
  • createObjectObserver(...) - The protocol calls this whenever new ObjectObserver() runs so that the implementation can provide its own object observation mechanism.

Since ObjectObserver can be implemented in terms of PropertyObserver and PropertyObserver can be implemented in terms of ComputedObserver, the protocol library provides a FallbackPropertyObserver and FallbackObjectObserver that do just that. This means that the underlying implementation is only required to implement createComputedObserver(). But implementations can choose to optimize property and object observation if they want to by providing observers for these scenarios.
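
To make the shape of that contract concrete, a do-nothing engine might look roughly like this (a sketch only; the actual default implementation in the proposal repo may differ):

import { ReactivityEngine } from "@w3c-protocols/reactivity";

// Sketch only: an observer that satisfies observe()/disconnect() but does nothing.
const noopObserver = {
  observe(..._args: unknown[]): void {},
  disconnect(): void {},
};

// A do-nothing engine: reactive models keep working, nothing is ever notified.
const noopEngine = {
  onAccess(_target: object, _propertyKey: string | symbol): void {
    // Nothing to track.
  },
  onChange(_target: object, _propertyKey: string | symbol, _oldValue: unknown, _newValue: unknown): void {
    // The property still updates, but no subscribers are notified.
  },
  createComputedObserver: () => noopObserver,
  createPropertyObserver: () => noopObserver,
  createObjectObserver: () => noopObserver,
};

// Installed the same way as any real engine.
ReactivityEngine.install(noopEngine);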

The proposal repo contains a work-in-progress implementation of this proposal. It also contains two test reactivity engine implementations, as well as a test view engine, and a test application.

WARNING: Do not even think about using the test reactivity engines or the test view engine in a real app. They have been deliberately simplified, have known issues, and are not the least bit production-ready. They serve only to validate the protocol.

Open Questions

  • Should the protocol enable view engines to mark groups of observers for more efficient observe/disconnect?
    • e.g. Observable.pushScope(), Observable.popScope(), and scope.disconnect().
  • Should the protocol provide a way to create observable arrays and array observers?
    • e.g. const a = Observable.array(1,2,3,4,5); and new ArrayObserver(...).observe(a);
  • Should the shared protocol library take on the responsibility of implementing common patterns on top of the protocol such as signal, effect, and resource? (An effect implementation is currently provided as an example.)
  • Should the protocol include a standard update queue to ensure timing of subscription delivery or should they be delivered immediately, with the expectation that subscribers handle any sort of batching or DOM update timing?

[context] What is the use case for one-off requests?

I think for many people (including me), the intuitive idea of a context is that a consumer, simply by virtue of existing under a provider, should have the provider's value; I think a lot of people would think of this as an invariant that contexts should obey. But the idea of a one-off context request (as opposed to a subscription request) inherently contradicts this, since the consumer immediately falls out-of-sync with the provider when the context changes. Effectively, it feels like the consumer was never really 'in the context' of the provider to begin with.

This might seem trivial, since if a consumer wanted to stay in-sync, it should have asked the provider to keep it in-sync. But my question is: why wouldn't a consumer want to stay in-sync? Why would a consumer want to behave in this way? Again, it feels contradictory to the very idea of contexts; it's not apparent to me that it makes conceptual sense.

@lit/context actually defaults to making one-off requests, which very much implies there is a good use case for them, but none of the code examples seem to motivate this. I expect I may be missing something obvious, so I would appreciate it if anyone can shed light on this.

Server-Side Rendering (SSR) API

Context

Web Components have hydration or upgrade capabilities built-in.

Basic example:

<my-comp>
Loading...
</my-comp>

Until <my-comp> is registered (via JavaScript), the browser will show Loading....

The recent advancements in Declarative Shadow DOM (whatwg/dom#831) make it possible to even pre-render shadow roots with encapsulated styles.

<my-comp>
  <template shadowroot="open">
    <style>
       .myshadowclass {
          ...
       }
    </style>
    <div class="myshadowclass">
        ...
    </div>
  </template>
</my-comp>

And this would be progressively enhanced by the custom element code.

Motivation

Ideally you want to have a single source of code for your component that will manage the pre-rendering and the progressive enhancement.

This has been managed recently by "meta-frameworks" like Next.js, Nuxt or Sapper, but these are very tied to the underlying technology: React, Vue and Svelte respectively.

There is an opportunity with Web Components to decouple the framework/lib used to build the components and the "meta-framework" used to orchestrate the pre-rendering and hydration.

In other words, you could have "meta-frameworks" (or 11ty plugins, for that matter) able to pre-render static forms of any Web Component, no matter what framework/lib was used to build it (LitElement, Stencil, or any of the 40+ others).

I think it makes sense to deal with this in user land.

Proposal

This is a proposal to get the ball rolling and start the discussion; in no way do I think this is the perfect solution.

Define a method that SSR-capable Custom Elements would implement:

interface SSRCapableElement {
  render_ssr(): string;
}

Frameworks/libs can automatically implement this so users only need to write a single render()-like method for both SSR and the client side. It could provide an ssr flag for conditional rendering, but this is up to the framework/lib to decide.

"meta-frameworks" in charge of generating the static content would:

  • load page (including javascript)
  • discover custom-elements
  • instantiate custom-element classes
  • inject attributes and properties
  • call render_ssr()
  • stick it into place
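
A very rough sketch of that loop, assuming a server-side DOM implementation and that all component modules have already been imported (the function name and details are illustrative):

// Sketch of the meta-framework side; assumes `document` and `customElements`
// are provided by a server-side DOM shim.
function prerenderDocument(document: Document): void {
  for (const el of Array.from(document.querySelectorAll('*'))) {
    if (!el.tagName.includes('-')) continue; // only custom elements

    const ctor = customElements.get(el.tagName.toLowerCase());
    if (!ctor) continue; // not registered, leave for client-side upgrade

    // Instantiate off-document, copy attributes across, then render.
    const instance = new ctor() as HTMLElement & { render_ssr?: () => string };
    for (const attr of Array.from(el.attributes)) {
      instance.setAttribute(attr.name, attr.value);
    }

    if (typeof instance.render_ssr === 'function') {
      // Stick the pre-rendered declarative shadow root into place.
      el.insertAdjacentHTML('afterbegin', instance.render_ssr());
    }
  }
}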

Example

source index.html

<html>
...
Foo bar foo
<my-comp attr="true">
</my-comp>
...
</html>

index.html after generation by the "meta-framework", which calls the render_ssr() method:

<html>
...
Foo bar foo
<my-comp attr="true">
  <template shadowroot="open">
    <style>
       .myshadowclass {
          ...
       }
    </style>
    <div class="myshadowclass">
        ...
    </div>
  </template>
</my-comp>
...
</html>

Questions

How to push properties statically?

Can we use the proven dot notation already used by many Web Component libs today?

<my-comp attr="true" .prop1="{ key: value }">
 ...
</my-comp>

Anything in the .prop1 value would be evaluated and the result would go to the prop1 property.
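
As a rough illustration of what a meta-framework could do with such attributes (purely a sketch; a real implementation would need a much safer evaluator than Function()):

// Copy `.prop`-style attributes onto the instance as evaluated properties
// before calling render_ssr(). Illustrative only.
function applyDotProps(el: Element, instance: HTMLElement): void {
  for (const attr of Array.from(el.attributes)) {
    if (!attr.name.startsWith('.')) continue;
    const propName = attr.name.slice(1);
    // Evaluate the attribute value as a JS expression.
    const value = new Function(`return (${attr.value});`)();
    (instance as unknown as Record<string, unknown>)[propName] = value;
  }
}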

Do we need an additional constructor, constructor(for_ssr: boolean)?

To give full context to the Web Component right at instantiation.
It would be safer, I guess, for dealing with side effects related to attribute/property setters.

My 2 cents

Runtime agnostic web components

I'm spinning this off from #7. One of the underlying goals in that issue is:

  • Prevent running server-only code in the client.

This issue is for discussing ideas on how to write web components in such a way that they can be run in any JS environment without throwing. Environments might include: The web, web workers, Node.js, Deno, Cloudflare, among others.

Custom elements have at least two dependencies that are web (main thread) specific APIs: HTMLElement and customElements. A typical custom element is written like this:

class MyElement extends HTMLElement {

}

customElements.define('my-element', MyElement);

While libraries often provide their own base class and some might also register the element for you, it doesn't change the fact that these APIs are dependencies in an element's graph.

I would add to the list of goals the following:

  • Elements should not rely on a shimmed environment, i.e. no patching of globalThis.

Note that this issue does not try to address how much of HTMLElement is needed, as that will vary depending on the element. Some elements might need setAttribute, for example, while others might not need any methods of HTMLElement at all.

Below I'm going to post some ideas on how to structure custom element code to address this problem. I encourage others to provide their ideas as there are multiple ways to tackle it.

[context] supporting cases where provider is defined after consumer

Hey all, first time posting here. I'm very happy to find the context proposal here as it's a pattern I've needed several times since we've started working on a web component library. I wanted to raise an issue we've run into using events to implement a context API just as a data point to consider.

When we first ran into the need for this, we started off with a very similar approach; events with callbacks dispatched by consumers. This generally works really well, but it has one requirement that ended up biting us in a couple situations: provider components generally must be defined before consumer components. If consumer components are defined first, it's possible for events dispatched by the consumers to bubble up through the provider component before it's upgraded.

Many times this isn't a problem; you can have your consumer component import the provider component (or await customElements.whenDefined(...)), but we have a couple of cases where it became an issue:

  1. Context-based systems where the consumer doesn't know the module (or the element name) of the provider it's looking for, and so can't import or await the definition
  2. In some cases, we specifically want to load the consumer component with higher priority.

The second case there is a little less obvious, so here's an example from our component library: we have a custom form component (e.g. my-form) that is essentially a <form> element with a bunch of additional features. One of the things it does is allow other custom elements to hook into form submission and validation via a context API, so you can build custom inputs and various other features that hook into form state.

<my-form>
  <my-input name="foo"></my-input>
  <button type="submit">Submit</button>
</my-form>

In this scenario, the issue for us was performance optimization: our form component is fairly heavy, but it doesn't have any styles or markup (so it doesn't affect paint at all). Conversely, the components that consume its context API tend to be smaller, but do tend to impact layout/paint. If we're trying to optimize for first paint, it makes sense to load the form component itself after these child components, but doing so breaks the event-based context registration.

The solution we came up with was to replace the event dispatch with a utility that asynchronously crawls up the DOM tree from a consumer, awaiting each custom element it encounters along the way. It looks something like this:

async function findContext<T>(
  from: HTMLElement,
  isMatch: (e: Element) => e is Provider<T>
) {
  let el = from;

  while (el.parentElement) {
    el = el.parentElement;

    // We can skip any builtin elements
    if (!el.tagName.includes('-')) continue;

    const tag = el.tagName.toLowerCase();

    if (!customElements.get(tag)) {
      // a non-upgraded ancestor might be the provider we're looking for, so wait for it
      await customElements.whenDefined(tag);
    }

    if (isMatch(el)) return el as Provider<T>;
  }
}

This ensures that the context provider will get hooked up to consumers even if it is upgraded later, but it's not perfect:

  • It makes the initial registration async instead of sync; this wasn't really an issue for us but it could be for some
  • It breaks if any elements sitting between the consumer and the provider never get upgraded, as it will await indefinitely on each undefined custom element it encounters while crawling up the tree.

This solution is working for us for now, but I'd love to settle on something a bit more in line with what other folks are doing, if it can work for these sorts of use cases.

[defer-hydration] Async hydration

Since the defer-hydration proposal seems to be moving forward I'd like to start one point of discussion: Is hydration a fundamentally synchronous or asynchronous process? I don't know of a strict, formal definition of "hydration" which can answer this question, but the proposal currently uses the following definition:

In server-side rendered (SSR) applications, the process of a component running code to re-associate its template with the server-rendered DOM is called "hydration".

This issue mainly boils down to answering the question: Is the process of re-associating a template with server-rendered DOM always synchronous?

Use case

I think there is a case to be made for async hydration. Consider a component which needs to load its own data asynchronously before it is fully functional. For example, consider a component which shows a user with a large number of friends. We might not want to list out every friend in the initial HTML, because some users can have thousands of friends. Instead, we might choose to lazy load this list of friends and render it when it becomes available (possibly with streaming or other cool tricks). Full example on Stackblitz.

<my-user user-id="1234">
  <div>Name: <span class="name">Devel</span></div>
  <div>Friends list: <span class="loading">Loading...</span></div>
  <ul class="friends"></ul>
</my-user>
class MyUser extends HTMLElement {
  private user?: User;

  connectedCallback(): void {
    if (!this.isHydrated) {
      this.hydrate();
      this.isHydrated = true;
    }
  }

  private isHydrated = false;
  private async hydrate(): Promise<void> {
    const userId = Number(this.getAttribute('user-id')!);
    this.user = await fetchUser(userId);

    const friendsList = this.querySelector('.friends');
    for (const friend of this.user.friends ?? []) {
      const friendListItem = document.createElement('li');
      friendListItem.textContent = friend.name;
      friendsList.append(friendListItem);
    }

    this.querySelector('.loading').remove();
  }
}

customElements.define('my-user', MyUser);

interface User {
  name: string;
  friends?: User[];
}

Ok, so we defined our own hydrate method and made it async. Web components are free to define their own implementations and this is fine on its own. It's basically just a "slow" hydration. The problem comes when we try to expose this async data through something like a getFriends method.

class MyUser extends HTMLElement {
  private user?: User;

  getFriends(): User[] {
    return this.user!.friends ?? [];
  }
}

This might seem like a simple addition, but it completely changes the lifecycle of this component as we now have a timing bug. This code assumes hydrate() has fully completed its async work before getFriends is called. However, the promise which awaits this data (the return value of hydrate()) is not accessible in a generic manner. For example, if we tried to hydrate and use this component according to the defer-hydration specification, it would look like:

const userComponent = document.querySelector('my-user');
userComponent.removeAttribute('defer-hydration'); // Trigger hydration.
console.log(userComponent.getFriends()); // ERROR! We don't know any friends yet!

We're forced into some uncomfortable design decisions. I can see a few potential solutions to this component which don't involve modifying the defer-hydration proposal:

Wait via a my-user-specific API

One approach is to have my-user define its own API users should use to know when it is done hydrating asynchronously:

const userComponent = document.querySelector('my-user');
userComponent.removeAttribute('defer-hydration'); // Trigger hydration.
await userComponent.doneLoadingUser;
console.log(userComponent.getFriends()); // Works!

Downsides:

  • Every component will define this API a little differently.
  • The code which hydrates the component (removeAttribute('defer-hydration')) may be very far away from the code which calls getFriends() and may not know it is looking at a my-user component or that doneLoadingUser exists.

Implicitly hydrate in getFriends

Another approach is for getFriends to automatically hydrate before returning:

const userComponent = document.querySelector('my-user');
const friends = await userComponent.getFriends(); // Implicitly hydrates.
console.log(friends); // Works!

Downsides:

  • Every method needs to implicitly check and initialize the component automatically.
  • This "colors" every method to be async, even when it doesn't actually do any async work beyond hydration.
  • It's not obvious that getFriends will hydrate the component or apply any associated side effects (trigger network requests, add event listeners, modify the component UI, etc.).
  • Component needs to remember to run this.removeAttribute('defer-hydration') or it could misrepresent its current hydration status.
    • Question: Does this happen when the user calls getFriends or when the returned Promise resolves? Is the component "hydrated" when it starts hydrating or when it's done hydrating?
      • The "obvious" answer to me is that it's hydrated when it's done hydrating, however that goes against what happens when defer-hydration is manually removed by a parent component. In that scenario, defer-hydration is removed at the start of hydration, but calling getFriends would remove it at the end of hydration.

Both of these approaches effectively treat hydration as the synchronous process of reading the DOM state (the user-id attribute in this case) and providing a separate "initialization" process for consumers to know when the component is initialized and ready. Since initialization is a different, out-of-scope process from hydration, component consumers cannot make any generic inferences about how initialization will work.

Async data takes a lot of forms. One can imagine a framework which identifies large component trees and pushes some hydration data out of the initial page response to reduce the initial download time. Then on hydration, components may fetch the data they need to hydrate in order to make themselves interactive. I'm not aware of any framework which quite does this (I don't think Qwik or Wiz work this way), but it is an interesting avenue which could be explored in the future and would be incompatible with defer-hydration as currently specified.

Straw-proposal

Just to put out one potential proposal which could address this use case in the community protocol, we could define a whenHydrated property on async hydration components (mirroring customElements.whenDefined). This property would be assigned to a Promise which, when resolved, indicates that the component is hydrated. In practice this would look like:

class MyUser extends HTMLElement {
  public whenHydrated?: Promise<void>;
  private hydrate(): void {
    this.whenHydrated = (async () => {
      const userId = Number(this.getAttribute('user-id')!);
      this.user = await fetchUser(userId);

      const friendsList = this.querySelector('.friends');
      for (const friend of this.user.friends ?? []) {
        const friendListItem = document.createElement('li');
        friendListItem.textContent = friend.name;
        friendsList.append(friendListItem);
      }

      this.querySelector('.loading').remove();
    })();
  }
}

Then, when hydrating a component we can generically check if async work needs to be done.

const userComponent = document.querySelector('my-user');
userComponent.removeAttribute('defer-hydration'); // Trigger hydration.

// Wait for hydration to complete. If there is no `whenHydrated` set, then it must be able to synchronously hydrate.
if (userComponent.whenHydrated) await userComponent.whenHydrated;

console.log(userComponent.getFriends()); // Works!

This proposal supports async hydration in a generic and interoperable manner.

Discussion

To be clear, I'm not necessarily trying to argue defer-hydration absolutely should support async hydration. I'm not fully convinced this is a good idea either, but I do think it's something worth discussing at minimum.

Hydration vs. Initialization

As I've hinted a bit earlier, I suspect the concept of "async hydration" is somewhat intermingling two independent concerns: hydration and initialization. An alternative definition of "hydration" can more narrowly specify the concept along the lines of "Reading initial component state from prerendered DOM". Based on this definition, the my-user component described above does more than just hydrate itself. Number(this.getAttribute('user-id')!) is the only real "hydration" the component performs. Everything else is completely unrelated initialization work which applies both in CSR and SSR use cases. Fetching data from the user ID and updating the DOM can be considered "initialization" rather than "hydration".

If we accept this more narrow definition of "hydration" and call initialization an independent problem which is out of scope of defer-hydration, then there's no bug here and the proposal doesn't need to change at all. Understanding when an object is initialized has been a problem for as long as we've had objects after all.

OTOH, if we define "hydration" along the lines of "making the component interactive to the user", I think it is entirely fair to expect that some components will require asynchronous work before they can support interactivity. Here's another Stackblitz of a somewhat contrived use case which requires a network request for initialization data before buttons can be enabled. Calling such a component "hydrated" synchronously after defer-hydration is removed would be misleading, because the component is still in no way interactive and has not presented any visual or behavioral change to the user.

If we accept the separation of concerns between hydration and initialization, then defer-hydration becomes a much less powerful proposal. If "hydrated" does not imply "initialized" then it is hard to generically do anything with a component.

const myElement = document.querySelector('.some-element');
if (myElement.hasAttribute('defer-hydration')) myElement.removeAttribute('defer-hydration');

// Do... something... with `myElement`?
// Can't really do anything because we have no guarantee that it will work, even if it's hydrated.
myElement.doSomething(); // Could fail purely because initialization hasn't completed yet.

// Is a cast even valid? We have no reason to believe `myUser.getFriends()` would work here,
// so why should we type it in a way which implies it would work?
const myUser = myElement as MyUser;

I think my initial interpretation of defer-hydration was that it could serve as a signal that a component was initialized and fully functional. It's entirely possible that interpretation was incorrect, but I do still think that's usually true, and it provides a lot of power when working with components in a generic fashion. I suspect making defer-hydration support async use cases could further enable hydration to serve as an initialization signal if we think that is the right approach to explore.

Again, I'm not totally sold on the idea of "async hydration" either. I just think it's neat.

Meme of Marge Simpson holding up a potato labeled "Async-hydrating components" and saying "I just think they're neat".

[context] Event namespace

Thanks for putting together a formal specification @justinfagnani

I tried (and failed) to push for some centralization some years ago via dom-context.

Two things to share:

  1. Please see the below list of related projects. In writing dom-context one of the design goals was to be compatible with existing libraries, so I think such an audit would be helpful for moving the proposal forward.
  2. Consider making the event name flexible. The spec currently uses context-request for everything, while other libraries use other event names. It might aid centralization and standardization if we can support polyfills for existing components and libraries (both providers and subscribers); a rough adapter sketch follows below.
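
For example, a small adapter could bridge a library that fires name-spaced events to a provider that speaks the proposed context-request event (a sketch assuming a ContextEvent class and ContextKey type shaped like those in the proposal; the legacy detail shape is made up):

// Sketch only: re-dispatch a legacy `context-request-theme` style event as a
// standard `context-request` so an up-to-date provider can answer it.
function adaptLegacyContextEvent(root: HTMLElement, legacyName: string, context: ContextKey<unknown>): void {
  root.addEventListener(`context-request-${legacyName}`, (e) => {
    const legacy = e as CustomEvent<{ callback: (value: unknown) => void }>;
    e.stopPropagation();
    // Re-dispatch from the original target so the nearest provider above it
    // sees a standard context-request event.
    (e.target as HTMLElement).dispatchEvent(
      new ContextEvent(context, legacy.detail.callback)
    );
  });
}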

Related projects

  • blikblum/wc-context - uses the same event handler approach, includes integrations with other Web Component libraries, and is well tested, but doesn't support retries/polling. Uses the context-request-${name} event namespace. Exposes a core library, so it can be used in other web component compilers.

  • askbeka/wc-context - uses the same event handler approach with the request-context-${contextName} namespace. Only works with custom elements, so incompatible with Stencil.

  • petermikitsh/stencil-context - uses the same event handler approach, but does not support having different context names (everything uses the same shared mountConsumer event name)

  • ionic-team/stencil-state-tunnel - doesn't support nested providers (see issue #8) and requires javascript props on components to wire them up.

  • mihar-22/stencil-wormhole - uses the same event handler approach with openWormhole and closeWormhole event names. Only supports using a single object as context, spreading that object to its children's properties.

  • @corpuscule/context - uses the same event handler approach, but uses decorators so it is incompatible with Stencil

  • haunted - uses the same event handler approach with haunted.context event name, but relies on detail.Context objects for handling multiple context types. Only exposes Provider as custom HTML elements, so doesn't support global providing, or connecting providers into non-custom elements.
