h-rea / hrea

A ValueFlows / REA economic network coordination system implemented on Holochain, with supplied JavaScript GraphQL libraries.
Home Page: https://docs.hrea.io
License: Other
When I ran through the first pass of inter-DNA linking, we were storing "base" entries (*) as the address of the first version of an entry to keep consistent IDs. This was mostly to allow for consistent record IDs between entry updates, across networks. After implementing the second pass, which does not use "base" entries as targets but instead writes metadata around the target link as a JSON-based entry, storage of such consistent record hashes seems less necessary.
(*)(which have since been renamed to "key indexes"; please substitute as appropriate when you see the older terminology)
It may be possible to link directly between entries whilst always referring to them by their consistent initial hash, without incurring any additional storage overhead. For example, we would no longer need the consistently-identified `EVENT_BASE_ENTRY_TYPE` linking to the underlying `EVENT_ENTRY_TYPE` that has a roaming hash.

- `create_record`: instead of storing the base entry address, just return the initial hash coming from `commit_entry`.
- `update_record`: there would no longer be any need to dereference the entry; however, reading the entry metadata in order to determine the most recent version hash may be necessary. The initial hash (rather than the most recent entry hash) would be returned from this method as an identifier for the record that remains consistent between updates.
- `update_record` should accept a revision ID (read: actual entry hash) rather than a record ID (read: hash of the first entry), which would necessitate returning this record metadata in responses (see #40). This method would also be better for avoiding undesirable update conflicts.
- `delete_record` may have the same revision ID / record ID concern as for update, with the addition that there is no longer any base entry to delete.
- `read_record_entry` takes the record ID initially returned from `commit_entry` and follows the update metadata through to the latest version of the entry automatically; there is no longer any reason to dereference the base entry. We may optionally wish to validate that no previous versions of the provided entry address exist, to ensure that revision IDs cannot be incorrectly used as record locators.

Aside from restructuring the zome `link!` definitions to remove the indirection, I don't think anything needs to change in the linking API. Provided all links continue to use the initial version of an entry, they should all still be readable in a single query for field traversals; it'll just be different link type names.
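To make the identifier scheme concrete, here is a rough in-memory sketch of the proposed semantics: the stable record ID is the hash of the *first* entry, every update writes a new revision, and reads follow the update metadata to the newest revision automatically. All names here are illustrative stand-ins, not the real HDK API (a real implementation would use content hashes and the DHT's entry metadata).

```rust
use std::collections::HashMap;

struct RecordStore {
    entries: HashMap<String, String>, // revision hash -> entry data
    latest: HashMap<String, String>,  // record ID (initial hash) -> latest revision hash
}

// stand-in for a real content hash function
fn hash(data: &str) -> String {
    format!("hash({})", data)
}

impl RecordStore {
    fn new() -> Self {
        RecordStore { entries: HashMap::new(), latest: HashMap::new() }
    }

    // create_record: return the initial hash; this remains the record's stable ID
    fn create_record(&mut self, data: &str) -> String {
        let id = hash(data);
        self.entries.insert(id.clone(), data.to_string());
        self.latest.insert(id.clone(), id.clone());
        id
    }

    // update_record: writes a new revision but still answers with the stable
    // record ID, so links held elsewhere never need rewriting. (The issue above
    // argues the real method should accept a revision ID to avoid conflicts;
    // that detail is omitted here for brevity.)
    fn update_record(&mut self, record_id: &str, data: &str) -> String {
        let revision = hash(data);
        self.entries.insert(revision.clone(), data.to_string());
        self.latest.insert(record_id.to_string(), revision);
        record_id.to_string()
    }

    // read_record_entry: follow the update metadata to the newest revision
    fn read_record_entry(&self, record_id: &str) -> Option<String> {
        self.latest.get(record_id).and_then(|rev| self.entries.get(rev)).cloned()
    }
}
```

The point of the sketch is that no "base entry" exists anywhere: the initial hash alone is enough to locate the latest data.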
Currently, the limited nature of bridging configurations means we are experimenting with a 1:1 relationship between different modules of the system (eg. "planning" & "observation")- in other words, records can only be linked to a single destination network that is known ahead of time; rather than being able to link anywhere. Eventually we want to be at a place where multiple modules of the same type can be connected to a DNA (eg. multiple "observation" DNAs referencing shared "planning" space) and linked arbitrarily.
This will mean that records can no longer be referenced by unique bridge ID and instead must be referenced by DNA hash. We will thus need to conform to a URI spec (proposal at end of this document) and implement appropriate resolvers; which at this time can only be created via introspection of an agent's local capability registry.
Holochain has plans to support this, and it will require some changes to the `bridges` definitions in `zome.json` (as yet unknown). In addition, it will also require some helper functions to be implemented:

- …in the `construct_response` helpers
- …(eg. `BRIDGED_PLANNING_DHT`) from the network portion of the URI

Note that by the time this is attempted, some or all of the above may have been provided in the HDK.
A PoC can be implemented by allowing a many:many configuration between the observation and planning DNAs.
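Since the URI spec itself is only proposed (and not reproduced in this issue), here is a purely hypothetical illustration of what resolving a cross-network record reference might involve: splitting a URI into a DNA hash (identifying the destination network) and a stable entry hash within it. The `hc:` scheme and field names are invented for illustration.

```rust
#[derive(Debug, PartialEq)]
struct RecordRef {
    dna_hash: String,   // identifies the destination network
    entry_hash: String, // stable (initial) entry hash within that network
}

// Parse a hypothetical reference like "hc:QmDnaHash/QmEntryHash"
// into its two components; reject anything malformed.
fn parse_record_ref(uri: &str) -> Option<RecordRef> {
    let rest = uri.strip_prefix("hc:")?;
    let mut parts = rest.splitn(2, '/');
    let dna = parts.next()?;
    let entry = parts.next()?;
    if dna.is_empty() || entry.is_empty() {
        return None;
    }
    Some(RecordRef {
        dna_hash: dna.to_string(),
        entry_hash: entry.to_string(),
    })
}
```

A resolver built on something like this would then use the DNA hash to pick the correct network connection, instead of a statically configured bridge ID.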
See https://circleci.com/gh/holo-rea/holo-rea/3
They both appear to concern the updating of field indexes between records referenced in different zomes within the same DNA.
Works on my local setup. @fosterlynn @bhaugen when you get set up next week, would be good to see if you can reproduce the problem locally.
If we can wire this up from the GraphQL query all the way down, we can use it to more efficiently load records from the DHT by not following links that are being discarded by the caller.
I know both of those words have different meanings to different people, so I'll define how I am using them here, and why the difference matters.
By Ontology, I mean "a set of concepts and categories in a subject area or domain that shows their properties and the relations between them".
Ontologies can be domain ontologies, covering a specific domain, or upper ontologies, trying to cover all domains. REA is a domain ontology, originally covering accounting, later extended to cover all economic interactions. ValueFlows is a vocabulary based on REA and other sources.
By Taxonomy, I mean "the branch of science concerned with classification". Taxonomies are usually hierarchical; for example, a granny smith apple is an apple is a pome fruit is a fruit is an agricultural product is a product.
Here's a sketch of how taxonomies might fit into ValueFlows:
Here's the whole VF ontology model under that excerpt.
Resource Specifications would fit naturally into various taxonomies of resource classifications, like http://aims.fao.org/vest-registry/vocabularies/agrovoc suggested in the diagram, a taxonomy of agricultural products. Other resource classification taxonomies exist, like http://www.productontology.org/ which of course calls itself an ontology. (My goal in this issue is to differentiate concepts and how they are used, not to argue against other uses of the words ontology and taxonomy.)
If you selected an item from Agrovoc or Prodont and used it as a Resource Specification in ValueFlows, it would instantly have all of the relationships with other concepts shown in the larger VF ontology model. If you selected a concept from ValueFlows (like resource classification), you would have a hard time fitting it into Prodont or Agrovoc or any other taxonomy I know of; and if you did, it would only have some hierarchical relationships.
That's the basics. Next I'll get into how groups of agents agree - or fail to agree - on ontologies and taxonomies.
I'll add each consideration (think use case, but sometimes more general) in a separate comment with a heading. Occasionally I'll do a summary. Feel free to add other considerations, summaries, or comments.
These pads were written as preparation for discussing HoloREA modularity:
There has been some discussion about what's best for managing these kinds of "relationship" records, and how to make things more ergonomic for the caller. We have particularly focused on the logic for update behaviours (adding, removing and editing relationship records), and on "shorthand" data structures for defining dependant records (eg. declaring `fulfillment` relationships when writing an `economicEvent`).
After thinking about this for a while, my opinion is that we should not bother with these kinds of indirect edit methods and simply treat "relationship" records (like `fulfillment`) as first-class items with their own CREATE, UPDATE & DELETE methods. The reasoning is as follows:
Create:
There is a small efficiency gain to the caller if they can specify a sub-structure for relationship records when creating a parent. The internal logic for implementation is also deemed to be reasonably simple, as it is simply the composition of the creation of the base record and the creation of any child relationship record(s). However, the sub-records for declaring relationships would necessarily differ from the stored records themselves (for example, the `fulfilled_by` field of the `fulfillment` would come from the newly created `economicEvent`). And, in practice, this logic and field munging ends up being cumbersome and somewhat of a burden to implementors. With the goal of making development of HoloREA as accessible as possible, I think we should simply omit this burden. All calls are going through a websocket at the end of the day anyway.
Update & Delete:
Any set-based implementations for UPDATE & DELETE of referenced records within the API involve complex logic for retrieving the affected record and / or managing the change. However, it is pointless for the backend to be forced to run these computations and checks! Any UPDATE or DELETE request is going to originate at the UI layer, by the action of a user who has clicked on some control next to a data item. In all cases, the ID of the affected relationship is already known by the UI at the time of operation. There is no need to say "remove this ID from the set of an `economicEvent`'s `fulfills`"; we only need to say "delete the fulfillment with this ID". And the same applies to UPDATE.
Delete:
Perhaps worth forking to a separate issue if there is contention, but specific to DELETE is the way that we manage the deletion- do we remove links, or records? I think this will be a case-by-case thing, but expect that in most cases we would want to remove the record rather than the link. This allows UIs to load previous versions of records where they have been deleted, by following the dangling link pointer back to the previously available version of the referenced record.
Either way it seems as though the decision to make historical information easily visible or not will fall to the UI layer, and the particular norms and requirements of the context in which a consuming application is being built.
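The "first-class relationship record" approach argued for above can be sketched as follows. This is an illustrative in-memory model, not the real zome API: a `Fulfillment` is created, updated and deleted directly by its own ID, with no set-reconciliation logic on the parent `economicEvent`.

```rust
use std::collections::HashMap;

#[derive(Debug, Clone, PartialEq)]
struct Fulfillment {
    fulfilled_by: String, // EconomicEvent ID
    fulfills: String,     // Commitment ID
}

struct FulfillmentStore {
    records: HashMap<u64, Fulfillment>,
    next_id: u64,
}

impl FulfillmentStore {
    fn new() -> Self {
        FulfillmentStore { records: HashMap::new(), next_id: 1 }
    }

    // CREATE: returns the ID the UI will later use directly
    fn create(&mut self, rec: Fulfillment) -> u64 {
        let id = self.next_id;
        self.next_id += 1;
        self.records.insert(id, rec);
        id
    }

    // UPDATE: addressed by the relationship's own ID; no set diffing needed
    fn update(&mut self, id: u64, rec: Fulfillment) -> bool {
        match self.records.get_mut(&id) {
            Some(slot) => { *slot = rec; true }
            None => false,
        }
    }

    // DELETE: "delete the fulfillment with this ID", nothing more
    fn delete(&mut self, id: u64) -> bool {
        self.records.remove(&id).is_some()
    }
}
```

Because the UI already holds the relationship's ID at the moment the user acts, each operation is a single direct lookup rather than a set edit routed through the parent record.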
- `QueryParams.input_of` search (`test_record_links_cross_*.js`)
- `move` events where source and destination resource point to the same resource need to be tested, to ensure that quantities are correct in both event creation operation output and subsequent resource reads

This would reduce complexity in the operation of the dev environment, as it can essentially mask the Nix runtime dependencies behind other commands transparently, and avoid the need to manually enter the Nix shell.
This continues #35 which is a first pass at getting the system working on latest Holochain & the Nix setup.
I'm running into a decision point where one side of a network fires a `call` into another network, and the other side only deals with its own internal links.

I'm now wondering under which conditions links between networks need to be updated in both directions. Usually it is a "foreign key" type relationship, where one side of the relationship controls the other. So in this case, I can't update `Commitment.fulfilledBy` to change the associated `EconomicEvent`s; I have to change each `EconomicEvent.fulfilled` for any given commitment.

What's the best way to make this bi-directional? Will all of these kinds of linking operations need to be updated in both directions, or will there always be sender/receiver roles in play? (In this case, 'sender' = `EconomicEvent` & 'receiver' = the `Commitment`s which an `EconomicEvent` fulfills.)
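One way to picture the bookkeeping in question: the "sender" side (the `EconomicEvent`) drives the write, but the reciprocal index for the "receiver" side (the `Commitment`) is maintained in the same operation, so that reads work from either side. This is an illustrative in-memory sketch, not the actual link implementation.

```rust
use std::collections::{HashMap, HashSet};

#[derive(Default)]
struct LinkIndex {
    // event ID -> commitment IDs it fulfills
    event_fulfills: HashMap<String, HashSet<String>>,
    // commitment ID -> event IDs fulfilling it (the reciprocal index)
    commitment_fulfilled_by: HashMap<String, HashSet<String>>,
}

impl LinkIndex {
    // writing the forward link also updates the reciprocal index,
    // so both directions stay consistent from a single operation
    fn link_fulfills(&mut self, event: &str, commitment: &str) {
        self.event_fulfills
            .entry(event.to_string())
            .or_default()
            .insert(commitment.to_string());
        self.commitment_fulfilled_by
            .entry(commitment.to_string())
            .or_default()
            .insert(event.to_string());
    }

    // read from the "receiver" side without touching the sender's records
    fn fulfilled_by(&self, commitment: &str) -> Vec<String> {
        let mut ids: Vec<String> = self
            .commitment_fulfilled_by
            .get(commitment)
            .map(|s| s.iter().cloned().collect())
            .unwrap_or_default();
        ids.sort();
        ids
    }
}
```

In a cross-DNA setting the second write would be a remote `call` rather than a local map insert, which is exactly where the sender/receiver question above bites.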
Currently everything runs through `hc_public`, and this needs to be constrained to particular functionality-based permissions. That also means that agent and bridge registration callbacks need to grant the appropriate capabilities for newly created networks and newly joining agents.
These need to be stubbed out and wired up to `EconomicEvent` API handlers. Note that VF 0.3 defines some fields as "may be calculated"; I think the appropriate behaviour there is to use a value where present, and derive the field if an explicit value has not been set. Does that sound correct @fosterlynn?
At present, data that lives "between" networks (eg. `Satisfaction` & `Fulfillment`) requires a "context record" to be kept on either side of the network boundary in order for participants from both networks to have equal access to information.
You can see this implemented in our system where the satisfaction zome in the planning DNA pings the satisfaction zome in the observation DNA in order to create a duplicate of the record.
We currently only allow the creation of satisfactions & fulfillments within the planning DNA space, though ideally both actions should be triggerable via the observation DNA as well. In the current implementation we can't do this, because you can't configure cyclic bridge dependencies. But it does seem like a common practice to have logic in one DNA responding to activity in another.
The signalling API may be the answer to this, provided signals emitted in one DNA are received by all connected DNAs running in the conductor; and no bridging is needed to enable this link.
If we can get this working we should remove the unnecessary bridge and provide for an ability to create fulfillments in the observation DNA as well. We should also decide on some standard message structures in order to wire up similar triggers between inter-network parts of the system.
Looking forward, we want to think about how to have a complete record of create, update and delete operations on VF records.
Haven't actually implemented this yet in the ActivityPub track, but there I think it would play out like:
It would make sense to me to try to keep the same base vocabulary when doing this in the Holochain track, unless there is something already built into Holochain for this?
- `ResourceSpecification` CRUD routes
- `ResourceSpecification` link fields, and integration test for creating `EconomicResource`s in an associated observation DNA which can then be retrieved via `resourceSpecification(id: ID).conformingResources` (leave this until last)
- `ProcessSpecification` CRUD routes
- `Unit` CRUD routes
- `Unit.id` as the ID in `getUnit(id: ID)` (will require base entries for the unit label index and nonstandard `get_unit` zome API logic)
- `Action` routes for reading of the built-in `Action` type definitions from `lib/vf_core/src/measurement.rs`:
  - `get_action(id: ActionId)`
  - `query_actions()` - no parameters, just return all of them in an array

Note that the specification types will refer to classifications. At this stage, these are just string fields which will contain the URIs of semantic web ontology terms.

`Unit`s are best identified by their `id`, rather than entry hash. Every unit in the OM ontology is unique.
This needs doing per the upcoming 0.3 release of ValueFlows.
Some operations that are critical or basic to the functioning of REA need access to some fields of some classes to function. For example, a simple trace procedure cannot function without:
- `EconomicEvent`: `inputOf`, `outputOf`
- `EconomicResource`: `affectedBy`
- `Process` & `Transfer`: `inputs`, `outputs`
Given the clean dichotomy of public and private data natively supported in Holochain as described in #7, we will need some (many? all?) types to have dual representation. I.E. a single object has one public record and one private record.
In this thread, we will discuss and conclude for each VF class:
Throughout this process, we should remember that the full properties of an object in Holochain must uniquely identify that object. If one record is `{ foo: "bar" }`, there can never be a different object whose only properties are `{ foo: "bar" }`. Therefore, neither the public nor private face of the object may have:

In addition, I have made no attempt to use links to or from private entries. They may differ significantly; I seem to recall someone saying that links are on the DHT (public) only. Private data on the local chain may be queried via `query`, but this is a little different from the "queries" I implement with links.
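The uniqueness constraint described above is a direct consequence of content addressing: an entry's address is derived from its content, so two entries with identical properties *are* the same entry. A minimal demonstration, using the standard library's `DefaultHasher` as a stand-in for Holochain's real entry hash:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

#[derive(Hash)]
struct Entry {
    foo: String,
}

// stand-in for a real content-addressing function: the "address"
// is derived purely from the entry's contents
fn address_of(entry: &Entry) -> u64 {
    let mut h = DefaultHasher::new();
    entry.hash(&mut h);
    h.finish()
}
```

Two `{ foo: "bar" }` entries collide at the same address, which is why neither the public nor private face of a record can consist only of fields likely to repeat across records.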
This will require:
- `hc-web-client`
- Subscriptions resolvers for the ValueFlows GraphQL schema in `vf-graphql-holochain`, using the `hc-web-client` `PubSubEngine`
To build on #19 & #20 - once completed, the read query for `Intent.satisfiedBy` as exposed by GraphQL should load both the `Commitment` relationships from the planning DNA & the `EconomicEvent` relationships from the observation DNA. This will be a chance to think through our naming conventions and patterns, as all our prior helper functions will be needed to build up the necessary behaviour.
I just wanted to draw this out into an explicit decision- I've added a section on contribution agreements to the contributors doc in the repo in response to more interested third-parties and potential contributors reaching out.
Since you two are the other major stakeholders in this at the moment I wanted to run it by your eyes and ask for any input; at least one other person should OK this before we start living by it. Feel free to provide suggestions & comments here, or as PRs.
These need to provide for the intermediary records (they currently link directly between `EconomicEvent`, `Commitment` and `Intent`) and all CRUD methods as deemed appropriate by #37.
Once finalised we can roll this pattern out to all zome API handlers and complete the Observation and Planning modules.
(observation, planning & specification)
This refers to Rust unit tests- currently the only module with unit tests is `hdk_graph_helpers::maybe_undefined`.
Is there something akin to a pubsubhub feed aggregator function/happ?
I am interested in that for offers/wants/status-updates.
Any pointers appreciated.
I envisioned a minimum of fields
This basically means bringing the work @sqykly has been doing on reproducing `LinkRepo` into the shell of the end-to-end system, and validating the architecture by implementing our first inter-DNA bidirectional link.
Most of the groundwork and underlying architecture will be similar to the structure shown in these diagrams, and most questions needed per #3 have been answered. It is also clear that if one defines a "record" as a single logical entity within a Holochain app, then there are actually multiple DNA primitives involved in constructing that representation.
It has been observed and discussed that there are some common emerging patterns for record management on Holochain:
- `hdk::utils`

These things being the case, the evolutionary pattern is likely to be:
Progressing through this implementation process should land us on a stable, polished foundation to build out the rest of the app on. This is likely to lead to the creation of some URI resolver logic to more reasonably handle cross-DNA record linkage.
This issue is a place to talk through the high-level design of our graph-like record structures. The lower-level tasks encompassing this work (so far) can be found in #18, #19, #20, #21 & #22.
The other reason to log this task separately is to claim it as a milestone. We should not proceed beyond the implementation of our first set of relationships (satisfactions & fulfillments) until we are happy with the code structure and quality, and comfortable building out the rest of the app to the same standard.
Many users will want to share their data only among their peers in an organization, while exposing enough to do business with others on the same network. Others (e.g. Sensorica) want everything out there for anyone to see and validate. Holochain has only two spheres of access rights:
In addition, the definitions of data types in the DNA determine immutably where all instances of that type live; there is no API option to make exceptions on a record-to-record granularity. Therefore, we will need to engineer a sensible way to split our data types into public and private aspects. In addition, we will need to implement a mechanism for agents to share their private data with trusted peers at their discretion. I'll be using the term "selective sharing" here, and abbreviating it as SS if I have to type it more than twice.
Based on Art's suggestions, the tentative plan for selective sharing is to send data directly peer-to-peer:
- `send` with an SS message to the owner, whose payload is a structure containing:
MAYBE indicates that this is something that popped into my head right now and needs vetting or input from @pospi et al.
Please pick this scenario apart, shoot it down, add or subtract things, or whatever so that we can arrive at the best possible selective sharing.
I've got some questions on how to manage the fields of an `EconomicResource` in regard to event-related functionality through the API; apologies if they are documented elsewhere. If not, this might be a good opportunity to describe the state machine at a high level.

When creating a resource via the additional `createResource` parameters in the `createEconomicEvent` GraphQL mutation:
- `stage`: should this be provided as an input parameter, just start off as 'none', or should it take on the `ProcessSpecification` of the related `EconomicEvent` that resulted in the resource creation, if `EconomicEvent.outputOf` is set? Or `inputOf`? Or something else?
- `unitOfEffort`: similarly, must this be explicitly provided or does it come from the associated `ResourceSpecification`?
- `state` comes from the event's `action`, if the action is `pass` or `fail`? If that's going in to the model... how does a "pass" expire for inspections that need periodical review? It doesn't feel right for it to "stick" to the resource...
- `accountingQuantity` adjusts in response to `e.resourceQuantity` based on `e.action.resourceEffect`? (being either `increment` or `decrement`)
- Other `EconomicResource` fields affected by or derived from fields of the associated `EconomicEvent` that is provided at creation time?
- `resourceInventoriedAs` in the stored event?

I also have outstanding ambivalence about `createResource` as a parameter name in the event API, even though we've discussed that it's unavoidable. What if we renamed it to `observedResource`? I'd feel better about that :P
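For the `accountingQuantity` question above, one possible reading (an assumption awaiting confirmation, not documented behaviour) is that the event's action carries a resource effect which either increments or decrements the resource's accounting quantity by the event's `resourceQuantity`:

```rust
// Hypothetical model of e.action.resourceEffect: either `increment`
// or `decrement`, applied to the resource's accounting quantity.
#[derive(Clone, Copy)]
enum ResourceEffect {
    Increment,
    Decrement,
}

// apply a single event's quantity to the running accounting quantity
fn apply_event(accounting_quantity: f64, resource_quantity: f64, effect: ResourceEffect) -> f64 {
    match effect {
        ResourceEffect::Increment => accounting_quantity + resource_quantity,
        ResourceEffect::Decrement => accounting_quantity - resource_quantity,
    }
}
```

Whether `onhandQuantity` follows the same rule, or only some action types, is one of the open questions above.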
For updating a resource directly (pretty much only provided for correcting data entry errors...)
- (`lot`, `unitOfEffort`, `stage`, `state` & `containedIn`)

For updating a resource in response to an event:

- Is `classifiedAs` ever altered by events, or does it retain the value it began with? i.e. do I update `r.classifiedAs` in response to `e.resourceClassifiedAs` if `e.resourceInventoriedAs` is provided? If so, is `e.toResourceInventoriedAs` modified as well?
- Is `onhandQuantity` affected by `e.resourceQuantity`, or just `accountingQuantity`?
- Does `containedIn` respond to any action types?
- `currentLocation`?
- `stage` & `state`?

One other unrelated thing- is `onhandQuantity` the correct casing, or should the 'H' be uppercase?
Oh, and- deleting records... is this possible? Or should they stick around, even if all the events that created them and interacted with them are deleted? Is it worth implementing extra logic to manage the cleanup?
So I've seeded this repo with a bunch of boilerplate- basically just trying to dump out all the best practices, tools & conventions that I've personally come to in my time managing development teams and get a stable foundation ready for us to build on. There's a collaborators guide which details dev tools and git conventions, which should give us a good place to start.
I want to make it clear that this is in no way me prescribing a way that we must do things (and on that note, nor is anything else I would say, ever) - just starting the conversation. It may take some dialogue for us to find a place that we both feel makes sense and lets us work together smoothly whilst getting out of our way. I think that's basically the goal here- if we were a team within some enterprise then I would probably have things a bit stricter, but it's important to remember that we are doing this for love of the game and that if it feels too onerous then it can take the joy out of it.
With that in mind, when we're past GFD and you have time to digest this it would be great to hear your thoughts & feelings about what I'm proposing :)
I'd also like to get better at the coordination side this time around, and really embrace having a regular cadence that keeps us all in the loop. My present feelings are that this would be a case of:
- …`feature/` branch when done for the day

This issue has been filed to record the outstanding cleanup jobs remaining on #13. As you can see there is a lot of tidy-up to do... my goal is to try to keep things moving and avoid a large merge conflict or bottleneck in aligning what we've already done (particularly as I have low bandwidth for development in the coming week).
Though the PR is still rough around the edges, the framework, some patterns and at least the locations of files appear to be coalescing, and I don't want to waste an opportunity for us to be working within the same collaboration space & polishing out the same details- I think things are stable enough for both of us to be working on top of. For completeness, and so you can get a sense of what still needs to be tidied up, here's an exhaustive list of what still needs doing-
code cleanup:

- `vf_core`, abstract out the pattern

`vf-graphql` integration:

- `vf-graphql-holochain` package, and no longer depend on Webpack for bundling (remove related `raw.macro` & hack in `postinstall.sh` as well)

documentation:

- `vf_core` record macros)
- `example` GraphiQL explorer
- `modules/vf-graphql-holochain`

testing:

implementation:

- `Action` fields using static action builtins (note- some `entry_address` wrapping still needed)
- (eg. `EconomicEvent`s as `fulfillment`s of a `Commitment`): handlers should pass back `ZomeApiResult` objects and leave the context handling to the caller. This would usually be done towards the end of the callback handler by partitioning out all the `Err` responses from sub-operations and returning them in the API response object. You could also do things like pass the errors into some bridged "error logging" Holochain DHT.
- `fulfillment` test implementation to include the intermediary record struct and abstract out a pattern for

My goal is to keep working to tick many of these off; things that are to be left for later will be logged as separate issues. @sqykly please let me know if there is anything here you think should be completed before merging the PR- in other words, that you feel blocks you from being able to continue work after merging #13 into what you're doing. Other than that, I definitely think most of this should be taken care of before proceeding beyond event / fulfillment / commitment, as I want to ensure we've landed on optimal patterns and laid a good foundation for code best-practices before we continue work elsewhere.
There are also some assumptions in my tasks which might require unpacking; please query anything which doesn't make sense. Perhaps this can become the new "optimal architecture" thread, since it seems pretty clear that `read` & `create` are mostly solved problems, and the only thing we may still have to discuss is `update` and how that fits in to the project scaffolding I've already created.
There are a couple of other items that can't be tied off just yet until the Holochain core & HDK catch up (CC @philipbeadle @thedavidmeister @willemolding)-
I haven't dug into much of the technical architecture of Holochain, so if you @sqykly and @pospi tell me this is dumb, I'll close it. But I have wondered this for quite a while.
Holochain is "agent-centric", we all want to be "agent-centric" too. But how does it work? (I've thought about it quite a bit in our ActivityPub track, so I'll use that to translate over to here.)
Some different aspects of my questions:
Apologies again for not being very deep into the architecture, for any wrong terms I used, etc.
Once we have more flexible links between records (see #49), we need to resolve the issue whereby a destination network may be upgraded and thus the old data becomes unresolvable.
"Source" DNAs will also have to keep a list of equivalences between current & prior versions of "destination" DNAs in order to determine the correct DNA hash to look up the current version of an entry. It is as yet unknown how best to account for this.
We need to align with the officially supported dependency management system.
Work on https://github.com/holo-rea/holo-rea/tree/feature/nix-install-scripts is mostly completed for the `0.0.26` upgrade (quite a few breaking changes), as well as revised setup instructions and an install script for downloading a specific version of https://github.com/holochain/holonix. Editor tooling is also working well under the new environment.
Two minor issues left to resolve after the upgrade before this can be completed, see this post.
We have said that holo-rea will be an REA framework that others can build on top of with application or domain specific logic. What does that actually mean in the context of holochain architecturally? And when do we want/need to figure this out? Is there a specific group we are working with where we could work through the question in practice?
Or if we do understand it, I'd be interested in hearing how it should work.
Once #18, #19, #20 & #21 are completed we will have covered most of the data structures needed for VF records. This would be a good time to take the thus-far unavoidable field duplication and verbose `impl` definitions in the core VF modules and turn those into macros, so that the code is kept as DRY as possible.
See valueflows/vf-apps#5.
There are some either / or field pairs where one or the other is required. These are:
Starting a new issue for this, seems enough different from agent-centric architecture. Should it be broader? Like how to prepare for group agents or something?
@pospi @sqykly Please feel free to pull the various discussions from elsewhere into this to seed it. And I'll think about how onBehalfOf might (or might not) want to become part of VF, seems like a universal problem. Probably working it through here first though.
Use https://www.npmjs.com/package/dataloader or something like it to batch querying of records per request, so that things are only fetched once.
This is a thread to discuss Rust's language features and how we best implement the DHT code... presuming Philip doesn't come along with the new GraphQL API generation feature and obviate us needing to hand-code most of the backend ;)
The first thing about Rust is that it's an ML-derived language and neither of us has learned any ML before. This will probably make the experience somewhat painful for a while until we attain some lightbulb moments, after which it will suddenly become amazing. It might be an idea to have weekly catch-ups where we compare notes on our learning as this will help accelerate each other. I will keep updating this thread as I learn new insights so that you can alert me if you've come to different conclusions.
Type hints:
Something I have seen in a couple of 'best practice' documents is to make type declarations "clear and expressive, without being cumbersome". However, information on how to make such distinctions is lacking.

From what I can tell, the compiler mandates that you declare a return type and parameter types for all functions. I suspect the above guideline is around the 'restrictiveness' of type parameters, and that the best practice is to make your types as generic as possible (eg. using `&str` instead of `String`, to allow values which are not instances of the `String` type but can be converted for use).
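That guideline in a minimal sketch: a function accepting `&str` works with both string literals and owned `String`s (the latter via deref coercion), while a `String` parameter forces the caller to allocate or give up ownership.

```rust
// restrictive: the caller must hand over an owned String
fn greet_specific(name: String) -> String {
    format!("hello {}", name)
}

// permissive: literals and &String both coerce to &str
fn greet_generic(name: &str) -> String {
    format!("hello {}", name)
}
```

`greet_generic("world")` works directly, whereas `greet_specific` would need `"world".to_string()` at every call site.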
Record architecture:
The topic I've been debating lately is how best to architect the VF core functionality. Rust is not an OO language, so preferring composition over inheritance is not only good practice here, it's also idiomatic and more performant; AFAIK OO is not even really an option. Rust's trait system looks to be a really solid and strongly-typed way of dealing with mixins, though- we are in good hands.
The HDK methods for data handling are all quite low-level. Some amount of wrapping them up will be needed, especially around links. And then we need some higher-order wrapping that combines an entry and some links to create a record, like we did with GoChain. I imagine we will want very similar functionality.
The other challenge with the core VF fields is that we probably need to change our way of thinking, because a) you can't inherit structs, and b) traits cannot define fields. Rust really enforces that you keep your data separate from your behaviour. As a consequence, I suspect we will need to use macros to declare our entry types succinctly and avoid having to redeclare fields.
So this is what I came up with as a rough scratchpad for implementing a higher-order struct to manage all the related record data, and a trait implementation for managing it:
#[derive(Eq, PartialEq, Debug)]
enum LinkOrLinkList {
    Link(Address),
    LinkList(Vec<Address>),
}

#[derive(Eq, PartialEq, Debug, Default)]
pub struct Record<T> {
    entry_type: String,
    entry: T,
    address: Option<Address>,
    links: HashMap<String, LinkOrLinkList>,
}

trait LinkedRecord {
    fn commit(self); // :TODO: return something useful
}

impl<T> LinkedRecord for Record<T> {
    // needs `mut self` since we assign `self.address` below
    fn commit(mut self) {
        // save entry
        let entry = Entry::App(self.entry_type.into(), self.entry.into());
        let address = hdk::commit_entry(&entry);
        match address {
            Err(e) => { /* :TODO: bail */ }
            Ok(a) => {
                self.address = Some(a);
            }
        }

        // save links
        for (tag, addr) in &self.links {
            match addr {
                LinkOrLinkList::Link(link) => {
                    match &self.address {
                        None => { /* should probably throw some error here, or check `address` above */ },
                        Some(self_addr) => {
                            // :TODO: handle result
                            hdk::link_entries(&self_addr, &link, tag.to_string());
                        }
                    }
                },
                LinkOrLinkList::LinkList(links) => {
                    match &self.address {
                        None => { /* ... */ },
                        Some(self_addr) => {
                            for link in links {
                                // :TODO: handle result
                                hdk::link_entries(&self_addr, &link, tag.to_string());
                            }
                        }
                    }
                }
            }
        }
    }
}
Some notes about this:
- The `commit` function fails on `self.entry.into()`, and I couldn't figure out how to declare the type for the `LinkedRecord` implementation to make that work. I don't really understand the `Into` and `From` traits yet; they seem like dark magic.
- The `LinkOrLinkList` enum forces exhaustive `match`es. The way the language handles these felt overbearing at first, but I can see the benefits. It's impossible to miss a condition unless you explicitly want to.
- …`LinkRepo` does.

Anyway, those are my half-baked thoughts after a day and a half or so of learning Rust. If I'm doing stupid things or on the wrong track I would love to know about it! heh
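For reference, a minimal illustration of the `Into`/`From` "dark magic" above: implementing `From` for a type automatically provides the reciprocal `Into` via a blanket impl in the standard library, so you only ever implement `From`. The `EntryPayload` type here is a made-up example.

```rust
#[derive(Debug, PartialEq)]
struct EntryPayload(String);

// `From` defines a conversion *into* Self...
impl From<&str> for EntryPayload {
    fn from(s: &str) -> Self {
        EntryPayload(s.to_string())
    }
}

fn main() {
    // ...which gives us both directions: `From` explicitly,
    let a = EntryPayload::from("event");
    // and `Into` implicitly, wherever the target type can be inferred.
    let b: EntryPayload = "event".into();
    assert_eq!(a, b);
}
```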
GraphQL is able to detect the presence or absence of mandatory fields in responses, which gives us some error checking for free.
This would also be a good integration test to run over each record type, to ensure that storage & retrieval works for every field and that no typos or other developer errors have occurred between specification and implementation.
It is likely that this is the only integration test we will need for each simple record type, once the core CRUD behaviours are wrapped up by a macro (see #22). Once that task is complete, most API functionality will be tested via unit tests in the macro package or in `hdk_graph_helpers`. If we can get to a place where the only code in the zome API handlers is declarative use of the graph helper framework code, then the only thing remaining to test for each external API is that all the fields are correctly wired up.
Following on from #19, the pattern needs to be extended to work for inter-DNA field links. It is difficult to say what the correct flow of data is for managing updates across DNAs, but one goal is for the `Fulfillment` record to exist on both sides of the network boundary. Ideally fulfillments could be managed in either network, with changes flowing between them; but it may be more appropriate to make only one side of the relationship editable. We could start there and see what the added complexity of bidirectional editing looks like?
It's a "modular" VF implementation on Holochain, yes. But the architecture of the program depends strongly on the exact protocol through which a client app interfaces with a "module". We need to decide what "module" really means to us. This issue can serve to collect our thoughts and knowledge in advance of the next meeting, and to hold our conclusions afterward.
A drop-in zome is source code that Holochain developers can include in their own projects. The interface with our code would be cross-zome methods, which have a special declaration in the DNA and a special way they must be called:
let vf_data = call("name_of_holorea_zome", "name_of_holorea_function", argument);
The VF model objects would live on the same DHT as the rest of their app. Any object created by the zome(s) is validated in our own zome. After that, our data is at the mercy of the host app's security strategy, i.e. there is no point in building in access control. The host app ultimately has total control and our zome(s) operate with full transparency to it. It would be trivial for a developer to break our zome in some way, but that transparency also creates possibilities that would be impossible otherwise, by allowing the host app to use or redefine our types.
If desired by the host app, the special DNA declaration can also expose our functions as web API or bridge functions. We can't guarantee that any part of an API we define will be part of an app running HoloREA. We also have no configuration data apart from what we require from the host app to run, and we can effectively never push an update.
Like drop-in zomes, only Holochain developers can use a bridged app. Our DNA will define a set of bridge functions, which may be called from the client app. Instead of `call`, there is a similar HDK function:
let vf_object = bridge("name_of_holorea_app", "name_of_zome", "name_of_function", argument);
Our data lives on a different DHT than the client app. We do all of our own validation and access control. We control the configuration data and our own DNA, so we decide which parts of the API we expose at every level.
I don't think we will have a different DHT for each client app by default; we will need to juggle multiple data sets for different apps. I think we can use configuration to give each client app's HoloREA a different app hash, which would relieve this problem provided the client app plays along. Whether this will affect updates to HoloREA, I don't know.
This is how it worked in the prototype. Every hApp runs a server that serves requests for static content and requests to the web API endpoints its DNA declares. The client app can then be anything, not just another hApp. A web browser can access our default UI (like the REPL), or another app can use our API and serve its own pages (like our separate UI server). Client apps call our functions via requests:
POST holorea-server/fn/holorea-function-name
A web API can also be used as a bridged app if we declare so in our DNA. It could be used as drop-in zomes, but doing so would cede the advantages of being an independent app, and would also make security very difficult. Let's not do that. Bridging might be okay.
In most other characteristics, this situation is identical to a bridged app. The difference is only in how the API functions are declared in the DNA.
Questions, comments, additional facts before I weigh in?
Extracted from https://docs.google.com/presentation/d/15hDyktqVni3NatUeB1wS_zMS0_msXpM1xZNUQ1WU6Ww/edit?usp=sharing
We can use the `action` builtins directly to reference a combination of these, and records from an associated Specification zome. For now we can presume that only 1 Specification DNA can be connected to any consuming zome.
Nothing crazy here, just another record type with CRUD routes to define in the observation DNA.
Request from Sensorica: valnet/valuenetwork#524
This is a good first relationship to implement as it deals with same-DNA field links. Implementation requires:
- `Commitment` & `Intent` in the planning DNA
- `Satisfaction` records as sub-records of a new `Commitment`
- `Satisfaction` records as sub-records of a new `Intent`
- `Satisfaction` ↔ `Commitment` & `Intent` (non-relational fields only)
- `Satisfaction`
- `Satisfaction` ↔ `Commitment` & `Intent`
I don't think there is any need to implement an ability to modify `Satisfaction` records via manipulation of `Commitment` or `Intent`: the client needs to be responsible for tracking which records to CREATE / DELETE anyway, so at that point it might as well call UPDATE / DELETE on `Satisfaction` directly. I do still think it's a nice convenience to be able to author them at `Commitment` / `Intent` logging time, but am open to having my mind changed on that. If nothing else, I thought it would be a good way to encourage splitting abstractions out.
We also need to define a naming convention for our zome API methods going forward; deciding on those conventions is part of this issue.
(BTW: if you think we need to define acceptance criteria for this and other issues I'm about to log, please let me know; otherwise happy for those tests to emerge as makes sense.)
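As a starting point for discussion, a `<verb>_<record>` convention could look something like the sketch below. All names and signatures here are hypothetical (nothing is settled), and HDK types are stubbed out so the sketch stands alone.

```rust
// Stubs standing in for HDK types, purely for illustration.
type Address = String;
type ZomeApiResult<T> = Result<T, String>;

#[derive(Debug, Clone, PartialEq)]
pub struct SatisfactionEntry {
    pub note: Option<String>,
}

// create_<record>: commit the entry, return its address
pub fn create_satisfaction(entry: SatisfactionEntry) -> ZomeApiResult<Address> {
    let _ = entry;
    Ok("QmExampleAddress".to_string())
}

// delete_<record>: remove by address; update_<record> would follow suit
pub fn delete_satisfaction(address: Address) -> ZomeApiResult<bool> {
    let _ = address;
    Ok(true)
}

fn main() {
    let addr = create_satisfaction(SatisfactionEntry { note: None }).unwrap();
    assert!(delete_satisfaction(addr).unwrap());
}
```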
We first need to implement a wrapper for records that allows other DNAs to detect multiple versions of the same entry as one. This allows cross-DNA linking to work predictably, but requires some wrappers to manage the underlying DHT structures in order to do so.
Basically, this means:
- creating a record also commits an ID entry, joined to the data via an `initial_entry` link.
- updating a record follows the `initial_entry` link and updates the de-referenced entry instead of the ID entry.

Tests should prove that creating & updating a record does not result in loss of data linked to the entry ID.
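The steps above can be sketched with a toy in-memory map standing in for the DHT. The `Dht` type, address format and method names are all assumptions for illustration; the point is that the ID entry's address never changes across updates, so anything linked to it survives.

```rust
use std::collections::HashMap;

// Toy stand-in for the DHT: a stable "ID entry" address, plus an
// `initial_entry` link pointing at whichever data revision is current.
struct Dht {
    entries: HashMap<String, String>,             // address -> content
    initial_entry_links: HashMap<String, String>, // ID address -> data address
    next: u64,
}

impl Dht {
    fn new() -> Self {
        Dht { entries: HashMap::new(), initial_entry_links: HashMap::new(), next: 0 }
    }

    fn commit(&mut self, content: &str) -> String {
        self.next += 1;
        let addr = format!("addr{}", self.next);
        self.entries.insert(addr.clone(), content.to_string());
        addr
    }

    // create: commit the data, commit an ID entry, link them; return the
    // ID entry's address as the record's permanent identifier
    fn create_record(&mut self, content: &str) -> String {
        let data_addr = self.commit(content);
        let id_addr = self.commit("ID");
        self.initial_entry_links.insert(id_addr.clone(), data_addr);
        id_addr
    }

    // update: follow the `initial_entry` link and replace the de-referenced
    // entry; the ID address (and anything linked to it) is untouched
    fn update_record(&mut self, id_addr: &str, content: &str) {
        let data_addr = self.commit(content);
        self.initial_entry_links.insert(id_addr.to_string(), data_addr);
    }

    fn read_record(&self, id_addr: &str) -> Option<&String> {
        self.initial_entry_links.get(id_addr).and_then(|a| self.entries.get(a))
    }
}

fn main() {
    let mut dht = Dht::new();
    let id = dht.create_record("v1");
    dht.update_record(&id, "v2");
    assert_eq!(dht.read_record(&id).unwrap(), "v2");
}
```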
Common abstractions
Careful thought needs to be taken before addressing these items. Advanced Rust skills are needed. Don't presume that an issue from this list should be fixed in a particular way, or if indeed it is an issue at all.
Refactor `hdk_graph_helpers` methods for clarity and consistency once abstractions stabilise:
- `type_aliases.rs`: ensure it has zero runtime performance impact, the most ergonomic development experience possible & correct usage documentation
- record creation methods (e.g. `hdk_graph_helpers::records::create_record`) to take payloads by reference
- link results should be `ZomeApiResult<Vec<(A, ZomeApiResult<R>)>>`, not `ZomeApiResult<Vec<(A, Option<R>)>>`
- `try_decode_entry` (& dependants) don't need to return `Option`s; they should return `ZomeApiResult<R>`
- `get_linked_addresses_as_type` & `get_linked_remote_addresses_as_type` need to take refs
- make `link_entries_bidir` handle errors nicely: we want to be able to more safely link to entries which don't exist (should be a no-op with returned `Err`)
- `get_links_and_load_type`
- `read_from_zome` without discarding the inner error (investigate error chaining?)

Record CRUD API gateway
- type aliases (`vf_core/src/type_aliases.rs`)
- `construct_response` call signature (potentially via partial application of first 2 params)
- `receive_query_` API calls to consistently take parameters by name (as struct, not fn args; see satisfactions API as example)
- move the `default_false` serde field filler out of record classes into `hdk_graph_helpers`
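The nested-`Result` point above is worth illustrating: an `Option` collapses per-item failures to `None`, whereas a per-item `ZomeApiResult` keeps the cause of each failure. A minimal sketch, with `ZomeApiResult` stubbed as a plain `Result` and a made-up validity check standing in for real link loading:

```rust
// Stub standing in for the HDK result type, for illustration only.
type ZomeApiResult<T> = Result<T, String>;

// Batch-load linked entries, keeping a per-item Result so that one bad
// address doesn't erase the error detail (as an Option would).
fn load_links(addresses: &[&str]) -> ZomeApiResult<Vec<(String, ZomeApiResult<String>)>> {
    Ok(addresses
        .iter()
        .map(|a| {
            let res = if a.starts_with("Qm") {
                Ok(format!("entry at {}", a))
            } else {
                // with Option this would just be None; the reason is lost
                Err(format!("malformed address: {}", a))
            };
            (a.to_string(), res)
        })
        .collect())
}

fn main() {
    let results = load_links(&["Qmabc", "bogus"]).unwrap();
    assert!(results[0].1.is_ok());
    assert!(results[1].1.is_err());
}
```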
Best practise
When performing compound operations (e.g. recording `EconomicEvent`s as `fulfillment`s of a `Commitment`), handlers should pass back `ZomeApiResult` objects and leave the context handling to the caller. This would usually be done towards the end of the callback handler by partitioning out all the `Err` responses from sub-operations and returning them in the API response object. You could also do things like pass the errors into some bridged "error logging" Holochain DHT.

Following on from #53.
This is going to require figuring out how this should work. Speccing out "delete" functionality may play into it. There is also a failing test in `resource_links.js` which needs to be uncommented once the correct logic has been sorted out.
So, how should this work? Deleting the associated event deletes the resource as well if it's the only attached event? Or should there be a separate API entrypoint for deleting resources separately to events?