Datomisca: a Scala API for Datomic
Home Page: https://dwhjames.github.io/datomisca/
License: Apache License 2.0
For example, the txReportQueue, removeTxReportQueue, and requestIndex methods in Connection should probably all be declared with ().
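A sketch of what that would look like (the return types here are assumptions; the empty parens are the point, since each of these methods has side effects):
trait Connection {
  def txReportQueue(): TxReportQueue // hands out a live queue: a side effect
  def removeTxReportQueue(): Unit    // mutates connection state
  def requestIndex(): Boolean        // asks the transactor to start indexing
}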
The parser only accepts
[
  [[rule ?e ?v]
   [?e :ns/attr ?v]]
]
but it should also accept
[
  [(rule ?e ?v)
   [?e :ns/attr ?v]]
]
Maybe we should create a 0.1 tag from the commit on the date when we opened the Datomisca release. Then, on current master, rename the version to 0.2-SNAPSHOT, and once we have reached a stable scope, tag it 0.2...
What do you think?
sbinary (https://github.com/harrah/sbinary) uses a much cleaner style for writing custom Reads/Writes formats; see the example in the DataApi project: DatascopeProtocol.scala
See syncExcise, syncIndex, and syncSchema
To be able to execute a query you need an appropriate implicit DatomicDataToArgs in scope. Currently this means you must have both imports:
import datomisca._
import Datomic._
It would be much preferable if the DatomicDataToArgs implicits were in implicit scope, so that import datomisca._ alone would suffice.
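A minimal sketch of the fix, assuming DatomicDataToArgs is a plain typeclass trait (the instance shown is hypothetical): implicits declared in the companion object are part of the type's implicit scope, so the compiler finds them with no import at all beyond import datomisca._.
trait DatomicDataToArgs[T] {
  def toArgs(dds: Seq[DatomicData]): T
}

object DatomicDataToArgs {
  // found via implicit scope; no `import Datomic._` required
  implicit val singleArg: DatomicDataToArgs[DatomicData] =
    new DatomicDataToArgs[DatomicData] {
      def toArgs(dds: Seq[DatomicData]): DatomicData = dds.head
    }
}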
The implicit conversions for DDReader and DDWriter currently only support the Java versions of BigInt and BigDecimal (java.math.BigInteger and java.math.BigDecimal) for DBigInt and DBigDec; the Scala BigInt and BigDecimal types are not covered.
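The Scala side of the bridge is only standard-library conversions; a sketch of what the extra reader/writer instances would pass through (the object and method names are mine):
import java.math.{BigDecimal => JBigDecimal, BigInteger => JBigInteger}

object BigNumConversions {
  def toScala(j: JBigInteger): BigInt     = BigInt(j)
  def toJava(s: BigInt): JBigInteger      = s.bigInteger
  def toScala(j: JBigDecimal): BigDecimal = BigDecimal(j)
  def toJava(s: BigDecimal): JBigDecimal  = s.bigDecimal
}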
Revisit Datomisca queries and remove superseded features.
See Database.attribute and Attribute
Hi!
I'm pretty sure I found a bug, but I would love to be wrong about this :). My bug seems to stem from having a case class that references another instance of the same class. Take the following example:
case class Price(uuid: UUID, amount: Double, previousPrice: Option[Price])
Notice how previousPrice is an option of type Price. Now if I map everything the normal way and add a newPrice method to create new Price entities...
object Price {
  object Schema {
    object ns { val price = Namespace("price") }

    val uuid          = Attribute(ns.price / "uuid", SchemaType.uuid, Cardinality.one).withUnique(Unique.identity)
    val amount        = Attribute(ns.price / "amount", SchemaType.double, Cardinality.one)
    val previousPrice = Attribute(ns.price / "previousPrice", SchemaType.ref, Cardinality.one)

    val all = List(uuid, amount, previousPrice)
  }

  implicit val reader: EntityReader[Price] = (
    Schema.uuid.read[UUID] and
    Schema.amount.read[Double] and
    Schema.previousPrice.readOpt[Price]
  )(Price.apply _)

  def newPrice(amount: Double, previousPrice: Option[Price])(implicit conn: Connection): Future[Price] = {
    val tempId = DId(Partition.USER)
    val addTxn: AddEntity = (
      SchemaEntity.newBuilder
        += (Schema.uuid -> Datomic.squuid())
        += (Schema.amount -> amount)
        +?= (Schema.previousPrice -> previousPrice.map(pp => LookupRef(Schema.uuid, pp.uuid)))
    ) withId tempId
    Datomic.transact(addTxn) map { r =>
      val entityId = r.resolve(tempId)
      val entity   = r.dbAfter.entity(entityId)
      DatomicMapping.fromEntity[Price](entity)
    }
  }
}
So far, so good, but if I run the following unit tests...
"A price" should {
"be created once" in new WithDB {
(for{
_ <- Datomic.transact(Price.Schema.all)
price <- Price.newPrice(123.00, None)
} yield {
price.amount must beEqualTo(123.00)
}).await
}
"be created twice" in new WithDB {
(for{
_ <- Datomic.transact(Price.Schema.all)
price1 <- Price.newPrice(123.00, None)
price2 <- Price.newPrice(456.00, Some(price1))
} yield {
(price1.amount must beEqualTo(123.00)) and (price2.amount must beEqualTo(456.00))
}).await
}
}
The first test (which does not tie in a previous price) passes, but the second test fails with the dreaded NullPointerException. This is the stack trace that I get:
[error] NullPointerException: (attribute2EntityReader.scala:253)
[error] datomisca.Attribute2EntityReaderCast$$anon$25$$anon$11.read(attribute2EntityReader.scala:253)
[error] datomisca.package$RichAttribute$$anonfun$readOpt$extension$1.apply(package.scala:148)
[error] datomisca.package$RichAttribute$$anonfun$readOpt$extension$1.apply(package.scala:146)
[error] datomisca.EntityReader$$anon$1.read(entityMapper.scala:71)
[error] datomisca.EntityReader$EntityReaderMonad$$anonfun$bind$1.apply(entityMapper.scala:77)
[error] datomisca.EntityReader$EntityReaderMonad$$anonfun$bind$1.apply(entityMapper.scala:77)
[error] datomisca.EntityReader$$anon$1.read(entityMapper.scala:71)
[error] datomisca.EntityReader$EntityReaderMonad$$anonfun$bind$1.apply(entityMapper.scala:77)
[error] datomisca.EntityReader$EntityReaderMonad$$anonfun$bind$1.apply(entityMapper.scala:77)
[error] datomisca.EntityReader$$anon$1.read(entityMapper.scala:71)
[error] datomisca.EntityReader$EntityReaderFunctor$$anonfun$fmap$1.apply(entityMapper.scala:81)
[error] datomisca.EntityReader$EntityReaderFunctor$$anonfun$fmap$1.apply(entityMapper.scala:81)
[error] datomisca.EntityReader$$anon$1.read(entityMapper.scala:71)
[error] datomisca.DatomicMapping$.fromEntity(DatomicMapping.scala:26)
Printing out the entity produced by the line val entity = r.dbAfter.entity(entityId) seems to give me exactly the entity I would expect, but then it crashes on the next line.
BTW, for the record, I'm aware of how silly this example is in a database that keeps history. But trust me when I say that my real example needs to be like this :).
Use of this wrapper around the transaction report queue seems like a recipe for memory leaks. It should probably just be removed. Maybe the BlockingQueue interface should be proxied.
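A sketch of the proxy idea: delegate to the underlying java.util.concurrent.BlockingQueue (which is what datomic.Connection.txReportQueue returns) and map each raw report lazily on the way out, rather than buffering reports in a second queue. The TxReport construction here is an assumption about datomisca internals:
import java.util.concurrent.{BlockingQueue, TimeUnit}

class TxReportQueue(underlying: BlockingQueue[java.util.Map[_, _]]) {
  // block until the next transaction report arrives
  def take(): TxReport = new TxReport(underlying.take())

  // wait up to the given timeout; None if nothing arrived in time
  def poll(timeout: Long, unit: TimeUnit): Option[TxReport] =
    Option(underlying.poll(timeout, unit)).map(new TxReport(_))
}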
To be consistent with DDatabase.{entity, entityOpt, tryEntity}, we should have DatomicMapping.fromEntity throw an exception, and add fromEntityOpt and tryFromEntity variants.
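A sketch of the proposed trio (signatures assumed from the existing fromEntity):
import scala.util.Try

def fromEntity[A](entity: DEntity)(implicit er: EntityReader[A]): A =
  er.read(entity) // throws if the entity cannot be read as an A

def tryFromEntity[A](entity: DEntity)(implicit er: EntityReader[A]): Try[A] =
  Try(fromEntity[A](entity))

def fromEntityOpt[A](entity: DEntity)(implicit er: EntityReader[A]): Option[A] =
  tryFromEntity[A](entity).toOption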
The getting started guide here: http://pellucidanalytics.github.io/datomisca/doc/getstarted.html has the following code snippet:
val txData: Seq[TxData] = Seq(
name, home, birth, characters, // attributes
movies, music, reading, sports, travel // ident entities
)
But 'characters' is never mentioned in the rest of the guide.
I am a member of the datomisca group and I cannot create new posts.
We are on 0.8.3731, and the latest is 0.8.3814.
We need to support Keywords as a value type. This also means we need to revisit our use of DRef to return idents as the value for references. From a query, the value of a reference attribute will be either Long or Keyword. From the entity graph, the value will be either Entity or Keyword.
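A sketch of what consuming code would then have to handle, per the types above (clojure.lang.Keyword being the underlying keyword class):
import clojure.lang.Keyword

// the value of a reference attribute in a query result
def describeQueryRef(v: AnyRef): String = v match {
  case id: java.lang.Long => s"entity id $id"
  case kw: Keyword        => s"ident $kw"
  case other              => sys.error(s"unexpected reference value: $other")
}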
[(.startsWith ^String ?title ?prefix)]
gives the compilation error:
[error] `)' expected but `^' found
[error] [(.startsWith ^String ?title ?prefix)]
[error] ^
If one builds the following query,
[
  :find ?e
  :where
    [?e :ns/attr]
]
then q(query, database) fails: the query is interpreted as a no-args query, and no database value can be supplied.
There is a potential for inconsistent reads in the implementation of resolveEntity.
In DatomicFacilities:
def resolveEntity(tx: TxReport, id: DId)(implicit db: DDatabase): DEntity = {
  tx.resolveOpt(id) match {
    case None    => throw new TempidNotResolved(id)
    case Some(e) => db.entity(e)
  }
}
In TxReport:
def resolveOpt(id: DId): Option[Long] =
  Option {
    datomic.Peer.resolveTempid(dbAfter.underlying, tempids, id.toNative)
  } map { id =>
    id.asInstanceOf[Long]
  }
The current implementation of resolveEntity could resolve the id against one state of the db and read the entity from another. I think resolveEntity should be moved into TxReport, so that the dbAfter database value provided in the tx report is used both for resolving the id and for reading the entity.
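A sketch of that move: with resolveEntity living in TxReport, the same dbAfter value serves both steps, so the read cannot drift to a later basis:
// inside TxReport
def resolveEntity(id: DId): DEntity =
  resolveOpt(id) match {
    case None    => throw new TempidNotResolved(id)
    case Some(e) => dbAfter.entity(e)
  }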
class AddIdent is Referenceable, but the use of DRef over Keyword is inconsistent with Attribute.
It would be nice to have a method in DEntity that did the equivalent of entity(Namespace.DB / "id").as[DLong].
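For instance (method name hypothetical, body taken straight from the expression above):
// inside DEntity
def id: DLong = apply(Namespace.DB / "id").as[DLong]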
Aggregates aren’t supported by the query parser.
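For example, a query using a standard Datomic aggregate such as count is rejected:
[
  :find (count ?e)
  :where
    [?e :ns/attr]
]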
I’d like to get back to using master as the default branch and move away from the current versioning to something more akin to http://semver.org/
An extractor for DRef to help match against keywords would be helpful.
case DRef(KW(":ns/attr")) =>
However, DRef is already a case class, and the macros rely on this.
I'm getting an error when I create an implicit reader for an entity that has an attribute of type instant and cardinality many, which I'm pretty sure is a bug. Here's an example of a class with a singular date that works fine:
import java.util.Date

case class AppointmentRequest(title: String, proposedTime: Date)

object AppointmentRequest {
  object Schema {
    object ns {
      val appointmentRequest = new Namespace("appointmentRequest")
    }

    val title = Attribute(ns.appointmentRequest / "title",
                          SchemaType.string,
                          Cardinality.one)
    val proposedTime = Attribute(ns.appointmentRequest / "proposedTime",
                                 SchemaType.instant,
                                 Cardinality.one)
  }

  implicit val reader = (
    Schema.title.read[String] and
    Schema.proposedTime.read[java.util.Date]
  )(AppointmentRequest.apply _)
}
But if I change proposedTime to have a Cardinality.many...
import java.util.Date

case class AppointmentRequest(title: String, proposedTime: Set[Date])

object AppointmentRequest {
  object Schema {
    object ns {
      val appointmentRequest = new Namespace("appointmentRequest")
    }

    val title = Attribute(ns.appointmentRequest / "title",
                          SchemaType.string,
                          Cardinality.one)
    val proposedTime = Attribute(ns.appointmentRequest / "proposedTime",
                                 SchemaType.instant,
                                 Cardinality.many)
  }

  implicit val reader = (
    Schema.title.read[String] and
    Schema.proposedTime.read[Date]
  )(AppointmentRequest.apply _)
}
It gives the following error:
[error] /.../AppointmentRequest.scala:29: There is no type-casting reader for type java.util.Date given an attribute with Datomic type java.util.Date and cardinality datomisca.Cardinality.many.type to type java.util.Date
[error] Schema.proposedTime.read[Date]
[error] ^
I’m concerned about
implicit def database(implicit conn: Connection) = conn.database
in trait DatomicPeer. I don’t think we should have this as an implicit def, just a regular def: an implicit def makes it far too easy to silently lose control over which database value is being used, and thus risks losing transactionality over multiple reads.
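A sketch of the pitfall (the queries and attribute names are hypothetical): with the implicit def in scope, every resolution evaluates conn.database afresh, so two reads may silently observe different basis points, whereas a regular def forces an explicit capture that keeps them consistent:
import datomisca._
import Datomic._

val titles = Query("""[:find ?t :where [_ :movie/title ?t]]""")
val years  = Query("""[:find ?y :where [_ :movie/year  ?y]]""")

def inconsistent(implicit conn: Connection) = {
  val ts = Datomic.q(titles, database) // conn.database evaluated here (t1)
  val ys = Datomic.q(years,  database) // re-evaluated here (t2); may differ
  (ts, ys)
}

def consistent(implicit conn: Connection) = {
  val db = conn.database               // capture one value explicitly
  (Datomic.q(titles, db), Datomic.q(years, db)) // both reads share one basis
}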
The only major part of Datomisca with (implicit db: DDatabase) params is the query methods, and this param is only required to ensure that a database value is given as input to a query when no other inputs are given. I think it would be better to strip this out of the implementation of queries and simply require that query invocations never forget to provide the appropriate inputs.
The parser should only accept final ids for retractions… instead it only accepts temporary ids:
[:db/retract #db/id[:db.part/user] :db/ident :region/n]
For assertions it should accept both temporary and final ids, but it only accepts temporary ids.
There should be overloads for Long and DLong.
If the tag for a schema component hasn’t changed, the schema manager should still recurse into the required schema components just to check if they have changed.
DSet is the only collection type in DatomicData, but when it is used as a data source for queries it may lose ordering, as it is treated as a Set and not a Seq.
We might introduce something to manage this:
???
Noticed in #38:
val set = entity(manyRefAttr)
throws an exception when the values are idents rather than entities, as the chosen implicit infers the type Set[Long].