scalar-labs / scalardb

Universal transaction manager

Home Page: https://scalardb.scalar-labs.com/docs

License: Apache License 2.0

Languages: Java 99.41%, TLA 0.30%, JavaScript 0.06%, Dockerfile 0.03%, Shell 0.15%, Ruby 0.06%
Topics: transaction, cassandra, database, distributed-systems, dynamodb, mysql, postgresql, nosql, microservice, microservices

scalardb's Introduction

ScalarDB


ScalarDB is a universal transaction manager that achieves:

  • database/storage-agnostic ACID transactions in a scalable manner even if an underlying database or storage is not ACID-compliant.
  • multi-storage/database/service ACID transactions that can span multiple (possibly different) databases, storages, and services.

Install

The library is available on the Maven Central repository. You can install it in your application using a build tool such as Gradle or Maven.

To add a dependency on ScalarDB using Gradle, use the following:

dependencies {
    implementation 'com.scalar-labs:scalardb:3.12.2'
}

To add a dependency using Maven:

<dependency>
  <groupId>com.scalar-labs</groupId>
  <artifactId>scalardb</artifactId>
  <version>3.12.2</version>
</dependency>
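
Once the dependency is in place, transactions are executed through the DistributedTransactionManager API. Below is a minimal sketch, assuming a scalardb.properties configuration file and a hypothetical namespace, table, and columns; see the Getting Started guide in the docs for a complete walkthrough.

import com.scalar.db.api.DistributedTransaction;
import com.scalar.db.api.DistributedTransactionManager;
import com.scalar.db.api.Put;
import com.scalar.db.io.Key;
import com.scalar.db.service.TransactionFactory;

public class Example {
  public static void main(String[] args) throws Exception {
    // Load storage settings (Cassandra, DynamoDB, JDBC, ...) from the config file
    TransactionFactory factory = TransactionFactory.create("scalardb.properties");
    DistributedTransactionManager manager = factory.getTransactionManager();

    DistributedTransaction tx = manager.start();
    try {
      // Hypothetical namespace/table/columns for illustration
      Put put = Put.newBuilder()
          .namespace("sample")
          .table("account")
          .partitionKey(Key.ofText("id", "alice"))
          .intValue("balance", 100)
          .build();
      tx.put(put);
      tx.commit();
    } catch (Exception e) {
      tx.abort();
      throw e;
    }
  }
}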

Docs

Contributing

This library is mainly maintained by the Scalar Engineering Team, but of course we appreciate any help.

  • For asking questions, finding answers, and helping other users, please go to Stack Overflow and use the scalardb tag.
  • For filing bugs, suggesting improvements, or requesting new features, help us out by opening an issue.


Pre-commit hook

This project uses pre-commit to automate code formatting and other checks as much as possible. If you're interested in contributing to the development of ScalarDB, please install pre-commit and the git hook script as follows.

$ ls -a .pre-commit-config.yaml
.pre-commit-config.yaml
$ pre-commit install

The code formatter runs automatically when committing files. If any invalid code format is detected, the commit fails and the formatter fixes the files; then commit the change again.

Exception and log message guidelines

All the exception and log messages in this project are consistent with the following guidelines:

  • The first character is capitalized.
  • The message does not end with a punctuation mark.

When contributing to this project, please follow these guidelines.
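
For example (a hypothetical message that follows both rules):

throw new IllegalStateException("Table metadata not found"); // capitalized, no trailing period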

License

ScalarDB is dual-licensed under both the Apache 2.0 License (found in the LICENSE file in the root directory) and a commercial license. You may select, at your option, one of the above-listed licenses. The commercial license includes several enterprise-grade features such as ScalarDB Server, management tools, and declarative query interfaces like GraphQL and SQL interfaces. Regarding the commercial license, please contact us for more information.

scalardb's People

Contributors

brfrn169, dependabot[bot], feeblefakie, jnmt, josh-wong, komamitsu, kota2and3kan, laysakura, mathewseby, mdbox, milancorreggio, miseyu, reddikih, scalar-boney, scalarindetail, supl, thongdk8, torch3333, y-taka-23, yito88, ymorimo, ypeckstadt


scalardb's Issues

update fails if immediately followed by a put

Describe the bug

I am running a test case in which I get a record to check that it doesn't exist, then I put the record, then I get it to see it was successfully added, then I update it (by calling put again) and then I get it again to see that the value was updated.

  • get A - fails
  • put A - success
  • get A - success
  • update A - should succeed but fails with the error No mutation was applied

ERROR
2020-09-25 12:11:15,225 [TRACE] from repository.AnswersTransactionRepository in ScalaTest-run-running-AllRepositorySpecs - putting answer Put{namespace=Optional[codingjedi], table=Optional[answer_by_user_id_and_question_id], partitionKey=Key{TextValue{name=answered_by_user, value=Optional[11111111-1111-1111-1111-111111111111]}, TextValue{name=question_id, value=Optional[11111111-1111-1111-1111-111111111111]}}, clusteringKey=Optional.empty, values={answer_id=TextValue{name=answer_id, value=Optional[11111111-1111-1111-1111-111111111111]}, image=TextValue{name=image, value=Optional[{"image":["image1binarydata","image2binarydata"]}]}, answer=TextValue{name=answer, value=Optional[{"answer":[{"filename":"c.js","answer":"some answer"}]}]}, creation_year=BigIntValue{name=creation_year, value=2019}, creation_month=BigIntValue{name=creation_month, value=12}, notes=TextValue{name=notes, value=Optional[some notesupdated]}}, consistency=SEQUENTIAL, condition=Optional[com.scalar.db.api.PutIfExists@2e057637]}
2020-09-25 12:11:15,225 [DEBUG] from com.scalar.db.storage.cassandra.Cassandra in ScalaTest-run-running-AllRepositorySpecs - executing batch-mutate operation with [Put{namespace=Optional[codingjedi], table=Optional[answer_by_user_id_and_question_id], partitionKey=Key{TextValue{name=answered_by_user, value=Optional[11111111-1111-1111-1111-111111111111]}, TextValue{name=question_id, value=Optional[11111111-1111-1111-1111-111111111111]}}, clusteringKey=Optional.empty, values={tx_id=TextValue{name=tx_id, value=Optional[5239a8db-07c9-4b9b-ba25-732875af2475]}, tx_state=IntValue{name=tx_state, value=1}, tx_prepared_at=BigIntValue{name=tx_prepared_at, value=1601032275225}, answer_id=TextValue{name=answer_id, value=Optional[11111111-1111-1111-1111-111111111111]}, image=TextValue{name=image, value=Optional[{"image":["image1binarydata","image2binarydata"]}]}, answer=TextValue{name=answer, value=Optional[{"answer":[{"filename":"c.js","answer":"some answer"}]}]}, creation_year=BigIntValue{name=creation_year, value=2019}, creation_month=BigIntValue{name=creation_month, value=12}, notes=TextValue{name=notes, value=Optional[some notesupdated]}, tx_version=IntValue{name=tx_version, value=1}}, consistency=LINEARIZABLE, condition=Optional[com.scalar.db.api.PutIfNotExists@21bf308]}]
2020-09-25 12:11:15,226 [DEBUG] from com.scalar.db.storage.cassandra.Cassandra in ScalaTest-run-running-AllRepositorySpecs - executing put operation with Put{namespace=Optional[codingjedi], table=Optional[answer_by_user_id_and_question_id], partitionKey=Key{TextValue{name=answered_by_user, value=Optional[11111111-1111-1111-1111-111111111111]}, TextValue{name=question_id, value=Optional[11111111-1111-1111-1111-111111111111]}}, clusteringKey=Optional.empty, values={tx_id=TextValue{name=tx_id, value=Optional[5239a8db-07c9-4b9b-ba25-732875af2475]}, tx_state=IntValue{name=tx_state, value=1}, tx_prepared_at=BigIntValue{name=tx_prepared_at, value=1601032275225}, answer_id=TextValue{name=answer_id, value=Optional[11111111-1111-1111-1111-111111111111]}, image=TextValue{name=image, value=Optional[{"image":["image1binarydata","image2binarydata"]}]}, answer=TextValue{name=answer, value=Optional[{"answer":[{"filename":"c.js","answer":"some answer"}]}]}, creation_year=BigIntValue{name=creation_year, value=2019}, creation_month=BigIntValue{name=creation_month, value=12}, notes=TextValue{name=notes, value=Optional[some notesupdated]}, tx_version=IntValue{name=tx_version, value=1}}, consistency=LINEARIZABLE, condition=Optional[com.scalar.db.api.PutIfNotExists@21bf308]}
2020-09-25 12:11:15,226 [DEBUG] from com.scalar.db.storage.cassandra.StatementHandler in ScalaTest-run-running-AllRepositorySpecs - query to prepare : [INSERT INTO codingjedi.answer_by_user_id_and_question_id (answered_by_user,question_id,tx_id,tx_state,tx_prepared_at,answer_id,image,answer,creation_year,creation_month,notes,tx_version) VALUES (?,?,?,?,?,?,?,?,?,?,?,?) IF NOT EXISTS;].
2020-09-25 12:11:15,226 [DEBUG] from com.scalar.db.storage.cassandra.StatementHandler in ScalaTest-run-running-AllRepositorySpecs - there was a hit in the statement cache for [INSERT INTO codingjedi.answer_by_user_id_and_question_id (answered_by_user,question_id,tx_id,tx_state,tx_prepared_at,answer_id,image,answer,creation_year,creation_month,notes,tx_version) VALUES (?,?,?,?,?,?,?,?,?,?,?,?) IF NOT EXISTS;].
2020-09-25 12:11:15,226 [DEBUG] from com.scalar.db.storage.cassandra.ValueBinder in ScalaTest-run-running-AllRepositorySpecs - Optional[11111111-1111-1111-1111-111111111111] is bound to 0
2020-09-25 12:11:15,226 [DEBUG] from com.scalar.db.storage.cassandra.ValueBinder in ScalaTest-run-running-AllRepositorySpecs - Optional[11111111-1111-1111-1111-111111111111] is bound to 1
2020-09-25 12:11:15,226 [DEBUG] from com.scalar.db.storage.cassandra.ValueBinder in ScalaTest-run-running-AllRepositorySpecs - Optional[5239a8db-07c9-4b9b-ba25-732875af2475] is bound to 2
2020-09-25 12:11:15,226 [DEBUG] from com.scalar.db.storage.cassandra.ValueBinder in ScalaTest-run-running-AllRepositorySpecs - 1 is bound to 3
2020-09-25 12:11:15,226 [DEBUG] from com.scalar.db.storage.cassandra.ValueBinder in ScalaTest-run-running-AllRepositorySpecs - 1601032275225 is bound to 4
2020-09-25 12:11:15,226 [DEBUG] from com.scalar.db.storage.cassandra.ValueBinder in ScalaTest-run-running-AllRepositorySpecs - Optional[11111111-1111-1111-1111-111111111111] is bound to 5
2020-09-25 12:11:15,226 [DEBUG] from com.scalar.db.storage.cassandra.ValueBinder in ScalaTest-run-running-AllRepositorySpecs - Optional[{"image":["image1binarydata","image2binarydata"]}] is bound to 6
2020-09-25 12:11:15,226 [DEBUG] from com.scalar.db.storage.cassandra.ValueBinder in ScalaTest-run-running-AllRepositorySpecs - Optional[{"answer":[{"filename":"c.js","answer":"some answer"}]}] is bound to 7
2020-09-25 12:11:15,226 [DEBUG] from com.scalar.db.storage.cassandra.ValueBinder in ScalaTest-run-running-AllRepositorySpecs - 2019 is bound to 8
2020-09-25 12:11:15,226 [DEBUG] from com.scalar.db.storage.cassandra.ValueBinder in ScalaTest-run-running-AllRepositorySpecs - 12 is bound to 9
2020-09-25 12:11:15,226 [DEBUG] from com.scalar.db.storage.cassandra.ValueBinder in ScalaTest-run-running-AllRepositorySpecs - Optional[some notesupdated] is bound to 10
2020-09-25 12:11:15,226 [DEBUG] from com.scalar.db.storage.cassandra.ValueBinder in ScalaTest-run-running-AllRepositorySpecs - 1 is bound to 11
2020-09-25 12:11:15,241 [WARN] from com.scalar.db.transaction.consensuscommit.CommitHandler in ScalaTest-run-running-AllRepositorySpecs - preparing records failed
com.scalar.db.exception.storage.NoMutationException: no mutation was applied.
at com.scalar.db.storage.cassandra.MutateStatementHandler.handle(MutateStatementHandler.java:47)
at com.scalar.db.storage.cassandra.Cassandra.put(Cassandra.java:108)

To Reproduce
Steps to reproduce the behavior:

  1. create a table with the following schema:

        answered_by_user text,
        question_id text,
        answer_id text,
        before_answer_id text,
        answer text,
        before_answer text,
        creation_month bigint,
        before_creation_month bigint,
        creation_year bigint,
        before_creation_year bigint,
        image text,
        before_image text,
        notes text,
        before_notes text,
        tx_id TEXT,
        before_tx_id TEXT,
        tx_prepared_at BIGINT,
        before_tx_prepared_at BIGINT,
        tx_committed_at BIGINT,
        before_tx_committed_at BIGINT,
        tx_state INT,
        before_tx_state INT,
        tx_version INT,
        before_tx_version INT,
        PRIMARY KEY ((answered_by_user, question_id))
        );

2. create three methods (add, get, update).

def update(transaction:DistributedTransaction, answer:AnswerOfAPracticeQuestion) = {
    logger.trace(s"updating answer value ${answer}")
    add(transaction,answer, new PutIfExists)
  }
---------------
def add(transaction:DistributedTransaction,answer:AnswerOfAPracticeQuestion,mutationCondition:MutationCondition = new PutIfNotExists()) = {
    logger.trace(s"adding answer ${answer} with mutation state ${mutationCondition}")
    val pAnswerKey = new Key(new TextValue("answered_by_user", answer.answeredBy.get.answerer_id.toString),
      new TextValue("question_id",answer.question_id.toString))

    //to check duplication, both partition and clustering keys need to be present
    //val cAnswerKey = new Key(new TextValue("answer_id",answer.answer_id.toString))

    //logger.trace(s"created keys. ${pAnswerKey}, ${cAnswerKey}")
    val imageData = answer.image.map(imageList=>imageList).getOrElse(List())
    logger.trace(s"will check in ${keyspaceName},${tablename}")
    val putAnswer: Put = new Put(pAnswerKey/*,cAnswerKey*/)
      .forNamespace(keyspaceName)
      .forTable(tablename)
      .withCondition(mutationCondition)
      .withValue(new TextValue("answer_id", answer.answer_id.get.toString))
      .withValue(new TextValue("image", convertImageToString(imageData)))
      .withValue(new TextValue("answer", convertAnswersFromModelToString(answer.answer)))
      .withValue(new BigIntValue("creation_year", answer.creationYear.getOrElse(0)))
      .withValue(new BigIntValue("creation_month", answer.creationMonth.getOrElse(0)))
      .withValue(new TextValue("notes", answer.notes.getOrElse("")))

    logger.trace(s"putting answer ${putAnswer}")
    transaction.put(putAnswer)
  }
-------------------
   def get(transaction:DistributedTransaction,key:AnswerKeys):Either[AnswerNotFoundException,AnswerOfAPracticeQuestion] = {
    //Create the transaction//Create the transaction
    logger.trace("checking if answer exists for" + key);
    //Perform the operations you want to group in the transaction
    val pAnswerKey = new Key(new TextValue("answered_by_user",key.answerer_id.toString),
      new TextValue("question_id",key.question_id.toString))

   // val cAnswerKey = if(key.answer_id.isDefined) Some(new Key(new TextValue("answer_id",key.answer_id.get.toString))) else None

    //logger.trace(s"created user keys ${pAnswerKey},${cAnswerKey}")
    logger.trace(s"getting answer from ${keyspaceName}, ${tablename} using keys ${pAnswerKey}")
    val get:Get =  new Get(pAnswerKey/*,cAnswerKey.get*/)
        .forNamespace(keyspaceName)
        .forTable(tablename)

    val result:Optional[Result] = transaction.get(get);
    logger.trace(s"got result ${result}")
    if(result.isPresent){
      logger.trace(s"found answer ${result}")
      //checktest-get an answer
      Right(rowToModel(result))
    } else {
      //checktest-not get answer if answer doesn't exist
      logger.error(s"Answer doesn't exist")
      Left(AnswerNotFoundException())
    }
  }

3. execute the get, add, get, update calls in order, as in the following test case

"update an answer if the answer exists" in {
      beforeEach()
      embeddedCassandraManager.executeStatements(cqlStartupStatements) //set up tables


      val cassandraConnectionService = CassandraConnectionManagementService() //set db connection
      val (cassandraSession, cluster) = cassandraConnectionService.connectWithCassandra("cassandra://localhost:9042/codingjedi", "codingJediCluster")
      //TODOM - pick the database and keyspace names from config file.
      cassandraConnectionService.initKeySpace(cassandraSession.get, "codingjedi")
      val transactionService = cassandraConnectionService.connectWithCassandraWithTransactionSupport("localhost", "9042", "codingJediCluster" /*,dbUsername,dbPassword*/)

      val repository = new AnswersTransactionRepository("codingjedi", "answer_by_user_id_and_question_id")

      val answerKey = AnswerKeys(repoTestEnv.answerTestEnv.answerOfAPracticeQuestion.answer_id.get,
        repoTestEnv.answerTestEnv.answerOfAPracticeQuestion.question_id,
        Some(repoTestEnv.answerTestEnv.answerOfAPracticeQuestion.answer_id.get))

      logger.trace(s"checking if answer already exists")
      val distributedTransactionBefore = transactionService.get.start()

      val resultBefore = repository.get(distributedTransactionBefore, answerKey)
      distributedTransactionBefore.commit()

      resultBefore.isLeft mustBe true
      resultBefore.left.get.isInstanceOf[AnswerNotFoundException] mustBe true

      logger.trace(s"no answer found. adding answer")
      val distributedTransactionDuring = transactionService.get.start()

      repository.add(distributedTransactionDuring, repoTestEnv.answerTestEnv.answerOfAPracticeQuestion)
      distributedTransactionDuring.commit()
      logger.trace(s"answer added")

      val distributedTransactionAfter = transactionService.get.start()

      val result = repository.get(distributedTransactionAfter, answerKey)
      distributedTransactionAfter.commit()

      result mustBe (Right(repoTestEnv.answerTestEnv.answerOfAPracticeQuestion))
      logger.trace(s"got answer from repo ${result}")

      val updatedNotes = if(repoTestEnv.answerTestEnv.answerOfAPracticeQuestion.notes.isDefined)
        Some(repoTestEnv.answerTestEnv.answerOfAPracticeQuestion.notes.get+"updated") else Some("updated notes")
      val updatedAnswer = repoTestEnv.answerTestEnv.answerOfAPracticeQuestion.copy(notes=updatedNotes)
      logger.trace(s"old notes ${repoTestEnv.answerTestEnv.answerOfAPracticeQuestion.notes} vs new notes ${updatedNotes}")
      logger.trace(s"updated answer ${updatedAnswer}")

      val distributedTransactionForUpdate = transactionService.get.start()
      val resultOfupdate = repository.update(distributedTransactionForUpdate,updatedAnswer)
      distributedTransactionForUpdate.commit()

      logger.trace(s"update done. getting answer again")

      val distributedTransactionAfterUpdate = transactionService.get.start()

      val resultAfterUpdate = repository.get(distributedTransactionAfterUpdate, answerKey)
      distributedTransactionForUpdate.commit()
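      // note: the line above re-commits distributedTransactionForUpdate; distributedTransactionAfterUpdate opened above is never committed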

      resultAfterUpdate mustBe (Right(updatedAnswer))
      logger.trace(s"got result after update ${resultAfterUpdate}")

      afterEach()
    }
Expected behavior
update should be successful

Desktop (please complete the following information):
scala, scalar db

`DynamoAdmin.namespaceExists()` checks only prefixes of namespaces

Describe the bug

DynamoAdmin.namespaceExists() checks only prefixes of namespaces:

if (tableName.startsWith(namespace.prefixed())) {
namespaceExists = true;
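
A minimal fix sketch, assuming the full DynamoDB table name is built as namespace.table (hypothetical; the actual separator should be confirmed in DynamoAdmin):

// Require the separator after the prefix so that namespace "ns" does not
// match tables that belong to namespace "ns2".
if (tableName.startsWith(namespace.prefixed() + ".")) {
  namespaceExists = true;
}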

To Reproduce

Test code: https://github.com/scalar-labs/scalardb/pull/782/files#diff-deac59405708060a7ae11c7066ae51495705cbab43b7c17d94777a8bf30923d6R203-R220

Expected behavior

The test above should pass (it currently fails at the 2nd assertion).


Environment (please complete the following information):

  • OS: macOS
  • Java Version: 1.8
  • Scalar DB Version: master


Scan after put doesn't return the written value, nor raise an error for EXTRA_READ

Describe the bug

With the EXTRA_READ strategy, scanning values after putting values neither returns the written value nor results in an error.

To Reproduce

Initial state: empty

  1. Begin a transaction.
  2. Read a value at key05 (returns empty).
  3. Put a value 1 at key05.
  4. Scan from key01 to key10.
  5. Commit the transaction.
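
A minimal sketch of these steps with the current transaction builder API (namespace, table, and column names are hypothetical):

DistributedTransaction tx = manager.start();                       // 1. Begin

// 2. Read a value at key05 (returns empty)
Optional<Result> before = tx.get(
    Get.newBuilder()
        .namespace("ns").table("tbl")
        .partitionKey(Key.ofText("pid", "p1"))
        .clusteringKey(Key.ofText("key", "key05"))
        .build());

// 3. Put a value 1 at key05
tx.put(
    Put.newBuilder()
        .namespace("ns").table("tbl")
        .partitionKey(Key.ofText("pid", "p1"))
        .clusteringKey(Key.ofText("key", "key05"))
        .intValue("value", 1)
        .build());

// 4. Scan from key01 to key10 -- the written value does not appear
List<Result> results = tx.scan(
    Scan.newBuilder()
        .namespace("ns").table("tbl")
        .partitionKey(Key.ofText("pid", "p1"))
        .start(Key.ofText("key", "key01"))
        .end(Key.ofText("key", "key10"))
        .build());

// 5. Commit succeeds even though the scan missed the write
tx.commit();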

Expected behavior

The scan returns a value 1, or the commit results in an error.

Actual behavior

The scan returns an empty result and the commit succeeds.

Additional context

Linux and Cassandra 3 on Docker.

Possible fixes

  • Check the write set and the delete set when scanning a range, and return the written value. Since the write set contains partial records, the written value must be merged into the scanned value, if one exists.
  • Raise an error on commit or scan if the write set is not empty (like the EXTRA_WRITE strategy does), or if keys in the write set overlap the scanned range.

When loading data into a table with data_loader, I want to INSERT rows even when only some columns have values

Is your feature request related to a problem? Please describe.
For example, given a table like the following:

id | name | tel

I want to prepare INSERT statements that provide data only for id and name and load them. Currently, data must be prepared for every column defined in the table.

Describe the solution you'd like
When loading data into a table with data_loader, I want to be able to INSERT rows even when only some columns have values.


Add missing table name to this exception message `StorageRuntimeException: no table information found`

Is your feature request related to a problem? Please describe.
Suppose a program executes a Put request targeting a Cassandra table named FooTable using the transaction mode.

When Scalar DB fails to execute the Put because FooTable was not created in Cassandra, the stack trace looks like this:

no table information found
com.scalar.db.exception.storage.StorageRuntimeException: no table information found
	at com.scalar.db.storage.cassandra.ClusterManager.getMetadata(ClusterManager.java:83)
	at com.scalar.db.storage.cassandra.Cassandra.getTableMetadata(Cassandra.java:185)
	at com.scalar.db.storage.cassandra.Cassandra.get(Cassandra.java:79)
	at com.scalar.db.transaction.consensuscommit.RollbackMutationComposer.getLatestResult(RollbackMutationComposer.java:145)
	at com.scalar.db.transaction.consensuscommit.RollbackMutationComposer.add(RollbackMutationComposer.java:60)
	at com.scalar.db.transaction.consensuscommit.Snapshot.lambda$to$0(Snapshot.java:126)
	at java.util.concurrent.ConcurrentHashMap$EntrySetView.forEach(ConcurrentHashMap.java:4795)
	at com.scalar.db.transaction.consensuscommit.Snapshot.to(Snapshot.java:122)
	at com.scalar.db.transaction.consensuscommit.RecoveryHandler.rollback(RecoveryHandler.java:58)
	at com.scalar.db.transaction.consensuscommit.CommitHandler.commit(CommitHandler.java:44)
	at com.scalar.db.transaction.consensuscommit.ConsensusCommit.commit(ConsensusCommit.java:121)
        at com.scalar.ist.api.ScalarDBTest.put(ScalarDBTest.java:71)
...

Since the stack trace originates from the faulty user code, I tend to conclude that FooTable is missing, which is correct in that case.

But in the case where FooTable was created and the coordinator.state table is missing, the exception looks like this:

no table information found
com.scalar.db.exception.storage.StorageRuntimeException: no table information found
	at com.scalar.db.storage.cassandra.ClusterManager.getMetadata(ClusterManager.java:83)
	at com.scalar.db.storage.cassandra.Cassandra.getTableMetadata(Cassandra.java:185)
	at com.scalar.db.storage.cassandra.Cassandra.checkIfPrimaryKeyExists(Cassandra.java:191)
	at com.scalar.db.storage.cassandra.Cassandra.put(Cassandra.java:107)
	at com.scalar.db.transaction.consensuscommit.Coordinator.put(Coordinator.java:101)
	at com.scalar.db.transaction.consensuscommit.Coordinator.putState(Coordinator.java:49)
	at com.scalar.db.transaction.consensuscommit.CommitHandler.commitState(CommitHandler.java:114)
	at com.scalar.db.transaction.consensuscommit.CommitHandler.commit(CommitHandler.java:62)
	at com.scalar.db.transaction.consensuscommit.ConsensusCommit.commit(ConsensusCommit.java:121)
	at com.scalar.ist.api.ScalarDBTest.put(ScalarDBTest.java:71)
...

The exception message is identical to the missing-FooTable case, although the stack trace partly differs. It is not obvious that the coordinator.state table is missing rather than FooTable.

I once spent some time trying to understand what was wrong because I was wrongly convinced that Scalar DB could not find FooTable for some reason, even though the coordinator table was the one missing.
I guess new Scalar DB users who are not yet fully aware that the coordinator table is required may be quite confused as well.

Describe the solution you'd like
The exception message below could be updated to include the keyspace and name of the missing table:

com.scalar.db.exception.storage.StorageRuntimeException: no table information found
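
A minimal sketch of the suggested change, assuming the keyspace and table names are available at the throw site (variable names are hypothetical):

throw new StorageRuntimeException(
    "No table information found: " + keyspace + "." + tableName);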

Put operation without preceding Get operation fails after the first attempt

Describe the bug
I'm playing with https://github.com/scalar-labs/scalardb/blob/master/docs/getting-started-with-scalardb.md#store--retrieve-data. I might be missing something, but I noticed that a Put operation without a preceding Get operation for the same target record fails after the first run finishes successfully. It works when I fetch the target record using TransactionCrudOperable.get() before the Put operation.
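
A sketch of the workaround described above — reading the record in the same transaction before the Put — using the same ElectronicMoney example (the Get builder usage is assumed to match the version in use):

public void set(String id, int amount) throws TransactionException {
    DistributedTransaction tx = manager.start();
    try {
        // Fetch the target record first so the transaction treats the
        // subsequent Put as an update when the record already exists.
        Get get = Get.newBuilder()
                .namespace(NAMESPACE)
                .table(TABLENAME)
                .partitionKey(Key.ofText(ID, id))
                .build();
        tx.get(get);

        Put put = Put.newBuilder()
                .namespace(NAMESPACE)
                .table(TABLENAME)
                .partitionKey(Key.ofText(ID, id))
                .intValue(BALANCE, amount)
                .build();
        tx.put(put);
        tx.commit();
    } catch (Exception e) {
        tx.abort();
        throw e;
    }
}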

To Reproduce
Steps to reproduce the behavior:

  1. Prepare config file
# The JDBC URL
scalar.db.contact_points=jdbc:postgresql://localhost:5432/
scalar.db.username=scalardb
scalar.db.password=xxxxxx
scalar.db.storage=jdbc
scalar.db.consensus_commit.isolation_level=SERIALIZABLE
  2. Set up the table schema using the schema loader with https://github.com/scalar-labs/scalardb/blob/master/docs/getting-started-with-scalardb.md#set-up-database-schema
  3. Add the following code to https://github.com/scalar-labs/scalardb/blob/master/docs/getting-started-with-scalardb.md#store--retrieve-data
public class ElectronicMoney implements Closeable {
  :
    public void set(String id, int amount) throws TransactionException {
        DistributedTransaction tx = manager.start();

        try {
            Put put =
                    Put.newBuilder()
                            .namespace(NAMESPACE)
                            .table(TABLENAME)
                            .partitionKey(Key.ofText(ID, id))
                            .intValue(BALANCE, amount)
                            .build();
            tx.put(put);

            tx.commit();
        } catch (Exception e) {
            tx.abort();
            throw e;
        }
    }

    public static void main(String[] args) throws IOException, TransactionException {
        try (ElectronicMoney emoney = new ElectronicMoney()) {
            emoney.set("komamitsu", 42);
        }
    }
}
  4. Execute ElectronicMoney.main(). The first execution should finish successfully.
  5. Execute ElectronicMoney.main() again.
  6. See the error.

Expected behavior
The test code should finish successfully without any error even on the 2nd or later execution.

Error message

Exception in thread "main" com.scalar.db.exception.transaction.CommitConflictException: preparing record exists
	at com.scalar.db.transaction.consensuscommit.CommitHandler.prepare(CommitHandler.java:63)
	at com.scalar.db.transaction.consensuscommit.CommitHandler.commit(CommitHandler.java:42)
	at com.scalar.db.transaction.consensuscommit.ConsensusCommit.commit(ConsensusCommit.java:121)
	at ElectronicMoney.set(ElectronicMoney.java:47)
	at ElectronicMoney.main(ElectronicMoney.java:56)
Caused by: com.scalar.db.exception.storage.NoMutationException: no mutation was applied

	at com.scalar.db.storage.jdbc.JdbcDatabase.put(JdbcDatabase.java:111)
	at com.scalar.db.storage.jdbc.JdbcDatabase.mutate(JdbcDatabase.java:151)
	at com.scalar.db.transaction.consensuscommit.CommitHandler.lambda$prepareRecords$0(CommitHandler.java:81)
	at com.scalar.db.transaction.consensuscommit.ParallelExecutor.executeTasks(ParallelExecutor.java:97)
	at com.scalar.db.transaction.consensuscommit.ParallelExecutor.prepare(ParallelExecutor.java:52)
	at com.scalar.db.transaction.consensuscommit.CommitHandler.prepareRecords(CommitHandler.java:83)
	at com.scalar.db.transaction.consensuscommit.CommitHandler.prepare(CommitHandler.java:52)
	... 4 more

PostgreSQL records

scalardb=> select * from coordinator.state ;
                tx_id                 | tx_state | tx_created_at
--------------------------------------+----------+---------------
 c9679069-e489-4b60-8d39-f799fbdba8d0 |        3 | 1662294830783
 0ef1552e-2928-4e58-a33c-915340f26701 |        4 | 1662294975106
(2 rows)

scalardb=> select * from emoney.account ;
    id     | balance |                tx_id                 | tx_state | tx_version | tx_prepared_at | tx_committed_at | before_tx_id | before_tx_state | before_tx_version | before_tx_prepared_at | before_tx_committed_at | before_balance
-----------+---------+--------------------------------------+----------+------------+----------------+-----------------+--------------+-----------------+-------------------+-----------------------+------------------------+----------------
 komamitsu |      42 | c9679069-e489-4b60-8d39-f799fbdba8d0 |        3 |          1 |  1662294830519 |   1662294830787 |              |                 |                   |                       |                        |
(1 row)

Desktop (please complete the following information):

  • OS: Windows
  • Java 8
  • Version: Scalar DB 3.6.0

BTW, this issue template doesn't seem to really fit this project (e.g., the browser field)?

Typecast issue exists in scalardb with cosmos db.

Describe the bug
A typecast issue occurs in Scalar DB while reading data from Cosmos DB.

To Reproduce
Steps to reproduce the behavior:

  1. Setup QA application with cosmos db
  2. Login to QA application
  3. Insert some questions using the Submit a Question button.
  4. See the error "An error occurred while looking up the question" after creating the question.

Expected behavior
Read data from cosmos db using scalardb without any error.

Screenshots
Typecast issue exists while reading data from cosmos db.

Backend Details

  • Scalardb: 2.2.0
  • Storage: Cosmos DB

Additional context
Issue related logs

2020-10-08 23:12:59.355 ERROR 8243 --- [0.1-8090-exec-3] o.a.c.c.C.[.[.[/].[dispatcherServlet]    : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.Long] with root cause

java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.Long
        at com.scalar.db.storage.cosmos.ResultImpl.convert(ResultImpl.java:142) ~[scalardb-2.2.0.jar:na]
        at com.scalar.db.storage.cosmos.ResultImpl.add(ResultImpl.java:120) ~[scalardb-2.2.0.jar:na]
        at com.scalar.db.storage.cosmos.ResultImpl.lambda$interpret$0(ResultImpl.java:101) ~[scalardb-2.2.0.jar:na]
        at com.google.common.collect.ImmutableSortedMap.forEach(ImmutableSortedMap.java:588) ~[guava-24.1-jre.jar:na]
        at java.util.Collections$UnmodifiableMap.forEach(Collections.java:1505) ~[na:1.8.0_152-ea]
        at com.scalar.db.storage.cosmos.ResultImpl.interpret(ResultImpl.java:99) ~[scalardb-2.2.0.jar:na]
        at com.scalar.db.storage.cosmos.ResultImpl.<init>(ResultImpl.java:44) ~[scalardb-2.2.0.jar:na]
        at com.scalar.db.storage.cosmos.ScannerIterator.next(ScannerIterator.java:34) ~[scalardb-2.2.0.jar:na]
        at com.scalar.db.storage.cosmos.ScannerIterator.next(ScannerIterator.java:9) ~[scalardb-2.2.0.jar:na]
        at com.example.qa.dao.question.QuestionDao.scan(QuestionDao.java:49) ~[main/:na]
        at com.example.qa.service.question.QuestionServiceForStorage.get(QuestionServiceForStorage.java:130) ~[main/:na]
        at com.example.qa.controller.question.QuestionController.getQuestions(QuestionController.java:54) ~[main/:na]
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_152-ea]
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_152-ea]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_152-ea]
        at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_152-ea]
        at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:209) ~[spring-web-5.0.7.RELEASE.jar:5.0.7.RELEASE]
        at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:136) ~[spring-web-5.0.7.RELEASE.jar:5.0.7.RELEASE]
        at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:102) ~[spring-webmvc-5.0.7.RELEASE.jar:5.0.7.RELEASE]
        at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:877) ~[spring-webmvc-5.0.7.RELEASE.jar:5.0.7.RELEASE]
        at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:783) ~[spring-webmvc-5.0.7.RELEASE.jar:5.0.7.RELEASE]
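
The root cause in the stack trace is a hard cast of java.lang.Integer to java.lang.Long. A defensive conversion sketch (a hypothetical helper, not the actual ScalarDB fix):

// Cosmos DB may deserialize a small BIGINT as java.lang.Integer,
// so convert via java.lang.Number instead of casting to Long.
static long toLong(Object raw) {
  if (raw instanceof Number) {
    return ((Number) raw).longValue();
  }
  throw new IllegalArgumentException("Unexpected non-numeric value: " + raw);
}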

Non-serializable execution (lost update) when a record is deleted

Environment

  • Version 2.0.0
  • with SERIALIZABLE

How to reproduce

CREATE TRANSACTION TABLE foo.foo (
  id TEXT PARTITIONKEY,
  value TEXT
);
  • Initial state:

    id  | value | tx_version (tx metadata)
    foo | 2     | 1
  • Steps

    • T1: start
    • T1: Read a record that has id = foo and put the value into A. If the record does not exist, put 0.
      • (id, value, tx_version) = ("foo", 2, 1)
    • T2: start
    • T2: Delete a record that has id = foo
    • T2: commit
    • T3: start
    • T3: Read a record that has id = foo and put the value into B. If the record does not exist, put 0.
    • T3: Add 1 to B and put
      • (id, value, tx_version) = ("foo", 1, 1)
    • T3: commit
    • T1: Add 1 to A and put
      • (id, value, tx_version) = ("foo", 3, 2)
    • T1: commit
  • Actual state:

    id  | value | tx_version
    foo | 3     | 2

The above state can NOT be produced by T1, T2, and T3 in any serial order, which means the execution is not serializable.

  • T1->T3->T2 and T3->T1->T2:

    no record

  • T2->T1->T3 and T2->T3->T1:

    id  | value | tx_version
    foo | 2     | 2

  • T1->T2->T3 and T3->T2->T1:

    id  | value | tx_version
    foo | 1     | 1

ScalarDB prints ? instead of values

Describe the bug
In log traces, Scalar DB prints the ? character instead of values, e.g.:

query to prepare : [UPDATE codingjedi.partitions_of_a_tag SET tx_id=?,tx_state=?,tx_prepared_at=?,partition_info=?,before_tx_prepared_at=?,before_tx_id=?,before_tx_state=?,before_tx_committed_at=?,before_tx_version=?,before_partition_info=?,tx_version=? WHERE tag=? IF tx_version=? AND tx_id=?;].
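
The ? characters are CQL prepared-statement placeholders rather than missing data; as the logs in the first issue above show, the bound values appear in separate ValueBinder DEBUG lines ("... is bound to 0"). A sketch with the DataStax driver (query and values are hypothetical):

// The query is prepared once with ? markers; values are bound per execution.
PreparedStatement prepared = session.prepare(
    "UPDATE codingjedi.partitions_of_a_tag SET tx_id=? WHERE tag=?");
BoundStatement bound = prepared.bind("some-tx-id", "some-tag");
session.execute(bound);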

To Reproduce
Steps to reproduce the behavior:
Increase the tracing level to the highest setting and perform a Put operation.

Expected behavior
The values should be printed

Desktop (please complete the following information):
Windows

[Question]How to use ScalarDB in a way that data is stored in DynamoDBLocal?

Is your feature request related to a problem? Please describe.

I believe ScalarDB has supported DynamoDB since version 3.0.0. I would like to use ScalarDB with data stored in DynamoDB Local. When we develop as a team, we prepare test data to validate the code implemented in the local environment, and we want to work without mixing in test data used by other engineers on the same team. We are also concerned that storing development test data in AWS DynamoDB will incur AWS DynamoDB costs.
Is there any way to use ScalarDB with DynamoDB Local?

Describe the solution you'd like

I would like to use ScalarDB with data stored in DynamoDB Local.

Describe the alternatives you have considered.

If there is no way to use ScalarDB with data stored in DynamoDB Local, I would consider the following:

  • Unify the AWS DynamoDB tables used for development in the local environment across the team.
  • Assign an AWS DynamoDB table to each engineer.

I would like to avoid both of these methods, however, because they incur AWS DynamoDB costs.

Additional context.
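
For reference, recent ScalarDB versions document a DynamoDB endpoint override that is intended for pointing the client at a local instance instead of the AWS service; whether it is available depends on the version in use, so please check the configuration docs. A sketch of the properties (all values are placeholders):

# scalardb.properties sketch for DynamoDB Local (hypothetical values)
scalar.db.storage=dynamo
scalar.db.contact_points=ap-northeast-1
scalar.db.username=fakeAccessKeyId
scalar.db.password=fakeSecretAccessKey
# Point the DynamoDB client at a local instance instead of AWS
scalar.db.dynamo.endpoint-override=http://localhost:8000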

Please make data_loader compatible with ScalarDB 3.0.0

Is your feature request related to a problem? Please describe.
https://github.com/scalar-labs/scalardb/tree/master/tools/data_loader

I want to load data into ScalarDB version 3.0.0 using this tool.
Currently, editing this build.gradle locally and raising the ScalarDB version to 3.0.0 produces the following compile errors:
https://github.com/scalar-labs/scalardb/blob/master/tools/data_loader/build.gradle

Configure project :
The JavaApplication.setMainClassName(String) method has been deprecated. This is scheduled to be removed in Gradle 8.0. Use #getMainClass().set(...) instead. See https://docs.gradle.org/6.7.1/dsl/org.gradle.api.plugins.JavaApplication.html#org.gradle.api.plugins.JavaApplication:mainClass for more details.
        at build_81mtyax5oyegbg0tco40liq3j$_run_closure3.doCall(/Users/yamaguchitmk019/Downloads/scalardb-3.0.0/tools/data_loader/build.gradle:31)
        (Run with --stacktrace to get the full stack trace of this deprecation warning.)

> Task :compileJava FAILED
/Users/yamaguchitmk019/Downloads/scalardb-3.0.0/tools/data_loader/src/main/java/com/scalar/dataloader/ScalarDbRepository.java:103: error: unreported exception CrudException; must be caught or declared to be thrown
      getTx().delete(delete);
                    ^
/Users/yamaguchitmk019/Downloads/scalardb-3.0.0/tools/data_loader/src/main/java/com/scalar/dataloader/ScalarDbRepository.java:135: error: unreported exception CrudException; must be caught or declared to be thrown
      getTx().put(put);
                 ^
/Users/yamaguchitmk019/Downloads/scalardb-3.0.0/tools/data_loader/src/main/java/com/scalar/dataloader/ScalarDbRepository.java:208: error: unreported exception TransactionException; must be caught or declared to be thrown
              .start();
                    ^
3 errors

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':compileJava'.
> Compilation failed; see the compiler error output for details.

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

BUILD FAILED in 1s
1 actionable task: 1 executed
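
A minimal sketch of the kind of change these compile errors call for — the ScalarDB 3.0.0 API throws CrudException and TransactionException at these call sites, so they must be caught or declared (hypothetical wrappers):

// Call sites in ScalarDbRepository must now declare the checked exceptions.
void deleteRecord(Delete delete) throws CrudException {
  getTx().delete(delete);
}

void putRecord(Put put) throws CrudException {
  getTx().put(put);
}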

Describe the solution you'd like
I want to load data into Amazon DynamoDB and MySQL using data_loader.


Scalar DB Schema Tool fails due to parser tool updates.

The Scalar DB Schema Tool cannot be built with make.

The reason is that the interface of the parser library used by the tool has changed: participle.UseLookahead now requires an int argument, but the tool calls it with no arguments, so an error occurs at make time.

■Error detail

$ cd tools/schema
$ make
go get github.com/alecthomas/kingpin
go get github.com/alecthomas/participle

# _/home/ec2-user/scalardb/tools/schema/internal/parser
    internal/parser/parser.go:42:68: not enough arguments in call to participle.UseLookahead
    have ()
    want (int)
    make: *** Error 2

■Parser Tool
https://github.com/alecthomas/participle

transaction.commit() unexpectedly succeeded without clustering keys

A record was unexpectedly committed by a Put that passes a clustering key as a value.
This commit should fail because the Cassandra driver throws InvalidQueryException.

This issue happens when the record is the initial record for the primary key.
The driver does not check whether a Put for insertion has the correct keys.
When the insertion succeeds, the preparation phase of the transaction also succeeds.

The commit of the transaction for the record fails because of the lack of clustering keys.
But it is eventually committed because this commit phase happens after the state update.
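
A sketch of the kind of Put described above, using the era-appropriate API with hypothetical names (partition key id, clustering key sub_id):

// sub_id is the table's clustering key, but here it is passed as a value
// instead of a clustering Key, so the driver-side key check never fires
// for the initial insert.
Put put = new Put(new Key(new TextValue("id", "foo")))  // clustering key missing
    .forNamespace("ns")
    .forTable("tbl")
    .withValue(new TextValue("sub_id", "bar"))   // clustering key as a value
    .withValue(new IntValue("value", 1));
transaction.put(put);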

Refactor request: branching by SQLSTATE

Refactoring target

// ignore the duplicate key error
// "23000" is for MySQL/Oracle/SQL Server and "23505" is for PostgreSQL
if (!e.getSQLState().equals("23000") && !e.getSQLState().equals("23505")) {

Problem

It is easy to forget to add a necessary condition when adding support for a new RDB engine.

Solution idea

Use a strategy pattern, where an interface has detectDuplicateKeyError(e) and concrete classes implement the logic for checking SQLSTATE; a sketch follows.
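
A minimal sketch of the idea, with hypothetical names:

// Each RDB engine encapsulates its own SQLSTATE knowledge, so supporting a
// new engine means adding a class rather than editing shared branching logic.
interface RdbEngineStrategy {
  boolean isDuplicateKeyError(java.sql.SQLException e);
}

class MySqlEngine implements RdbEngineStrategy {
  @Override
  public boolean isDuplicateKeyError(java.sql.SQLException e) {
    return "23000".equals(e.getSQLState()); // MySQL/Oracle/SQL Server
  }
}

class PostgresEngine implements RdbEngineStrategy {
  @Override
  public boolean isDuplicateKeyError(java.sql.SQLException e) {
    return "23505".equals(e.getSQLState()); // PostgreSQL
  }
}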

Want to create a table with a partition key, no clustering key, and a secondary index in Amazon DynamoDB using scalar-schema

Is your feature request related to a problem? Please describe.
I want to define a schema in DynamoDB using the scalardb tool, but a certain pattern of table cannot be defined.
https://github.com/scalar-labs/scalardb/tree/master/tools/scalar-schema

When I tried to create a table with a partition key, no clustering key, and a secondary index in DynamoDB using scalar-schema, I got the following error:

./workflow_templates.json
2021-04-09 13:19:44,696 Exception in thread "main" software.amazon.awssdk.services.dynamodb.model.DynamoDbException: One or more parameter values were invalid: Some index key attributes are not defined in AttributeDefinitions. Keys: [deleted], AttributeDefinitions: [concatenatedPartitionKey] (Service: DynamoDb, Status Code: 400, Request ID: 810F9NAKCICUGV9JIB02EIGRFJVV4KQNSO5AEMVJF66Q9ASUAAJG, Extended Request ID: null)
	at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handleErrorResponse(CombinedResponseHandler.java:123)
	at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handleResponse(CombinedResponseHandler.java:79)
	at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handle(CombinedResponseHandler.java:59)
	at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handle(CombinedResponseHandler.java:40)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.HandleResponseStage.execute(HandleResponseStage.java:40)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.HandleResponseStage.execute(HandleResponseStage.java:30)
	at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptTimeoutTrackingStage.execute(ApiCallAttemptTimeoutTrackingStage.java:73)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptTimeoutTrackingStage.execute(ApiCallAttemptTimeoutTrackingStage.java:42)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.TimeoutExceptionHandlingStage.execute(TimeoutExceptionHandlingStage.java:77)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.TimeoutExceptionHandlingStage.execute(TimeoutExceptionHandlingStage.java:39)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptMetricCollectionStage.execute(ApiCallAttemptMetricCollectionStage.java:50)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptMetricCollectionStage.execute(ApiCallAttemptMetricCollectionStage.java:36)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage.execute(RetryableStage.java:64)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage.execute(RetryableStage.java:34)
	at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
	at software.amazon.awssdk.core.internal.http.StreamManagingStage.execute(StreamManagingStage.java:56)
	at software.amazon.awssdk.core.internal.http.StreamManagingStage.execute(StreamManagingStage.java:36)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.executeWithTimer(ApiCallTimeoutTrackingStage.java:80)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:60)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:42)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:48)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:31)
	at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
	at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:37)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:26)
	at software.amazon.awssdk.core.internal.http.AmazonSyncHttpClient$RequestExecutionBuilderImpl.execute(AmazonSyncHttpClient.java:193)
	at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.invoke(BaseSyncClientHandler.java:128)
	at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.doExecute(BaseSyncClientHandler.java:154)
	at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.lambda$execute$1(BaseSyncClientHandler.java:107)
	at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.measureApiCallSuccess(BaseSyncClientHandler.java:162)
	at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.execute(BaseSyncClientHandler.java:91)
	at software.amazon.awssdk.core.client.handler.SdkSyncClientHandler.execute(SdkSyncClientHandler.java:45)
	at software.amazon.awssdk.awscore.client.handler.AwsSyncClientHandler.execute(AwsSyncClientHandler.java:55)
	at software.amazon.awssdk.services.dynamodb.DefaultDynamoDbClient.createTable(DefaultDynamoDbClient.java:1062)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at clojure.lang.Reflector.invokeMatchingMethod(Reflector.java:167)
	at clojure.lang.Reflector.invokeInstanceMethod(Reflector.java:102)
	at scalar_schema.dynamo$create_table.invokeStatic(dynamo.clj:196)
	at scalar_schema.dynamo$make_dynamo_operator$reify__3394.create_table(dynamo.clj:210)
	at scalar_schema.operations$create_tables$fn__4030.invoke(operations.clj:23)
	at clojure.core$map$fn__5866.invoke(core.clj:2755)
	at clojure.lang.LazySeq.sval(LazySeq.java:42)
	at clojure.lang.LazySeq.seq(LazySeq.java:51)
	at clojure.lang.Cons.next(Cons.java:39)
	at clojure.lang.RT.next(RT.java:713)
	at clojure.core$next__5386.invokeStatic(core.clj:64)
	at clojure.core$dorun.invokeStatic(core.clj:3142)
	at clojure.core$doall.invokeStatic(core.clj:3148)
	at scalar_schema.operations$create_tables.invokeStatic(operations.clj:19)
	at scalar_schema.core$_main.invokeStatic(core.clj:35)
	at scalar_schema.core$_main.doInvoke(core.clj:32)
	at clojure.lang.RestFn.applyTo(RestFn.java:137)
	at scalar_schema.core.main(Unknown Source)

The above error occurred when I ran scalar-schema with the following JSON data:

{
  "keyspacename.workflow_templates": {
    "transaction": true,
    "partition-key": [
      "template_id"
    ],
    "columns": {
      "template_id": "TEXT",
      "template_name": "TEXT",
      "template_desc": "TEXT",
      "owner": "TEXT",
      "members": "TEXT",
      "status": "TEXT",
      "created_at": "BIGINT",
      "created_by": "TEXT",
      "updated_at": "BIGINT",
      "updated_by": "TEXT",
      "template_detail_json": "TEXT",
      "deleted": "BOOLEAN"
    },
    "secondary-index": [
      "deleted"
    ]
  }
}

Describe the solution you'd like
I want to register a table with a partition key, no clustering key, and a secondary index in DynamoDB using scalar-schema.


Non-serializable execution with write skew in some cases

Environment

  • Version 2.0.0
  • with SERIALIZABLE

How to reproduce

CREATE TRANSACTION TABLE foo.foo (
  id TEXT PARTITIONKEY,
  sub_id TEXT CLUSTERINGKEY,
  value INT,
);
  • Initial state: no records

  • Steps

    • T1: start
    • T2: start
    • T1: Scan all records that have foo as a partition key, and calculate the sum of value (name the sum A). The sum is 0 since the table is empty.
      • (this is also fine) T1: Read ("foo", "t2") and put the value into A. Put 0 if the record does not exist.
    • T2: Scan all records that have foo as a partition key, and calculate the sum of value (name the sum B). The sum is 0 since the table is empty.
      • (this is also fine) T2: Read ("foo", "t1") and put the value into B. Put 0 if the record does not exist.
    • T1: Add 1 to A and put the value to ("foo", "t1")
    • T2: Add 1 to B and put the value to ("foo", "t2")
    • T1: commit
    • T2: commit
  • Actual states:

    id  | sub_id | value
    foo | t1     | 1
    foo | t2     | 1

The above states can NOT be produced by executing T1 then T2, or T2 then T1, as shown below, which means the execution is not serializable.

  • T1->T2:

    id  | sub_id | value
    foo | t1     | 1
    foo | t2     | 2

  • T2->T1:

    id  | sub_id | value
    foo | t1     | 2
    foo | t2     | 1

Schema generator tool seems to be broken

The schema generator tool does not seem to parse the sdbql files correctly. When I run the generator on the provided samples I get the following errors:

$ ./generator sample_input1.sdbql out
generator: error: sample_input1.sdbql:4:8: unexpected "TRANSACTION" (expected "NAMESPACE")
$ ./generator sample_input2.sdbql out
generator: error: sample_input2.sdbql:5:8: unexpected "TABLE" (expected "NAMESPACE")

I assume that, similar to #19, this issue is due to the participle library being updated.
