aws / amazon-documentdb-jdbc-driver

Amazon DocumentDB JDBC driver to connect from BI tools and execute SQL Queries

License: Apache License 2.0

Languages: Java 99.38%, JavaScript 0.50%, Shell 0.09%, Dockerfile 0.02%

Topics: aws, database, documentdb, documentdb-jdbc, jdbc, sql

amazon-documentdb-jdbc-driver's People

Contributors: affonsov, alexey-temnikov, alinalibq, andiemontoyeah, birschick-bq, cassielyu, dependabot[bot], gavinray97, mitchell-elholm, sunnie629, v-sathvik

amazon-documentdb-jdbc-driver's Issues

[FEATURE] Please promptly update the intact taco file with a valid signature

Dear maintainers,

The signature of the uploaded AWS DocumentDB connector (.taco) file has expired. As a result, I am currently forced to run Tableau with -DDisableVerifyConnectorPluginSignature=true, which I agree is not an ideal solution.

Please update the .taco file with a valid signature as soon as possible.

@birschick-bq


[BUG] Investigate Error Handling for unhandled SQL query

Environment

  • DocumentDB JDBC driver version: 1.4.4
  • DocumentDB server version: n/a
  • OS: [e.g. MacOS Big Sur 11.5.1, Windows 10 64-bit] n/a
  • BI Tool or client name: n/a
  • BI Tool or client version: n/a
  • Java version (if known): n/a

Problem Description

  1. Steps to reproduce:
    • In DocumentDbQueryMappingService, the original exception can be hidden by the call to getExceptionMessages(e); for example, if that helper throws a NullPointerException, the original exception is lost. This should be adjusted to ensure the original error is preserved, and other error-handling paths should be reviewed for similar issues (a sketch of the idea follows this list).
  2. Expected behaviour:
  3. Actual behaviour:
  4. Error message/stack trace:
  5. Any other details that can be helpful:
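For illustration, a hedged sketch of one way to keep the original exception from being masked; the class and helper below are hypothetical stand-ins, not the driver's actual code:

import java.sql.SQLException;

final class ErrorHandlingSketch {
    // Wrap the original exception so it is always preserved as the cause, even if the
    // message-extraction helper (a stand-in for getExceptionMessages(e)) itself fails.
    static SQLException toSqlException(final Exception original) {
        String message;
        try {
            message = extractMessages(original);
        } catch (final RuntimeException secondary) {
            // Fall back to the original message instead of hiding the original exception.
            message = String.valueOf(original.getMessage());
        }
        return new SQLException(message, original);
    }

    // Hypothetical helper that may throw a NullPointerException, as described above.
    private static String extractMessages(final Exception e) {
        return e.getCause().getMessage();
    }
}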

Screenshots

  • If applicable, add screenshots to help explain your problem.

JDBC log

  • Add related JDBC log entries here.

[FEATURE] Full implementation of PreparedStatement

Is your feature request related to a problem? Please describe.

Add full support for PreparedStatement, where a SQL statement is prepared with replaceable parameter placeholders and the parameter values are bound before execution.
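For illustration, typical PreparedStatement usage with a placeholder, contrasted with inlining the value in a plain Statement; the connection string, table, and column names below are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class PreparedStatementExample {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:documentdb://user:password@localhost:27017/sample?tls=false")) {
            // Requested: bind the value through a replaceable parameter placeholder.
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT * FROM \"products\" WHERE \"id\" = ?")) {
                ps.setInt(1, 10);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getObject(1));
                    }
                }
            }
            // Current workaround: provide the value in-line with the SQL text.
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                         "SELECT * FROM \"products\" WHERE \"id\" = 10")) {
                while (rs.next()) {
                    System.out.println(rs.getObject(1));
                }
            }
        }
    }
}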

Describe the solution you'd like

Refer to Oracle documentation

Describe alternatives you've considered

Statements work, but you need to provide the values in-line with the SQL statement.

Additional context

None.

Using a literal comparison that resolves to FALSE throws exception (e.g., WHERE 3 > 4) [BUG]

Environment

  • DocumentDB JDBC driver version: 1.4.4
  • DocumentDB server version: MongoDB v6.0.3
  • OS: MacOS Ventura 13.1
  • BI Tool or client name: DBeaver
  • BI Tool or client version: 22.3.5

Problem Description

  1. Steps to reproduce: any query that has a WHERE clause that evaluates to a contradiction
  2. Expected behaviour: Returns no results
  3. Actual behaviour: Throws an exception
  4. Error message/stack trace:
org.jkiss.dbeaver.model.sql.DBSQLException: SQL Error: Unsupported SQL syntax 'SELECT "key", "val" FROM test_table where val <> val'. Please refer to the list of the supported features.
	at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.executeStatement(JDBCStatementImpl.java:133)
	at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.executeStatement(SQLQueryJob.java:582)
	at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.lambda$1(SQLQueryJob.java:491)
	at org.jkiss.dbeaver.model.exec.DBExecUtils.tryExecuteRecover(DBExecUtils.java:173)
	at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.executeSingleQuery(SQLQueryJob.java:498)
	at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.extractData(SQLQueryJob.java:924)
	at org.jkiss.dbeaver.ui.editors.sql.SQLEditor$QueryResultsContainer.readData(SQLEditor.java:3805)
	at org.jkiss.dbeaver.ui.controls.resultset.ResultSetJobDataRead.lambda$0(ResultSetJobDataRead.java:123)
	at org.jkiss.dbeaver.model.exec.DBExecUtils.tryExecuteRecover(DBExecUtils.java:173)
	at org.jkiss.dbeaver.ui.controls.resultset.ResultSetJobDataRead.run(ResultSetJobDataRead.java:121)
	at org.jkiss.dbeaver.ui.controls.resultset.ResultSetViewer$ResultSetDataPumpJob.run(ResultSetViewer.java:5073)
	at org.jkiss.dbeaver.model.runtime.AbstractJob.run(AbstractJob.java:105)
	at org.eclipse.core.internal.jobs.Worker.run(Worker.java:63)
Caused by: java.sql.SQLFeatureNotSupportedException: Unsupported SQL syntax 'SELECT "key", "val"FROM test_table where val <> val'. Please refer to the list of the supported features.
	at software.amazon.documentdb.jdbc.common.utilities.SqlError.createSQLFeatureNotSupportedException(SqlError.java:220)
	at software.amazon.documentdb.jdbc.query.DocumentDbQueryMappingService.get(DocumentDbQueryMappingService.java:161)
	at software.amazon.documentdb.jdbc.DocumentDbQueryExecutor.runQuery(DocumentDbQueryExecutor.java:172)
	at software.amazon.documentdb.jdbc.DocumentDbQueryExecutor.executeQuery(DocumentDbQueryExecutor.java:125)
	at software.amazon.documentdb.jdbc.DocumentDbStatement.executeQuery(DocumentDbStatement.java:98)
	at software.amazon.documentdb.jdbc.common.Statement.execute(Statement.java:107)
	at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.execute(JDBCStatementImpl.java:329)
	at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.lambda$0(JDBCStatementImpl.java:131)
	at org.jkiss.dbeaver.utils.SecurityManagerUtils.wrapDriverActions(SecurityManagerUtils.java:96)
	at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.executeStatement(JDBCStatementImpl.java:131)
	... 12 more
  5. Any other details that can be helpful: This exception is thrown for any contradiction, e.g., val <> val, val = 1 AND val <> 1, 1 > 2, etc.

This has been noted in Known Issues, but we were hoping to increase the priority of this issue, as it prevents some queries that are important to the development of our connector using this driver from running.

[BUG] Time literals are not formatted correctly

Environment

  • DocumentDB JDBC driver version: v1.0.0
  • DocumentDB server version: 3.6, 4.0
  • OS: Any
  • BI Tool or client name: Any
  • BI Tool or client version: Any
  • Java version (if known): Java 8

Problem Description

  1. Steps to reproduce:
  • Connect to any database either using a Java application or BI tool.
  • Execute a query containing a time literal such as SELECT TIME '20:17:40' FROM collection
  • Error occurs when retrieving result set.
  2. Expected behaviour: Time value can be retrieved and displayed.
  3. Actual behaviour: Query fails with error message - Conversion between the source type (Integer) and the target type (Date) is not supported.
  4. Error message/stack trace (a minimal Java reproduction follows the trace):
Conversion between the source type (Integer) and the target type (Date) is not supported.
java.sql.SQLException: Conversion between the source type (Integer) and the target type (Date) is not supported.
	at software.amazon.documentdb.jdbc.common.utilities.SqlError.createSQLException(SqlError.java:191)
	at software.amazon.documentdb.jdbc.DocumentDbAbstractResultSet.getValue(DocumentDbAbstractResultSet.java:173)
	at software.amazon.documentdb.jdbc.DocumentDbAbstractResultSet.getDate(DocumentDbAbstractResultSet.java:317)
	at software.amazon.documentdb.jdbc.DocumentDbAbstractResultSet.getTime(DocumentDbAbstractResultSet.java:335)
	at software.amazon.documentdb.jdbc.DocumentDbAbstractResultSet.getTime(DocumentDbAbstractResultSet.java:249)
Caused by: org.apache.commons.beanutils.ConversionException: Default conversion to java.sql.Date failed.
	at org.apache.commons.beanutils.converters.AbstractConverter.handleMissing(AbstractConverter.java:314)
	at org.apache.commons.beanutils.converters.AbstractConverter.handleError(AbstractConverter.java:269)
	at org.apache.commons.beanutils.converters.AbstractConverter.convert(AbstractConverter.java:177)
	at software.amazon.documentdb.jdbc.DocumentDbAbstractResultSet.getValue(DocumentDbAbstractResultSet.java:171
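A minimal Java reproduction of the failing retrieval step; the connection string and collection name are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class TimeLiteralRepro {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:documentdb://user:password@localhost:27017/sample?tls=false");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT TIME '20:17:40' FROM \"collection\"")) {
            while (rs.next()) {
                // Fails: the literal is projected as an integer (73060000, i.e. milliseconds
                // since midnight) and the driver cannot convert it to java.sql.Time.
                System.out.println(rs.getTime(1));
            }
        }
    }
}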

Screenshots

  • NA

JDBC log

Created projection stages of pipeline.
Pipeline stages added: {$project: {'EXPR$0': {"$literal": 73060000}, _id: 0}}
Query 1926e6b5-33f4-4e5a-84a3-a2cfdcfaedc6: Took 2057 ms to translate query.
Sending command '{"aggregate": "collection", "pipeline": [{"$project": {"EXPR$0": {"$literal": 73060000}, "_id": 0}}], "cursor": {"batchSize": 2000}, "$db": "resultDatabase", "lsid": {"id": {"$binary": {"base64": "qpsgQdszQWOv3kr6DfGdVQ==", "subType": "04"}}}}' with request id 46 to database resultDatabase on connection [connectionId{localValue:9, serverValue:10}] to server localhost:55921
Execution of command with request id 46 completed successfully in 13.97 ms on connection [connectionId{localValue:9, serverValue:10}] to server localhost:55921
Query 1926e6b5-33f4-4e5a-84a3-a2cfdcfaedc6: Took 36 ms to execute query and retrieve first batch of results.
Query 1926e6b5-33f4-4e5a-84a3-a2cfdcfaedc6: Executed on collection collection with following pipeline operations: [{"$project": {"EXPR$0": {"$literal": 73060000}, "_id": 0}}]
Converting 'Integer' value '73060000' to type 'java.sql.Date'
    IntegerConverter cannot handle conversion to 'java.sql.Date'
    Conversion threw ConversionException: IntegerConverter cannot handle conversion to 'java.sql.Date'
    IntegerConverter cannot handle conversion to 'java.sql.Date'
Conversion between the source type (Integer) and the target type (Date) is not supported.

Logstash Pipeline: input jdbc - Must only be called before "cook()" [BUG]

Environment

  • DocumentDB JDBC driver version: 1.4.2
  • DocumentDB server version: MongoDB 4.0 Compatibility
  • OS: Ubuntu 20.04
  • BI Tool or client name: Logstash
  • BI Tool or client version:
  • Java version (if known): openjdk 11.0.17

Problem Description

  1. Steps to reproduce:

I am trying to run a Logstash pipeline that takes input (data from DocumentDB / 'Mongo') and outputs to Elasticsearch.

input {
    jdbc{
        jdbc_driver_library => ""
        jdbc_driver_class => "software.amazon.documentdb.jdbc.DocumentDbMain"
        jdbc_connection_string => "jdbc:documentdb://user:[email protected]/testDB?tlsAllowInvalidHostnames=true&tlsCAFile=rds-combined-ca-bundle.pem"
        jdbc_user => ""
        jdbc_password => ""
        statement => "select * FROM `products` where `id` = 10"
    }
}
filter {}
output{
  stdout { codec => rubydebug { metadata=> true } }
  elasticsearch {
    action => "update"
    doc_as_upsert => true
    index => "logstash-products-index"
    hosts => ["localhost:9200"]
    user => ""
    password => ""
  }
}
  2. Expected behaviour:

Get product from DocumentDB and add to ElasticSearch.

  3. Actual behaviour:

When the pipeline is run, it fails and throws an error.

  4. Error message/stack trace:

[ERROR] 2022-12-19 14:52:25.217 [Ruby-0-Thread-17: :1] javapipeline - Worker loop initialization error {:pipeline_id=>"main", :error=>"Must only be called before "cook()"", :exception=>Java::JavaLang::IllegalStateException, :stacktrace=>"org.codehaus.janino.SimpleCompiler.assertUncooked(org/codehaus/janino/SimpleCompiler.java:526)\norg.codehaus.janino.SimpleCompiler.cook(org/codehaus/janino/SimpleCompiler.java:252)\norg.codehaus.janino.SimpleCompiler.compileToClassLoader(org/codehaus/janino/SimpleCompiler.java:517)\norg.codehaus.janino.SimpleCompiler.cook(org/codehaus/janino/SimpleCompiler.java:241)\norg.codehaus.janino.SimpleCompiler.cook(org/codehaus/janino/SimpleCompiler.java:219)\norg.codehaus.commons.compiler.Cookable.cook(org/codehaus/commons/compiler/Cookable.java:79)\norg.codehaus.commons.compiler.Cookable.cook(org/codehaus/commons/compiler/Cookable.java:74)\norg.logstash.config.ir.compiler.ComputeStepSyntaxElement.lambda$compile$0(org/logstash/config/ir/compiler/ComputeStepSyntaxElement.java:150)\njava.util.concurrent.ConcurrentHashMap.computeIfAbsent(java/util/concurrent/ConcurrentHashMap.java:1708)\norg.logstash.config.ir.compiler.ComputeStepSyntaxElement.compile(org/logstash/config/ir/compiler/ComputeStepSyntaxElement.java:140)\norg.logstash.config.ir.compiler.ComputeStepSyntaxElement.instantiate(org/logstash/config/ir/compiler/ComputeStepSyntaxElement.java:124)\norg.logstash.config.ir.compiler.DatasetCompiler.terminalOutputDataset(org/logstash/config/ir/compiler/DatasetCompiler.java:212)\norg.logstash.config.ir.CompiledPipeline$CompiledExecution.compileOutputs(org/logstash/config/ir/CompiledPipeline.java:414)\norg.logstash.config.ir.CompiledPipeline$CompiledExecution.(org/logstash/config/ir/CompiledPipeline.java:379)\norg.logstash.config.ir.CompiledPipeline$CompiledUnorderedExecution.(org/logstash/config/ir/CompiledPipeline.java:337)\norg.logstash.config.ir.CompiledPipeline.buildExecution(org/logstash/config/ir/CompiledPipeline.java:156)\norg.logstash.execution.WorkerLoop.(org/logstash/execution/WorkerLoop.java:67)\njdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)\njdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(jdk/internal/reflect/NativeConstructorAccessorImpl.java:77)\njdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(jdk/internal/reflect/DelegatingConstructorAccessorImpl.java:45)\njava.lang.reflect.Constructor.newInstanceWithCaller(java/lang/reflect/Constructor.java:499)\njava.lang.reflect.Constructor.newInstance(java/lang/reflect/Constructor.java:480)\norg.jruby.javasupport.JavaConstructor.newInstanceDirect(org/jruby/javasupport/JavaConstructor.java:237)\norg.jruby.RubyClass.new(org/jruby/RubyClass.java:911)\norg.jruby.RubyClass$INVOKER$i$newInstance.call(org/jruby/RubyClass$INVOKER$i$newInstance.gen)\nusr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.init_worker_loop(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:580)\nusr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.start_workers(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:286)\norg.jruby.RubyProc.call(org/jruby/RubyProc.java:309)\njava.lang.Thread.run(java/lang/Thread.java:833)", :thread=>"#<Thread:0x2c74eb8e sleep>"}

  5. Any other details that can be helpful:
    N/A

Screenshots

N/A


JDBC log

Tableau Server package signature

Environment

  • DocumentDB JDBC driver version: 1.4.4
  • OS: Windows 10 64-bit
  • BI Tool or client name: Tableau Desktop
  • BI Tool or client version: 2022.3 and later

Problem Description

  1. Steps to reproduce: Install Tableau Desktop 2022.3 or later, install the .taco connector file, and try to connect to an Amazon DocumentDB server from Tableau Desktop.
  2. Expected behaviour: Successful connection.
  3. Actual behaviour: A "package signature is invalid" error message appears.
  4. Error message/stack trace: unverified package signature
  5. Any other details that can be helpful: The signature has probably expired; as a result, a connection from Tableau Desktop to Amazon DocumentDB cannot be established.

[BUG] Exception thrown on CASE WHEN where ELSE literal value is non-numeric

Environment

  • DocumentDB JDBC driver version: 1.4.4
  • DocumentDB server version: n/a
  • OS: [e.g. MacOS Big Sur 11.5.1, Windows 10 64-bit]: n/a
  • BI Tool or client name: n/a
  • BI Tool or client version: n/a
  • Java version (if known): n/a

Problem Description

  1. Steps to reproduce:
select c1, coalesce(c1, {ts '1899-12-28 00:00:00'}) as c2
from
(
  select CAST(NULL as TIMESTAMP) as c1
    from sales.emp
)
group by c1  
  2. Expected behaviour:

1899-12-28 00:00:00

  3. Actual behaviour:

Exception thrown:
cannot convert TIMESTAMP literal to class java.math.BigDecimal java.lang.AssertionError: cannot convert TIMESTAMP literal to class java.math.BigDecimal

  4. Error message/stack trace:
cannot convert TIMESTAMP literal to class java.math.BigDecimal
java.lang.AssertionError: cannot convert TIMESTAMP literal to class java.math.BigDecimal
    at org.apache.calcite.rex.RexLiteral.getValueAs(RexLiteral.java:1143)
    at org.apache.calcite.rel.rules.ProjectAggregateMergeRule$1.visitCall(ProjectAggregateMergeRule.java:111)
    at org.apache.calcite.rel.rules.ProjectAggregateMergeRule$1.visitCall(ProjectAggregateMergeRule.java:91)
    at org.apache.calcite.rex.RexCall.accept(RexCall.java:189)
    at org.apache.calcite.rex.RexVisitor.visitList(RexVisitor.java:63)
    at org.apache.calcite.rex.RexVisitor.visitList(RexVisitor.java:71)
    at org.apache.calcite.rel.rules.ProjectAggregateMergeRule.onMatch(ProjectAggregateMergeRule.java:132)
    at org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:337)
    at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:556)
    at org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:420)
    at org.apache.calcite.plan.hep.HepPlanner.executeRuleInstance(HepPlanner.java:243)
    at org.apache.calcite.plan.hep.HepInstruction$RuleInstance$State.execute(HepInstruction.java:178)
    at org.apache.calcite.plan.hep.HepPlanner.lambda$executeProgram$0(HepPlanner.java:211)
    at com.google.common.collect.ImmutableList.forEach(ImmutableList.java:405)
    at org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:210)
    at org.apache.calcite.plan.hep.HepProgram$State.execute(HepProgram.java:118)
    at org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:205)
    at org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:191)
    at org.apache.calcite.test.RelOptFixture.checkPlanning(RelOptFixture.java:378)
    at org.apache.calcite.test.RelOptFixture.check(RelOptFixture.java:330)
    at org.apache.calcite.test.RelOptFixture.checkUnchanged(RelOptFixture.java:322)
    at org.apache.calcite.test.RelOptRulesTest.testProjectAggregateMergeNonNumericLiteral(RelOptRulesTest.java:4810)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:568)
    at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)
    at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
    at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
    at org.junit.jupiter.engine.extension.TimeoutInvocation.proceed(TimeoutInvocation.java:46)
    at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
    at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
    at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84)
    at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
    at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
    at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
    at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
    at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
    at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
    at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
    at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
    at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:214)
    at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
    at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:210)
    at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:135)
    at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:66)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
    at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
    at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
    at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
    at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService$ExclusiveTask.compute(ForkJoinPoolHierarchicalTestExecutorService.java:185)
    at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService.invokeAll(ForkJoinPoolHierarchicalTestExecutorService.java:129)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
    at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
    at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
    at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
    at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService$ExclusiveTask.compute(ForkJoinPoolHierarchicalTestExecutorService.java:185)
    at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService.invokeAll(ForkJoinPoolHierarchicalTestExecutorService.java:129)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
    at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
    at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
    at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
    at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService$ExclusiveTask.compute(ForkJoinPoolHierarchicalTestExecutorService.java:185)
    at java.base/java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:194)
    at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
    at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1182)
    at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1655)
    at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1622)
    at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:165)
  5. Any other details that can be helpful:

A fix for this issue is in progress as of the creation of this issue.

Calcite Issue
Calcite PR

Pull requests will notify you of new versions of the Calcite dependency. Check whether the issue and PR above are merged into that version. Once Calcite releases the fix, update the Calcite dependency version accordingly.

E.g., if the fixed version is 1.34.0:
implementation group: 'org.apache.calcite', name: 'calcite-core', version: '1.34.0'


Screenshots

n/a


JDBC log

n/a

[BUG] NULL Columns passed as VARCHAR to Calcite, ResultSetMetadata for queries reports NULL columns as VARCHAR

Environment

  • DocumentDB JDBC driver version: 1.4.4
  • DocumentDB server version: n/a
  • OS: [e.g. MacOS Big Sur 11.5.1, Windows 10 64-bit]: n/a
  • BI Tool or client name: n/a
  • BI Tool or client version: n/a
  • Java version (if known): n/a

Problem Description

  1. Steps to reproduce:
    • For SELECT fieldNull, where DatabaseMetaData reports fieldNull as NULL, ResultSetMetaData reports the same field as VARCHAR when it is used in a query.
  2. Expected behaviour:
    • column metadata should report NULL
  3. Actual behaviour:
    • column metadata reports VARCHAR (see the sketch after this list)
  4. Error message/stack trace:
  5. Any other details that can be helpful:
  • Test that changing the mapping here does not break existing tests.
  • Test possible effects on cached query results in BI Tools (mainly target Tableau) if we add this fix.
  • Confirm that NULL over VARCHAR is the right behaviour for this case.
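For illustration, a hedged sketch showing where the discrepancy is visible; the connection string, schema, collection, and field names are placeholders:

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Statement;

public class NullColumnTypeCheck {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:documentdb://user:password@localhost:27017/sample?tls=false")) {
            // DatabaseMetaData reports the type detected during schema generation (NULL here).
            DatabaseMetaData dbMeta = conn.getMetaData();
            try (ResultSet cols = dbMeta.getColumns(null, "sample", "collection", "fieldNull")) {
                while (cols.next()) {
                    System.out.println("DatabaseMetaData DATA_TYPE: " + cols.getInt("DATA_TYPE"));
                }
            }
            // ResultSetMetaData for the same field in a query reports VARCHAR instead.
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT \"fieldNull\" FROM \"collection\"")) {
                ResultSetMetaData rsMeta = rs.getMetaData();
                System.out.println("ResultSetMetaData column type: " + rsMeta.getColumnType(1));
            }
        }
    }
}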

Screenshots

  • If applicable, add screenshots to help explain your problem.

JDBC log

  • Add related JDBC log entries here.

[FEATURE] Use find() instead of aggregate() for simpler queries when possible to enhance performance

Is your feature request related to a problem? Please describe.

Aggregation is always used even in queries where find() would have been sufficient.
In general, find() is often faster than using the aggregation framework and always makes use of indexes. The $match stage for aggregation can only use an index to filter documents if it occurs at the beginning of a pipeline which is not always the case.

Describe the solution you'd like

There is already some work in place in the Calcite adapter to determine a filter and projection document for a possible find() operation but these values are not used. If the query does not require additional operations that are only supported in the aggregation framework then the query execution should be done using find().
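As a rough illustration with the MongoDB Java driver (collection, field names, and filter are hypothetical), a simple filter-plus-projection query could be served by find() instead of an aggregation pipeline:

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Aggregates;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Projections;
import org.bson.Document;

import java.util.Arrays;

public class FindVersusAggregate {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> products =
                    client.getDatabase("sample").getCollection("products");

            // Current behaviour: even a simple query is translated into an aggregation pipeline.
            products.aggregate(Arrays.asList(
                            Aggregates.match(Filters.eq("id", 10)),
                            Aggregates.project(Projections.fields(
                                    Projections.include("id", "name"),
                                    Projections.excludeId()))))
                    .forEach(doc -> System.out.println(doc.toJson()));

            // Proposed: run the equivalent find() with a filter and projection document,
            // which can always make use of an index on the filtered field.
            products.find(Filters.eq("id", 10))
                    .projection(Projections.fields(
                            Projections.include("id", "name"),
                            Projections.excludeId()))
                    .forEach(doc -> System.out.println(doc.toJson()));
        }
    }
}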

Additional context

[BUG] Constructing a Calcite schema with `DocumentDbSchemaFactory`, then adding to a root schema, query returns zero records

Environment

  • DocumentDB JDBC driver version: 1.2.2
  • DocumentDB server version: N/A
  • OS: [e.g. MacOS Big Sur 11.5.1, Windows 10 64-bit] Windows 11 x64
  • BI Tool or client name:
  • BI Tool or client version:
  • Java version (if known): Java 17

Problem Description

  1. Steps to reproduce:
@Test
fun `create direct Calcite schema with DocumentDB adapter`() {
    val mongoURI = "mongodb://root:Password123#@localhost:27017/sample?authSource=admin"
    val documentDBURI = "jdbc:documentdb://root:Password123%23@localhost:27017/sample?tls=false"

    val connectionProps =
        DocumentDbConnectionProperties.getPropertiesFromConnectionString(documentDBURI)
    val metadataSchema =
        DocumentDbDatabaseSchemaMetadata.get(connectionProps, "sample", MongoClients.create(mongoURI))
    val calciteSchema =
        DocumentDbSchemaFactory.create(metadataSchema, connectionProps)
    
    val calciteConn = DriverManager.getConnection("jdbc:calcite:").unwrap(CalciteConnection::class.java)
    calciteConn.rootSchema.add("sample", calciteSchema)

    val query = """
        SELECT * FROM "sample"."album"
    """

    val rs = calciteConn.createStatement().executeQuery(query)
    while (rs.next()) {
        val md = rs.metaData
        for (i in 1..md.columnCount) {
            println(md.getColumnName(i) + ": " + rs.getObject(i))
        }
    }
}
  2. Expected behaviour:

The rows are printed out in the while() loop (which works when querying using the JDBC driver bundled in the project, sample below)

@Test
fun `test Amazon DocumentDB Calcite adapter with Mongo`() {
    val conn = DriverManager.getConnection(testDatabaseConfigProvider.mongoDocumentDB)

    val query = """
        SELECT
            artist.*,
            album.*
        FROM        sample.artist
        INNER JOIN  sample.album AS album
                ON  album.artist_id = artist.artist__id
    """

    val rs = conn.createStatement().executeQuery(query)
    while (rs.next()) {
        val md = rs.metaData
        for (i in 1..md.columnCount) {
            println(md.getColumnName(i) + ": " + rs.getObject(i))
        }
        println("---")
    }
}
2022-04-25 15:55:34,132 INFO  [sof.ama.doc.jdb.met.DocumentDbMetadataService] (Test worker) Successfully retrieved metadata schema _default in 35 ms.
2022-04-25 15:55:34,223 INFO  [sof.ama.doc.jdb.DocumentDbQueryExecutor] (Test worker) Query 1c85e3e0-042b-4faf-8856-9fe3b90158b6: Beginning translation of query.
2022-04-25 15:55:34,897 INFO  [sof.ama.doc.jdb.cal.ada.DocumentDbProject] (Test worker) Created projection stages of pipeline.
2022-04-25 15:55:34,968 INFO  [sof.ama.doc.jdb.DocumentDbQueryExecutor] (Test worker) Query 1c85e3e0-042b-4faf-8856-9fe3b90158b6: Took 745 ms to translate query.
2022-04-25 15:55:34,974 INFO  [sof.ama.doc.jdb.DocumentDbQueryExecutor] (Test worker) Query 1c85e3e0-042b-4faf-8856-9fe3b90158b6: Took 6 ms to execute query and retrieve first batch of results.

artist__id: 1.0
name: AC/DC
album__id: 1.0
artist_id: 1.0
title: Back In Black
---

artist__id: 1.0
name: AC/DC
album__id: 2.0
artist_id: 1.0
title: High Voltage
  3. Actual behaviour:

It runs successfully, but nothing is printed. Something does appear to be happening; I can see the metadata introspection and projection running in the logs:

2022-04-25 15:49:09,203 INFO  [sof.ama.doc.jdb.met.DocumentDbMetadataService] (Test worker) Successfully retrieved metadata schema sample in 91 ms.
2022-04-25 15:49:09,928 INFO  [sof.ama.doc.jdb.cal.ada.DocumentDbProject] (Test worker) Created projection stages of pipeline.

Full logs below

  4. Error message/stack trace:
2022-04-25 15:49:09,106 INFO  [org.mon.dri.client] (Test worker) MongoClient with metadata {"driver": {"name": "mongo-java-driver|sync", "version": "4.6.0"}, "os": {"type": "Windows", "name": "Windows 10", "architecture": "amd64", "version": "10.0"}, "platform": "Java/Eclipse Adoptium/17.0.1+12"} created with settings MongoClientSettings{readPreference=primary, writeConcern=WriteConcern{w=null, wTimeout=null ms, journal=null}, retryWrites=true, retryReads=true, readConcern=ReadConcern{level=null}, credential=MongoCredential{mechanism=null, userName='root', source='admin', password=<hidden>, mechanismProperties=<hidden>}, streamFactoryFactory=null, commandListeners=[], codecRegistry=ProvidersCodecRegistry{codecProviders=[ValueCodecProvider{}, BsonValueCodecProvider{}, DBRefCodecProvider{}, DBObjectCodecProvider{}, DocumentCodecProvider{}, IterableCodecProvider{}, MapCodecProvider{}, GeoJsonCodecProvider{}, GridFSFileCodecProvider{}, Jsr310CodecProvider{}, JsonObjectCodecProvider{}, BsonCodecProvider{}, EnumCodecProvider{}, com.mongodb.Jep395RecordCodecProvider@538c90bf]}, clusterSettings={hosts=[localhost:27017], srvServiceName=mongodb, mode=SINGLE, requiredClusterType=UNKNOWN, requiredReplicaSetName='null', serverSelector='null', clusterListeners='[]', serverSelectionTimeout='30000 ms', localThreshold='30000 ms'}, socketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=0, receiveBufferSize=0, sendBufferSize=0}, heartbeatSocketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=10000, receiveBufferSize=0, sendBufferSize=0}, connectionPoolSettings=ConnectionPoolSettings{maxSize=100, minSize=0, maxWaitTimeMS=120000, maxConnectionLifeTimeMS=0, maxConnectionIdleTimeMS=0, maintenanceInitialDelayMS=0, maintenanceFrequencyMS=60000, connectionPoolListeners=[], maxConnecting=2}, serverSettings=ServerSettings{heartbeatFrequencyMS=10000, minHeartbeatFrequencyMS=500, serverListeners='[]', serverMonitorListeners='[]'}, sslSettings=SslSettings{enabled=false, invalidHostNameAllowed=false, context=null}, applicationName='null', compressorList=[], uuidRepresentation=UNSPECIFIED, serverApi=null, autoEncryptionSettings=null, contextProvider=null}
2022-04-25 15:49:09,124 INFO  [org.mon.dri.connection] (cluster-rtt-ClusterId{value='6266fb35ab0b6c505e955715', description='null'}-localhost:27017) Opened connection [connectionId{localValue:2, serverValue:223}] to localhost:27017
2022-04-25 15:49:09,124 INFO  [org.mon.dri.connection] (cluster-ClusterId{value='6266fb35ab0b6c505e955715', description='null'}-localhost:27017) Opened connection [connectionId{localValue:1, serverValue:222}] to localhost:27017
2022-04-25 15:49:09,125 INFO  [org.mon.dri.cluster] (cluster-ClusterId{value='6266fb35ab0b6c505e955715', description='null'}-localhost:27017) Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=STANDALONE, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=13, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=52593400}
2022-04-25 15:49:09,191 INFO  [org.mon.dri.connection] (Test worker) Opened connection [connectionId{localValue:3, serverValue:224}] to localhost:27017
2022-04-25 15:49:09,203 INFO  [sof.ama.doc.jdb.met.DocumentDbMetadataService] (Test worker) Successfully retrieved metadata schema sample in 91 ms.
2022-04-25 15:49:09,928 INFO  [sof.ama.doc.jdb.cal.ada.DocumentDbProject] (Test worker) Created projection stages of pipeline.
  5. Any other details that can be helpful:

Screenshots

  • If applicable, add screenshots to help explain your problem.

JDBC log

  • Add related JDBC log entries here.

[BUG] - Cannot convert from BSON type string to Date

Environment

  • DocumentDB JDBC driver version: 1.4.4
  • DocumentDB server version: 4.0
  • OS: Windows 10 64-bit
  • BI Tool or client name: Tableau Desktop
  • BI Tool or client version: 2022.1
  • Java version (if known): 8.341

Problem Description

  1. Steps to reproduce: the key field in DocumentDB is of type Date; however, in Tableau it appears as a string and cannot be converted.
  2. Expected behaviour: the date is displayed in the Rows field.
  3. Actual behaviour: an error occurs while executing the query.
  4. Error message/stack trace:
    "Error Code: A96BDB88
    Execute query has failed, error: shadow.com.mongodb.MongoCommandException: Command failed with error 16006: 'can't convert from BSON type string to Date' on server documentdb. The full response is {"ok": 0.0, "operationTime": {"$timestamp": {"t": 1692798039, "i": 1}}, "code": 16006, "errmsg": "can't convert from BSON type string to Date"}."

DatabaseMetaData.getTables and getColumns do not handle escaped patterns

Environment

  • DocumentDB JDBC driver version: 1.2.0
  • DocumentDB server version:
  • OS: [e.g. MacOS Big Sur 11.5.1, Windows 10 64-bit]
  • BI Tool or client name: PowerBI
  • BI Tool or client version:
  • Java version (if known): 8 ... 17

Problem Description

  1. Steps to reproduce:
  DatabaseMetaData metaData = connection.getMetaData();
  String catalogPattern = null;
  String schemaPattern = "database";
  String tablePattern = "table\\_name"; // escaped '_' should match a literal underscore

  ResultSet tables = metaData.getTables(catalogPattern, schemaPattern, tablePattern, null);
  if (!tables.next()) {
    // Unexpected result: no matching table was returned
  }
  2. Expected behaviour:
  • the number of tables returned should be 1.
  3. Actual behaviour:
  • no matching table is returned.
The pattern-to-regex conversion should behave as follows:

Pattern | Regex equivalent
\_      | [_]
\%      | [%]
\\      | [\\]
_       | .
%       | .*

Change convertPatternToRegex() to handle the escaped special characters (\_ and \%); a hedged sketch follows this list.

  4. Error message/stack trace:
  • none
  5. Any other details that can be helpful:
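A hedged sketch of what the escape handling might look like; this is illustrative only and not the driver's actual implementation:

import java.util.regex.Pattern;

final class LikePatternConverter {
    // Converts a JDBC LIKE-style pattern into a regular expression.
    // Escaped characters (\_, \%, \\) are matched literally; unescaped '_' and '%'
    // become '.' and '.*' respectively.
    static String convertPatternToRegex(final String pattern) {
        final StringBuilder regex = new StringBuilder();
        for (int i = 0; i < pattern.length(); i++) {
            final char c = pattern.charAt(i);
            if (c == '\\' && i + 1 < pattern.length()) {
                regex.append(Pattern.quote(String.valueOf(pattern.charAt(++i))));
            } else if (c == '_') {
                regex.append('.');
            } else if (c == '%') {
                regex.append(".*");
            } else {
                regex.append(Pattern.quote(String.valueOf(c)));
            }
        }
        return regex.toString();
    }

    public static void main(String[] args) {
        System.out.println("table_name".matches(convertPatternToRegex("table\\_name"))); // true
        System.out.println("tableXname".matches(convertPatternToRegex("table\\_name"))); // false
        System.out.println("tableXname".matches(convertPatternToRegex("table_name")));   // true
    }
}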

Screenshots

  • If applicable, add screenshots to help explain your problem.

JDBC log

  • Add related JDBC log entries here.

[BUG] Successful connection in Tableau but table list is not displayed

Environment

  • DocumentDB JDBC driver version:
    • documentdbjdbc-1.2.5.taco
    • documentdb-jdbc-1.2.5-all.jar
  • DocumentDB server version:docdb 4.0.0
  • OS: Windows 10 64-bit (however, there is no problem on a Mac)
  • BI Tool or client name: Tableau Desktop
  • BI Tool or client version: 20222.22
  • Java version (if known): 1.8.0_333

Problem Description

  1. Steps to reproduce:
    Within an EC2 instance in a VPC,
    connect to Amazon DocumentDB from Tableau using the TACO file and the JDBC driver.

  2. Expected behaviour:
    Overview of tables in schema

  3. Actual behaviour:
    Table list is not displayed.
    (Screenshot caption, translated: the table list for tableau_amazon_document_db is not displayed.)

  4. Error message/stack trace:
    No error message

  5. Any other details that can be helpful:
    There is no problem on a Mac.


Screenshots

  • If applicable, add screenshots to help explain your problem.
    (Screenshot: Tableau version)

JDBC log

  • Add related JDBC log entries here.
    Tableau Desktop log
    log.txt

[BUG] Inconsistent WEEK behavior in the DocumentDB driver compared to other data sources

Environment

  • DocumentDB JDBC driver version: 1.4.4
  • DocumentDB server version: MongoDB v6.0.3
  • OS: MacOS Ventura 13.1
  • BI Tool or client name: DBeaver
  • BI Tool or client version: 22.3.5

Problem Description

  1. Steps to reproduce: As noted in the SQL limitations documentation, EXTRACT for WEEK and the WEEK() function return an integer between 0 and 53. This is in line with the MongoDB interpretation of week numbers, but other databases more commonly return a value between 1 and 53. While this follows the MongoDB behavior, it would be ideal to also have the option to use isoWeek, which MongoDB also supports, to get the week number in ISO 8601 format (see the sketch after this list).
    The same applies to FLOOR(timestamp TO timeunit) and, generally, anything that involves week numbering.
  2. Expected behaviour: Week number in ISO 8601 format also being available
  3. Actual behaviour: Currently only supports 0-53 in WEEK
  4. Error message/stack trace: N/A
  5. Any other details that can be helpful: This would be desirable since while this does follow the Mongo behavior, the inconsistency with other databases leads to some difficulties in the development of database connectors using this driver, which can be easily resolved with the ISO format being an option. Thanks.
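For reference, a hedged sketch with the MongoDB Java driver showing the two aggregation operators side by side ($week yields 0-53, $isoWeek yields the ISO 8601 week 1-53); the database, collection, and field names are hypothetical:

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import org.bson.Document;

import java.util.Arrays;

public class WeekVersusIsoWeek {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            // Project both week interpretations for the same date field.
            client.getDatabase("sample").getCollection("orders")
                    .aggregate(Arrays.asList(new Document("$project",
                            new Document("week", new Document("$week", "$orderDate"))
                                    .append("isoWeek", new Document("$isoWeek", "$orderDate")))))
                    .forEach(doc -> System.out.println(doc.toJson()));
        }
    }
}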

[BUG] Increment Tableau Connector Version

Environment

  • DocumentDB JDBC driver version: 1.1.0
  • DocumentDB server version: 4.0
  • OS: [e.g. MacOS Big Sur 11.5.1, Windows 10 64-bit] MacOS Monterey
  • BI Tool or client name: Tableau
  • BI Tool or client version: 2021.4.1
  • Java version (if known): N/A

Problem Description

  1. Steps to reproduce: Tableau Connector has an older version
  2. Expected behaviour: version should be plugin-version='1.14.0.0'
  3. Actual behaviour: plugin-version='1.13.0.0'
  4. Error message/stack trace: N/A
  5. Any other details that can be helpful: N/A

Screenshots

N/A


JDBC log

N/A

[BUG] Reverse operators for <, <=, >, >= defined incorrectly

The REVERSE_OPERATORS in DocumentDbRules are defined in a way that returns unexpected results.

For example, for a column val with values [1, 2, 3], select val from int_table where 2 >= val should return [1, 2].
Currently, the driver reverses the operator >= to '$lt', which is equivalent to select val from int_table where val < 2 and returns [1].
The correct reverse operator is '$lte', which returns [1, 2].

This should be a very quick fix; I made the change locally myself and it looks like this: Reverse Operator Change
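For illustration, a minimal sketch of the intended mapping when the literal is on the left-hand side of the comparison; this is not the driver's actual REVERSE_OPERATORS definition, just the expected correspondence:

import java.util.Map;

final class ReverseOperatorSketch {
    // "2 >= val" is equivalent to "val <= 2", so the reversed form of ">=" must be $lte, not $lt.
    static final Map<String, String> REVERSE_OPERATORS = Map.of(
            ">",  "$lt",   // 2 >  val  ->  val <  2
            ">=", "$lte",  // 2 >= val  ->  val <= 2
            "<",  "$gt",   // 2 <  val  ->  val >  2
            "<=", "$gte"   // 2 <= val  ->  val >= 2
    );

    public static void main(String[] args) {
        System.out.println(REVERSE_OPERATORS.get(">=")); // $lte
    }
}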

Switch to Jsch-Fork

Is your feature request related to a problem? Please describe.

you might want to switch to https://github.com/mwiede/jsch

It's maintained and features more secure algorithms.

Describe the solution you'd like

Replace the dependency com.jcraft:jsch with com.github.mwiede:jsch.
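In a Gradle build, the swap would look roughly like this (the mwiede jsch version is a placeholder to be pinned to the latest release):

// before
implementation group: 'com.jcraft', name: 'jsch', version: '0.1.55'
// after
implementation group: 'com.github.mwiede', name: 'jsch', version: '<latest>'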

[BUG] Successful connection in tableau but creation of schema and tables fails.

Environment

  • DocumentDB JDBC driver version: 1.14.0.0
  • DocumentDB server version: docdb 4.0.0
  • OS: Windows 10 64-bit
  • BI Tool or client name: Tableau Desktop
  • BI Tool or client version: 2021.4.2
  • Java version (if known): -

Problem Description

  1. Steps to reproduce:
    (within an EC2 instance in a VPC!)
    Connect to DocumentDB from Tableau using the TACO file.

  2. Expected behaviour:
    Overview of tables in schema

  3. Actual behaviour:
    No overview at all.

  4. Error message/stack trace:
    An error occurred while communicating with the data source
    Bad Connection: Tableau could not connect to the data source.
    Error Code: 81B3934F
    An error occurred while communicating with data source 'paydayService'

  5. Any other details that can be helpful:
    Used the JDBC driver with DBeaver; the connection and the creation of the schema (and tables) were successful.


JDBC log

Walking through the logs gives me no greater insight into what the issue is.
The same goes for the Tableau logs themselves (running in DEBUG mode).

[FEATURE] Implement JDBC `.getTypeInfo()`

This is useful because tools like SchemaCrawler provide a uniform interface for getting schema information from JDBC datasources, but one of the requirements is a minimal implementation of getTypeInfo()

If this isn't too huge of a lift, maybe I can attempt a PR here if someone could give some hints on what the implementation should be?
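For context, a hedged sketch of how a caller such as SchemaCrawler consumes getTypeInfo(); a minimal implementation would need to return a ResultSet with the JDBC-specified columns (TYPE_NAME, DATA_TYPE, PRECISION, NULLABLE, ...) and one row per SQL type the driver supports. The connection string below is a placeholder:

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;

public class TypeInfoUsage {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:documentdb://root:password@localhost:27017/sample?tls=false")) {
            DatabaseMetaData metaData = conn.getMetaData();
            // Currently throws SQLFeatureNotSupportedException; once implemented this would
            // list one row per supported type (e.g. VARCHAR, DOUBLE, TIMESTAMP, ...).
            try (ResultSet typeInfo = metaData.getTypeInfo()) {
                while (typeInfo.next()) {
                    System.out.println(typeInfo.getString("TYPE_NAME")
                            + " -> " + typeInfo.getInt("DATA_TYPE"));
                }
            }
        }
    }
}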

@Test
fun `test SchemaCrawler`() {
    val dataSource = DocumentDbDataSource().apply {
        hostname = "localhost:27017"
        database = "sample"
        user = "root"
    }

    val options = SchemaCrawlerOptionsBuilder.newSchemaCrawlerOptions()
    // Create a default TypeMap
    val retrievalOptions = SchemaRetrievalOptionsBuilder.builder()
        .withTypeMap(TypeMap())
        .toOptions()

    val catalog = SchemaCrawlerUtility.getCatalog(dataSource.connection, retrievalOptions,
        options, schemacrawler.tools.options.Config())

    println(catalog.tables)
}
Caused by: java.sql.SQLFeatureNotSupportedException
	at software.amazon.documentdb.jdbc.DocumentDbDatabaseMetaData.getTypeInfo(DocumentDbDatabaseMetaData.java:650)
	at schemacrawler.crawl.DataTypeRetriever.retrieveSystemColumnDataTypesFromMetadata(DataTypeRetriever.java:179)
	at schemacrawler.crawl.DataTypeRetriever.retrieveSystemColumnDataTypes(DataTypeRetriever.java:84)
	at schemacrawler.crawl.RetrievalStopWatch.lambda$time$0(RetrievalStopWatch.java:105)
	at us.fatehi.utility.StopWatch.time(StopWatch.java:142)
	at schemacrawler.crawl.RetrievalStopWatch.time(RetrievalStopWatch.java:100)
	at schemacrawler.crawl.RetrievalStopWatch.time(RetrievalStopWatch.java:72)
	at schemacrawler.crawl.SchemaCrawler.crawlColumnDataTypes(SchemaCrawler.java:201)
	at schemacrawler.crawl.SchemaCrawler.crawl(SchemaCrawler.java:143)

Enable local persistence of detected schemas

Is your feature request related to a problem? Please describe.

I'd like to be able to save the detected schema locally. The schema information could then be restored later without needing another schema detection.

There are some collections that we work with that are very large and we may need to sample many or all of the documents. Minimizing the number of times that we do schema detection is important.

Describe the solution you'd like

Hooks in the driver for:

  • Persisting the schema of a collection
  • Restoring the persisted schema of a collection (to skip schema auto detection)

Describe alternatives you've considered

None

Additional context

At this point I'm most curious about the feasibility of this feature. Is it possible? How big of a change would this require?

[BUG] Package signature verification failed during connection creation.

Environment

  • DocumentDB JDBC driver version: 1.14.0.0
  • DocumentDB server version: 3.6
  • OS: [e.g. MacOS Big Sur 11.5.1, Windows 10 64-bit]: Windows 10 64-bit
  • BI Tool or client name: Tableau
  • BI Tool or client version: 2021.4.3
  • Java version (if known): unknown

Problem Description

  1. Steps to reproduce:
  • Copy the jdbc driver "1.1.1-all.jar" to C:\Program Files\Tableau\Drivers
  • Copy the .taco "documentdbjdbc-1.1.1.taco" to C:\Users\[user]\Documents\My Tableau Repository\Connectors
  • Open tableau and try to create a new DocumentDb Connection.
  • When clicking on "sign in" the following error occurs:
An error occurred while communicating with Amazon DocumentDB by Amazon Web Services
There was a problem while running the connector plugin. Reinstall the plugin or contact the plugin provider.
Note that you might need to make local configuration changes to resolve the error.
Error Code:14D18B1F
Package signature verification failed during connection creation.
Connector Class: documentdbjdbc, Version: 1.14.0.0
  2. Expected behaviour: the connection is established successfully
  3. Actual behaviour: the connection does not establish
  4. Error message/stack trace:
An error occurred while communicating with Amazon DocumentDB by Amazon Web Services
There was a problem while running the connector plugin. Reinstall the plugin or contact the plugin provider.
Note that you might need to make local configuration changes to resolve the error.
Error Code:14D18B1F
Package signature verification failed during connection creation.
Connector Class: documentdbjdbc, Version: 1.14.0.0

Screenshots

document_db_connection_failure


JDBC log

Where to get it from?

Small refactoring for loading the certificate

Is your feature request related to a problem? Please describe.

A small refactoring improvement is possible when loading the certificate:
#263

Describe the solution you'd like

try (InputStream inputStream = getTlsCAFileInputStream()) {
    if (inputStream != null) {
        final X509Certificate certificate = (X509Certificate) CertificateFactory
                .getInstance("X.509")
                .generateCertificate(inputStream);
        final SSLContext sslContext = SSLFactory.builder()
                .withTrustMaterial(certificate)
                .build()
                .getSslContext();
        builder.context(sslContext);
    }
}

At the following place:

final X509Certificate certificate = (X509Certificate) CertificateFactory

It is a small refactoring, but it can be reduced to the following snippet:

try (InputStream inputStream = getTlsCAFileInputStream()) {
    if (inputStream != null) {
        final SSLContext sslContext = SSLFactory.builder()
                .withTrustMaterial(CertificateUtils.loadCertificate(inputStream))
                .build()
                .getSslContext();
        builder.context(sslContext);
    }
}

Describe alternatives you've considered

None.

Additional context

From @Hakky54 : https://github.com/Hakky54

Value expected to be of type DOCUMENT is of unexpected type NULL

Environment

  • DocumentDB JDBC driver version: 1.14.0.0
  • DocumentDB server version: 3.6
  • OS: [e.g. MacOS Big Sur 11.5.1, Windows 10 64-bit]: Windows 10 64-bit
  • BI Tool or client name: Tableau
  • BI Tool or client version: 2021.4.3
  • Java version (if known): unknown

Problem Description

  1. Steps to reproduce:
  • Copy the jdbc driver "1.1.1-all.jar" to C:\Program Files\Tableau\Drivers
  • Copy the .taco "documentdbjdbc-1.1.1.taco" to C:\Users\[user]\Documents\My Tableau Repository\Connectors
  • Open tableau and try to create a new DocumentDb Connection.
  • When clicking on "sign in" the following error occurs:
An error occurred while communicating with Amazon DocumentDB by Amazon Web Services
Bad Connection: Tableau could not connect to the data source.
Error Code: FAB9A2C5
Value expected to be of type DOCUMENT is of unexpected type NULL
Unable to connect to the Amazon DocumentDB by Amazon Web Services server "indivupad.cluster-REDACTED.us-east-1.docdb.amazonaws.com". Check that the server is running and that you have access privileges to the requested database.
Connector Class: documentdbjdbc, Version: 1.14.0.0
  2. Expected behaviour: the connection is established successfully
  3. Actual behaviour: the connection does not establish
  4. Error message/stack trace:
    see above

Screenshots


JDBC log

2022-02-14 12:12:53.680 +0100 (,,,,10,17) grpc-default-executor-7 : ERROR com.tableau.connect.util.GrpcServiceHelper - Failed in constructProtocol.
shadow.org.bson.BsonInvalidOperationException: Value expected to be of type DOCUMENT is of unexpected type NULL
	at shadow.org.bson.BsonValue.throwIfInvalidType(BsonValue.java:419) ~[?:?]
	at shadow.org.bson.BsonValue.asDocument(BsonValue.java:47) ~[?:?]
	at software.amazon.documentdb.jdbc.metadata.DocumentDbTableSchemaGenerator.processDocumentsInArray(DocumentDbTableSchemaGenerator.java:540) ~[?:?]
	at software.amazon.documentdb.jdbc.metadata.DocumentDbTableSchemaGenerator.processArray(DocumentDbTableSchemaGenerator.java:405) ~[?:?]
	at software.amazon.documentdb.jdbc.metadata.DocumentDbTableSchemaGenerator.processComplexTypes(DocumentDbTableSchemaGenerator.java:562) ~[?:?]
	at software.amazon.documentdb.jdbc.metadata.DocumentDbTableSchemaGenerator.processDocument(DocumentDbTableSchemaGenerator.java:188) ~[?:?]
	at software.amazon.documentdb.jdbc.metadata.DocumentDbTableSchemaGenerator.processDocumentsInArray(DocumentDbTableSchemaGenerator.java:540) ~[?:?]
	at software.amazon.documentdb.jdbc.metadata.DocumentDbTableSchemaGenerator.processArray(DocumentDbTableSchemaGenerator.java:405) ~[?:?]
	at software.amazon.documentdb.jdbc.metadata.DocumentDbTableSchemaGenerator.processComplexTypes(DocumentDbTableSchemaGenerator.java:562) ~[?:?]
	at software.amazon.documentdb.jdbc.metadata.DocumentDbTableSchemaGenerator.processDocument(DocumentDbTableSchemaGenerator.java:188) ~[?:?]
	at software.amazon.documentdb.jdbc.metadata.DocumentDbTableSchemaGenerator.processComplexTypes(DocumentDbTableSchemaGenerator.java:557) ~[?:?]
	at software.amazon.documentdb.jdbc.metadata.DocumentDbTableSchemaGenerator.processDocument(DocumentDbTableSchemaGenerator.java:188) ~[?:?]
	at software.amazon.documentdb.jdbc.metadata.DocumentDbTableSchemaGenerator.processComplexTypes(DocumentDbTableSchemaGenerator.java:557) ~[?:?]
	at software.amazon.documentdb.jdbc.metadata.DocumentDbTableSchemaGenerator.processDocument(DocumentDbTableSchemaGenerator.java:188) ~[?:?]
	at software.amazon.documentdb.jdbc.metadata.DocumentDbTableSchemaGenerator.generate(DocumentDbTableSchemaGenerator.java:80) ~[?:?]
	at software.amazon.documentdb.jdbc.metadata.DocumentDbMetadataService.getCollectionMetadataDirect(DocumentDbMetadataService.java:404) ~[?:?]
	at software.amazon.documentdb.jdbc.metadata.DocumentDbMetadataService.getNewDatabaseMetadata(DocumentDbMetadataService.java:354) ~[?:?]
	at software.amazon.documentdb.jdbc.metadata.DocumentDbMetadataService.get(DocumentDbMetadataService.java:128) ~[?:?]
	at software.amazon.documentdb.jdbc.metadata.DocumentDbDatabaseSchemaMetadata.get(DocumentDbDatabaseSchemaMetadata.java:114) ~[?:?]
	at software.amazon.documentdb.jdbc.DocumentDbConnection.setMetadata(DocumentDbConnection.java:234) ~[?:?]
	at software.amazon.documentdb.jdbc.DocumentDbConnection.ensureDatabaseMetadata(DocumentDbConnection.java:229) ~[?:?]
	at software.amazon.documentdb.jdbc.DocumentDbConnection.getMetaData(DocumentDbConnection.java:208) ~[?:?]
	at com.tableausoftware.jdbc.JDBCProtocolImpl.getDriverName(JDBCProtocolImpl.java:1037) ~[jdbcserver.jar:20214.0.17]
	at com.tableau.connect.service.ProtocolService.constructProtocol(ProtocolService.java:61) ~[jdbcserver.jar:20214.0.17]
	at com.tableau.connect.grpc.GrpcProtocolService.lambda$constructProtocol$0(GrpcProtocolService.java:63) ~[jdbcserver.jar:20214.0.17]
	at com.tableau.connect.grpc.GrpcProtocolService.wrap(GrpcProtocolService.java:289) ~[jdbcserver.jar:20214.0.17]
	at com.tableau.connect.grpc.GrpcProtocolService.constructProtocol(GrpcProtocolService.java:62) ~[jdbcserver.jar:20214.0.17]
	at com.tableau.connect.generated.ProtocolServiceGrpc$MethodHandlers.invoke(ProtocolServiceGrpc.java:1492) ~[jdbcserver.jar:20214.0.17]
	at io.grpc.stub.ServerCalls$UnaryServerCallHandler$UnaryServerCallListener.onHalfClose(ServerCalls.java:180) ~[jdbcserver.jar:20214.0.17]
	at io.grpc.PartialForwardingServerCallListener.onHalfClose(PartialForwardingServerCallListener.java:35) ~[jdbcserver.jar:20214.0.17]
	at io.grpc.ForwardingServerCallListener.onHalfClose(ForwardingServerCallListener.java:23) ~[jdbcserver.jar:20214.0.17]
	at io.grpc.ForwardingServerCallListener$SimpleForwardingServerCallListener.onHalfClose(ForwardingServerCallListener.java:40) ~[jdbcserver.jar:20214.0.17]
	at io.grpc.Contexts$ContextualizedServerCallListener.onHalfClose(Contexts.java:86) ~[jdbcserver.jar:20214.0.17]
	at io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.halfClosed(ServerCallImpl.java:331) ~[jdbcserver.jar:20214.0.17]
	at io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1HalfClosed.runInContext(ServerImpl.java:814) ~[jdbcserver.jar:20214.0.17]
	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37) ~[jdbcserver.jar:20214.0.17]
	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123) ~[jdbcserver.jar:20214.0.17]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) ~[?:?]

[BUG] Tableau connector does not URI encode username and password - special characters in valid password result in error

Environment

  • DocumentDB JDBC driver version: 1.14.0.0
  • DocumentDB server version: docdb 4.0.0
  • OS: Windows 10 64-bit
  • BI Tool or client name: Tableau Desktop
  • BI Tool or client version: 2021.4.2
  • Java version (if known): -

Problem Description

  1. Steps to reproduce:
    (within ec2 instance in VPC!!)
    Download jdbc and taco.
    Place files in designated folders.
    Open up tableau.
    Fill in wizard.
    Try to connect.

  2. Expected behaviour:
    A successful connection with documentDB

  3. Actual behaviour:
    No successful connection with documentDB

  4. Error message/stack trace:
    Invalid username or password.
    Error Code: 28911C50
    Invalid username or password or user is not authorized on database 'paydayService'. Please check your settings. Authorization failed for user 'masterUser' on database 'admin' with mechanism 'SCRAM-SHA-1'. Please check username, password, database and authorization for the user.
    Invalid username or password.
    Connector Class: documentdbjdbc, Version: 1.14.0.0

  5. Any other details that can be helpful:
    Tried to connect using Studio 3T; that was successful. (A sketch of URI encoding the credentials follows below.)
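As an illustration of what URI encoding of the credentials means (a hedged Java sketch for building the JDBC connection string by hand, not a fix for the Tableau connector itself; the user, password, and host are placeholders):

import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class EncodedConnectionString {
    public static void main(String[] args) {
        String user = "masterUser";
        String password = "p@ss:word#1"; // special characters must not appear raw in the URI

        // URLEncoder performs form-style encoding, which is adequate for typical
        // special characters in the userinfo part of the connection string.
        String encodedUser = URLEncoder.encode(user, StandardCharsets.UTF_8);
        String encodedPassword = URLEncoder.encode(password, StandardCharsets.UTF_8);

        String url = "jdbc:documentdb://" + encodedUser + ":" + encodedPassword
                + "@example-cluster.node.us-east-1.docdb.amazonaws.com:27017/paydayService"
                + "?tls=true&tlsAllowInvalidHostnames=true";
        System.out.println(url);
    }
}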


Screenshots

  • If applicable, add screenshots to help explain your problem.

JDBC log

  • Add related JDBC log entries here.
    "e-code":"0x28911C50","e-source":"Client","e-status-code":"16","error-source":"Client","exception-type":"BadPassword","grpc-status-code":"16","msg":""Invalid username or password or user is not authorized on database 'paydayService'. Please check your settings. Authorization failed for user 'masterUser' on database 'admin' with mechanism 'SCRAM-SHA-1'. Please check username, password, database and authorization for the user.\n"","type":"ConnectivityException"},"sponsor":"Olh5xmO/UJAIAuF7eEz/RE","type":"end"}
