
embulk-output-jdbc's People

Contributors

civitaspo, dlackty, dmikurube, emanon-was, frsyuki, hey-jude, hieudion, hiroyuki-sato, hito4t, joe-td, joker1007, kakoni, kieaiaarh, kitsuyui, llibra, mikoto2000, naka-sho, nnc-o-ishikawa, rajyan, t3t5u, takumakanari, tomykaira, toyama0919, uu59, vietnguyen-td, y-ken, yahonda, ynishi

embulk-output-jdbc's Issues

Couldn't create temporary table correctly if the table name is too long

I ran into this problem with PostgreSQL.
If the table name is longer than 23 bytes, the temporary table name is truncated.

I have two ideas to solve this problem:

  1. Move OracleOutputPlugin#generateSwapTableName to AbstractOutputPlugin and use it in PostgreSQLOutputPlugin.
     (But there is a problem: this method appears to measure length in characters, while PostgreSQL should measure it in bytes, so a multi-byte table name may still cause trouble.)
  2. Create a unique, fixed-length temporary table name that does not include the original table name.

What do you think about it?

embulk-output-postgresql: document merge_keys

When using mode: merge, an error is displayed asking the user to set merge_keys in the configuration file.

Although it is easy to guess what it does, this parameter is not documented and may confuse users.
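
For illustration, a minimal configuration sketch (assuming merge_keys takes a list of key column names, as the error message suggests; id is a placeholder key column):

out:
  type: postgresql
  # ...
  mode: merge
  merge_keys: [id]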

embulk-output-postgresql: Support JSON/JSONB output columns

When extracting data from a source Postgres database that uses the JSON data types, this output plugin cannot read JSON values and write them to JSON or JSONB columns in recent versions of Postgres. It would be great if this plugin supported that.

Timezone handling is wrong

The JDBC documentation says that PreparedStatement.setTimestamp uses the JVM's default time zone if no Calendar argument is given. The time zone should instead be configurable via the timezone parameter of column_options or via default_timezone.
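
For reference, a sketch of the configuration the report refers to, assuming default_timezone and the per-column timezone option behave as described (created_at is a placeholder column name):

out:
  type: postgresql
  # ...
  default_timezone: "Asia/Tokyo"
  column_options:
    created_at: {timezone: "UTC"}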

postgresql driver does not work properly with loglevel without quotes

In the README example:

out:
  type: postgresql
  # ...
  options: {loglevel: 2}

I got the following error.

Error: org.postgresql.util.PSQLException: Properties for the driver contains a non-string value for the key loglevel

With quotes, it works fine.

out:
  type: postgresql
  # ...
  options: {loglevel: "2"}

Should I update the document?

Redshift - Unable to unmarshall error response (null)

I get the stack trace below after an import. The CSV file is copied to my S3 bucket and the Redshift temp table is created and dropped properly, but no data ends up in the final table. I'm using mode: insert.

If I run the COPY FROM command manually, it succeeds.

I don't know if this WARN is related:

2015-09-03 05:41:28.752 -0700 [WARN] (task-0000): An output plugin is compiled with old Embulk plugin API. Please update the plugin version using "embulk gem install" command, or contact a developer of the plugin to upgrade the plugin code using "embulk migrate" command: class org.embulk.output.jdbc.AbstractJdbcOutputPlugin$PluginPageOutput
org.embulk.exec.PartialExecutionException: com.amazonaws.AmazonClientException: Unable to unmarshall error response (null). Response Code: 403, Response Text: Forbidden
    at org.embulk.exec.BulkLoader$LoaderState.buildPartialExecuteException(org/embulk/exec/BulkLoader.java:328)
    at org.embulk.exec.BulkLoader.doRun(org/embulk/exec/BulkLoader.java:526)
    at org.embulk.exec.BulkLoader.access$100(org/embulk/exec/BulkLoader.java:33)
    at org.embulk.exec.BulkLoader$1.run(org/embulk/exec/BulkLoader.java:339)
    at org.embulk.exec.BulkLoader$1.run(org/embulk/exec/BulkLoader.java:335)
    at org.embulk.spi.Exec.doWith(org/embulk/spi/Exec.java:25)
    at org.embulk.exec.BulkLoader.run(org/embulk/exec/BulkLoader.java:335)
    at org.embulk.EmbulkEmbed.run(org/embulk/EmbulkEmbed.java:179)
    at java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:606)
    at RUBY.run(/home/dude/.embulk/bin/embulk!/embulk/runner.rb:77)
    at RUBY.run(/home/dude/.embulk/bin/embulk!/embulk/command/embulk_run.rb:278)
    at RUBY.<top>(/home/dude/.embulk/bin/embulk!/embulk/command/embulk_main.rb:2)
    at org.jruby.RubyKernel.require(org/jruby/RubyKernel.java:940)
    at RUBY.(root)(uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/rubygems/core_ext/kernel_require.rb:1)
    at home.dude.$_dot_embulk.bin.embulk.embulk.command.embulk_bundle.<top>(file:/home/dude/.embulk/bin/embulk!/embulk/command/embulk_bundle.rb:55)
    at java.lang.invoke.MethodHandle.invokeWithArguments(java/lang/invoke/MethodHandle.java:599)
    at org.embulk.cli.Main.main(org/embulk/cli/Main.java:23)
Caused by: com.amazonaws.AmazonClientException: Unable to unmarshall error response (null). Response Code: 403, Response Text: Forbidden
    at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1071)
    at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:725)
    at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:460)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:295)
    at com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.invoke(AWSSecurityTokenServiceClient.java:1091)
    at com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.getFederationToken(AWSSecurityTokenServiceClient.java:829)
    at org.embulk.output.redshift.RedshiftCopyBatchInsert.generateReaderSessionCredentials(RedshiftCopyBatchInsert.java:135)
    at org.embulk.output.redshift.RedshiftCopyBatchInsert.access$400(RedshiftCopyBatchInsert.java:30)
    at org.embulk.output.redshift.RedshiftCopyBatchInsert$UploadAndCopyTask.call(RedshiftCopyBatchInsert.java:169)
    at org.embulk.output.redshift.RedshiftCopyBatchInsert.flush(RedshiftCopyBatchInsert.java:90)
    at org.embulk.output.postgresql.AbstractPostgreSQLCopyBatchInsert.finish(AbstractPostgreSQLCopyBatchInsert.java:81)
    at org.embulk.output.redshift.RedshiftCopyBatchInsert.finish(RedshiftCopyBatchInsert.java:104)
    at org.embulk.output.jdbc.AbstractJdbcOutputPlugin$PluginPageOutput.finish(AbstractJdbcOutputPlugin.java:843)
    at org.embulk.plugin.compat.TransactionalPageOutputWrapper.finish(TransactionalPageOutputWrapper.java:62)
    at org.embulk.spi.PageBuilder.finish(PageBuilder.java:223)
    at org.embulk.standards.CsvParserPlugin.run(CsvParserPlugin.java:376)
    at org.embulk.spi.FileInputRunner.run(FileInputRunner.java:147)
    at org.embulk.spi.util.Executors.process(Executors.java:61)
    at org.embulk.spi.util.Executors.process(Executors.java:40)
    at org.embulk.exec.LocalExecutorPlugin$2.call(LocalExecutorPlugin.java:104)
    at org.embulk.exec.LocalExecutorPlugin$2.call(LocalExecutorPlugin.java:100)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
    at javax.xml.xpath.XPathFactoryFinder._newFactory(XPathFactoryFinder.java:220)
    at javax.xml.xpath.XPathFactoryFinder.newFactory(XPathFactoryFinder.java:141)
    at javax.xml.xpath.XPathFactory.newInstance(XPathFactory.java:182)
    at javax.xml.xpath.XPathFactory.newInstance(XPathFactory.java:96)
    at com.amazonaws.util.XpathUtils.xpath(XpathUtils.java:114)
    at com.amazonaws.util.XpathUtils.asString(XpathUtils.java:197)
    at com.amazonaws.transform.StandardErrorUnmarshaller.parseErrorCode(StandardErrorUnmarshaller.java:93)
    at com.amazonaws.services.securitytoken.model.transform.ExpiredTokenExceptionUnmarshaller.unmarshall(ExpiredTokenExceptionUnmarshaller.java:34)
    at com.amazonaws.services.securitytoken.model.transform.ExpiredTokenExceptionUnmarshaller.unmarshall(ExpiredTokenExceptionUnmarshaller.java:25)
    at com.amazonaws.http.DefaultErrorResponseHandler.handle(DefaultErrorResponseHandler.java:95)
    at com.amazonaws.http.DefaultErrorResponseHandler.handle(DefaultErrorResponseHandler.java:40)
    at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1045)
    ... 24 more

Error: com.amazonaws.AmazonClientException: Unable to unmarshall error response (null). Response Code: 403, Response Text: Forbidden

Allow choosing NULL or an error when a value is invalid

For example:

public class IntColumnSetter
        extends ColumnSetter
{
    @Override
    protected void longValue(long v) throws IOException, SQLException
    {
        if (v > Integer.MAX_VALUE || v < Integer.MIN_VALUE) {
            // the value does not fit in INT: either raise an error or store NULL,
            // depending on configuration
            if (errorWhenInvalidValue) {
                throw new XXXException(xxx);
            } else {
                nullValue();
            }
        } else {
            batch.setInt((int) v);
        }
    }
}
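
A hypothetical configuration for such an option might look like the following; the error_when_invalid_value name and its placement under column_options are illustrative only and do not exist in the plugin today (int_col is a placeholder column name):

out:
  type: postgresql
  # ...
  column_options:
    int_col: {error_when_invalid_value: true}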

`gradlew gem` fails on Windows

gradlew gem fails on Windows with the following error.

Invalid gemspec in [xxx\embulk-output-jdbc\embulk-output-mysql\build\gemspec]: No such file or directory - git
ERROR:  Error loading gemspec. Aborting.

It works in Cygwin.

Error when loading data into Oracle if the target table has a column whose type includes "()"

Hi,
I encountered a problem when loading data into Oracle where the target table has a column whose type includes "()".

Table test:

SQL> desc sync.test1
 Name     Null?    Type
 -------- -------- --------------
 COL               VARCHAR2(32)

SQL>

If I execute embulk run, it raises the error below:

[root@localhost embulk-master]# embulk run a.yml
2015-08-24 10:43:57.549 +0800: Embulk v0.7.2
2015-08-24 10:43:59.720 +0800 INFO: Loaded plugin embulk-input-oracle (0.6.0)
2015-08-24 10:43:59.755 +0800 INFO: Loaded plugin embulk-output-oracle (0.4.1)
2015-08-24 10:44:00.407 +0800 INFO: Connecting to jdbc:oracle:thin:@10.89.13.57:1521:flume options {user=flume}
2015-08-24 10:44:00.694 +0800 INFO: Using insert mode
2015-08-24 10:44:00.758 +0800 INFO: SQL: CREATE TABLE "TEST1_da84ef265366c0_bl_tmp000" ("COL" VARCHAR2)
2015-08-24 10:44:00.765 +0800 ERROR: Operation failed (906:42000)
java.lang.RuntimeException: java.sql.SQLSyntaxErrorException: ORA-00906: missing left parenthesis

    at org.embulk.output.jdbc.AbstractJdbcOutputPlugin.begin(org/embulk/output/jdbc/AbstractJdbcOutputPlugin.java:330)
    at org.embulk.output.jdbc.AbstractJdbcOutputPlugin.transaction(org/embulk/output/jdbc/AbstractJdbcOutputPlugin.java:287)
    at org.embulk.exec.BulkLoader$4$1$1.transaction(org/embulk/exec/BulkLoader.java:490)
    at org.embulk.exec.LocalExecutorPlugin.transaction(org/embulk/exec/LocalExecutorPlugin.java:36)
    at org.embulk.exec.BulkLoader$4$1.run(org/embulk/exec/BulkLoader.java:486)
    at org.embulk.spi.util.Filters$RecursiveControl.transaction(org/embulk/spi/util/Filters.java:96)
    at org.embulk.spi.util.Filters.transaction(org/embulk/spi/util/Filters.java:49)
    at org.embulk.exec.BulkLoader$4.run(org/embulk/exec/BulkLoader.java:481)
    at org.embulk.input.jdbc.AbstractJdbcInputPlugin.transaction(org/embulk/input/jdbc/AbstractJdbcInputPlugin.java:147)
    at org.embulk.plugin.compat.InputPluginWrapper.transaction(org/embulk/plugin/compat/InputPluginWrapper.java:57)
    at org.embulk.exec.BulkLoader.doRun(org/embulk/exec/BulkLoader.java:477)
    at org.embulk.exec.BulkLoader.access$100(org/embulk/exec/BulkLoader.java:33)
    at org.embulk.exec.BulkLoader$1.run(org/embulk/exec/BulkLoader.java:339)
    at org.embulk.exec.BulkLoader$1.run(org/embulk/exec/BulkLoader.java:335)
    at org.embulk.spi.Exec.doWith(org/embulk/spi/Exec.java:25)
    at org.embulk.exec.BulkLoader.run(org/embulk/exec/BulkLoader.java:335)
    at org.embulk.EmbulkEmbed.run(org/embulk/EmbulkEmbed.java:179)
    at java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:606)
    at RUBY.run(/root/.embulk/bin/embulk!/embulk/runner.rb:77)
    at RUBY.run(/root/.embulk/bin/embulk!/embulk/command/embulk_run.rb:274)
    at RUBY.<top>(/root/.embulk/bin/embulk!/embulk/command/embulk_main.rb:2)
    at org.jruby.RubyKernel.require(org/jruby/RubyKernel.java:940)
    at RUBY.(root)(uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/rubygems/core_ext/kernel_require.rb:1)
    at root.$_dot_embulk.bin.embulk.embulk.command.embulk_bundle.<top>(file:/root/.embulk/bin/embulk!/embulk/command/embulk_bundle.rb:55)
    at java.lang.invoke.MethodHandle.invokeWithArguments(java/lang/invoke/MethodHandle.java:599)
    at org.embulk.cli.Main.main(org/embulk/cli/Main.java:20)

Caused by: java.sql.SQLSyntaxErrorException: ORA-00906: missing left parenthesis

    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:447)
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
    at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:951)
    at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:513)
    at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:227)
    at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:531)
    at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:195)
    at oracle.jdbc.driver.T4CStatement.executeForRows(T4CStatement.java:1036)
    at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1336)
    at oracle.jdbc.driver.OracleStatement.executeUpdateInternal(OracleStatement.java:1845)
    at oracle.jdbc.driver.OracleStatement.executeUpdate(OracleStatement.java:1810)
    at oracle.jdbc.driver.OracleStatementWrapper.executeUpdate(OracleStatementWrapper.java:294)
    at org.embulk.output.jdbc.JdbcOutputConnection.executeUpdate(JdbcOutputConnection.java:460)
    at org.embulk.output.oracle.OracleOutputConnection.createTable(OracleOutputConnection.java:86)
    at org.embulk.output.oracle.OracleOutputConnection.createTableIfNotExists(OracleOutputConnection.java:77)
    at org.embulk.output.jdbc.AbstractJdbcOutputPlugin.doBegin(AbstractJdbcOutputPlugin.java:427)
    at org.embulk.output.jdbc.AbstractJdbcOutputPlugin$1.run(AbstractJdbcOutputPlugin.java:323)
    at org.embulk.output.jdbc.AbstractJdbcOutputPlugin$8.call(AbstractJdbcOutputPlugin.java:901)
    at org.embulk.output.jdbc.AbstractJdbcOutputPlugin$8.call(AbstractJdbcOutputPlugin.java:898)
    at org.embulk.spi.util.RetryExecutor.run(RetryExecutor.java:100)
    at org.embulk.spi.util.RetryExecutor.runInterruptible(RetryExecutor.java:77)
    at org.embulk.output.jdbc.AbstractJdbcOutputPlugin.withRetry(AbstractJdbcOutputPlugin.java:894)
    at org.embulk.output.jdbc.AbstractJdbcOutputPlugin.withRetry(AbstractJdbcOutputPlugin.java:887)
    at org.embulk.output.jdbc.AbstractJdbcOutputPlugin.begin(AbstractJdbcOutputPlugin.java:318)
    at org.embulk.output.jdbc.AbstractJdbcOutputPlugin.transaction(AbstractJdbcOutputPlugin.java:287)
    at org.embulk.exec.BulkLoader$4$1$1.transaction(BulkLoader.java:490)
    at org.embulk.exec.LocalExecutorPlugin.transaction(LocalExecutorPlugin.java:36)
    at org.embulk.exec.BulkLoader$4$1.run(BulkLoader.java:486)
    at org.embulk.spi.util.Filters$RecursiveControl.transaction(Filters.java:96)
    at org.embulk.spi.util.Filters.transaction(Filters.java:49)
    at org.embulk.exec.BulkLoader$4.run(BulkLoader.java:481)
    at org.embulk.input.jdbc.AbstractJdbcInputPlugin.transaction(AbstractJdbcInputPlugin.java:147)
    at org.embulk.plugin.compat.InputPluginWrapper.transaction(InputPluginWrapper.java:57)
    at org.embulk.exec.BulkLoader.doRun(BulkLoader.java:477)
    at org.embulk.exec.BulkLoader.access$100(BulkLoader.java:33)
    at org.embulk.exec.BulkLoader$1.run(BulkLoader.java:339)
    at org.embulk.exec.BulkLoader$1.run(BulkLoader.java:335)
    at org.embulk.spi.Exec.doWith(Exec.java:25)
    at org.embulk.exec.BulkLoader.run(BulkLoader.java:335)
    at org.embulk.EmbulkEmbed.run(EmbulkEmbed.java:179)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(JavaMethod.java:457)
    at org.jruby.javasupport.JavaMethod.invokeDirect(JavaMethod.java:318)
    at org.jruby.java.invokers.InstanceMethodInvoker.call(InstanceMethodInvoker.java:45)
    at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:313)
    at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:163)
    at org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:289)
    at org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:77)
    at org.jruby.internal.runtime.methods.MixedModeIRMethod.INTERPRET_METHOD(MixedModeIRMethod.java:128)
    at org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:114)
    at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:273)
    at org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:79)
    at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:83)
    at org.jruby.ir.instructions.CallBase.interpret(CallBase.java:419)
    at org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:321)
    at org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:77)
    at org.jruby.ir.interpreter.InterpreterEngine.interpret(InterpreterEngine.java:82)
    at org.jruby.internal.runtime.methods.MixedModeIRMethod.INTERPRET_METHOD(MixedModeIRMethod.java:198)
    at org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:184)
    at org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:201)
    at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:313)
    at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:163)
    at org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:289)
    at org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:77)
    at org.jruby.ir.interpreter.Interpreter.INTERPRET_ROOT(Interpreter.java:116)
    at org.jruby.ir.interpreter.Interpreter.execute(Interpreter.java:103)
    at org.jruby.ir.interpreter.Interpreter.execute(Interpreter.java:32)
    at org.jruby.ir.IRTranslator.execute(IRTranslator.java:42)
    at org.jruby.Ruby.runInterpreter(Ruby.java:837)
    at org.jruby.Ruby.loadFile(Ruby.java:2901)
    at org.jruby.runtime.load.LibrarySearcher$ResourceLibrary.load(LibrarySearcher.java:245)
    at org.jruby.runtime.load.LibrarySearcher$FoundLibrary.load(LibrarySearcher.java:35)
    at org.jruby.runtime.load.LoadService.tryLoadingLibraryOrScript(LoadService.java:895)
    at org.jruby.runtime.load.LoadService.smartLoadInternal(LoadService.java:540)
    at org.jruby.runtime.load.LoadService.requireCommon(LoadService.java:425)
    at org.jruby.runtime.load.LoadService.require(LoadService.java:391)
    at org.jruby.RubyKernel.requireCommon(RubyKernel.java:947)
    at org.jruby.RubyKernel.require19(RubyKernel.java:940)
    at org.jruby.RubyKernel$INVOKER$s$1$0$require19.call(RubyKernel$INVOKER$s$1$0$require19.gen)
    at org.jruby.internal.runtime.methods.JavaMethod$JavaMethodOneOrNBlock.call(JavaMethod.java:364)
    at org.jruby.internal.runtime.methods.AliasMethod.call(AliasMethod.java:61)
    at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:313)
    at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:163)
    at org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:289)
    at org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:77)
    at org.jruby.ir.interpreter.InterpreterEngine.interpret(InterpreterEngine.java:82)
    at org.jruby.internal.runtime.methods.MixedModeIRMethod.INTERPRET_METHOD(MixedModeIRMethod.java:198)
    at org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:184)
    at org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:201)
    at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:313)
    at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:163)
    at root.$_dot_embulk.bin.embulk.embulk.command.embulk_bundle.invokeOther74:require(file:/root/.embulk/bin/embulk!/embulk/command/embulk_bundle.rb)
    at root.$_dot_embulk.bin.embulk.embulk.command.embulk_bundle.RUBY$script(file:/root/.embulk/bin/embulk!/embulk/command/embulk_bundle.rb:55)
    at java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:599)
    at org.jruby.ir.Compiler$1.load(Compiler.java:111)
    at org.jruby.Ruby.runScript(Ruby.java:821)
    at org.jruby.Ruby.runScript(Ruby.java:813)
    at org.jruby.Ruby.runNormally(Ruby.java:751)
    at org.jruby.Ruby.runFromMain(Ruby.java:573)
    at org.jruby.Main.doRunFromMain(Main.java:403)
    at org.jruby.Main.internalRun(Main.java:298)
    at org.jruby.Main.run(Main.java:225)
    at org.jruby.Main.main(Main.java:197)
    at org.embulk.cli.Main.main(Main.java:20)

Error: java.sql.SQLSyntaxErrorException: ORA-00906: missing left parenthesis

According to the error, it failed to execute CREATE TABLE "TEST1_da84ef265366c0_bl_tmp000" ("COL" VARCHAR2), which lacks the "(32)" length. How can I modify the .yml file to handle this problem?
Thanks for your reply.
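
If the column_options setting supports overriding the created column type (as documented for embulk-output-jdbc), a sketch like the following might work around the missing length; COL is the column from the example above:

out:
  type: oracle
  # ...
  column_options:
    COL: {type: "VARCHAR2(32)"}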

java.lang.ClassCastException when loading data to Oracle

Hi,
I am using Embulk to transfer data from Oracle to Oracle. When I run the embulk run command, it raises the error below:
java.lang.ClassCastException: org.jruby.util.JRubyClassLoader cannot be cast to org.embulk.plugin.PluginClassLoader
at org.embulk.output.jdbc.AbstractJdbcOutputPlugin.loadDriverJar(org/embulk/output/jdbc/AbstractJdbcOutputPlugin.java:157)
at org.embulk.output.JdbcOutputPlugin.getConnector(org/embulk/output/JdbcOutputPlugin.java:74)
at org.embulk.output.JdbcOutputPlugin.getConnector(org/embulk/output/JdbcOutputPlugin.java:21)
at org.embulk.output.jdbc.AbstractJdbcOutputPlugin.newConnection(org/embulk/output/jdbc/AbstractJdbcOutputPlugin.java:179)
at org.embulk.output.jdbc.AbstractJdbcOutputPlugin$1.run(org/embulk/output/jdbc/AbstractJdbcOutputPlugin.java:321)
at org.embulk.output.jdbc.AbstractJdbcOutputPlugin$8.call(org/embulk/output/jdbc/AbstractJdbcOutputPlugin.java:901)
at org.embulk.output.jdbc.AbstractJdbcOutputPlugin$8.call(org/embulk/output/jdbc/AbstractJdbcOutputPlugin.java:898)
at org.embulk.spi.util.RetryExecutor.run(org/embulk/spi/util/RetryExecutor.java:100)
at org.embulk.spi.util.RetryExecutor.runInterruptible(org/embulk/spi/util/RetryExecutor.java:77)
at org.embulk.output.jdbc.AbstractJdbcOutputPlugin.withRetry(org/embulk/output/jdbc/AbstractJdbcOutputPlugin.java:894)
at org.embulk.output.jdbc.AbstractJdbcOutputPlugin.withRetry(org/embulk/output/jdbc/AbstractJdbcOutputPlugin.java:887)
at org.embulk.output.jdbc.AbstractJdbcOutputPlugin.begin(org/embulk/output/jdbc/AbstractJdbcOutputPlugin.java:318)
at org.embulk.output.jdbc.AbstractJdbcOutputPlugin.transaction(org/embulk/output/jdbc/AbstractJdbcOutputPlugin.java:287)
at org.embulk.exec.BulkLoader$4$1$1.transaction(org/embulk/exec/BulkLoader.java:493)
at org.embulk.exec.LocalExecutorPlugin.transaction(org/embulk/exec/LocalExecutorPlugin.java:37)
at org.embulk.exec.BulkLoader$4$1.run(org/embulk/exec/BulkLoader.java:489)
at org.embulk.spi.util.Filters$RecursiveControl.transaction(org/embulk/spi/util/Filters.java:97)
at org.embulk.spi.util.Filters.transaction(org/embulk/spi/util/Filters.java:50)
at org.embulk.exec.BulkLoader$4.run(org/embulk/exec/BulkLoader.java:484)
at org.embulk.input.jdbc.AbstractJdbcInputPlugin.transaction(org/embulk/input/jdbc/AbstractJdbcInputPlugin.java:147)
at org.embulk.exec.BulkLoader.doRun(org/embulk/exec/BulkLoader.java:480)
at org.embulk.exec.BulkLoader.access$100(org/embulk/exec/BulkLoader.java:36)
at org.embulk.exec.BulkLoader$1.run(org/embulk/exec/BulkLoader.java:342)
at org.embulk.exec.BulkLoader$1.run(org/embulk/exec/BulkLoader.java:338)
at org.embulk.spi.Exec.doWith(org/embulk/spi/Exec.java:25)
at org.embulk.exec.BulkLoader.run(org/embulk/exec/BulkLoader.java:338)
at org.embulk.command.Runner.run(org/embulk/command/Runner.java:149)
at org.embulk.command.Runner.main(org/embulk/command/Runner.java:101)
at java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:606)
at RUBY.run(/opt/embulk-master/lib/embulk/command/embulk_run.rb:351)
at opt.embulk_minus_master.lib.embulk.command.embulk.(root)(/opt/embulk-master/lib/embulk/command/embulk.rb:47)
at opt.embulk_minus_master.lib.embulk.command.embulk.(root)(opt/embulk_minus_master/lib/embulk/command//opt/embulk-master/lib/embulk/command/embulk.rb:47)

Error: org.jruby.util.JRubyClassLoader cannot be cast to org.embulk.plugin.PluginClassLoader

Now I don't know how to handle this error; I need your help.

Postgres merge mode fails

Merge mode tries to create the temporary table with VARCHAR(2147483647).

This fails because the size exceeds the maximum length for a PostgreSQL varchar field, 10 * 1024 * 1024 (10485760):

org.embulk.exec.PartialExecutionException: java.lang.RuntimeException: org.postgresql.util.PSQLException: ERROR: length for type varchar cannot exceed 10485760
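
As a possible workaround, assuming column_options can override the created column type as in other embulk-output-jdbc plugins (long_text_col is a placeholder column name), the oversized VARCHAR could be replaced with TEXT:

out:
  type: postgresql
  mode: merge
  # ...
  column_options:
    long_text_col: {type: "TEXT"}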

Connection timed out (output-sqlserver; 0.5.1)

Error: java.lang.RuntimeException: com.microsoft.sqlserver.jdbc.SQLServerException: The connection to the host ss2016, named instance mssqlserver failed. Error: "java.net.SocketTimeoutException: Receive timed out". Verify the server and instance names and check that no firewall is blocking UDP traffic to port 1434. For SQL Server 2005 or later, verify that the SQL Server Browser Service is running on the host.

I don't open UDP/1434, and I don't run the SQL Server Browser service either.
If I specify the port for the SQL Server engine, the Browser service should not be required, should it?
I would appreciate being able to keep using the plugin in my existing environment, as was possible with 0.5.0.

... I am not good at English.
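
For reference, a sketch of connecting by explicit port instead of by named instance, assuming the plugin accepts the usual host/port/database/user/password options of the embulk-output-jdbc family (the database, user, and password values are placeholders); with a fixed port, the UDP/1434 Browser lookup should not be needed:

out:
  type: sqlserver
  host: ss2016
  port: 1433
  database: mydb
  user: myuser
  password: ""
  # ...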

Redshift error

Hi,

I am doing a bulk upload into Redshift. By the looks of it, the data is getting copied into S3 from a simple SELECT * statement in MySQL.

However, I am getting a PostgreSQL error when uploading into Redshift.

java.lang.RuntimeException: org.postgresql.util.PSQLException: ERROR: syntax error at or near ")"
  Position: 22

 Position: 22
        at org.embulk.output.jdbc.AbstractJdbcOutputPlugin$PluginPageOutput.finish(AbstractJdbcOutputPlugin.java:910)
        at org.embulk.exec.LocalExecutorPlugin$ScatterTransactionalPageOutput.finish(LocalExecutorPlugin.java:496)
        at org.embulk.spi.PageBuilder.finish(PageBuilder.java:244)
        at org.embulk.input.jdbc.AbstractJdbcInputPlugin.run(AbstractJdbcInputPlugin.java:279)
        at org.embulk.exec.LocalExecutorPlugin$ScatterExecutor.runInputTask(LocalExecutorPlugin.java:294)
        at org.embulk.exec.LocalExecutorPlugin$ScatterExecutor.access$000(LocalExecutorPlugin.java:212)
        at org.embulk.exec.LocalExecutorPlugin$ScatterExecutor$1.call(LocalExecutorPlugin.java:257)
        at org.embulk.exec.LocalExecutorPlugin$ScatterExecutor$1.call(LocalExecutorPlugin.java:253)
        at java.util.concurrent.FutureTask.run(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.lang.Thread.run(Unknown Source)
Caused by: org.postgresql.util.PSQLException: ERROR: syntax error at or near ")"
  Position: 22
        at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2182)
        at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1911)
        at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:173)
        at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:645)
        at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:481)
        at org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:409)
        at org.embulk.output.redshift.RedshiftOutputConnection.runCopy(RedshiftOutputConnection.java:117)
        at org.embulk.output.redshift.RedshiftCopyBatchInsert$CopyTask.call(RedshiftCopyBatchInsert.java:242)
        at org.embulk.output.redshift.RedshiftCopyBatchInsert$CopyTask.call(RedshiftCopyBatchInsert.java:218)

Columns not coming through for embulk-output-postgresql

I am trying to import data using embulk-input-sqlserver, and many of the columns aren't coming over.

Verbose log below:

 embulk -b ./embulk_bundle run config3.yml
2015-10-03 16:43:14.912 -0400: Embulk v0.7.5
2015-10-03 16:43:15.767 -0400 [INFO] (transaction): Loaded plugin embulk-input-sqlserver (0.6.0)
2015-10-03 16:43:15.784 -0400 [INFO] (transaction): Loaded plugin embulk-output-postgresql (0.4.1)
2015-10-03 16:43:16.221 -0400 [INFO] (transaction): Connecting to jdbc:postgresql://localhost:5432/relevant_staging options {user=postgres, loglevel=3, tcpKeepAlive=true, loginTimeout=300, socketTimeout=1800}
16:43:16.232 (1) PostgreSQL 9.4 JDBC4.1 (build 1200)
16:43:16.236 (1) Trying to establish a protocol version 3 connection to localhost:5432
16:43:16.240 (1) Receive Buffer Size is 408300
16:43:16.240 (1) Send Buffer Size is 146988
16:43:16.241 (1)  FE=> StartupPacket(user=postgres, database=relevant_staging, client_encoding=UTF8, DateStyle=ISO, TimeZone=America/New_York, extra_float_digits=2)
16:43:16.242 (1)  <=BE AuthenticationReqMD5(salt=6cca4dcc)
16:43:16.243 (1)  FE=> Password(md5digest=md5111ed7aaee2a0ca96d689d2224abeac2)
16:43:16.328 (1)  <=BE AuthenticationOk
16:43:16.331 (1)  <=BE ParameterStatus(application_name = )
16:43:16.332 (1)  <=BE ParameterStatus(client_encoding = UTF8)
16:43:16.332 (1)  <=BE ParameterStatus(DateStyle = ISO, MDY)
16:43:16.332 (1)  <=BE ParameterStatus(integer_datetimes = on)
16:43:16.332 (1)  <=BE ParameterStatus(IntervalStyle = postgres)
16:43:16.332 (1)  <=BE ParameterStatus(is_superuser = on)
16:43:16.332 (1)  <=BE ParameterStatus(server_encoding = UTF8)
16:43:16.332 (1)  <=BE ParameterStatus(server_version = 9.1.14)
16:43:16.332 (1)  <=BE ParameterStatus(session_authorization = postgres)
16:43:16.332 (1)  <=BE ParameterStatus(standard_conforming_strings = on)
16:43:16.332 (1)  <=BE ParameterStatus(TimeZone = America/New_York)
16:43:16.332 (1)  <=BE BackendKeyData(pid=62855,ckey=1244144535)
16:43:16.332 (1)  <=BE ReadyForQuery(I)
16:43:16.334 (1) simple execute, handler=org.postgresql.core.SetupQueryRunner$SimpleResultHandler@2b8e497f, maxRows=0, fetchSize=0, flags=23
16:43:16.334 (1)  FE=> Parse(stmt=null,query="SET extra_float_digits = 3",oids={})
16:43:16.334 (1)  FE=> Bind(stmt=null,portal=null)
16:43:16.334 (1)  FE=> Execute(portal=null,limit=1)
16:43:16.334 (1)  FE=> Sync
16:43:16.335 (1)  <=BE ParseComplete [null]
16:43:16.335 (1)  <=BE BindComplete [null]
16:43:16.335 (1)  <=BE CommandStatus(SET)
16:43:16.335 (1)  <=BE ReadyForQuery(I)
16:43:16.336 (1)     compatible = 90400
16:43:16.336 (1)     loglevel = 3
16:43:16.336 (1)     prepare threshold = 5
16:43:16.337 (1)     types using binary send = TIMESTAMPTZ,UUID,INT2_ARRAY,INT4_ARRAY,BYTEA,TEXT_ARRAY,TIMETZ,INT8,INT2,INT4,VARCHAR_ARRAY,INT8_ARRAY,POINT,TIMESTAMP,TIME,BOX,FLOAT4,FLOAT8,FLOAT4_ARRAY,FLOAT8_ARRAY
16:43:16.337 (1)     types using binary receive = TIMESTAMPTZ,UUID,INT2_ARRAY,INT4_ARRAY,BYTEA,TEXT_ARRAY,TIMETZ,INT8,INT2,INT4,VARCHAR_ARRAY,INT8_ARRAY,POINT,DATE,TIMESTAMP,TIME,BOX,FLOAT4,FLOAT8,FLOAT4_ARRAY,FLOAT8_ARRAY
16:43:16.337 (1)     integer date/time = true
2015-10-03 16:43:16.347 -0400 [INFO] (transaction): SQL: SET search_path TO "public"
16:43:16.348 (1) simple execute, handler=org.postgresql.jdbc2.AbstractJdbc2Statement$StatementResultHandler@37c2f0b4, maxRows=0, fetchSize=0, flags=21
16:43:16.348 (1)  FE=> Parse(stmt=null,query="SET search_path TO "public"",oids={})
16:43:16.349 (1)  FE=> Bind(stmt=null,portal=null)
16:43:16.349 (1)  FE=> Describe(portal=null)
16:43:16.349 (1)  FE=> Execute(portal=null,limit=1)
16:43:16.349 (1)  FE=> Sync
16:43:16.350 (1)  <=BE ParseComplete [null]
16:43:16.350 (1)  <=BE BindComplete [null]
16:43:16.350 (1)  <=BE NoData
16:43:16.350 (1)  <=BE CommandStatus(SET)
16:43:16.350 (1)  <=BE ReadyForQuery(I)
2015-10-03 16:43:16.351 -0400 [INFO] (transaction): > 0.00 seconds
2015-10-03 16:43:16.351 -0400 [INFO] (transaction): Using insert mode
16:43:16.352 (1) simple execute, handler=org.postgresql.jdbc2.AbstractJdbc2Statement$StatementResultHandler@64dfb31d, maxRows=0, fetchSize=0, flags=1
16:43:16.352 (1)  FE=> Parse(stmt=null,query="BEGIN",oids={})
16:43:16.352 (1)  FE=> Bind(stmt=null,portal=null)
16:43:16.352 (1)  FE=> Execute(portal=null,limit=0)
16:43:16.352 (1)  FE=> Parse(stmt=null,query="SELECT NULL AS TABLE_CAT, n.nspname AS TABLE_SCHEM, c.relname AS TABLE_NAME,  CASE n.nspname ~ '^pg_' OR n.nspname = 'information_schema'  WHEN true THEN CASE  WHEN n.nspname = 'pg_catalog' OR n.nspname = 'information_schema' THEN CASE c.relkind   WHEN 'r' THEN 'SYSTEM TABLE'   WHEN 'v' THEN 'SYSTEM VIEW'   WHEN 'i' THEN 'SYSTEM INDEX'   ELSE NULL   END  WHEN n.nspname = 'pg_toast' THEN CASE c.relkind   WHEN 'r' THEN 'SYSTEM TOAST TABLE'   WHEN 'i' THEN 'SYSTEM TOAST INDEX'   ELSE NULL   END  ELSE CASE c.relkind   WHEN 'r' THEN 'TEMPORARY TABLE'   WHEN 'i' THEN 'TEMPORARY INDEX'   WHEN 'S' THEN 'TEMPORARY SEQUENCE'   WHEN 'v' THEN 'TEMPORARY VIEW'   ELSE NULL   END  END  WHEN false THEN CASE c.relkind  WHEN 'r' THEN 'TABLE'  WHEN 'i' THEN 'INDEX'  WHEN 'S' THEN 'SEQUENCE'  WHEN 'v' THEN 'VIEW'  WHEN 'c' THEN 'TYPE'  WHEN 'f' THEN 'FOREIGN TABLE'  WHEN 'm' THEN 'MATERIALIZED VIEW'  ELSE NULL  END  ELSE NULL  END  AS TABLE_TYPE, d.description AS REMARKS  FROM pg_catalog.pg_namespace n, pg_catalog.pg_class c  LEFT JOIN pg_catalog.pg_description d ON (c.oid = d.objoid AND d.objsubid = 0)  LEFT JOIN pg_catalog.pg_class dc ON (d.classoid=dc.oid AND dc.relname='pg_class')  LEFT JOIN pg_catalog.pg_namespace dn ON (dn.oid=dc.relnamespace AND dn.nspname='pg_catalog')  WHERE c.relnamespace = n.oid  AND n.nspname LIKE 'public' AND c.relname LIKE 'enc' ORDER BY TABLE_TYPE,TABLE_SCHEM,TABLE_NAME ",oids={})
16:43:16.352 (1)  FE=> Bind(stmt=null,portal=null)
16:43:16.352 (1)  FE=> Describe(portal=null)
16:43:16.352 (1)  FE=> Execute(portal=null,limit=0)
16:43:16.352 (1)  FE=> Sync
16:43:16.357 (1)  <=BE ParseComplete [null]
16:43:16.358 (1)  <=BE BindComplete [null]
16:43:16.358 (1)  <=BE CommandStatus(BEGIN)
16:43:16.358 (1)  <=BE ParseComplete [null]
16:43:16.358 (1)  <=BE BindComplete [null]
16:43:16.358 (1)  <=BE RowDescription(5)
16:43:16.358 (1)         Field(,<unknown:705>,65534,T)
16:43:16.358 (1)         Field(,NAME,64,T)
16:43:16.358 (1)         Field(,NAME,64,T)
16:43:16.358 (1)         Field(,TEXT,65535,T)
16:43:16.358 (1)         Field(,TEXT,65535,T)
16:43:16.359 (1)  <=BE DataRow(len=14)
16:43:16.359 (1)  <=BE CommandStatus(SELECT 1)
16:43:16.365 (1)  <=BE ReadyForQuery(T)
16:43:16.366 (1) simple execute, handler=org.postgresql.jdbc2.AbstractJdbc2Statement$StatementResultHandler@4cd7d5e1, maxRows=0, fetchSize=0, flags=1
16:43:16.366 (1)  FE=> Parse(stmt=null,query="SELECT NULL AS TABLE_CAT, n.nspname AS TABLE_SCHEM,   ct.relname AS TABLE_NAME, a.attname AS COLUMN_NAME,   (i.keys).n AS KEY_SEQ, ci.relname AS PK_NAME FROM pg_catalog.pg_class ct   JOIN pg_catalog.pg_attribute a ON (ct.oid = a.attrelid)   JOIN pg_catalog.pg_namespace n ON (ct.relnamespace = n.oid)   JOIN (SELECT i.indexrelid, i.indrelid, i.indisprimary,              information_schema._pg_expandarray(i.indkey) AS keys         FROM pg_catalog.pg_index i) i     ON (a.attnum = (i.keys).x AND a.attrelid = i.indrelid)   JOIN pg_catalog.pg_class ci ON (ci.oid = i.indexrelid) WHERE true  AND n.nspname = 'public' AND ct.relname = 'enc' AND i.indisprimary  ORDER BY table_name, pk_name, key_seq",oids={})
16:43:16.366 (1)  FE=> Bind(stmt=null,portal=null)
16:43:16.366 (1)  FE=> Describe(portal=null)
16:43:16.366 (1)  FE=> Execute(portal=null,limit=0)
16:43:16.366 (1)  FE=> Sync
16:43:16.370 (1)  <=BE ParseComplete [null]
16:43:16.370 (1)  <=BE BindComplete [null]
16:43:16.370 (1)  <=BE RowDescription(6)
16:43:16.370 (1)         Field(,<unknown:705>,65534,T)
16:43:16.370 (1)         Field(,NAME,64,T)
16:43:16.370 (1)         Field(,NAME,64,T)
16:43:16.370 (1)         Field(,NAME,64,T)
16:43:16.370 (1)         Field(,INT4,4,T)
16:43:16.371 (1)         Field(,NAME,64,T)
16:43:16.371 (1)  <=BE CommandStatus(SELECT 0)
16:43:16.371 (1)  <=BE ReadyForQuery(T)
16:43:16.372 (1) simple execute, handler=org.postgresql.jdbc2.AbstractJdbc2Statement$StatementResultHandler@3686389, maxRows=0, fetchSize=0, flags=1
16:43:16.372 (1)  FE=> Parse(stmt=null,query="SELECT * FROM (SELECT n.nspname,c.relname,a.attname,a.atttypid,a.attnotnull OR (t.typtype = 'd' AND t.typnotnull) AS attnotnull,a.atttypmod,a.attlen,row_number() OVER (PARTITION BY a.attrelid ORDER BY a.attnum) AS attnum, pg_catalog.pg_get_expr(def.adbin, def.adrelid) AS adsrc,dsc.description,t.typbasetype,t.typtype  FROM pg_catalog.pg_namespace n  JOIN pg_catalog.pg_class c ON (c.relnamespace = n.oid)  JOIN pg_catalog.pg_attribute a ON (a.attrelid=c.oid)  JOIN pg_catalog.pg_type t ON (a.atttypid = t.oid)  LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND a.attnum = def.adnum)  LEFT JOIN pg_catalog.pg_description dsc ON (c.oid=dsc.objoid AND a.attnum = dsc.objsubid)  LEFT JOIN pg_catalog.pg_class dc ON (dc.oid=dsc.classoid AND dc.relname='pg_class')  LEFT JOIN pg_catalog.pg_namespace dn ON (dc.relnamespace=dn.oid AND dn.nspname='pg_catalog')  WHERE a.attnum > 0 AND NOT a.attisdropped  AND n.nspname LIKE 'public' AND c.relname LIKE 'enc') c WHERE true  ORDER BY nspname,c.relname,attnum ",oids={})
16:43:16.372 (1)  FE=> Bind(stmt=null,portal=null)
16:43:16.373 (1)  FE=> Describe(portal=null)
16:43:16.373 (1)  FE=> Execute(portal=null,limit=0)
16:43:16.373 (1)  FE=> Sync
16:43:16.378 (1)  <=BE ParseComplete [null]
16:43:16.378 (1)  <=BE BindComplete [null]
16:43:16.378 (1)  <=BE RowDescription(12)
16:43:16.378 (1)         Field(,NAME,64,T)
16:43:16.378 (1)         Field(,NAME,64,T)
16:43:16.378 (1)         Field(,NAME,64,T)
16:43:16.378 (1)         Field(,OID,4,T)
16:43:16.378 (1)         Field(,BOOL,1,T)
16:43:16.378 (1)         Field(,INT4,4,T)
16:43:16.378 (1)         Field(,INT2,2,T)
16:43:16.378 (1)         Field(,INT8,8,T)
16:43:16.378 (1)         Field(,TEXT,65535,T)
16:43:16.378 (1)         Field(,TEXT,65535,T)
16:43:16.378 (1)         Field(,OID,4,T)
16:43:16.378 (1)         Field(,CHAR,1,T)
16:43:16.378 (1)  <=BE DataRow(len=29)
16:43:16.378 (1)  <=BE DataRow(len=27)
16:43:16.379 (1)  <=BE DataRow(len=26)
16:43:16.379 (1)  <=BE DataRow(len=24)
16:43:16.379 (1)  <=BE DataRow(len=27)
16:43:16.379 (1)  <=BE DataRow(len=29)
16:43:16.379 (1)  <=BE DataRow(len=27)
16:43:16.379 (1)  <=BE DataRow(len=25)
16:43:16.379 (1)  <=BE DataRow(len=32)
16:43:16.379 (1)  <=BE DataRow(len=30)
16:43:16.379 (1)  <=BE DataRow(len=30)
16:43:16.379 (1)  <=BE DataRow(len=29)
16:43:16.379 (1)  <=BE CommandStatus(SELECT 12)
16:43:16.379 (1)  <=BE ReadyForQuery(T)
2015-10-03 16:43:16.383 -0400 [INFO] (transaction): SQL: DROP TABLE IF EXISTS "enc_56103de32bfcfc80_bl_tmp000"
16:43:16.384 (1) simple execute, handler=org.postgresql.jdbc2.AbstractJdbc2Statement$StatementResultHandler@78318ac2, maxRows=0, fetchSize=0, flags=5
16:43:16.384 (1)  FE=> Parse(stmt=null,query="DROP TABLE IF EXISTS "enc_56103de32bfcfc80_bl_tmp000"",oids={})
16:43:16.384 (1)  FE=> Bind(stmt=null,portal=null)
16:43:16.384 (1)  FE=> Describe(portal=null)
16:43:16.384 (1)  FE=> Execute(portal=null,limit=1)
16:43:16.384 (1)  FE=> Sync
16:43:16.384 (1)  <=BE ParseComplete [null]
16:43:16.384 (1)  <=BE BindComplete [null]
16:43:16.384 (1)  <=BE NoData
16:43:16.386 (1)  <=BE NoticeResponse(NOTICE: table "enc_56103de32bfcfc80_bl_tmp000" does not exist, skipping
  Location: File: tablecmds.c, Routine: DropErrorMsgNonExistent, Line: 667
  Server SQLState: 00000)
SQLWarning: 
16:43:16.387 (1)  <=BE CommandStatus(DROP TABLE)
16:43:16.387 (1)  <=BE ReadyForQuery(T)
2015-10-03 16:43:16.387 -0400 [INFO] (transaction): > 0.00 seconds
16:43:16.387 (1) simple execute, handler=org.postgresql.jdbc2.AbstractJdbc2Connection$TransactionCommandHandler@37fef327, maxRows=0, fetchSize=0, flags=22
16:43:16.387 (1)  FE=> Parse(stmt=S_1,query="COMMIT",oids={})
16:43:16.387 (1)  FE=> Bind(stmt=S_1,portal=null)
16:43:16.387 (1)  FE=> Execute(portal=null,limit=1)
16:43:16.387 (1)  FE=> Sync
16:43:16.388 (1)  <=BE ParseComplete [S_1]
16:43:16.388 (1)  <=BE BindComplete [null]
16:43:16.388 (1)  <=BE CommandStatus(COMMIT)
16:43:16.388 (1)  <=BE ReadyForQuery(I)
2015-10-03 16:43:16.396 -0400 [INFO] (transaction): SQL: CREATE TABLE IF NOT EXISTS "enc_56103de32bfcfc80_bl_tmp000" ("encounterid" INT4, "patientid" INT4, "doctorid" INT4, "date" TIMESTAMP, "time" VARCHAR(1000), "starttime" TIMESTAMP, "endtime" TIMESTAMP, "reason" TEXT, "visittype" VARCHAR(1000), "roomno" VARCHAR(1000), "status" VARCHAR(1000), "deleteflag" INT4)
16:43:16.396 (1) simple execute, handler=org.postgresql.jdbc2.AbstractJdbc2Statement$StatementResultHandler@1426370c, maxRows=0, fetchSize=0, flags=5
16:43:16.396 (1)  FE=> Parse(stmt=null,query="BEGIN",oids={})
16:43:16.396 (1)  FE=> Bind(stmt=null,portal=null)
16:43:16.396 (1)  FE=> Execute(portal=null,limit=0)
16:43:16.396 (1)  FE=> Parse(stmt=null,query="CREATE TABLE IF NOT EXISTS "enc_56103de32bfcfc80_bl_tmp000" ("encounterid" INT4, "patientid" INT4, "doctorid" INT4, "date" TIMESTAMP, "time" VARCHAR(1000), "starttime" TIMESTAMP, "endtime" TIMESTAMP, "reason" TEXT, "visittype" VARCHAR(1000), "roomno" VARCHAR(1000), "status" VARCHAR(1000), "deleteflag" INT4)",oids={})
16:43:16.396 (1)  FE=> Bind(stmt=null,portal=null)
16:43:16.396 (1)  FE=> Describe(portal=null)
16:43:16.396 (1)  FE=> Execute(portal=null,limit=1)
16:43:16.396 (1)  FE=> Sync
16:43:16.401 (1)  <=BE ParseComplete [null]
16:43:16.401 (1)  <=BE BindComplete [null]
16:43:16.401 (1)  <=BE CommandStatus(BEGIN)
16:43:16.401 (1)  <=BE ParseComplete [null]
16:43:16.401 (1)  <=BE BindComplete [null]
16:43:16.401 (1)  <=BE NoData
16:43:16.401 (1)  <=BE CommandStatus(CREATE TABLE)
16:43:16.401 (1)  <=BE ReadyForQuery(T)
2015-10-03 16:43:16.402 -0400 [INFO] (transaction): > 0.01 seconds
16:43:16.402 (1) simple execute, handler=org.postgresql.jdbc2.AbstractJdbc2Connection$TransactionCommandHandler@9f9146d, maxRows=0, fetchSize=0, flags=22
16:43:16.402 (1)  FE=> Bind(stmt=S_1,portal=null)
16:43:16.402 (1)  FE=> Execute(portal=null,limit=1)
16:43:16.402 (1)  FE=> Sync
16:43:16.403 (1)  <=BE BindComplete [null]
16:43:16.403 (1)  <=BE CommandStatus(COMMIT)
16:43:16.403 (1)  <=BE ReadyForQuery(I)
16:43:16.407 (1)  FE=> Terminate
2015-10-03 16:43:16.419 -0400 [INFO] (transaction): {done:  0 / 1, running: 0}
2015-10-03 16:43:16.448 -0400 [INFO] (task-0000): Connecting to jdbc:postgresql://localhost:5432/relevant_staging options {user=postgres, loglevel=3, tcpKeepAlive=true, loginTimeout=300, socketTimeout=1800}
16:43:16.452 (2) PostgreSQL 9.4 JDBC4.1 (build 1200)
16:43:16.452 (2) Trying to establish a protocol version 3 connection to localhost:5432
16:43:16.452 (2) Receive Buffer Size is 408300
16:43:16.452 (2) Send Buffer Size is 146988
16:43:16.452 (2)  FE=> StartupPacket(user=postgres, database=relevant_staging, client_encoding=UTF8, DateStyle=ISO, TimeZone=America/New_York, extra_float_digits=2)
16:43:16.454 (2)  <=BE AuthenticationReqMD5(salt=474be956)
16:43:16.454 (2)  FE=> Password(md5digest=md5109c3c2856176c3d1572c927aa09f791)
16:43:16.456 (2)  <=BE AuthenticationOk
16:43:16.457 (2)  <=BE ParameterStatus(application_name = )
16:43:16.457 (2)  <=BE ParameterStatus(client_encoding = UTF8)
16:43:16.457 (2)  <=BE ParameterStatus(DateStyle = ISO, MDY)
16:43:16.457 (2)  <=BE ParameterStatus(integer_datetimes = on)
16:43:16.457 (2)  <=BE ParameterStatus(IntervalStyle = postgres)
16:43:16.457 (2)  <=BE ParameterStatus(is_superuser = on)
16:43:16.457 (2)  <=BE ParameterStatus(server_encoding = UTF8)
16:43:16.457 (2)  <=BE ParameterStatus(server_version = 9.1.14)
16:43:16.457 (2)  <=BE ParameterStatus(session_authorization = postgres)
16:43:16.457 (2)  <=BE ParameterStatus(standard_conforming_strings = on)
16:43:16.457 (2)  <=BE ParameterStatus(TimeZone = America/New_York)
16:43:16.457 (2)  <=BE BackendKeyData(pid=62856,ckey=797625222)
16:43:16.457 (2)  <=BE ReadyForQuery(I)
16:43:16.457 (2) simple execute, handler=org.postgresql.core.SetupQueryRunner$SimpleResultHandler@798e1c52, maxRows=0, fetchSize=0, flags=23
16:43:16.457 (2)  FE=> Parse(stmt=null,query="SET extra_float_digits = 3",oids={})
16:43:16.457 (2)  FE=> Bind(stmt=null,portal=null)
16:43:16.457 (2)  FE=> Execute(portal=null,limit=1)
16:43:16.457 (2)  FE=> Sync
16:43:16.458 (2)  <=BE ParseComplete [null]
16:43:16.458 (2)  <=BE BindComplete [null]
16:43:16.458 (2)  <=BE CommandStatus(SET)
16:43:16.458 (2)  <=BE ReadyForQuery(I)
16:43:16.458 (2)     compatible = 90400
16:43:16.458 (2)     loglevel = 3
16:43:16.458 (2)     prepare threshold = 5
16:43:16.459 (2)     types using binary send = TIMESTAMPTZ,UUID,INT2_ARRAY,INT4_ARRAY,BYTEA,TEXT_ARRAY,TIMETZ,INT8,INT2,INT4,VARCHAR_ARRAY,INT8_ARRAY,POINT,TIMESTAMP,TIME,BOX,FLOAT4,FLOAT8,FLOAT4_ARRAY,FLOAT8_ARRAY
16:43:16.459 (2)     types using binary receive = TIMESTAMPTZ,UUID,INT2_ARRAY,INT4_ARRAY,BYTEA,TEXT_ARRAY,TIMETZ,INT8,INT2,INT4,VARCHAR_ARRAY,INT8_ARRAY,POINT,DATE,TIMESTAMP,TIME,BOX,FLOAT4,FLOAT8,FLOAT4_ARRAY,FLOAT8_ARRAY
16:43:16.459 (2)     integer date/time = true
2015-10-03 16:43:16.460 -0400 [INFO] (task-0000): SQL: SET search_path TO "public"
16:43:16.461 (2) simple execute, handler=org.postgresql.jdbc2.AbstractJdbc2Statement$StatementResultHandler@3527bb0b, maxRows=0, fetchSize=0, flags=21
16:43:16.461 (2)  FE=> Parse(stmt=null,query="SET search_path TO "public"",oids={})
16:43:16.461 (2)  FE=> Bind(stmt=null,portal=null)
16:43:16.461 (2)  FE=> Describe(portal=null)
16:43:16.461 (2)  FE=> Execute(portal=null,limit=1)
16:43:16.461 (2)  FE=> Sync
16:43:16.462 (2)  <=BE ParseComplete [null]
16:43:16.462 (2)  <=BE BindComplete [null]
16:43:16.462 (2)  <=BE NoData
16:43:16.462 (2)  <=BE CommandStatus(SET)
16:43:16.462 (2)  <=BE ReadyForQuery(I)
2015-10-03 16:43:16.462 -0400 [INFO] (task-0000): > 0.00 seconds
2015-10-03 16:43:16.463 -0400 [INFO] (task-0000): Copy SQL: COPY "enc_56103de32bfcfc80_bl_tmp000" ("date", "time", "reason") FROM STDIN
2015-10-03 16:43:16.465 -0400 [WARN] (task-0000): An output plugin is compiled with old Embulk plugin API. Please update the plugin version using "embulk gem install" command, or contact a developer of the plugin to upgrade the plugin code using "embulk migrate" command: class org.embulk.output.jdbc.AbstractJdbcOutputPlugin$PluginPageOutput
DriverManager.getConnection("jdbc:sqlserver://10.1.120.111\MSSQLSERVER:1433;databaseName=datamart")
    trying com.microsoft.sqlserver.jdbc.SQLServerDriver
getConnection returning com.microsoft.sqlserver.jdbc.SQLServerDriver
2015-10-03 16:43:16.607 -0400 [INFO] (task-0000): SQL: SELECT encounterID, patientID, doctorID, date, time, startTime, endTime, reason, VisitType, roomNo, STATUS, deleteFlag FROM "enc" WHERE date > '2015-09-01'
2015-10-03 16:43:16.662 -0400 [INFO] (task-0000): > 0.05 seconds
2015-10-03 16:43:16.768 -0400 [INFO] (task-0000): Fetched 500 rows.
2015-10-03 16:43:16.806 -0400 [INFO] (task-0000): Fetched 1,000 rows.
2015-10-03 16:43:16.856 -0400 [INFO] (task-0000): Fetched 2,000 rows.
2015-10-03 16:43:16.949 -0400 [INFO] (task-0000): Fetched 4,000 rows.
2015-10-03 16:43:17.109 -0400 [INFO] (task-0000): Fetched 8,000 rows.
2015-10-03 16:43:17.492 -0400 [INFO] (task-0000): Fetched 16,000 rows.
2015-10-03 16:43:18.096 -0400 [INFO] (task-0000): Fetched 32,000 rows.
2015-10-03 16:43:19.322 -0400 [INFO] (task-0000): Fetched 64,000 rows.
2015-10-03 16:43:21.828 -0400 [INFO] (task-0000): Fetched 128,000 rows.
2015-10-03 16:43:24.732 -0400 [INFO] (task-0000): Loading 197,225 rows (7,759,125 bytes)
16:43:24.733 (2)  FE=> Query(CopyStart)
16:43:24.733 (2)  <=BE CopyInResponse
16:43:24.734 (2)  FE=> CopyData(65536)
16:43:24.735 (2)  FE=> CopyData(65536)
16:43:24.735 (2)  FE=> CopyData(65536)
16:43:24.735 (2)  FE=> CopyData(65536)
16:43:24.735 (2)  FE=> CopyData(65536)
16:43:24.735 (2)  FE=> CopyData(65536)
16:43:24.735 (2)  FE=> CopyData(65536)
16:43:24.735 (2)  FE=> CopyData(65536)
16:43:24.735 (2)  FE=> CopyData(65536)
16:43:24.736 (2)  FE=> CopyData(65536)
16:43:24.736 (2)  FE=> CopyData(65536)
16:43:24.736 (2)  FE=> CopyData(65536)
16:43:24.736 (2)  FE=> CopyData(65536)
16:43:24.783 (2)  FE=> CopyData(65536)
16:43:24.784 (2)  FE=> CopyData(65536)
16:43:24.784 (2)  FE=> CopyData(65536)
16:43:24.784 (2)  FE=> CopyData(65536)
16:43:24.784 (2)  FE=> CopyData(65536)
16:43:24.784 (2)  FE=> CopyData(65536)
16:43:24.784 (2)  FE=> CopyData(65536)
16:43:24.784 (2)  FE=> CopyData(65536)
16:43:24.784 (2)  FE=> CopyData(65536)
16:43:24.837 (2)  FE=> CopyData(65536)
16:43:24.837 (2)  FE=> CopyData(65536)
16:43:24.837 (2)  FE=> CopyData(65536)
16:43:24.837 (2)  FE=> CopyData(65536)
16:43:24.837 (2)  FE=> CopyData(65536)
16:43:24.837 (2)  FE=> CopyData(65536)
16:43:24.837 (2)  FE=> CopyData(65536)
16:43:24.837 (2)  FE=> CopyData(65536)
16:43:24.837 (2)  FE=> CopyData(65536)
16:43:24.975 (2)  FE=> CopyData(65536)
16:43:24.976 (2)  FE=> CopyData(65536)
16:43:24.977 (2)  FE=> CopyData(65536)
16:43:24.977 (2)  FE=> CopyData(65536)
16:43:24.978 (2)  FE=> CopyData(65536)
16:43:24.979 (2)  FE=> CopyData(65536)
16:43:24.979 (2)  FE=> CopyData(65536)
16:43:24.980 (2)  FE=> CopyData(65536)
16:43:25.076 (2)  FE=> CopyData(65536)
16:43:25.076 (2)  FE=> CopyData(65536)
16:43:25.076 (2)  FE=> CopyData(65536)
16:43:25.078 (2)  FE=> CopyData(65536)
16:43:25.078 (2)  FE=> CopyData(65536)
16:43:25.079 (2)  FE=> CopyData(65536)
16:43:25.079 (2)  FE=> CopyData(65536)
16:43:25.080 (2)  FE=> CopyData(65536)
16:43:25.080 (2)  FE=> CopyData(65536)
16:43:25.137 (2)  FE=> CopyData(65536)
16:43:25.137 (2)  FE=> CopyData(65536)
16:43:25.137 (2)  FE=> CopyData(65536)
16:43:25.137 (2)  FE=> CopyData(65536)
16:43:25.137 (2)  FE=> CopyData(65536)
16:43:25.137 (2)  FE=> CopyData(65536)
16:43:25.137 (2)  FE=> CopyData(65536)
16:43:25.137 (2)  FE=> CopyData(65536)
16:43:25.138 (2)  FE=> CopyData(65536)
16:43:25.208 (2)  FE=> CopyData(65536)
16:43:25.208 (2)  FE=> CopyData(65536)
16:43:25.208 (2)  FE=> CopyData(65536)
16:43:25.209 (2)  FE=> CopyData(65536)
16:43:25.209 (2)  FE=> CopyData(65536)
16:43:25.209 (2)  FE=> CopyData(65536)
16:43:25.209 (2)  FE=> CopyData(65536)
16:43:25.209 (2)  FE=> CopyData(65536)
16:43:25.209 (2)  FE=> CopyData(65536)
16:43:25.267 (2)  FE=> CopyData(65536)
16:43:25.267 (2)  FE=> CopyData(65536)
16:43:25.267 (2)  FE=> CopyData(65536)
16:43:25.268 (2)  FE=> CopyData(65536)
16:43:25.268 (2)  FE=> CopyData(65536)
16:43:25.268 (2)  FE=> CopyData(65536)
16:43:25.268 (2)  FE=> CopyData(65536)
16:43:25.268 (2)  FE=> CopyData(65536)
16:43:25.268 (2)  FE=> CopyData(65536)
16:43:25.325 (2)  FE=> CopyData(65536)
16:43:25.325 (2)  FE=> CopyData(65536)
16:43:25.325 (2)  FE=> CopyData(65536)
16:43:25.325 (2)  FE=> CopyData(65536)
16:43:25.325 (2)  FE=> CopyData(65536)
16:43:25.325 (2)  FE=> CopyData(65536)
16:43:25.325 (2)  FE=> CopyData(65536)
16:43:25.325 (2)  FE=> CopyData(65536)
16:43:25.326 (2)  FE=> CopyData(65536)
16:43:25.394 (2)  FE=> CopyData(65536)
16:43:25.394 (2)  FE=> CopyData(65536)
16:43:25.394 (2)  FE=> CopyData(65536)
16:43:25.395 (2)  FE=> CopyData(65536)
16:43:25.395 (2)  FE=> CopyData(65536)
16:43:25.395 (2)  FE=> CopyData(65536)
16:43:25.395 (2)  FE=> CopyData(65536)
16:43:25.395 (2)  FE=> CopyData(65536)
16:43:25.395 (2)  FE=> CopyData(65536)
16:43:25.464 (2)  FE=> CopyData(65536)
16:43:25.464 (2)  FE=> CopyData(65536)
16:43:25.464 (2)  FE=> CopyData(65536)
16:43:25.464 (2)  FE=> CopyData(65536)
16:43:25.464 (2)  FE=> CopyData(65536)
16:43:25.464 (2)  FE=> CopyData(65536)
16:43:25.464 (2)  FE=> CopyData(65536)
16:43:25.464 (2)  FE=> CopyData(65536)
16:43:25.465 (2)  FE=> CopyData(65536)
16:43:25.465 (2)  FE=> CopyData(65536)
16:43:25.465 (2)  FE=> CopyData(65536)
16:43:25.516 (2)  FE=> CopyData(65536)
16:43:25.517 (2)  FE=> CopyData(65536)
16:43:25.517 (2)  FE=> CopyData(65536)
16:43:25.517 (2)  FE=> CopyData(65536)
16:43:25.517 (2)  FE=> CopyData(65536)
16:43:25.517 (2)  FE=> CopyData(65536)
16:43:25.517 (2)  FE=> CopyData(65536)
16:43:25.517 (2)  FE=> CopyData(65536)
16:43:25.517 (2)  FE=> CopyData(65536)
16:43:25.517 (2)  FE=> CopyData(65536)
16:43:25.611 (2)  FE=> CopyData(65536)
16:43:25.611 (2)  FE=> CopyData(65536)
16:43:25.611 (2)  FE=> CopyData(65536)
16:43:25.611 (2)  FE=> CopyData(65536)
16:43:25.611 (2)  FE=> CopyData(25877)
16:43:25.611 (2)  FE=> CopyDone
16:43:25.657 (2)  <=BE CommandStatus(COPY 197225)
16:43:25.657 (2)  <=BE ReadyForQuery(I)
2015-10-03 16:43:25.657 -0400 [INFO] (task-0000): > 0.92 seconds (loaded 197,225 rows in total)
16:43:25.659 (2)  FE=> Terminate
2015-10-03 16:43:25.659 -0400 [INFO] (transaction): {done:  1 / 1, running: 0}
2015-10-03 16:43:25.661 -0400 [INFO] (transaction): Connecting to jdbc:postgresql://localhost:5432/relevant_staging options {user=postgres, loglevel=3, tcpKeepAlive=true, loginTimeout=300, socketTimeout=28800}
16:43:25.662 (3) PostgreSQL 9.4 JDBC4.1 (build 1200)
16:43:25.662 (3) Trying to establish a protocol version 3 connection to localhost:5432
16:43:25.662 (3) Receive Buffer Size is 408300
16:43:25.662 (3) Send Buffer Size is 146988
16:43:25.662 (3)  FE=> StartupPacket(user=postgres, database=relevant_staging, client_encoding=UTF8, DateStyle=ISO, TimeZone=America/New_York, extra_float_digits=2)
16:43:25.664 (3)  <=BE AuthenticationReqMD5(salt=6a8070b4)
16:43:25.664 (3)  FE=> Password(md5digest=md50f98714482529c16eccd017ae2cf393b)
16:43:25.666 (3)  <=BE AuthenticationOk
16:43:25.666 (3)  <=BE ParameterStatus(application_name = )
16:43:25.666 (3)  <=BE ParameterStatus(client_encoding = UTF8)
16:43:25.666 (3)  <=BE ParameterStatus(DateStyle = ISO, MDY)
16:43:25.666 (3)  <=BE ParameterStatus(integer_datetimes = on)
16:43:25.666 (3)  <=BE ParameterStatus(IntervalStyle = postgres)
16:43:25.667 (3)  <=BE ParameterStatus(is_superuser = on)
16:43:25.667 (3)  <=BE ParameterStatus(server_encoding = UTF8)
16:43:25.667 (3)  <=BE ParameterStatus(server_version = 9.1.14)
16:43:25.667 (3)  <=BE ParameterStatus(session_authorization = postgres)
16:43:25.667 (3)  <=BE ParameterStatus(standard_conforming_strings = on)
16:43:25.667 (3)  <=BE ParameterStatus(TimeZone = America/New_York)
16:43:25.667 (3)  <=BE BackendKeyData(pid=62858,ckey=676230832)
16:43:25.667 (3)  <=BE ReadyForQuery(I)
16:43:25.667 (3) simple execute, handler=org.postgresql.core.SetupQueryRunner$SimpleResultHandler@1cb40342, maxRows=0, fetchSize=0, flags=23
16:43:25.667 (3)  FE=> Parse(stmt=null,query="SET extra_float_digits = 3",oids={})
16:43:25.667 (3)  FE=> Bind(stmt=null,portal=null)
16:43:25.667 (3)  FE=> Execute(portal=null,limit=1)
16:43:25.667 (3)  FE=> Sync
16:43:25.667 (3)  <=BE ParseComplete [null]
16:43:25.667 (3)  <=BE BindComplete [null]
16:43:25.667 (3)  <=BE CommandStatus(SET)
16:43:25.667 (3)  <=BE ReadyForQuery(I)
16:43:25.668 (3)     compatible = 90400
16:43:25.668 (3)     loglevel = 3
16:43:25.668 (3)     prepare threshold = 5
16:43:25.668 (3)     types using binary send = TIMESTAMPTZ,UUID,INT2_ARRAY,INT4_ARRAY,BYTEA,TEXT_ARRAY,TIMETZ,INT8,INT2,INT4,VARCHAR_ARRAY,INT8_ARRAY,POINT,TIMESTAMP,TIME,BOX,FLOAT4,FLOAT8,FLOAT4_ARRAY,FLOAT8_ARRAY
16:43:25.668 (3)     types using binary receive = TIMESTAMPTZ,UUID,INT2_ARRAY,INT4_ARRAY,BYTEA,TEXT_ARRAY,TIMETZ,INT8,INT2,INT4,VARCHAR_ARRAY,INT8_ARRAY,POINT,DATE,TIMESTAMP,TIME,BOX,FLOAT4,FLOAT8,FLOAT4_ARRAY,FLOAT8_ARRAY
16:43:25.668 (3)     integer date/time = true
2015-10-03 16:43:25.669 -0400 [INFO] (transaction): SQL: SET search_path TO "public"
16:43:25.669 (3) simple execute, handler=org.postgresql.jdbc2.AbstractJdbc2Statement$StatementResultHandler@15c16f19, maxRows=0, fetchSize=0, flags=21
16:43:25.669 (3)  FE=> Parse(stmt=null,query="SET search_path TO "public"",oids={})
16:43:25.669 (3)  FE=> Bind(stmt=null,portal=null)
16:43:25.670 (3)  FE=> Describe(portal=null)
16:43:25.670 (3)  FE=> Execute(portal=null,limit=1)
16:43:25.670 (3)  FE=> Sync
16:43:25.670 (3)  <=BE ParseComplete [null]
16:43:25.670 (3)  <=BE BindComplete [null]
16:43:25.670 (3)  <=BE NoData
16:43:25.670 (3)  <=BE CommandStatus(SET)
16:43:25.670 (3)  <=BE ReadyForQuery(I)
2015-10-03 16:43:25.670 -0400 [INFO] (transaction): > 0.00 seconds
2015-10-03 16:43:25.671 -0400 [INFO] (transaction): SQL: INSERT INTO "enc" ("date", "time", "reason") SELECT "date", "time", "reason" FROM "enc_56103de32bfcfc80_bl_tmp000"
16:43:25.671 (3) simple execute, handler=org.postgresql.jdbc2.AbstractJdbc2Statement$StatementResultHandler@1a17dd6f, maxRows=0, fetchSize=0, flags=5
16:43:25.671 (3)  FE=> Parse(stmt=null,query="BEGIN",oids={})
16:43:25.671 (3)  FE=> Bind(stmt=null,portal=null)
16:43:25.671 (3)  FE=> Execute(portal=null,limit=0)
16:43:25.671 (3)  FE=> Parse(stmt=null,query="INSERT INTO "enc" ("date", "time", "reason") SELECT "date", "time", "reason" FROM "enc_56103de32bfcfc80_bl_tmp000"",oids={})
16:43:25.671 (3)  FE=> Bind(stmt=null,portal=null)
16:43:25.671 (3)  FE=> Describe(portal=null)
16:43:25.671 (3)  FE=> Execute(portal=null,limit=1)
16:43:25.671 (3)  FE=> Sync
16:43:26.155 (3)  <=BE ParseComplete [null]
16:43:26.155 (3)  <=BE BindComplete [null]
16:43:26.155 (3)  <=BE CommandStatus(BEGIN)
16:43:26.155 (3)  <=BE ParseComplete [null]
16:43:26.155 (3)  <=BE BindComplete [null]
16:43:26.155 (3)  <=BE NoData
16:43:26.155 (3)  <=BE CommandStatus(INSERT 0 197225)
16:43:26.155 (3)  <=BE ReadyForQuery(T)
2015-10-03 16:43:26.155 -0400 [INFO] (transaction): > 0.48 seconds (197,225 rows)
16:43:26.155 (3) simple execute, handler=org.postgresql.jdbc2.AbstractJdbc2Connection$TransactionCommandHandler@be6d228, maxRows=0, fetchSize=0, flags=22
16:43:26.155 (3)  FE=> Parse(stmt=S_1,query="COMMIT",oids={})
16:43:26.155 (3)  FE=> Bind(stmt=S_1,portal=null)
16:43:26.155 (3)  FE=> Execute(portal=null,limit=1)
16:43:26.155 (3)  FE=> Sync
16:43:26.178 (3)  <=BE ParseComplete [S_1]
16:43:26.178 (3)  <=BE BindComplete [null]
16:43:26.178 (3)  <=BE CommandStatus(COMMIT)
16:43:26.178 (3)  <=BE ReadyForQuery(I)
16:43:26.178 (3)  FE=> Terminate
2015-10-03 16:43:26.183 -0400 [INFO] (transaction): Connecting to jdbc:postgresql://localhost:5432/relevant_staging options {user=postgres, loglevel=3, tcpKeepAlive=true, loginTimeout=300, socketTimeout=1800}
16:43:26.183 (4) PostgreSQL 9.4 JDBC4.1 (build 1200)
16:43:26.184 (4) Trying to establish a protocol version 3 connection to localhost:5432
16:43:26.184 (4) Receive Buffer Size is 408300
16:43:26.184 (4) Send Buffer Size is 146988
16:43:26.184 (4)  FE=> StartupPacket(user=postgres, database=relevant_staging, client_encoding=UTF8, DateStyle=ISO, TimeZone=America/New_York, extra_float_digits=2)
16:43:26.185 (4)  <=BE AuthenticationReqMD5(salt=a60a7f85)
16:43:26.186 (4)  FE=> Password(md5digest=md5555a096fd95f961b13a3c4f886dff546)
16:43:26.187 (4)  <=BE AuthenticationOk
16:43:26.187 (4)  <=BE ParameterStatus(application_name = )
16:43:26.187 (4)  <=BE ParameterStatus(client_encoding = UTF8)
16:43:26.187 (4)  <=BE ParameterStatus(DateStyle = ISO, MDY)
16:43:26.187 (4)  <=BE ParameterStatus(integer_datetimes = on)
16:43:26.187 (4)  <=BE ParameterStatus(IntervalStyle = postgres)
16:43:26.187 (4)  <=BE ParameterStatus(is_superuser = on)
16:43:26.187 (4)  <=BE ParameterStatus(server_encoding = UTF8)
16:43:26.187 (4)  <=BE ParameterStatus(server_version = 9.1.14)
16:43:26.187 (4)  <=BE ParameterStatus(session_authorization = postgres)
16:43:26.188 (4)  <=BE ParameterStatus(standard_conforming_strings = on)
16:43:26.188 (4)  <=BE ParameterStatus(TimeZone = America/New_York)
16:43:26.188 (4)  <=BE BackendKeyData(pid=62859,ckey=1557919149)
16:43:26.188 (4)  <=BE ReadyForQuery(I)
16:43:26.188 (4) simple execute, handler=org.postgresql.core.SetupQueryRunner$SimpleResultHandler@43619446, maxRows=0, fetchSize=0, flags=23
16:43:26.188 (4)  FE=> Parse(stmt=null,query="SET extra_float_digits = 3",oids={})
16:43:26.188 (4)  FE=> Bind(stmt=null,portal=null)
16:43:26.188 (4)  FE=> Execute(portal=null,limit=1)
16:43:26.188 (4)  FE=> Sync
16:43:26.188 (4)  <=BE ParseComplete [null]
16:43:26.188 (4)  <=BE BindComplete [null]
16:43:26.188 (4)  <=BE CommandStatus(SET)
16:43:26.188 (4)  <=BE ReadyForQuery(I)
16:43:26.188 (4)     compatible = 90400
16:43:26.188 (4)     loglevel = 3
16:43:26.188 (4)     prepare threshold = 5
16:43:26.189 (4)     types using binary send = TIMESTAMPTZ,UUID,INT2_ARRAY,INT4_ARRAY,BYTEA,TEXT_ARRAY,TIMETZ,INT8,INT2,INT4,VARCHAR_ARRAY,INT8_ARRAY,POINT,TIMESTAMP,TIME,BOX,FLOAT4,FLOAT8,FLOAT4_ARRAY,FLOAT8_ARRAY
16:43:26.189 (4)     types using binary receive = TIMESTAMPTZ,UUID,INT2_ARRAY,INT4_ARRAY,BYTEA,TEXT_ARRAY,TIMETZ,INT8,INT2,INT4,VARCHAR_ARRAY,INT8_ARRAY,POINT,DATE,TIMESTAMP,TIME,BOX,FLOAT4,FLOAT8,FLOAT4_ARRAY,FLOAT8_ARRAY
16:43:26.189 (4)     integer date/time = true
2015-10-03 16:43:26.189 -0400 [INFO] (transaction): SQL: SET search_path TO "public"
16:43:26.190 (4) simple execute, handler=org.postgresql.jdbc2.AbstractJdbc2Statement$StatementResultHandler@4c6bba7d, maxRows=0, fetchSize=0, flags=21
16:43:26.190 (4)  FE=> Parse(stmt=null,query="SET search_path TO "public"",oids={})
16:43:26.190 (4)  FE=> Bind(stmt=null,portal=null)
16:43:26.190 (4)  FE=> Describe(portal=null)
16:43:26.190 (4)  FE=> Execute(portal=null,limit=1)
16:43:26.190 (4)  FE=> Sync
16:43:26.191 (4)  <=BE ParseComplete [null]
16:43:26.191 (4)  <=BE BindComplete [null]
16:43:26.191 (4)  <=BE NoData
16:43:26.191 (4)  <=BE CommandStatus(SET)
16:43:26.191 (4)  <=BE ReadyForQuery(I)
2015-10-03 16:43:26.191 -0400 [INFO] (transaction): > 0.00 seconds
2015-10-03 16:43:26.191 -0400 [INFO] (transaction): SQL: DROP TABLE IF EXISTS "enc_56103de32bfcfc80_bl_tmp000"
16:43:26.191 (4) simple execute, handler=org.postgresql.jdbc2.AbstractJdbc2Statement$StatementResultHandler@41e8d917, maxRows=0, fetchSize=0, flags=21
16:43:26.191 (4)  FE=> Parse(stmt=null,query="DROP TABLE IF EXISTS "enc_56103de32bfcfc80_bl_tmp000"",oids={})
16:43:26.191 (4)  FE=> Bind(stmt=null,portal=null)
16:43:26.191 (4)  FE=> Describe(portal=null)
16:43:26.191 (4)  FE=> Execute(portal=null,limit=1)
16:43:26.191 (4)  FE=> Sync
16:43:26.195 (4)  <=BE ParseComplete [null]
16:43:26.195 (4)  <=BE BindComplete [null]
16:43:26.195 (4)  <=BE NoData
16:43:26.195 (4)  <=BE CommandStatus(DROP TABLE)
16:43:26.195 (4)  <=BE ReadyForQuery(I)
2015-10-03 16:43:26.196 -0400 [INFO] (transaction): > 0.00 seconds
16:43:26.196 (4)  FE=> Terminate
2015-10-03 16:43:26.200 -0400 [INFO] (main): Committed.
2015-10-03 16:43:26.201 -0400 [INFO] (main): Next config diff: {"in":{},"out":{}}
    skipping: java.sql.DriverInfo
    skipping: java.sql.DriverInfo

Support int4 and varchar columns

I want to insert data into an RDBMS as follows:

  • Embulk string values into varchar columns.
  • Embulk long values into int4 columns.

The RDBMS is PostgreSQL.

Support MERGE mode

Execute an upsert SQL statement, similar to MySQL's REPLACE INTO query, to merge records from the input with existing records in the table.
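
For reference, a minimal sketch in MySQL (the table and column names are hypothetical, and the plugin's actual generated SQL may differ):

REPLACE INTO target (id, name)
VALUES (1, 'alice');

-- a related upsert form that updates in place instead of deleting and re-inserting:
INSERT INTO target (id, name)
VALUES (1, 'alice')
ON DUPLICATE KEY UPDATE name = VALUES(name);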

PostgreSQL import: use UNLOGGED TABLE

If an unlogged table were used for the temporary import table, would the load become faster?
(When the WAL would otherwise be written to an SSD, skipping it should also extend the SSD's lifetime.)
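
For context, a minimal sketch of an unlogged intermediate table (the table and column names are hypothetical; this is not necessarily the DDL the plugin generates):

CREATE UNLOGGED TABLE sample_bl_tmp000 (
    id   INT4,
    name VARCHAR(255)
);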

How to load incremental data from Oracle to Oracle

Hi,
In my environment I want to load incremental data from Oracle to Oracle. For example, host A has a table test with one row. After I finish loading data from host A to host B, a new row is added to test on host A, so it now has two rows. I want to load only the new row to host B.
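
One common approach (not something the plugin does by itself) is to filter the input query by a watermark column, selecting only rows newer than the previous run. A rough sketch, assuming test has an updated_at column and the last load time is tracked externally:

SELECT * FROM sync.test
WHERE updated_at > TO_DATE('2015-08-23 00:00:00', 'YYYY-MM-DD HH24:MI:SS');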

Redshift output process could be much more efficient

The Redshift output process could be quite a bit more efficient.

Possible improvements, in order of difficulty:
• Add ENCODE lzo compression option to every column on the temp table.
  º Writing to disk is typically one of the slowest parts of the COPY process.
• Load all extracts into a single temp table in Redshift.
  º Redshift can load multiple files in parallel. This is best practice for COPY speed.
  º Also, the tables created are not true temp tables and incur significant creation overhead.
• Use ALTER TABLE … APPEND to move loaded data into the final table rather than INSERT INTO (see the sketch after this list)
  º This is a recent addition to Redshift. It moves the data from one table into another logically, with no physical reading or writing of the data on disk. http://docs.aws.amazon.com/redshift/latest/dg/r_ALTER_TABLE_APPEND.html
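
A minimal sketch of the ALTER TABLE … APPEND step (the table names here are hypothetical):

ALTER TABLE target_table
APPEND FROM target_table_bl_tmp000;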

I'll try to create a pull request that addresses the issues.

Columns dropped in output due to case problem

There is a problem with copying columns that have upper-case characters in their names.

Here is the table DDL in MySQL:
CREATE TABLE account_buyertransaction (
  id int(11),
  session_id varchar(64),
  referenceId varchar(32),
  paymentMerchant varchar(32),
  status tinyint(1),
  creation_date datetime,
  user_id bigint(20),
  payment_reason longtext,
  payment_reason_hash longtext
);

Note that 'referenceId' and 'paymentMerchant' have upper-case characters in their names. Unquoted SQL column names are case-insensitive, and the matching table in Redshift (Postgres) has lower-case column names.
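
For reference, Redshift/Postgres folds unquoted identifiers to lower case, while quoted identifiers preserve case, so these two statements refer to different columns (a sketch against the lower-cased target table):

SELECT referenceId FROM account_buyertransaction;    -- resolves to the column "referenceid"
SELECT "referenceId" FROM account_buyertransaction;  -- requires a column literally named "referenceId"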

Here are the logs for an Embulk run. Notice that the CREATE TABLE statement has the two fields with lower-case names. Also notice that the COPY command does not include the two fields; it only has the columns whose names are all lower-case.

2016-04-04 22:29:50.800 +0000 INFO: SQL: DROP TABLE IF EXISTS "account_buy_33cbe340_bl_tmp002"
...
2016-04-04 22:29:51.413 +0000 INFO: SQL: CREATE TABLE IF NOT EXISTS "account_buy_33cbe340_bl_tmp002" ("id" INT4, "session_id" VARCHAR(192), "referenceid" VARCHAR(96), "paymentmerchant" VARCHAR(96), "status" INT2, "creation_date" TIMESTAMP, "user_id" INT8, "payment_reason" VARCHAR(256), "payment_reason_hash" VARCHAR(256))
...
2016-04-04 22:29:53.443 +0000 INFO: Copy SQL: COPY "account_buy_33cbe340_bl_tmp002" ("id", "session_id", "status", "creation_date", "user_id", "payment_reason", "payment_reason_hash") ? GZIP DELIMITER '\t' NULL '\N' ESCAPE TRUNCATECOLUMNS ACCEPTINVCHARS STATUPDATE OFF COMPUPDATE OFF

OracleOutputPluginTest#testInsertDirectDirectMethod fails

Caused by: java.lang.AssertionError
    at oracle.jdbc.driver.OraclePreparedStatement.executeBatch(OraclePreparedStatement.java:12146)
    at oracle.jdbc.driver.OracleStatementWrapper.executeBatch(OracleStatementWrapper.java:246)
    at org.embulk.output.jdbc.StandardBatchInsert.flush(StandardBatchInsert.java:74)
    at org.embulk.output.jdbc.StandardBatchInsert.finish(StandardBatchInsert.java:87)
    at org.embulk.output.jdbc.AbstractJdbcOutputPlugin$PluginPageOutput$4.run(AbstractJdbcOutputPlugin.java:976)
    at org.embulk.output.jdbc.AbstractJdbcOutputPlugin$RetryableSQLExecution.call(AbstractJdbcOutputPlugin.java:1070)
    at org.embulk.output.jdbc.AbstractJdbcOutputPlugin$RetryableSQLExecution.call(AbstractJdbcOutputPlugin.java:1058)
    at org.embulk.spi.util.RetryExecutor.run(RetryExecutor.java:100)
    at org.embulk.spi.util.RetryExecutor.runInterruptible(RetryExecutor.java:77)
    at org.embulk.output.jdbc.AbstractJdbcOutputPlugin.withRetry(AbstractJdbcOutputPlugin.java:1042)
    at org.embulk.output.jdbc.AbstractJdbcOutputPlugin.withRetry(AbstractJdbcOutputPlugin.java:1035)
    at org.embulk.output.jdbc.AbstractJdbcOutputPlugin$PluginPageOutput.finish(AbstractJdbcOutputPlugin.java:972)
    at org.embulk.exec.LocalExecutorPlugin$ScatterTransactionalPageOutput.finish(LocalExecutorPlugin.java:496)
    at org.embulk.spi.PageBuilder.finish(PageBuilder.java:244)
    at org.embulk.standards.CsvParserPlugin.run(CsvParserPlugin.java:393)
    at org.embulk.spi.FileInputRunner.run(FileInputRunner.java:154)
    at org.embulk.exec.LocalExecutorPlugin$ScatterExecutor.runInputTask(LocalExecutorPlugin.java:294)
    at org.embulk.exec.LocalExecutorPlugin$ScatterExecutor.access$000(LocalExecutorPlugin.java:212)
    at org.embulk.exec.LocalExecutorPlugin$ScatterExecutor$1.call(LocalExecutorPlugin.java:257)
    at org.embulk.exec.LocalExecutorPlugin$ScatterExecutor$1.call(LocalExecutorPlugin.java:253)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    ... 3 more

SSL value always true

Following the merge of #117 and the release of 0.6.2, my PR seems to be faulty.

It seems that the SSL value of the enum is always set to verify, leading to the property ssl=true in the connection string.

I can't manage to debug this as the command embulk run -L /Users/antoine/Documents/embulk-output-jdbc/embulk-output-redshift/build template.yml.liquid complains about a missing gemspec file.

I think that this value (https://github.com/embulk/embulk-output-jdbc/blob/master/embulk-output-redshift/src/main/java/org/embulk/output/RedshiftOutputPlugin.java#L70) should be "\"disable\"", but that doesn't explain why the value always seems to be set to verify. The mapping between a string in the configuration file and the Ssl enum does not seem to work, as putting ssl: foo doesn't throw an exception.

Any help would be appreciated, as the latest released version currently breaks for anyone who did not import the Redshift certificate manually, since it tries to verify it.

Partial delete option before insert/merge

Here's the use case: doing an incremental update of all the records that changed on day X. embulk-output-jdbc supports this now, except that it is not idempotent. If we want to reload old data cleanly, we have to do manual DELETEs. It seems that, to support incremental updates in an idempotent way, the plugin should run a "DELETE ... WHERE ..." clause before the insert or merge phase.

output:
  type: jdbc
  ...
  delete_where: modified_date between {{ env.date1 }} and {{ env.date2 }}
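
Under that option, the plugin would run something like the following before loading (a sketch only; the table name and dates are placeholders for the values supplied by the config above):

DELETE FROM target_table
WHERE modified_date BETWEEN '2016-01-01' AND '2016-01-02';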

Why not just do a DELETE in advance? We do this in our old software. One argument is that the DELETE should be part of the transaction that supplies the new records.

If I post this, are you folks interested? I don't have the facilities to test this on all of the supported DBs; I'll be testing on MySQL and Redshift.

Invalid INSERT INTO statement is executed when no record exists

If the file input plugin does not read any records because of the last_path parameter, the postgresql output plugin (and perhaps other plugins built on the jdbc plugin) throws the following exception.

jarvis% embulk run psql-test-config.yml
2015-07-02 18:45:14.624 +0900: Embulk v0.6.16
2015-07-02 18:45:17.163 +0900 [INFO] (transaction): Loaded plugin embulk-output-postgresql (0.3.0)
2015-07-02 18:45:17.244 +0900 [INFO] (transaction): Listing local files at directory '/Users/usr0101931/devel/psql-test/csv' filtering filename by prefix 'sample_'
2015-07-02 18:45:17.254 +0900 [INFO] (transaction): Loading files []
2015-07-02 18:45:17.415 +0900 [INFO] (transaction): Connecting to jdbc:postgresql://localhost:5432/psql_test options {user=usr0101931, tcpKeepAlive=true, loginTimeout=300, socketTimeout=1800}
2015-07-02 18:45:17.542 +0900 [INFO] (transaction): SQL: SET search_path TO "public"
2015-07-02 18:45:17.546 +0900 [INFO] (transaction): > 0.00 seconds
2015-07-02 18:45:17.547 +0900 [INFO] (transaction): Using insert mode
2015-07-02 18:45:17.724 +0900 [INFO] (transaction): Connecting to jdbc:postgresql://localhost:5432/psql_test options {user=usr0101931, tcpKeepAlive=true, loginTimeout=300, socketTimeout=28800}
2015-07-02 18:45:17.739 +0900 [INFO] (transaction): SQL: SET search_path TO "public"
2015-07-02 18:45:17.740 +0900 [INFO] (transaction): > 0.00 seconds
2015-07-02 18:45:17.741 +0900 [INFO] (transaction): SQL: INSERT INTO "sample" ("id", "account", "time", "purchase", "comment")
2015-07-02 18:45:17.757 +0900 [ERROR] (transaction): Operation failed (0:42601)
java.lang.RuntimeException: org.postgresql.util.PSQLException: ERROR: syntax error at end of input
  Position: 71
    at org.embulk.output.jdbc.AbstractJdbcOutputPlugin.commit(org/embulk/output/jdbc/AbstractJdbcOutputPlugin.java:355)
    at org.embulk.output.jdbc.AbstractJdbcOutputPlugin.transaction(org/embulk/output/jdbc/AbstractJdbcOutputPlugin.java:292)
    at org.embulk.exec.BulkLoader$4$1$1.transaction(org/embulk/exec/BulkLoader.java:493)
    at org.embulk.exec.LocalExecutorPlugin.transaction(org/embulk/exec/LocalExecutorPlugin.java:37)
    at org.embulk.exec.BulkLoader$4$1.run(org/embulk/exec/BulkLoader.java:489)
    at org.embulk.spi.util.Filters$RecursiveControl.transaction(org/embulk/spi/util/Filters.java:97)
    at org.embulk.spi.util.Filters.transaction(org/embulk/spi/util/Filters.java:50)
    at org.embulk.exec.BulkLoader$4.run(org/embulk/exec/BulkLoader.java:484)
    at org.embulk.spi.FileInputRunner$RunnerControl$1$1.run(org/embulk/spi/FileInputRunner.java:117)
    at org.embulk.standards.CsvParserPlugin.transaction(org/embulk/standards/CsvParserPlugin.java:121)
    at org.embulk.spi.FileInputRunner$RunnerControl$1.run(org/embulk/spi/FileInputRunner.java:111)
    at org.embulk.spi.util.Decoders$RecursiveControl.transaction(org/embulk/spi/util/Decoders.java:77)
    at org.embulk.spi.util.Decoders$RecursiveControl$1.run(org/embulk/spi/util/Decoders.java:73)
    at org.embulk.standards.GzipFileDecoderPlugin.transaction(org/embulk/standards/GzipFileDecoderPlugin.java:30)
    at org.embulk.spi.util.Decoders$RecursiveControl.transaction(org/embulk/spi/util/Decoders.java:68)
    at org.embulk.spi.util.Decoders.transaction(org/embulk/spi/util/Decoders.java:33)
    at org.embulk.spi.FileInputRunner$RunnerControl.run(org/embulk/spi/FileInputRunner.java:108)
    at org.embulk.standards.LocalFileInputPlugin.resume(org/embulk/standards/LocalFileInputPlugin.java:80)
    at org.embulk.standards.LocalFileInputPlugin.transaction(org/embulk/standards/LocalFileInputPlugin.java:70)
    at org.embulk.spi.FileInputRunner.transaction(org/embulk/spi/FileInputRunner.java:63)
    at org.embulk.exec.BulkLoader.doRun(org/embulk/exec/BulkLoader.java:480)
    at org.embulk.exec.BulkLoader.access$100(org/embulk/exec/BulkLoader.java:36)
    at org.embulk.exec.BulkLoader$1.run(org/embulk/exec/BulkLoader.java:342)
    at org.embulk.exec.BulkLoader$1.run(org/embulk/exec/BulkLoader.java:338)
    at org.embulk.spi.Exec.doWith(org/embulk/spi/Exec.java:24)
    at org.embulk.exec.BulkLoader.run(org/embulk/exec/BulkLoader.java:338)
    at org.embulk.command.Runner.run(org/embulk/command/Runner.java:149)
    at org.embulk.command.Runner.main(org/embulk/command/Runner.java:101)
    at java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:497)
    at RUBY.run(file:/Users/usr0101931/.embulk/bin/embulk!/embulk/command/embulk_run.rb:351)
    at classpath_3a_embulk.command.embulk.(root)(classpath:embulk/command/embulk.rb:47)
    at classpath_3a_embulk.command.embulk.(root)(classpath_3a_embulk/command/classpath:embulk/command/embulk.rb:47)
    at org.embulk.cli.Main.main(org/embulk/cli/Main.java:11)
Caused by: org.postgresql.util.PSQLException: ERROR: syntax error at end of input
  Position: 71
    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2270)
    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1998)
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:570)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:406)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:334)
    at org.embulk.output.jdbc.JdbcOutputConnection.executeUpdate(JdbcOutputConnection.java:460)
    at org.embulk.output.jdbc.JdbcOutputConnection.collectInsert(JdbcOutputConnection.java:279)
    at org.embulk.output.jdbc.AbstractJdbcOutputPlugin.doCommit(AbstractJdbcOutputPlugin.java:588)
    at org.embulk.output.jdbc.AbstractJdbcOutputPlugin$2.run(AbstractJdbcOutputPlugin.java:348)
    at org.embulk.output.jdbc.AbstractJdbcOutputPlugin$8.call(AbstractJdbcOutputPlugin.java:892)
    at org.embulk.output.jdbc.AbstractJdbcOutputPlugin$8.call(AbstractJdbcOutputPlugin.java:889)
    at org.embulk.spi.util.RetryExecutor.run(RetryExecutor.java:100)
    at org.embulk.spi.util.RetryExecutor.runInterruptible(RetryExecutor.java:77)
    at org.embulk.output.jdbc.AbstractJdbcOutputPlugin.withRetry(AbstractJdbcOutputPlugin.java:885)
    at org.embulk.output.jdbc.AbstractJdbcOutputPlugin.withRetry(AbstractJdbcOutputPlugin.java:878)
    at org.embulk.output.jdbc.AbstractJdbcOutputPlugin.commit(AbstractJdbcOutputPlugin.java:343)
    at org.embulk.output.jdbc.AbstractJdbcOutputPlugin.transaction(AbstractJdbcOutputPlugin.java:292)
    at org.embulk.exec.BulkLoader$4$1$1.transaction(BulkLoader.java:493)
    at org.embulk.exec.LocalExecutorPlugin.transaction(LocalExecutorPlugin.java:37)
    at org.embulk.exec.BulkLoader$4$1.run(BulkLoader.java:489)
    at org.embulk.spi.util.Filters$RecursiveControl.transaction(Filters.java:97)
    at org.embulk.spi.util.Filters.transaction(Filters.java:50)
    at org.embulk.exec.BulkLoader$4.run(BulkLoader.java:484)
    at org.embulk.spi.FileInputRunner$RunnerControl$1$1.run(FileInputRunner.java:117)
    at org.embulk.standards.CsvParserPlugin.transaction(CsvParserPlugin.java:121)
    at org.embulk.spi.FileInputRunner$RunnerControl$1.run(FileInputRunner.java:111)
    at org.embulk.spi.util.Decoders$RecursiveControl.transaction(Decoders.java:77)
    at org.embulk.spi.util.Decoders$RecursiveControl$1.run(Decoders.java:73)
    at org.embulk.standards.GzipFileDecoderPlugin.transaction(GzipFileDecoderPlugin.java:30)
    at org.embulk.spi.util.Decoders$RecursiveControl.transaction(Decoders.java:68)
    at org.embulk.spi.util.Decoders.transaction(Decoders.java:33)
    at org.embulk.spi.FileInputRunner$RunnerControl.run(FileInputRunner.java:108)
    at org.embulk.standards.LocalFileInputPlugin.resume(LocalFileInputPlugin.java:80)
    at org.embulk.standards.LocalFileInputPlugin.transaction(LocalFileInputPlugin.java:70)
    at org.embulk.spi.FileInputRunner.transaction(FileInputRunner.java:63)
    at org.embulk.exec.BulkLoader.doRun(BulkLoader.java:480)
    at org.embulk.exec.BulkLoader.access$100(BulkLoader.java:36)
    at org.embulk.exec.BulkLoader$1.run(BulkLoader.java:342)
    at org.embulk.exec.BulkLoader$1.run(BulkLoader.java:338)
    at org.embulk.spi.Exec.doWith(Exec.java:24)
    at org.embulk.exec.BulkLoader.run(BulkLoader.java:338)
    at org.embulk.command.Runner.run(Runner.java:149)
    at org.embulk.command.Runner.main(Runner.java:101)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(JavaMethod.java:470)
    at org.jruby.javasupport.JavaMethod.invokeDirect(JavaMethod.java:328)
    at org.jruby.java.invokers.InstanceMethodInvoker.call(InstanceMethodInvoker.java:71)
    at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:346)
    at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:204)
    at org.jruby.ast.CallTwoArgNode.interpret(CallTwoArgNode.java:59)
    at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:105)
    at org.jruby.ast.RescueNode.executeBody(RescueNode.java:221)
    at org.jruby.ast.RescueNode.interpret(RescueNode.java:116)
    at org.jruby.ast.BeginNode.interpret(BeginNode.java:83)
    at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:105)
    at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
    at org.jruby.ast.CaseNode.interpret(CaseNode.java:138)
    at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:105)
    at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
    at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
    at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:182)
    at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:203)
    at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:326)
    at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:170)
    at classpath_3a_embulk.command.embulk.__file__(classpath:embulk/command/embulk.rb:47)
    at classpath_3a_embulk.command.embulk.load(classpath:embulk/command/embulk.rb)
    at org.jruby.Ruby.runScript(Ruby.java:866)
    at org.jruby.Ruby.runScript(Ruby.java:859)
    at org.jruby.Ruby.runNormally(Ruby.java:728)
    at org.jruby.Ruby.runFromMain(Ruby.java:577)
    at org.jruby.Main.doRunFromMain(Main.java:395)
    at org.jruby.Main.internalRun(Main.java:290)
    at org.jruby.Main.run(Main.java:217)
    at org.jruby.Main.main(Main.java:197)
    at org.embulk.cli.Main.main(Main.java:11)

Error: org.postgresql.util.PSQLException: ERROR: syntax error at end of input
  Position: 71

The cause of the exception is an invalid INSERT INTO statement.

2015-07-02 18:45:17.741 +0900 [INFO] (transaction): SQL: INSERT INTO "sample" ("id", "account", "time", "purchase", "comment")

I could avoid the exception by inserting the following code at the top of the doCommit method of the AbstractJdbcOutputPlugin class. However, I don't know whether this modification is OK for the REPLACE option.

    protected void doCommit(JdbcOutputConnection con, PluginTask task, int taskCount)
        throws SQLException
    {
        // Skip the final INSERT INTO ... SELECT entirely when no intermediate table
        // was created, i.e. when the input produced no records.
        if (task.getIntermediateTables().get().size() == 0) {
            return;
        }

Suggestion: Option to control how merge_direct updates existing rows

When using merge_direct on MySQL, there are cases where some columns must not be updated, or must be updated only conditionally.
I implemented an option to manually specify the ON DUPLICATE KEY UPDATE clause (patch). This option gives full control over how rows are updated.
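
A minimal sketch of the kind of statement such an option would allow, assuming a hypothetical table target where only status and updated_at may be overwritten:

INSERT INTO target (id, status, created_at, updated_at)
VALUES (1, 'active', '2016-01-01', '2016-01-02')
ON DUPLICATE KEY UPDATE
    status     = VALUES(status),
    updated_at = VALUES(updated_at);  -- created_at is intentionally left untouched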

The drawback is that this option cannot cover other RDBMSs like PostgreSQL.
I would appreciate it if my change were merged, but I am afraid such an approach may not be suitable for this project.
Let me know what the maintainers think. Thanks!

Postgres upsert merge

For discussion:
Postgres 9.5 supports UPSERT inserts, i.e.

INSERT INTO test (some_key) VALUES ('a')
    ON CONFLICT (some_key) DO UPDATE SET some_val = EXCLUDED.some_val WHERE test.something = 1;

Would this be interesting to proceed with? I've built a solution which replaces the current merge mechanism with upsert and introduces a merge_condition parameter (https://github.com/kakoni/embulk-output-jdbc).

Connection of StandardBatchInsert not closed

The connection used by StandardBatchInsert isn't closed.
In the close() method of StandardBatchInsert, there is a comment saying 'caller should close the connection'.
But the connection is a private field with no getter method, so a caller can't close it.
Why doesn't the close() method close the connection?

When loading data into Oracle, a "table or view does not exist" error is raised although the table actually exists

I am trying to load data from Oracle to Oracle on different machines. I deleted the ~/.embulk/jruby* directories and reinstalled embulk-input-oracle, and that works. Now I have encountered another problem.
I configured the .yml file as below:

in:
  type: oracle
  driver_path: /oracle/11.2.4/jdbc/lib/ojdbc6.jar
  host: 10.89.13.56
  user: sync
  password: m123
  database: desync
  query: |
    select * from sync.test
out: {type: oracle, driver_path: /oracle/11.2.4/jdbc/lib/ojdbc6.jar, host: 10.89.13.57,
  user: flume, password: m123, database: flume, table: test, mode: insert, insert_method: normal}

host 10.89.13.56
SQL> select * from sync.test;

COL

a

SQL>
host 10.89.13.57
SQL> select * from flume.test
2 ;

no rows selected

SQL>

When I run embulk run, it raises the error below:
[root@localhost embulk-master]# embulk run newl.yml
2015-08-23 13:27:14.000 +0800: Embulk v0.7.2
2015-08-23 13:27:16.089 +0800 INFO: Loaded plugin embulk-input-oracle (0.6.0)
2015-08-23 13:27:16.124 +0800 INFO: Loaded plugin embulk-output-oracle (0.4.1)
2015-08-23 13:27:16.745 +0800 INFO: Connecting to jdbc:oracle:thin:@10.89.13.57:1521:flume options {user=flume}
2015-08-23 13:27:17.027 +0800 INFO: Using insert mode
2015-08-23 13:27:17.086 +0800 INFO: SQL: CREATE TABLE "test_5d959b400d59f80_bl_tmp000" ("COL" CLOB)
2015-08-23 13:27:17.103 +0800 INFO: > 0.02 seconds
2015-08-23 13:27:17.147 +0800 INFO: {done: 0 / 1, running: 0}
2015-08-23 13:27:17.194 +0800 INFO: Connecting to jdbc:oracle:thin:@10.89.13.57:1521:flume options {user=flume}
2015-08-23 13:27:17.198 +0800 INFO: Connecting to jdbc:oracle:thin:@10.89.13.57:1521:flume options {user=flume}
2015-08-23 13:27:17.239 +0800 INFO: Prepared SQL: INSERT INTO "test_5d959b400d59f80_bl_tmp000" ("COL") VALUES (?)
2015-08-23 13:27:17.242 +0800 WARN: An output plugin is compiled with old Embulk plugin API. Please update the plugin version using "embulk gem install" command, or contact a developer of the plugin to upgrade the plugin code using "embulk migrate" command: class org.embulk.output.jdbc.AbstractJdbcOutputPlugin$PluginPageOutput
2015-08-23 13:27:17.326 +0800 INFO: SQL: select * from sync.test

2015-08-23 13:27:17.337 +0800 INFO: > 0.01 seconds
2015-08-23 13:27:17.346 +0800 INFO: Loading 1 rows
2015-08-23 13:27:17.388 +0800 INFO: > 0.04 seconds (loaded 1 rows in total)
2015-08-23 13:27:17.389 +0800 INFO: {done: 1 / 1, running: 0}
2015-08-23 13:27:17.389 +0800 INFO: Connecting to jdbc:oracle:thin:@10.89.13.57:1521:flume options {user=flume}
2015-08-23 13:27:17.419 +0800 INFO: SQL: INSERT INTO "test" ("COL") SELECT "COL" FROM "test_5d959b400d59f80_bl_tmp000"
2015-08-23 13:27:17.427 +0800 ERROR: Operation failed (942:42000)
2015-08-23 13:27:17.436 +0800 INFO: Connecting to jdbc:oracle:thin:@10.89.13.57:1521:flume options {user=flume}
2015-08-23 13:27:17.466 +0800 INFO: SQL: DROP TABLE "test_5d959b400d59f80_bl_tmp000"
2015-08-23 13:27:17.505 +0800 INFO: > 0.04 seconds
org.embulk.exec.PartialExecutionException: java.lang.RuntimeException: java.sql.SQLSyntaxErrorException: ORA-00942: table or view does not exist

at org.embulk.exec.BulkLoader$LoaderState.buildPartialExecuteException(org/embulk/exec/BulkLoader.java:328)
at org.embulk.exec.BulkLoader.doRun(org/embulk/exec/BulkLoader.java:526)
at org.embulk.exec.BulkLoader.access$100(org/embulk/exec/BulkLoader.java:33)
at org.embulk.exec.BulkLoader$1.run(org/embulk/exec/BulkLoader.java:339)
at org.embulk.exec.BulkLoader$1.run(org/embulk/exec/BulkLoader.java:335)
at org.embulk.spi.Exec.doWith(org/embulk/spi/Exec.java:25)
at org.embulk.exec.BulkLoader.run(org/embulk/exec/BulkLoader.java:335)
at org.embulk.EmbulkEmbed.run(org/embulk/EmbulkEmbed.java:179)
at java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:606)
at RUBY.run(/root/.embulk/bin/embulk!/embulk/runner.rb:77)
at RUBY.run(/root/.embulk/bin/embulk!/embulk/command/embulk_run.rb:274)
at RUBY.<top>(/root/.embulk/bin/embulk!/embulk/command/embulk_main.rb:2)
at org.jruby.RubyKernel.require(org/jruby/RubyKernel.java:940)
at RUBY.(root)(uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/rubygems/core_ext/kernel_require.rb:1)
at root.$_dot_embulk.bin.embulk.embulk.command.embulk_bundle.<top>(file:/root/.embulk/bin/embulk!/embulk/command/embulk_bundle.rb:55)
at java.lang.invoke.MethodHandle.invokeWithArguments(java/lang/invoke/MethodHandle.java:599)
at org.embulk.cli.Main.main(org/embulk/cli/Main.java:20)

Caused by: java.lang.RuntimeException: java.sql.SQLSyntaxErrorException: ORA-00942: table or view does not exist

at org.embulk.output.jdbc.AbstractJdbcOutputPlugin.commit(AbstractJdbcOutputPlugin.java:352)
at org.embulk.output.jdbc.AbstractJdbcOutputPlugin.transaction(AbstractJdbcOutputPlugin.java:289)
at org.embulk.exec.BulkLoader$4$1$1.transaction(BulkLoader.java:490)
at org.embulk.exec.LocalExecutorPlugin.transaction(LocalExecutorPlugin.java:36)
at org.embulk.exec.BulkLoader$4$1.run(BulkLoader.java:486)
at org.embulk.spi.util.Filters$RecursiveControl.transaction(Filters.java:96)
at org.embulk.spi.util.Filters.transaction(Filters.java:49)
at org.embulk.exec.BulkLoader$4.run(BulkLoader.java:481)
at org.embulk.input.jdbc.AbstractJdbcInputPlugin.transaction(AbstractJdbcInputPlugin.java:147)
at org.embulk.plugin.compat.InputPluginWrapper.transaction(InputPluginWrapper.java:57)
at org.embulk.exec.BulkLoader.doRun(BulkLoader.java:477)
at org.embulk.exec.BulkLoader.access$100(BulkLoader.java:33)
at org.embulk.exec.BulkLoader$1.run(BulkLoader.java:339)
at org.embulk.exec.BulkLoader$1.run(BulkLoader.java:335)
at org.embulk.spi.Exec.doWith(Exec.java:25)
at org.embulk.exec.BulkLoader.run(BulkLoader.java:335)
at org.embulk.EmbulkEmbed.run(EmbulkEmbed.java:179)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(JavaMethod.java:457)
at org.jruby.javasupport.JavaMethod.invokeDirect(JavaMethod.java:318)
at org.jruby.java.invokers.InstanceMethodInvoker.call(InstanceMethodInvoker.java:45)
at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:313)
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:163)
at org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:289)
at org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:77)
at org.jruby.internal.runtime.methods.MixedModeIRMethod.INTERPRET_METHOD(MixedModeIRMethod.java:128)
at org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:114)
at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:273)
at org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:79)
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:83)
at org.jruby.ir.instructions.CallBase.interpret(CallBase.java:419)
at org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:321)
at org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:77)
at org.jruby.ir.interpreter.InterpreterEngine.interpret(InterpreterEngine.java:82)
at org.jruby.internal.runtime.methods.MixedModeIRMethod.INTERPRET_METHOD(MixedModeIRMethod.java:198)
at org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:184)
at org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:201)
at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:313)
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:163)
at org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:289)
at org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:77)
at org.jruby.ir.interpreter.Interpreter.INTERPRET_ROOT(Interpreter.java:116)
at org.jruby.ir.interpreter.Interpreter.execute(Interpreter.java:103)
at org.jruby.ir.interpreter.Interpreter.execute(Interpreter.java:32)
at org.jruby.ir.IRTranslator.execute(IRTranslator.java:42)
at org.jruby.Ruby.runInterpreter(Ruby.java:837)
at org.jruby.Ruby.loadFile(Ruby.java:2901)
at org.jruby.runtime.load.LibrarySearcher$ResourceLibrary.load(LibrarySearcher.java:245)
at org.jruby.runtime.load.LibrarySearcher$FoundLibrary.load(LibrarySearcher.java:35)
at org.jruby.runtime.load.LoadService.tryLoadingLibraryOrScript(LoadService.java:895)
at org.jruby.runtime.load.LoadService.smartLoadInternal(LoadService.java:540)
at org.jruby.runtime.load.LoadService.requireCommon(LoadService.java:425)
at org.jruby.runtime.load.LoadService.require(LoadService.java:391)
at org.jruby.RubyKernel.requireCommon(RubyKernel.java:947)
at org.jruby.RubyKernel.require19(RubyKernel.java:940)
at org.jruby.RubyKernel$INVOKER$s$1$0$require19.call(RubyKernel$INVOKER$s$1$0$require19.gen)
at org.jruby.internal.runtime.methods.JavaMethod$JavaMethodOneOrNBlock.call(JavaMethod.java:364)
at org.jruby.internal.runtime.methods.AliasMethod.call(AliasMethod.java:61)
at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:313)
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:163)
at org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:289)
at org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:77)
at org.jruby.ir.interpreter.InterpreterEngine.interpret(InterpreterEngine.java:82)
at org.jruby.internal.runtime.methods.MixedModeIRMethod.INTERPRET_METHOD(MixedModeIRMethod.java:198)
at org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:184)
at org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:201)
at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:313)
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:163)
at root.$_dot_embulk.bin.embulk.embulk.command.embulk_bundle.invokeOther74:require(file:/root/.embulk/bin/embulk!/embulk/command/embulk_bundle.rb)
at root.$_dot_embulk.bin.embulk.embulk.command.embulk_bundle.RUBY$script(file:/root/.embulk/bin/embulk!/embulk/command/embulk_bundle.rb:55)
at java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:599)
at org.jruby.ir.Compiler$1.load(Compiler.java:111)
at org.jruby.Ruby.runScript(Ruby.java:821)
at org.jruby.Ruby.runScript(Ruby.java:813)
at org.jruby.Ruby.runNormally(Ruby.java:751)
at org.jruby.Ruby.runFromMain(Ruby.java:573)
at org.jruby.Main.doRunFromMain(Main.java:403)
at org.jruby.Main.internalRun(Main.java:298)
at org.jruby.Main.run(Main.java:225)
at org.jruby.Main.main(Main.java:197)
at org.embulk.cli.Main.main(Main.java:20)

Caused by: java.sql.SQLSyntaxErrorException: ORA-00942: table or view does not exist

at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:447)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:951)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:513)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:227)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:531)
at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:195)
at oracle.jdbc.driver.T4CStatement.executeForRows(T4CStatement.java:1036)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1336)
at oracle.jdbc.driver.OracleStatement.executeUpdateInternal(OracleStatement.java:1845)
at oracle.jdbc.driver.OracleStatement.executeUpdate(OracleStatement.java:1810)
at oracle.jdbc.driver.OracleStatementWrapper.executeUpdate(OracleStatementWrapper.java:294)
at org.embulk.output.jdbc.JdbcOutputConnection.executeUpdate(JdbcOutputConnection.java:460)
at org.embulk.output.jdbc.JdbcOutputConnection.collectInsert(JdbcOutputConnection.java:279)
at org.embulk.output.jdbc.AbstractJdbcOutputPlugin.doCommit(AbstractJdbcOutputPlugin.java:592)
at org.embulk.output.jdbc.AbstractJdbcOutputPlugin$2.run(AbstractJdbcOutputPlugin.java:345)
at org.embulk.output.jdbc.AbstractJdbcOutputPlugin$8.call(AbstractJdbcOutputPlugin.java:901)
at org.embulk.output.jdbc.AbstractJdbcOutputPlugin$8.call(AbstractJdbcOutputPlugin.java:898)
at org.embulk.spi.util.RetryExecutor.run(RetryExecutor.java:100)
at org.embulk.spi.util.RetryExecutor.runInterruptible(RetryExecutor.java:77)
at org.embulk.output.jdbc.AbstractJdbcOutputPlugin.withRetry(AbstractJdbcOutputPlugin.java:894)
at org.embulk.output.jdbc.AbstractJdbcOutputPlugin.withRetry(AbstractJdbcOutputPlugin.java:887)
at org.embulk.output.jdbc.AbstractJdbcOutputPlugin.commit(AbstractJdbcOutputPlugin.java:340)
... 83 more

Error: java.lang.RuntimeException: java.sql.SQLSyntaxErrorException: ORA-00942: table or view does not exist

I don't know why it reports that the table or view does not exist, because the test table actually exists in both Oracle databases.
Thanks for your reply.

Redshift S3 permissions more complex than needed

I could not get the STS Federated session key feature to work, so I added an option to just use the given S3 access & secret keys.

We do not use this feature at all in our (large) AWS deployment, and our very competent security people don't think it's all that useful. I'll post a pull request when I have time.

Skip error records option

Support an option to skip error records instead of aborting.
Error records should be written to a log file (or to another OutputPlugin?).
Users may also want to know the number of error records.

Unique temporary files are not unique

If I run several Embulk copy jobs against the same target schema on a Redshift instance, I often get an error that a tmp table is missing a field. This is because two Embulk jobs are using the exact same tmp file key, and so one job is trying to use another job's tmp table.

Please change the tmp table name generator to be more unique. Maybe include the process ID value?
