forcedotcom / dataloader
Salesforce Data Loader

License: BSD 3-Clause "New" or "Revised" License


dataloader's Introduction

Feature requests

Submit your feature request as an idea on Salesforce IdeaExchange. Make sure to use "Platform / Data Import & Integration" as the category for your idea.

Prerequisites

Java Runtime Environment (JRE) is required to install and run Data Loader. Review the installation instructions of the latest release for the required JRE version.

Installing Data Loader

Salesforce officially supports Data Loader for Windows and macOS. All other operating systems are unsupported. The list of supported macOS and Windows versions and CPU architecture for a released version of Data Loader is provided in the Release Notes for that release.

Follow the installation instructions for macOS and Windows.

Installing on Linux:

  • Extract contents of Data Loader zip file
  • Rename install.command as install.sh
  • Run the command in a shell terminal: ./install.sh

Running Data Loader in GUI mode

For running Data Loader on macOS or Windows, follow the instructions.

For running Data Loader on Linux, type the following command in a command shell:

./dataloader.sh

OR

java -jar dataloader-x.y.z.jar

Consult the documentation for the details of how to configure and use Data Loader.

Running Data Loader in Batch mode

Batch mode is officially supported only on Windows. To run Data Loader in Batch mode on Windows, see Batch mode for Windows.

Execute the following command on Mac (Replace dataloader_console with dataloader.sh on Linux):

./dataloader_console <config dir containing process-conf.xml and config.properties files> <process name> run.mode=batch

Alternatively, execute one of the following commands:

java -jar dataloader-x.y.z.jar <config dir containing process-conf.xml and config.properties files> <process name> run.mode=batch

OR

java -jar dataloader-x.y.z.jar salesforce.config.dir=<config dir containing process-conf.xml and config.properties files> process.name=<process name> run.mode=batch 
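The batch-mode commands above expect a process-conf.xml in the config directory. A minimal sketch of such a file is shown below; the bean id, file names, and entry values are illustrative assumptions, and the full set of process.* and sfdc.* settings is described in the Data Loader documentation:

```xml
<!-- Hypothetical process definition: one process named "csvAccountInsert". -->
<beans>
    <bean id="csvAccountInsert" class="com.salesforce.dataloader.process.ProcessRunner" singleton="false">
        <description>Inserts accounts from a CSV file.</description>
        <property name="name" value="csvAccountInsert"/>
        <property name="configOverrideMap">
            <map>
                <entry key="sfdc.entity" value="Account"/>
                <entry key="process.operation" value="insert"/>
                <entry key="dataAccess.type" value="csvRead"/>
                <entry key="dataAccess.name" value="accounts.csv"/>
                <entry key="process.mappingFile" value="accountInsertMap.sdl"/>
            </map>
        </property>
    </bean>
</beans>
```

With this file in the config directory, the process above would be invoked as ./dataloader.sh <config dir> csvAccountInsert run.mode=batch.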

Commands to create an encryption key file, encrypt a password, or decrypt a password

See Batch mode for Windows for the detailed steps to create an encryption key file, encrypt a password, or decrypt a password on Windows.

Batch mode requires specifying an encrypted password in process-conf.xml, config.properties, or as a command line argument. The first step in encrypting a password is to create an encryption key file on Mac or Linux.

Execute the following command to generate an encryption key file on Mac (Replace dataloader_console with dataloader.sh on Linux):

./dataloader_console -k [<encryption key file>]  run.mode=encrypt 

OR

java -jar dataloader-x.y.z.jar -k [<encryption key file>]  run.mode=encrypt 

Execute the following command to encrypt a password on Mac (Replace dataloader_console with dataloader.sh on Linux):

./dataloader_console -e <password in plain text> [<encryption key file>] run.mode=encrypt 

OR

java -jar dataloader-x.y.z.jar -e <password in plain text> [<encryption key file>] run.mode=encrypt

Execute the following command to decrypt a password on Mac (Replace dataloader_console with dataloader.sh on Linux):

./dataloader_console -d <encrypted password> [<encryption key file>] run.mode=encrypt 

OR

java -jar dataloader-x.y.z.jar -d <encrypted password> [<encryption key file>] run.mode=encrypt

NOTE: these commands use the default encryption key file ${HOME}/.dataloader/dataloader.key if an encryption key file is not specified.

Reporting an issue

Collect the following information before reaching out to Salesforce Support or reporting an issue on GitHub:

  • Data Loader version, desktop operating system type and version, operation being performed, and screenshots of the issue.
  • Config files: config.properties, log4j2.properties or log-conf.xml, process-conf.xml.
  • log file:
    • Set the log level to “debug” in the Advanced Settings dialog (v58 and later). If the log level setting is not visible in the Advanced Settings dialog (v57 or earlier), or if it cannot be changed there, set the "root" log level to "debug" in log-conf.xml.
    • Rerun data loader to reproduce the issue.
    • Send the log output located in the file shown in the “Logging output file” field of the Advanced Settings dialog of your Data Loader. The logging output file info is shown in the Advanced Settings dialog as of v58.
    • If you are using v57 or earlier, the default location of the debug log is <tempdir>/sdl.log
    • The default tempdir is %USERPROFILE%\AppData\Local\Temp on Windows
    • The default tempdir is ${TMPDIR} on macOS
  • Provide a sample csv file with as few columns and rows as possible that reproduces the issue.
  • Provide the following information about your org (it is available in the log file if the Data Loader version is newer than 58.0.0, the log level is set to debug, and the user logs in):
    • Org id: Setup >> Company Information >> value of Salesforce organization id field
    • instance: Setup >> Company Information >> value of Instance field
    • User id: follow the instructions in this article.

NOTE: Remove all personal, business-specific, and other sensitive information from the files you share (e.g. config files, log files, screenshots, csv files) before reporting an issue, especially on a public forum such as GitHub.

Building Data Loader

See the value of the "<maven.compiler.release>" property in pom.xml to find out the JDK version to compile with.

    git clone [email protected]:forcedotcom/dataloader.git
    cd dataloader
    git submodule init
    git submodule update
    mvn clean package -DskipTests 
        or
    ./dlbuilder.sh

dataloader_v<x.y.z>.zip will be created in the root directory of the local git clone.

Debugging Data Loader

To run data loader for debugging with an IDE (remote debugging, port 5005), run the following command in the git clone root folder:

./rundl.sh -d

OR

java -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=5005 -cp target/dataloader-x.y.z.jar com.salesforce.dataloader.process.DataLoaderRunner salesforce.config.dir=./configs

Testing Data Loader

See the testing wiki

Resources

For more information, see the Salesforce Data Loader Guide.

Questions can be directed to the open source forum.

Dependencies and plugins

Update SWT by running python3 <root of the git clone>/updateSWT.py <root of the git clone>. Requires Python 3.9 or later.

All other dependencies and plugins are downloaded by Maven from the central Maven repository. Run mvn versions:display-dependency-updates to list the dependencies whose versions specified in pom.xml need an update, and run mvn versions:use-latest-releases to update them. Run mvn versions:display-plugin-updates to see which plugins still need an update, and update their versions in pom.xml manually.

dataloader's People

Contributors

ahalevy, apoorvashenoy, ashitsalesforce, brytonpilling, colin-jarvissfdc, dependabot[bot], diracz, dpham, dsiebold, dsipasseuth, federecio, gnguyen-sfdc, jefflai, jjang-sfdc, jjangsfdc, jjinsfdc, john-brock, jthurst01, kenichi-ando, kinstella, kthompso, matinlotfali, reubencornel, rjmazzeo, superfell, svc-scm, tee-tee-dev, tggagne, vividboarder, xbiansf


dataloader's Issues

Possibility to escape @ character in sqlString property (daodatabaseSqlConfig)

In MySQL the @ character can be used to set dynamic variables, but @ characters present in the sqlString are always parsed as config variables.
For example, this query cannot be executed:
SELECT (@A:=20) as variable FROM table_name;

I think the problem resides in the replaceSqlParams method of com.salesforce.dataloader.dao.database.DatabaseContext:

terminal warning delays load "No value provided for field"

See fixed JAR solution

When performing database-to-salesforce loads, there's a terminal nag holding up the batching process.

[java] 2017-05-18 17:50:56,959 WARN
[execute] visitor.BulkLoadVisitor writeSingleColumn (BulkLoadVisitor.java:222)
- No value provided for field: RecordTypeId
- No value provided for field: RecordTypeId
- No value provided for field: RecordTypeId
- No value provided for field: RecordTypeId
- No value provided for field: RecordTypeId
- No value provided for field: RecordTypeId
(hundreds of thousands of times)

Naturally some of the source data may contain NULL column values; the warning gets streamed to the terminal for each NULL field of each record, delaying the creation of the next Bulk API batch. The output comes from line 222 of BulkLoadVisitor.java, and there is no configuration option to disable it.
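Until the hardcoded warning is addressed, one possible mitigation in releases whose log-conf.xml uses log4j2 is to raise the log level for the emitting class. This is a sketch only: the logger name below is an assumption derived from the class shown in the log output, and the element would go inside the existing <Loggers> section rather than replace it.

```xml
<!-- Sketch for the <Loggers> section of log-conf.xml (log4j2 syntax).
     Logger name is assumed from "visitor.BulkLoadVisitor" in the log line. -->
<Loggers>
    <Logger name="com.salesforce.dataloader.action.visitor.BulkLoadVisitor" level="error"/>
</Loggers>
```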

See reports elsewhere:

How to insert environment variables into process-conf.xml

I was kind of hoping I could use the ${env.variablename} syntax, similar to what's used in Ant's build.xml file.

I have a script that runs my dataloader from the commandline. It looks like:

DrozBook:dataloader tgagne$ cat ~/bin/process 
#!/bin/bash
dataloaderdir=~/dataloader
version=27.0.1
version=30.0.0
version=34.0
version=33.0.0

if [ $# -gt 1 ]; then
    export ARG2=$2
fi

time java \
    -cp $dataloaderdir/dataloader-$version-uber.jar:$dataloaderdir/ojdbc6.jar:$dataloaderdir/postgresql-9.4-1201.jdbc4.jar:$dataloaderdir/sqljdbc4.jar \
    -Dsalesforce.config.dir=`pwd`  \
    com.salesforce.dataloader.process.ProcessRunner process.name=$1 | grep --line-buffered -vi "will be ignored since destination column is empty"

Inside process-conf.xml I wanted to use the argument to change the name of the CSV file I was reading from for an import.

<entry key="dataAccess.name" value="${env.ARG2}" />

But alas, dataloader didn't swallow it.

Caused by: com.salesforce.dataloader.exception.DataAccessObjectInitializationException: File: /Users/tgagne/git/customer/dataloader/${env.ARG2} not found.  Open failed.

The way this is done in build.xml is with the tag

<property environment="env"/>

But sticking it near the top of process-conf.xml just blows-up the stack.

Is there something I'm missing, or is it impossible to do such a thing with process-conf.xml?
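In the absence of built-in ${env.*} expansion, a common workaround is to render process-conf.xml from a template before launching Data Loader. A minimal sketch, assuming Data Loader itself performs no substitution; the @ARG2@ placeholder, template name, and file names are invented for illustration:

```shell
#!/usr/bin/env bash
# Render process-conf.xml from a hypothetical template, substituting an
# environment value, then point -Dsalesforce.config.dir at the result.
set -euo pipefail

workdir=$(mktemp -d)

# A one-line stand-in for the real process-conf.xml template.
cat > "$workdir/process-conf.xml.tmpl" <<'EOF'
<entry key="dataAccess.name" value="@ARG2@" />
EOF

ARG2="accounts.csv"

# Substitute the placeholder with the environment value.
sed "s|@ARG2@|${ARG2}|g" "$workdir/process-conf.xml.tmpl" > "$workdir/process-conf.xml"

cat "$workdir/process-conf.xml"
```

The wrapper script shown earlier could run this substitution step just before invoking ProcessRunner.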

After building on Mac OS X, dataloader complains while parsing process-conf.xml that it can't find process-conf.xml at a mangled path.

After cloning and building dataloader this AM, it complains while parsing that it can't find the path of process-conf.xml, because it has mangled the path. I do not have this problem with the 33.0.0 version of the jar downloaded (and copied) from the installed Windows version. I only get this error using a .jar I built from the GitHub repo.

java -cp /Users/tgagne/dataloader/dataloader-33.0.0-uber.jar:/Users/tgagne/dataloader/ojdbc6.jar -Dsalesforce.config.dir=/Users/tgagne/git/cust-sf-config/sql com.salesforce.dataloader.process.ProcessRunner process.name=importAttribute

2015-04-08 12:15:47,725 INFO [main] process.ProcessConfig getBeanFactory (ProcessConfig.java:103) - Loading process configuration from config file: /Users/tgagne/git/cust-sf-config/sql/process-conf.xml
2015-04-08 12:15:47,872 INFO [main] xml.XmlBeanDefinitionReader loadBeanDefinitions (XmlBeanDefinitionReader.java:315) - Loading XML bean definitions from file [/Users/tgagne/git/cust-sf-config/sql/Users/tgagne/git/cust-sf-config/sql/process-conf.xml]

Notice how the conf directory is duplicated in the file[] above

2015-04-08 12:15:47,874 ERROR [main] process.ProcessConfig getProcessInstance (ProcessConfig.java:96) - Error loading process: importAttribute configuration from config file: /Users/tgagne/git/cust-sf-config/sql/process-conf.xml
org.springframework.beans.factory.BeanDefinitionStoreException: IOException parsing XML document from file [/Users/tgagne/git/cust-sf-config/sql/Users/tgagne/git/cust-sf-config/sql/process-conf.xml]; nested exception is java.io.FileNotFoundException: Users/tgagne/git/cust-sf-config/sql/process-conf.xml (No such file or directory)

Should the new API version be 34.0.0 like the previous one was 33.0.0?

Just forked and tried rebuilding. I'm getting an error on my mac. It's not that I think the version number has anything to do with it. It just may be coincidental.

[INFO] --- osxappbundle-maven-plugin:1.0-alpha-2:bundle (default) @ dataloader ---
[INFO] Setting property: classpath.resource.loader.class => 'org.apache.velocity.runtime.resource.loader.ClasspathResourceLoader'.
[INFO] Setting property: file.resource.loader.class => 'org.apache.velocity.runtime.resource.loader.FileResourceLoader'.
[INFO] Setting property: file.resource.loader.path => ''.
[INFO] Setting property: resource.loader => 'file,classpath'.
[INFO] Building zip: /Users/tgagne/git/dataloader/target/dataloader-34.0-app.zip
[INFO] 
[INFO] --- exec-maven-plugin:1.1:exec (make-pretty-dmg) @ dataloader ---
[INFO] bash: /Users/tgagne/git/dataloader/yoursway-create-dmg/create-dmg: No such file or directory
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 7.531 s
[INFO] Finished at: 2015-04-22T15:13:15-04:00
[INFO] Final Memory: 35M/442M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.1:exec (make-pretty-dmg) on project dataloader: Result of /bin/sh -c cd /Users/tgagne/git/dataloader && bash make-pretty-dmg.sh /Users/tgagne/git/dataloader /Users/tgagne/git/dataloader/target/dataloader-34.0 /Users/tgagne/git/dataloader/target/dataloader-34.0-pretty.dmg execution is: '127'. -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.1:exec (make-pretty-dmg) on project dataloader: Result of /bin/sh -c cd /Users/tgagne/git/dataloader && bash make-pretty-dmg.sh /Users/tgagne/git/dataloader /Users/tgagne/git/dataloader/target/dataloader-34.0 /Users/tgagne/git/dataloader/target/dataloader-34.0-pretty.dmg execution is: '127'.
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:216)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
    at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
    at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
    at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:307)
    at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:193)
    at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:106)
    at org.apache.maven.cli.MavenCli.execute(MavenCli.java:862)
    at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:286)
    at org.apache.maven.cli.MavenCli.main(MavenCli.java:197)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
    at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
    at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
Caused by: org.apache.maven.plugin.MojoExecutionException: Result of /bin/sh -c cd /Users/tgagne/git/dataloader && bash make-pretty-dmg.sh /Users/tgagne/git/dataloader /Users/tgagne/git/dataloader/target/dataloader-34.0 /Users/tgagne/git/dataloader/target/dataloader-34.0-pretty.dmg execution is: '127'.
    at org.codehaus.mojo.exec.ExecMojo.execute(ExecMojo.java:260)
    at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
    ... 20 more

Loading of log-conf.xml

Using the CLI I set the CONF directory to c:\load\conf.
I run from c:\load
The \conf directory contains log-conf.xml, but the logging configuration is not recognized and I receive the console output '[main] controller.Controller initLog (Controller.java:389) - Using built-in logging configuration, no log-conf.xml'

Controller.java is using the user.dir property to construct the File at line 380. This returns "c:\load"

Shouldn't log-conf.xml be loaded from the directory returned by getConfigDir() on line 400?

EncryptedDataSource has incorrect package statement

Currently it reads:

package com.salesforce.dataloader.dao.database;

But it's not in that directory. It's in the dao directory.

So we can either change the directory it's in or change the package statement.

Any suggestions? I'm probably the only person using it right now, so modifying my database-conf and updating the doc in the class shouldn't be a big deal.

Auto-map fields that don't include "__c" in header

It would be helpful if the automapping process also mapped fields where the source CSV does not include "__c" in the field name. I've run into this a few times where we have a pre-existing CSV that we can't control, but we have created a matching schema whose api names match the column labels minus "__c".

Quick-and-dirty patch which achieves this:

diff --git a/src/main/java/com/salesforce/dataloader/ui/MappingDialog.java b/src/main/java/com/salesforce/dataloader/ui/MappingDialog.java
index 41f5f40..aae8ee5 100644
--- a/src/main/java/com/salesforce/dataloader/ui/MappingDialog.java
+++ b/src/main/java/com/salesforce/dataloader/ui/MappingDialog.java
@@ -522,6 +522,8 @@
                 mappingSource = fieldName;
             } else if (mapper.hasDaoColumn(fieldLabel)) {
                 mappingSource = fieldLabel;
+            } else if (mapper.hasDaoColumn(fieldName.replace("__c", ""))) {
+                mappingSource = fieldName.replace("__c", "");
             }

             if(mappingSource != null) {

Should return nonzero status code on error parsing csv

The data loader, when run via process.bat with an invalid csv file, exits with a status code of 0 and prints a Java stack trace. It would be more consistent if a non-zero status code were returned for this error, like the one returned when you have a classpath error or other types of errors.

problem in loading attachment files in sfdc account object with file name containing special character using data loader cliq in unix

What steps will reproduce the problem?

  1. Insert an attachment whose file name contains a special character (I used the file name "MedInfoFAQs_3772_olai_pat_was ist zypadhera™ und wozu wird es angewendet.mht").
  2. Upload the attachment using the CLIq-generated .sh script for the sfdc Account-related Attachment object.
  3. The output log shows an error like "Error converting value to correct data type: /mkt/crml4/inbound/STAR_PROD/Full/STAR_FAQ_12122011/MedInfoFAQs_3772_olai_pat_was ist zypadhera™ und wozu wird es angewendet.mht (No such file or directory)"

The expected result is that the upload succeeds for this attachment file with a special character in its file name.

CLIQ version used:
Operating system: UNIX

Where is the DAO_READ_BATCH_SIZE configured?

I ran across this code, and wondered where I could configure the fetch size.

    int fetchSize;
    try {
        fetchSize = config.getInt(Config.DAO_READ_BATCH_SIZE);
        if (fetchSize > Config.MAX_DAO_READ_BATCH_SIZE) {
            fetchSize = Config.MAX_DAO_READ_BATCH_SIZE;
        }
    } catch (ParameterLoadException e) {
        // warn about getting batch size parameter, otherwise continue w/ default
        logger.warn(Messages.getFormattedString("DatabaseDAO.errorGettingBatchSize", new String[] {
                String.valueOf(Config.DEFAULT_DAO_READ_BATCH_SIZE), e.getMessage() }));
        fetchSize = Config.DEFAULT_DAO_READ_BATCH_SIZE;
    }
    statement.setFetchSize(fetchSize);
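For anyone who just wants to change the value rather than read the code: the fetch size above comes from a config setting. Assuming the constant DAO_READ_BATCH_SIZE corresponds to the property key below (an inference from the constant's name, not verified against Config.java), it could be set in config.properties:

```properties
# Hypothetical key inferred from Config.DAO_READ_BATCH_SIZE -- verify in Config.java.
# Values above Config.MAX_DAO_READ_BATCH_SIZE are clamped, per the snippet above.
dataAccess.readBatchSize=500
```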

Can the database password be encrypted in database-conf.xml?

I want to push my database-conf.xml file to a git repository, but am uncomfortable doing so with my password in the clear.

I haven't read in the documentation that DB passwords can be encrypted like the SFDC passwords can.

Does anyone know?

v 33.0.2 build is broken.

I downloaded the latest from git, followed the instructions, and when I run mvn clean package -DskipTests I get the compilation error below:

[INFO] Compiling 157 source files to /Users/anarasimhan/dev/dataloader/target/classes
[INFO] -------------------------------------------------------------
[ERROR] COMPILATION ERROR : 
[INFO] -------------------------------------------------------------
[ERROR] /Users/anarasimhan/dev/dataloader/src/main/java/com/salesforce/dataloader/dao/csv/CSVFileReader.java:[256,28] cannot find symbol
symbol  : constructor CSVReader(java.io.FileInputStream,java.lang.String,char[])
location: class com.sforce.async.CSVReader
[ERROR] /Users/anarasimhan/dev/dataloader/src/main/java/com/salesforce/dataloader/dao/csv/CSVFileReader.java:[258,28] cannot find symbol
symbol  : constructor CSVReader(java.io.FileInputStream,char[])
location: class com.sforce.async.CSVReader
[INFO] 2 errors 

From what I could tell, v33.0.2 of force-wsc doesn't seem to contain the latest code for that repo, and the latest code is not reflected in the built JAR file.

Build Failure : Some files do not have the expected license header

I'm trying to build dataloader locally to understand what it does inside.

When I run "mvn clean package -DskipTests" the build ends with the error:

[INFO] Missing header in: /Users/tgagne/work/git/dataloader/src/test/java/com/salesforce/dataloader/process/ProcessExtractTestBase.java
[INFO] Missing header in: /Users/tgagne/work/git/dataloader/src/test/java/com/salesforce/dataloader/process/CsvUpsertProcessTest.java
[INFO] Missing header in: /Users/tgagne/work/git/dataloader/src/test/java/com/salesforce/dataloader/TestBase.java
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 7.760 s
[INFO] Finished at: 2015-04-07T10:38:09-04:00
[INFO] Final Memory: 32M/510M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal com.mycila.maven-license-plugin:maven-license-plugin:1.8.0:check (default) on project dataloader: Some files do not have the expected license header -> [Help 1]

I've confirmed /Users/tgagne/work/git/dataloader/license.txt exists. I've even tried editing the

tag to use the full path as was suggested elsewhere on the 'net.

I just cloned the repo this AM (7Apr2015).

Field mapping is invalid: OPRID => Alias, PS_OPRID__c

Inside LoadMapper, it seems to think the sfdc field name is "Alias, PS_OPRID__c," instead of there being two separate fields, Alias and PS_OPRID__c, each mapped to the same source field, OPRID.

Curiously, this works for literal mappings like...

"en_US"=LocaleSidKey, LanguageLocaleKey

But not

OPRID=Alias, PS_OPRID__c

Provide steps and/or pre-requisite for running dataloader unit test on sandbox environment.

Hi,

I'd really like to be able to run the unit tests of data loader on a dedicated sandbox.
Some custom fields are not present in a default sandbox, so I have to create them manually.

Is it possible to have some guidance (or a script) for setting up the environment and running the tests?

Hacking around the project would be easier/safer if testing could be done before making pull requests.

Thx

Add a way to configure the exit code when running CLI.

Hi,

When running the data loader from the command line, it would be nice to be able to configure it to return a configurable exit code based on the number of lines in success/error files or on the error code from the Salesforce API (either Bulk or SOAP, but they should be kept separate as they are not the same).

It should be kept simple but would meet most basic needs (don't want to go through ratio stuff, etc.), like:
  • no success, no error
  • full success, no error
  • no success, full error
  • partial success, partial error
  • per-error-type mapping (job based)

Upsert from database takes /forever/

I'm investigating why moving LOTS of rows from a database into Salesforce takes so long. Basically, the process reports "Loading: upsert" then does nothing for hours. Nothing is written to the output. No batches are created. Nothing is sent to Salesforce.

I suspect the entire result-set is being read before anything is loaded to Salesforce. If one wants to migrate a few million rows this is basically useless.

Has anyone else tripped over this?

I'm looking at DatabaseReader.java, line 145 which reads:
// execute the query and save the result set
dbContext.setDataResultSet(statement.executeQuery());

which looks like it plans to spool all the results before doing anything. There doesn't seem to be any looping over the rows directly into Salesforce.

Upsert from Database using Parent\:fieldname mapping throws index out of range

Trying to automate a migration from Oracle to Salesforce. database-conf.xml and process-conf.xml all seem dandy, as does the mapping for CaseComment. 30.0 GUI operates fine, but the command line crashes.

2015-03-25 17:19:39,052 INFO [importCaseComment] process.ProcessRunner run (ProcessRunner.java:132) - Checking the data access object connection
2015-03-25 17:19:39,373 INFO [importCaseComment] process.ProcessRunner run (ProcessRunner.java:137) - Setting field types
2015-03-25 17:19:39,730 INFO [importCaseComment] process.ProcessRunner run (ProcessRunner.java:141) - Setting object reference types
2015-03-25 17:19:40,402 INFO [importCaseComment] process.ProcessRunner run (ProcessRunner.java:145) - Creating Map
2015-03-25 17:19:40,403 INFO [importCaseComment] action.OperationInfo instantiateAction (OperationInfo.java:95) - Instantiating action for operation: upsert
2015-03-25 17:19:40,435 INFO [importCaseComment] controller.Controller executeAction (Controller.java:120) - executing operation: upsert
2015-03-25 17:19:40,436 INFO [importCaseComment] action.AbstractAction execute (AbstractAction.java:120) - Loading: upsert
2015-03-25 17:19:40,651 ERROR [importCaseComment] action.AbstractAction handleException (AbstractAction.java:204) - Exception occured during loading
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.LinkedList.checkElementIndex(LinkedList.java:555)
at java.util.LinkedList.get(LinkedList.java:476)
at com.salesforce.dataloader.action.AbstractAction.openSuccessWriter(AbstractAction.java:250)
at com.salesforce.dataloader.action.AbstractAction.execute(AbstractAction.java:125)
at com.salesforce.dataloader.controller.Controller.executeAction(Controller.java:121)
at com.salesforce.dataloader.process.ProcessRunner.run(ProcessRunner.java:149)
at com.salesforce.dataloader.process.ProcessRunner.run(ProcessRunner.java:100)
at com.salesforce.dataloader.process.ProcessRunner.main(ProcessRunner.java:253)
2015-03-25 17:19:40,654 ERROR [importCaseComment] progress.NihilistProgressAdapter doneError (NihilistProgressAdapter.java:58) - Index: 0, Size: 0

Dataloader or API interprets all DB values for User.IsActive as true

I can't figure this out.

If inside the mapping file I use "TRUE", "FALSE", "1", or "0" in a mapping like:

"FALSE"=IsActive

The value sets correctly.

However, if I try pulling in those values from a database as booleans, strings, or integers, dataloader or the API doesn't seem to notice and assumes everything is true.

So imagine I'm trying to load a few hundred users and I can't set them active or not based on the data. They have to be all or nothing.

Has anyone else noticed this?

sfdc dumps available

Hi, I have all latest dumps of salesforce for all types of salesforce exams if anyone wants you can mail me at
[email protected]

These are original questions from the certification exam and very useful to pass the exam.
I have all latest dumps of following exams

Salesforce Administrator (ADM 201)
Salesforce Sales Cloud Consultant (CON 201)
Salesforce Service Cloud Consultant
Platform Developer I
App Builder

I will send the latest dumps immediately, which will help you pass.

I guarantee these are all the latest dumps.

Project abandoned?

It seems like nothing meaningful has been done for some time now in the project. API version numbers have been bumped but no new features have been added nor have any pull requests been merged.

Should the community agree on a common fork to follow instead of the primary project?

cannot load process-conf.xml

The following error occurred.
I think the cause is 3f48c2f.

ERROR [main] process.ProcessConfig getProcessInstance (ProcessConfig.java:96) - Error loading process: HyoukaInfoSetteiExport configuration from config file: C:\**********\config\process-conf.xml
org.springframework.beans.factory.BeanDefinitionStoreException: IOException parsing XML document from URL [file://C:/**********/config/process-conf.xml]; nested exception is java.net.UnknownHostException: C

Mapping for field ignored Error

Is there a way to eliminate the else statement on Line 80 of LoadMapper.java?

        } else {
            logger.info("Mapping for field " + entry.getKey() + " will be ignored since destination column is empty");
        }

Having blank destination columns on the success/error log without the message on the debug is critical for our data teams to process reference values efficiently. This was not present in v27 and we are requesting either a way to turn off or toggle this particular feature.

Can't export on v29.0

Hello Force team.

I've started using Apex Data Loader (the latest available version), but it doesn't let me finish the export process when I click the "Finish" button.

When I click it, simply nothing happens.

This happens when I have logs disabled.
If I enable it, it will take me to the dialog where I choose the logs folder and then it allows me to finish the export.

This workaround is fine for me, just thought it would be good to point out this issue for those who don't use the logs feature.

Constant Values in Extract

The documentation here indicates that constants are supported during extract. However, that doesn't seem to work.

It looks like CSVFileWriter gets its columnNames from the SOQL query.

SNI support

Now that Data Loader requires JRE 1.8, I would have expected the Server Name Indication (SNI) extension to be included in the SSL Client Hello message, as this is now the default behaviour. In fact, this is not the case.

Because I recently ran into a situation where the SNI extension was required I set out to see what would be needed to get Data Loader to send the SNI extension.

In the end I only needed to change the getContent method in HttpClientTransport (see code below). Note that I was not able to test the proxy settings.

I also updated the version of the Apache httpclient library dependency to 4.5.2 in the pom.xml file.

I would suggest including SNI support and updating the httpclient version in future releases.

public InputStream getContent() throws IOException {
    //DefaultHttpClient client = new DefaultHttpClient();
    HttpClientBuilder clientBuilder = HttpClientBuilder.create();

    if (config.getProxy().address() != null) {
        String proxyUser = config.getProxyUsername() == null ? "" : config.getProxyUsername();
        String proxyPassword = config.getProxyPassword() == null ? "" : config.getProxyPassword();

        Credentials credentials;
        if (config.getNtlmDomain() != null && !config.getNtlmDomain().equals("")) {
            String computerName = InetAddress.getLocalHost().getCanonicalHostName();
            credentials = new NTCredentials(proxyUser, proxyPassword, computerName, config.getNtlmDomain());
        } else {
            credentials = new UsernamePasswordCredentials(proxyUser, proxyPassword);
        }

        InetSocketAddress proxyAddress = (InetSocketAddress) config.getProxy().address();
        HttpHost proxyHost = new HttpHost(proxyAddress.getHostName(), proxyAddress.getPort(), "http");
        //client.getParams().setParameter(ConnRoutePNames.DEFAULT_PROXY, proxyHost);
        clientBuilder.setProxy(proxyHost);

        AuthScope scope = new AuthScope(proxyAddress.getHostName(), proxyAddress.getPort(), null, null);
        //client.getCredentialsProvider().setCredentials(scope, credentials);
        CredentialsProvider provider = new BasicCredentialsProvider();
        provider.setCredentials(scope, credentials);
        clientBuilder.setDefaultCredentialsProvider(provider);
    }

    InputStream input = null;

    byte[] entityBytes = entityByteOut.toByteArray();
    HttpEntity entity = new ByteArrayEntity(entityBytes);
    post.setEntity(entity);

    try (CloseableHttpClient client = clientBuilder.build()) {

        // no changes in this section

    } finally {
        post.releaseConnection();
    }

    return input;
}

OK button on Settings disappears on Windows 10

Hi,

I tried to modify some options in Data Loader, but I can't see the OK button to apply my modifications.
I tried v39.0 and v38.0.

Could you please correct this bug?
Thanks in advance

Where is dataloader-34.0-uber.jar in the download

I am confused about its usage. I have an older version of CLIQ / Data Loader uploading to Salesforce in our Informatica / WebSphere application.
It was 2.6, from a previous developer. I am new to this tool and was asked to update to the latest 3.4 to test its bulk upload speed.
So far I am unable to locate, anywhere in this entire download, anything remotely called "dataloader-34.0-uber.jar".

Using a client certificate with Dataloader

We have a client that is requesting 2-way SSL for all connections to Salesforce. The Data Loader documentation I've found so far doesn't mention the word "certificate" anywhere.

Does anyone know if such a feature exists, buried somewhere in a properties file that hasn't been documented yet?

Incorrect CSV file format when running on Linux

RFC 4180 states that CRLF must be used to separate records in a CSV file. This works fine when running the data loader on Windows, but when running the data loader on Linux, it inserts a LF instead of a CRLF.
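
A minimal Java sketch of the line-ending behavior RFC 4180 asks for: joining records with an explicit `\r\n` instead of relying on the platform line separator. The class and method names are illustrative, not Data Loader's actual writer, and quoting/escaping is out of scope here:

```java
import java.util.List;

public class Rfc4180Writer {

    // Terminate every record with CRLF explicitly, so output is identical
    // on Windows and Linux (System.lineSeparator() would differ).
    static String toCsv(List<String[]> rows) {
        StringBuilder sb = new StringBuilder();
        for (String[] row : rows) {
            sb.append(String.join(",", row)).append("\r\n");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String csv = toCsv(List.of(
            new String[]{"Id", "Name"},
            new String[]{"001", "Acme"}));
        // Both records end in CRLF regardless of the host OS.
        System.out.print(csv);
    }
}
```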

LoadRateCalculator -- does it work?

Even though I zeroed out the number-of-records-to-process value to patch #59, I'm bothered that the records-per-second messages are so far off. At one point during a batch upload it reported 600 million records/minute.

I suspect it's dividing the total records processed so far by the duration of only the most recent batch, so each subsequent batch reports a higher and higher rate.

Has anyone else noticed this? Has anyone else looked into the *RateCalculator classes?
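
A sketch of the suspected arithmetic, for contrast with a correct throughput calculation. The method names are illustrative, not the actual *RateCalculator API:

```java
public class LoadRateSketch {

    // Correct overall throughput: all records processed so far divided
    // by the total elapsed time since the load started.
    static double recordsPerSecond(long totalRecords, long totalElapsedMillis) {
        if (totalElapsedMillis <= 0) return 0.0;
        return totalRecords * 1000.0 / totalElapsedMillis;
    }

    // The suspected bug: dividing the running total by only the last
    // batch's duration, which inflates the rate as batches accumulate.
    static double suspectedBuggyRate(long totalRecords, long lastBatchMillis) {
        if (lastBatchMillis <= 0) return 0.0;
        return totalRecords * 1000.0 / lastBatchMillis;
    }

    public static void main(String[] args) {
        // 10 batches of 1,000 records at 5 s per batch: true rate is 200 rec/s.
        System.out.println(recordsPerSecond(10_000, 50_000));   // 200.0
        System.out.println(suspectedBuggyRate(10_000, 5_000));  // 2000.0 (10x too high)
    }
}
```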

Silent failure on certain malformed CSV

I had an issue where, for reasons related to my extract/transform script, certain fields in a CSV file terminated in a carriage return (CR) without a line feed (LF), and my transform script failed to escape those fields with quotes. When I tried loading this CSV file into Data Loader, it produced no error message but also wouldn't let me continue any further in the wizard, getting stuck on the Initializing popup. Most other malformed CSV files I've tried do result in errors.

It turns out this error also occurs with a full CRLF before a comma, but the above is how I first ran into it.

Can not extract rows with NULL relation into database

We have been using the following SOQL to extract from Contact object and insert them into a local database :

select Id, HomePhone, Account.AccountNumber

With the new API, we are getting the following exception:

FATAL dataloader.dao.database.DatabaseContext:175 - Error getting value for SQL parameter: AccountNumber. Please make sure that the value exists in the configuration file or is passed in. Database configuration: insertContactUpdate.

It works fine if the contacts have an account attached to them, but throws that exception if one of the contacts doesn't. I've looked at the code, and it appears that the Salesforce web service doesn't return "Account.AccountNumber" as a field in the sObject when it is null. It doesn't do this for ordinary fields: if, for example, HomePhone is null, it is still returned with a null value. I am not sure whether this is a bug in the Salesforce API or intentional.
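
Until the API behavior is clarified, the relationship can be traversed defensively on the consuming side. A minimal Java sketch using a Map as a stand-in for the sObject record (the partner API's actual types differ; field names follow the query above):

```java
import java.util.Map;

public class NullSafeRelation {

    // Stand-in for an sObject record: when a Contact has no Account,
    // the "Account" key is absent entirely rather than mapped to null.
    static String accountNumber(Map<String, Object> contact) {
        Object account = contact.get("Account");
        if (!(account instanceof Map)) return null;  // relation absent
        Object num = ((Map<?, ?>) account).get("AccountNumber");
        return num == null ? null : num.toString();
    }

    public static void main(String[] args) {
        Map<String, Object> withAccount =
            Map.of("Id", "003x", "Account", Map.of("AccountNumber", "ACC-1"));
        Map<String, Object> noAccount = Map.of("Id", "003y");
        System.out.println(accountNumber(withAccount)); // ACC-1
        System.out.println(accountNumber(noAccount));   // null
    }
}
```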

Fix documentation to start DL on Mac

Running java -jar target/dataloader-32.0.0-uber.jar -XstartOnFirstThread results in:
***WARNING: Display must be created on main thread due to Cocoa restrictions.
Exception in thread "main" org.eclipse.swt.SWTException: Invalid thread access
at org.eclipse.swt.SWT.error(SWT.java:4361)
at org.eclipse.swt.SWT.error(SWT.java:4276)
at org.eclipse.swt.SWT.error(SWT.java:4247)
at org.eclipse.swt.widgets.Display.error(Display.java:1068)
at org.eclipse.swt.widgets.Display.createDisplay(Display.java:825)
at org.eclipse.swt.widgets.Display.create(Display.java:808)
at org.eclipse.swt.graphics.Device.(Device.java:130)
at org.eclipse.swt.widgets.Display.(Display.java:699)
at org.eclipse.swt.widgets.Display.(Display.java:690)
at org.eclipse.swt.widgets.Display.getDefault(Display.java:1386)
at com.salesforce.dataloader.ui.LoaderWindow.(LoaderWindow.java:83)
at com.salesforce.dataloader.controller.Controller.createAndShowGUI(Controller.java:207)
at com.salesforce.dataloader.process.DataLoaderRunner.main(DataLoaderRunner.java:45)

Simple change; the target should be:
java -XstartOnFirstThread -jar target/dataloader-32.0.0-uber.jar

(JVM options must come before -jar; anything after the jar file is passed to the application as a program argument, so -XstartOnFirstThread was being ignored.)

local-proj-repo does not contain force-wsc. Cannot build project from sources.

All sources are up to date.

mvn clean package gives:

[ERROR] Failed to execute goal on project dataloader: Could not resolve dependencies for project com.force:dataloader:jar:29.0.0: The following artifacts could not be resolved: com.force.api:force-wsc:jar:29.0.0, com.force.api:force-partner-api:jar:29.0.0: Could not find artifact com.force.api:force-wsc:jar:29.0.0 in local-proj-repo (file://D:\projects\github\dataloader/local-proj-repo/) -> [Help 1]

Data Loader Command Line Tool 20x slower than UI

I have posted this issue here: http://salesforce.stackexchange.com/questions/107422/data-loader-command-line-tool-slow. I'm pretty sure it is because it keeps writing logs to the command line, but I can't work out how to turn that off.

I am trying to automate a data load using the Command Line tools and have everything working smoothly but it is taking 20x longer than if I do it through the Data Loader GUI.

Below is an excerpt from my process-conf.xml

<bean id="csvUpsertOrderItem"
      class="com.salesforce.dataloader.process.ProcessRunner"
      singleton="false">
    <description>Upsert Transaction Headers into Orders standard object.</description>
    <property name="name" value="csvUpsertOrderItem"/>
    <property name="configOverrideMap">
        <map>
            <entry key="sfdc.debugMessages" value="false"/>
            <entry key="sfdc.endpoint" value="CUSTOM ENDPOINT"/>
            <entry key="sfdc.username" value="USERNAME"/>
            <entry key="sfdc.password" value="ENCRYPTED PASSWORD"/>
            <entry key="process.encryptionKeyFile" value="C:\Program Files (x86)\salesforce.com\Data Loader\bin\key.txt"/>
            <entry key="sfdc.timeoutSecs" value="540"/>
            <entry key="sfdc.loadBatchSize" value="2000"/>
            <entry key="sfdc.entity" value="OrderItem"/>
            <entry key="process.operation" value="upsert"/>
            <entry key="sfdc.useBulkApi" value="true"/>
            <entry key="sfdc.bulkApiSerialMode" value="true"/>
            <entry key="sfdc.externalIdField" value="SlId__c"/>
            <entry key="process.mappingFile" value="C:\Users\User\Google Drive\Automation\OrdersLine21Jan.sdl"/>
            <entry key="process.outputError" value="C:\Users\User\downloads\Logs\errorUpsertOrderItem.csv"/>
            <entry key="process.outputSuccess" value="C:\Users\User\downloads\Logs\successUpsertOrderItem.csv"/>
            <entry key="dataAccess.name" value="C:\Users\User\Google Drive\Automation\JAEG_TransactionDetails.csv" />
            <entry key="dataAccess.type" value="csvRead" />
        </map>
    </property>
</bean>

From my research, it seems to be something to do with either the debug log (most likely, I think) or the batch size.

I have set sfdc.debugMessages to 'false' so it is not writing the log files, but it does still seem to write to the command screen. I feel this could be causing the problem. Is there a default log setting, or maybe a process command setting?
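
For what it's worth, Data Loader's console output is controlled by its log4j configuration rather than by sfdc.debugMessages. Below is a sketch of a log4j 1.x properties fragment that keeps per-row INFO messages off the console by logging only to a file; the file and appender names are assumptions, and the shipped configuration is an XML file (log-conf.xml) whose syntax differs:

```properties
# Hypothetical fragment; mirror these settings in the shipped log-conf.xml.
# Route everything to a rolling file and drop console output entirely.
log4j.rootLogger=INFO, fileout
log4j.appender.fileout=org.apache.log4j.RollingFileAppender
log4j.appender.fileout.File=sdl.log
log4j.appender.fileout.MaxFileSize=10MB
log4j.appender.fileout.layout=org.apache.log4j.PatternLayout
log4j.appender.fileout.layout.ConversionPattern=%d %-5p [%t] %c %M (%F:%L) - %m%n
```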

The Data Loader document http://resources.docs.salesforce.com/200/6/en-us/sfdc/pdf/salesforce_data_loader.pdf says the maximum sfdc.loadBatchSize is 200, but the UI sets it to 2000 when the Bulk API is enabled. If the command line does restrict it to 200, that could explain the slowdown.

Parser was expecting element 'urn:partner.soap.sforce.com:filterable' but found 'urn:partner.soap.sforce.com:encrypted'

I was testing the dataloader today using an object with a platform-encrypted field. Dataloader complained in both the CLI and GUI versions:

2016-01-04 14:25:04,640 INFO  [exportAccount] process.ProcessRunner run (ProcessRunner.java:137) - Setting field types
2016-01-04 14:25:04,987 ERROR [exportAccount] client.PartnerClient runOperation (PartnerClient.java:332) - Error while calling web service operation: describeSObject, error was: Unexpected element. Parser was expecting element 'urn:partner.soap.sforce.com:filterable' but found 'urn:partner.soap.sforce.com:encrypted'
com.sforce.ws.ConnectionException: Unexpected element. Parser was expecting element 'urn:partner.soap.sforce.com:filterable' but found 'urn:partner.soap.sforce.com:encrypted'

Has anyone run across this yet?
