Migration tool to convert a Microsoft SQL Server Database into a PostgreSQL database, as automatically as possible

Home Page: http://dalibo.github.io/sqlserver2pgsql

License: GNU General Public License v3.0

Perl 95.95% TSQL 3.32% Shell 0.73%

sqlserver2pgsql's Introduction

sqlserver2pgsql

This is a migration tool to convert a Microsoft SQL Server Database into a PostgreSQL database, as automatically as possible.

It is written in Perl and has received a fair amount of testing.

It does three things:

  • convert a SQL Server schema to a PostgreSQL schema
  • produce a Pentaho Data Integrator (Kettle) job to migrate all the data from SQL Server to PostgreSQL (optional)
  • produce an incremental version of this job, which migrates only what has changed in the database since the previous run (this incremental job is generated along with the migration job)

Please drop me a word (on GitHub) if you use this tool; feedback is great. I also like pull requests :)

Notes, warnings:

This tool will never be completely finished. For now, it works with all the SQL Server databases that I, and anyone who asked for help in an issue, had to migrate. If it doesn't work with yours, feel free to modify it, send me patches, or send the SQL dump from your SQL Server database along with the problem you are facing. I'll try to improve the code, but I need that SQL dump. Create an issue on GitHub!

It won't migrate PL procedures; the languages are too different.

I usually test this script only under Linux. It also works on Windows (I had to run it there once) and on any Unix system.

You'll need to install a few things to make it work. See INSTALL.md

Install

See https://github.com/dalibo/sqlserver2pgsql/blob/master/INSTALL.md

Usage

OK, I have installed Kettle and Java, and I have sqlserver2pgsql.pl. What do I do now?

You'll need several things:

  • The connection parameters to the SQL Server database: IP address, port, username, password, database name, instance name if not default
  • Access to an empty PostgreSQL database (where you want your migrated data)
  • A text file containing a SQL dump of the SQL Server database

To get this SQL dump, follow this procedure in SQL Server's management interface:

  • In SQL Server Management Studio, right-click on the database you want to export
  • Select Tasks/Generate Scripts
  • Click "Next" on the welcome screen (if it hasn't already been desactivated)
  • Select your database
  • In the list of things you can export, just change "Script Indexes" from False to True, then "Next"
  • Select Tables then "Next"
  • Select the tables you want to export (or select all), then "Next"
  • Script to file, choose a filename, then "Next"
  • Select Unicode encoding (who knows, maybe someone has put accents in object names, or in comments)
  • Finish

You'll get a file containing a SQL script. Copy it to the server you want to run sqlserver2pgsql.pl from.

If you just want to convert this schema, run:

./sqlserver2pgsql.pl -f sqlserver_sql_dump   \
                     -b output_before_script \
                     -a output_after_script  \
                     -u output_unsure_script

The sqlserver2pgsql Perl script processes your raw SQL dump "sqlserver_sql_dump" and produces these three scripts:

  • output_before_script: contains what is needed to import data (types, tables and columns)

  • output_after_script: contains the rest (indexes, constraints)

  • output_unsure_script: contains objects that we attempt to migrate but cannot guarantee, such as views

-conf uses a configuration file. All options below can also be set there; command-line options override values from the configuration file. There is an example of such a file (example_conf_file).

You can also use the -i, -num and/or -nr options:

-i : Generate an "ignore case" schema, using citext, to emulate MSSQL's case-insensitive collation. It will create citext fields with check constraints. This type is slower on string comparison operations.
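
For illustration only (hypothetical table and column names, not verbatim output of the script), the emulation looks roughly like this:

  -- Sketch of an "ignore case" schema: citext makes comparisons case-insensitive,
  -- and a check constraint keeps the original varchar(50) length limit,
  -- which citext itself does not carry.
  CREATE EXTENSION IF NOT EXISTS citext;

  CREATE TABLE "public"."users" (
      "login" citext NOT NULL,
      CONSTRAINT "users_login_length" CHECK (char_length("login") <= 50)
  );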

-nr : Don't convert the dbo schema to public. By default, this conversion is done, as it converts MSSQL's default schema (dbo) to PostgreSQL's default schema (public)

-relabel_schemas : a list of schemas to remap. The syntax is: source1=>dest1;source2=>dest2. Don't forget to quote this option, or the shell might alter it. There is a default dbo=>public remapping, which can be cancelled with -nr. Use double quotes instead of single quotes on Windows.

-num : Converts numeric(xxx,0) to the appropriate smallint, integer or bigint. It won't keep the constraint on the precision of the numeric. The smallint, integer and bigint types are much faster than numeric, and such columns are often used only as surrogate keys, so the precision is often not important.
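
As a rough illustration (hypothetical column names; the thresholds shown are the usual fit-by-range choices, not necessarily the exact ones the script applies):

  -- Sketch of the -num conversion:
  --   numeric(4,0)  -> smallint  (2 bytes)
  --   numeric(9,0)  -> integer   (4 bytes)
  --   numeric(18,0) -> bigint    (8 bytes)
  CREATE TABLE "public"."orders" (
      "order_id"    bigint   NOT NULL,  -- was numeric(18,0)
      "customer_id" integer  NOT NULL,  -- was numeric(9,0)
      "line_count"  smallint NOT NULL   -- was numeric(4,0)
  );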

-keep_identifier_case: don't convert the dump to all lower case. This is not recommended, as you'll have to put every identifier (column, table…) in double quotes…

-camel_to_snake: convert object names (table, column, index...) from CamelCase to snake_case. Only do this if you are willing to change all your queries (or if you use an ORM, for instance).

-col_map_file: specifies an output text file containing the SQL Server and PostgreSQL schema, table and column names (1 line per column).

-col_map_file_header: add a header line to the col_map_file (no header by default).

-col_map_file_delimiter: specify a field delimiter for the col_map_file (TAB by default).

-validate_constraints=yes/after/no: controls how foreign keys are created. If yes (the default): foreign keys are created as valid in the after script. If no: they are created as not valid (enforced only for new rows). If after: they are created as not valid, but the statements to validate them are put in the unsure file.
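
A minimal sketch of the difference, with hypothetical table and constraint names (not the script's actual output):

  -- With -validate_constraints=no or after, the foreign key is created NOT VALID:
  -- it is enforced for new rows only, and creating it does not scan the table.
  ALTER TABLE "public"."orders"
      ADD CONSTRAINT "fk_orders_customer"
      FOREIGN KEY ("customer_id") REFERENCES "public"."customers" ("customer_id")
      NOT VALID;

  -- With -validate_constraints=after, a statement like this one goes to the unsure
  -- file, so existing rows can be checked later at a convenient time.
  ALTER TABLE "public"."orders" VALIDATE CONSTRAINT "fk_orders_customer";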

If you want to also import data:

./sqlserver2pgsql.pl -b before.sql -a after.sql -u unsure.sql -k kettledir \ 
    -sd source -sh 192.168.0.2 -sp 1433 -su dalibo -sw mysqlpass \
    -pd dest -ph localhost -pp 5432 -pu dalibo -pw mypgpass -f sql_server_schema.sql

-k is the directory where you want to store the kettle xml files (there will be one for each table to copy, plus the one for the job)

You'll also need to specify the connection parameters. They will be stored inside the kettle files (in cleartext, so don't make this directory public):

-sd : sql server database

-sh : sql server host

-si : sql server host instance

-sp : sql server port (usually 1433)

-su : sql server username

-sw : sql server password

-pd : postgresql database

-ph : postgresql host

-pp : postgresql port

-pu : postgresql username

-pw : postgresql password

-sforce_ssl : force an SSL connection to your SQL Server database. Required if the ForceEncryption option is set to 'Yes'

-pforce_ssl : force an SSL connection to your PostgreSQL database. ssl=on should be set on the PostgreSQL server

-f : the SQL Server structure dump file

-ignore_errors : ignore insert errors (not advised, you'll need to examine kettle's logs, and it will be slower)

-pi : The parallelism used in kettle jobs to read from SQL Server (1 by default; the JDBC driver frequently errors out with larger values)

-po : The parallelism used in kettle jobs to write to PostgreSQL: this many sessions will be used to insert into PostgreSQL. Defaults to 8

-sort_size=100000: sort size to use for incremental jobs. Default is 10000, to try to be on the safe side (see below).

We don't sort in databases for two reasons: the sort order (collation for strings for example) can be different between SQL Server and PostgreSQL, and we don't want to stress the servers more than needed anyway. But sorting a lot of data in Java can generate a Java Out of Heap Memory error.

If you get Out of Memory errors, raise the Java heap memory (in the kitchen script) as much as you can. If you still have the problem, reduce this sort size. You can also try reducing parallelism: one or two sorts instead of 8 will of course consume less memory.

The last problem is that if the sort_size is small, Kettle is going to generate a very large number of temporary files, and then read them back sorted. So you may hit the "too many open files" limit of your system (1024 by default on Linux, for instance). So you'll have to do some tuning here:

  • First, use as much Java memory as you can: set the JAVAXMEM environment variable to 4096 (megabytes) or more if you can afford it. The more the better.
  • If you still get Out Of Memory errors, use a smaller sort size until the sorts succeed (decrease it tenfold each time, for example). You'll obviously lose some performance
  • If you then get the "too many open files" error, raise the maximum number of open files. In most Linux distributions, this means editing /etc/security/limits.conf and adding
@userName soft nofile 65535
@userName hard nofile 65535

(replace userName with your user name). Log in again, and verify with ulimit -n that you are now allowed to open 65535 files. You may also have to raise the maximum number of open files on the system: echo the new value to /proc/sys/fs/file-max.

You'll need a lot of temporary space on disk to do these sorts...

You can also edit only the offending transformation with Spoon (Kettle's GUI), so that only this one is slowed down.

When Kettle crashes on one of these problems, the temporary files aren't removed. They are usually in /tmp (or in your temp directory on Windows), and start with out_. Don't forget to remove them.

-use_pk_if_possible=0/1/public.table1,myschema.table2: enable the generation of jobs doing sorts in the databases (order by in the select part of Kettle's table inputs).

Use 1 to try this for all tables, or give a list of tables (if, for example, you cannot make those tables work with a reasonable sort size). In any case, sqlserver2pgsql will only agree to do sorts in the database if the primary key can be guaranteed to sort the same way in PostgreSQL and SQL Server; that means it only accepts keys made exclusively of numeric and date/timestamp types. If not, the standard, Kettle-sorting incremental job is generated.

Now you've generated everything. Let's do the import:

  # Run the before script (creates the tables)
  psql -U mypguser mypgdatabase -f name_of_before_script
  # Run the kettle job:
  cd my_kettle_installation_directory
  ./kitchen.sh -file=full_path_to_kettle_job_dir/migration.kjb -level=detailed
  # Run the after script (creates the indexes, constraints...)
  psql -U mypguser mypgdatabase -f name_of_after_script

If you want to dig deeper into the kettle job, you can use kettle_report.pl to display each table's transfer performance (you'll need to redirect kitchen's output to a file). Then, if needed, you'll be able to modify the Kettle job to optimize it, using Spoon, Kettle's GUI.

You can also give a try to the incremental job:

./kitchen.sh -file=full_path_to_kettle_job_dir/incremental.kjb -level=detailed

This one is highly experimental. I need your feedback! :) You should only run an incremental job on an already-loaded database.

It may fail for a variety of reasons, mainly out-of-memory errors. If you have other unique constraints beyond the primary key, the series of queries generated by sqlserver2pgsql may produce conflicting updates. So test it several times before migration day if you really want to try this method. The "normal" method is safer, but of course you'll start from scratch and have those long index builds at the end.

By the way, to be able to insert data into all tables, it deactivates triggers at the beginning of the job and reactivates them at the end. So if the job fails, those triggers won't be reactivated.

You can also use a configuration file if you like:

./sqlserver2pgsql.pl -conf example_conf_file -f mydatabase_dump.sql

There is an example configuration file provided. You can also mix the configuration file with command-line options. Command-line options take priority over values set in the configuration file.

FAQ

See https://github.com/dalibo/sqlserver2pgsql/blob/master/FAQ.md

Licence

GPL v3 : http://www.gnu.org/licenses/gpl.html

sqlserver2pgsql's People

Contributors

aminosbh, beaud76, biinari, daamien, dgarciach, fallen-s4e, fljdin, jcallico, jfrux, kmosolov, madtibo, marco44, maresb, rjuju, sebpcspkr, spanevin, stuartjbrown, yverry, zoozalp, zygimantaskazlauskas


sqlserver2pgsql's Issues

Strategy to apply when the migration is to be done with schema/data update?

Hello,
I really appreciate the migration tool.
I am facing a situation which isn't exactly the purpose of the tool: I have DBs in Microsoft SQL Server and I want to migrate them into PostgreSQL, but I have changes to the data definitions at the same time (I know it isn't the "best practices" way).
Would you advise me to use the tool even in this situation?
And if so, do you have some kind of recommended way to do this?
If you do, I think it would be interesting to add documentation about it.

metadata attributes 'EXEC sys.sp_addextendedproperty' not understood

Hey,

MSSQL dumps with metadata attributes cause an error:

$> perl sqlserver2pgsql/sqlserver2pgsql.pl -conf dbtx.conf 
Cannot understand this comment: EXEC sys.sp_addextendedproperty @name=N'MS_Description', @value=N'-redacted-' , @level0type=N'SCHEMA',@level0name=N'dbo', @level1type=N'TABLE',@level1name=N'-redacted-', @level2type=N'CONSTRAINT',@level2name=N'-redacted-'
 at sqlserver2pgsql/sqlserver2pgsql.pl line 1968, <$file> line 4979.
	main::parse_dump() called at sqlserver2pgsql/sqlserver2pgsql.pl line 2748

According to docs [1] and comments [2][3] on the net, these are used to store metadata for e.g. GUI designers and maybe other tooling.

Maybe just skip them with a warning or something instead of bailing out?

[1] https://msdn.microsoft.com/en-gb/library/ms180047.aspx
[2] https://stackoverflow.com/questions/3912761
[3] https://stackoverflow.com/questions/3856077

Object creation order

Object creation order is changed for views. I know views are unsupported/unsure, but most of the views in my case work; only the order is changed.

E.g. there are two views, AAS and T120X, in my SQL dump. AAS is based on T120X. In the SQL dump file, the definition of T120X occurs first and then AAS. But in the unsure file after conversion, AAS comes first and then T120X. Is there any parameter I can pass to fix this?

sqlserver2pgsql does not preserve case in table and column names

My SQL Server schema is a legacy hodgepodge with absolutely no naming standards. My app is .NET with Entity Framework, so it should be able to translate to PostgreSQL almost transparently.

Unfortunately, the SQL Server table [dbo].[Users] becomes "public"."users". And the usage information says the -keep_identifier_case option is "not advised".

I think that is, itself, ill-advised. In most cases, to seamlessly migrate a SQL Server application to Postgres, you will need to retain the case. Certainly, in my case, with a .NET app, it's the only way to make things work without a lot of extra effort.

I'm not saying that it's not a good idea to keep identifiers in lower case, merely that it's generally a better idea to not change case when applications already have an expectation.

Help sqlserver2pgsql.pl

Hello,

I have a new project that will migrate a SQL Server database into PostgreSQL. I found this tool, but I don't know how to use it.

For example, I installed Perl as described here: https://learn.perl.org/installing/windows.html. I created a dump SQL file for my schema and put it into a folder named migrateDatabase.

In that folder I copied sqlserver2pgsql.pl file.

From my command prompt I executed this command line:

C:\migrateDatabase>perl sqlserver2pgsql.pl -f input_sql_dump.sql -b output_before_script -a output_after_script -u output_unsure_script

I receive this error:

Bareword found where operator expected at C:\migrateDatabase\sqlserver2pgsql.pl line 342, near "s/{colname}/[$colname]/gr"
Bareword found where operator expected at C:\migrateDatabase\sqlserver2pgsql.pl line 364, near "s/{colname}/"$colname"/r"
syntax error at C:\migrateDatabase\sqlserver2pgsql.pl line 342, near "s/{colname}/[$colname]/gr"
syntax error at C:\migrateDatabase\sqlserver2pgsql.pl line 364, near "s/{colname}/"$colname"/r"
BEGIN not safe after errors--compilation aborted at C:\migrateDatabase\sqlserver2pgsql.pl line 4111.

What did I do wrong?

Can you help me?

Maybe it will be great if you can create a video tutorial for this tool.

Thanks a lot!

ARITHABORT setting

Hi @dalibo - Thanks for this great script. It's been very useful.

One recommendation: the MSSQL ARITHABORT setting, which terminates queries when there's an overflow/divide by zero error, is not handled. Since postgres naturally fails on these errors already, this probably only needs a simple warning logger message.

Let me know if you want a pull request .. might be easier for you guys to just do it, since this is so simple. Thanks again,

Column names "Time" and "Name" kettle converts to "time" and "name" - ERROR

./sqlserver2pgsql.pl -i -b ./out/before.sql -a out/after.sql -u out/unsure.sql -keep_identifier_case -k out/kettledir \ -sd source -sh 192.168.0.127 -sp 1433 -sd nkr -si SQL -su sa -sw sa \ -pd dest -ph 192.168.0.127 -pp 5432 -pd nkr -pu postgres -pw postgres -f ./in/nkr-generate.sql

The before and after scripts generate flawlessly.

What is the problem when migrating with kitchen.sh?

2015/04/27 15:29:34 - Table input.0 - Finished processing (I=657, O=0, R=0, W=657, U=0, E=0)
2015/04/27 15:29:34 - Table output.2 - ERROR (version 5.3.0.0-213, build 1 from 2015-02-02_12-17-08 by buildguy) : Unexpected batch update error committing the database connection.
2015/04/27 15:29:34 - Table output.7 - ERROR (version 5.3.0.0-213, build 1 from 2015-02-02_12-17-08 by buildguy) : Unexpected batch update error committing the database connection.
2015/04/27 15:29:34 - Table output.2 - ERROR (version 5.3.0.0-213, build 1 from 2015-02-02_12-17-08 by buildguy) : org.pentaho.di.core.exception.KettleDatabaseBatchException:
2015/04/27 15:29:34 - Table output.2 - Error updating batch
2015/04/27 15:29:34 - Table output.2 - Batch entry 0 INSERT INTO "public"."ExtCompositions" ("Mask", "mask_change_owner", "CompositionId", "ParentId", "Node", "time", "name", "Shifr", "Marka", "Volume", "Plast", "Del", "id") VALUES ( 4, 4, 641378322, 0, 0, '2015-04-07 10:04:26.000000 +03:00:00', '6004-004', '00000000002', '00-00000202', 100, 0, 0, 3) was aborted. Call getNextException to see the cause.
2015/04/27 15:29:34 - Table output.2 -
2015/04/27 15:29:34 - Table output.2 - at org.pentaho.di.core.database.Database.createKettleDatabaseBatchException(Database.java:1351)
2015/04/27 15:29:34 - Table output.2 - at org.pentaho.di.core.database.Database.emptyAndCommit(Database.java:1340)
2015/04/27 15:29:34 - Table output.2 - at org.pentaho.di.trans.steps.tableoutput.TableOutput.dispose(TableOutput.java:571)
2015/04/27 15:29:34 - Table output.2 - at org.pentaho.di.trans.step.RunThread.run(RunThread.java:96)
2015/04/27 15:29:34 - Table output.2 - at java.lang.Thread.run(Thread.java:745)
2015/04/27 15:29:34 - Table output.2 - Caused by: java.sql.BatchUpdateException: Batch entry 0 INSERT INTO "public"."ExtCompositions" ("Mask", "mask_change_owner", "CompositionId", "ParentId", "Node", "time", "name", "Shifr", "Marka", "Volume", "Plast", "Del", "id") VALUES ( 4, 4, 641378322, 0, 0, '2015-04-07 10:04:26.000000 +03:00:00', '6004-004', '00000000002', '00-00000202', 100, 0, 0, 3) was aborted. Call getNextException to see the cause.

sqlserver2pgsql fails when using empty password for connections in conf file

Hello
When using an empty password for connections in the conf file, the script returns error 1 with the message:
"You have to provide all connection information, if using -k or kettle directory set in configuration file".

To avoid this, I put sqlserver password="", and it runs OK. But the generated kettle files are then wrong: they contain "" instead of . I correct them with a 'sed -i'.

Maybe a note should be added about empty passwords... I know it's not the normal use, but it exists (in dev environments).

:)

Feature Request: option to snake_case names

Hello again

I was wondering if it's possible to add an option to snake_case names when transferring to Postgres, as that is the preferred naming convention in Postgres and makes names easier to read:

camelCase -> camel_case
PascalCase -> pascal_case
ForeignKeyID -> foreign_key_id
etc

I found a regex that seems to work on most platforms, but I can't test it in Perl, as the online emulator didn't seem to work correctly with Perl.
The only remaining step is to lowercase the entire string afterwards, and it should be good to go.

RegEx:
([a-z])([A-Z]+)
Replacement:
$1_$2

Check out http://fiddle.re/y5xjcn and try with javascript and add the global flag

Error

When running the following command on Windows 7:
sqlserver2pgsql.pl -f C:\sqlserver2pgsql-75f8899\files\interfon.sql -b C:\sqlserver2pgsql-75f8899\files\before.txt -a C:\sqlserver2pgsql-75f8899\files\after.txt -u C:\sqlserver2pgsql-75f8899\files\unsure.txt

I get the following error message:
Line <CREATE TABLE [dbo].[ABO] (> (1) not understood. This is a bug at C:\sqlserver2pgsql75f8899\sqlserver2pgsql.pl line 2153, <$file> line 1.

What can be the solution?
Thx,
Natha
Attached is the SQL Server 2000 dump file (I added a .txt suffix for the upload to GitHub):
interfon.sql.txt

Incremental kettle script fails for columns of type smallmoney

Got the following error from the incremental kettle script:

The data type of field #8 is not the same as the first row received: you're mixing rows with different layout. Field [Column1 BigNumber] does not have the same data type as field [Column1 Number(10, 4)].

The original column in T-SQL is defined as:

[Column1] [smallmoney] NULL,

The "before" PostgreSQL script defined it as:

"Column1" numeric,

Before I add another custom conversion, I'm wondering if numeric is the correct data type to use for smallmoney?
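
For context: smallmoney in SQL Server is a fixed-scale type (4 decimal digits, range roughly ±214,748.3648), so a declared scale such as numeric(10,4) would preserve its layout, which is presumably what the incremental job's BigNumber vs Number(10,4) mismatch is about. A hand-written sketch (assumed names, not the script's output):

-- smallmoney is fixed at 4 decimal digits, so an explicit precision/scale keeps it:
CREATE TABLE "public"."sample_money" (
    "Column1" numeric(10,4) NULL
);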

LOCK_ESCALATION keyword not understood

Hey,

MSSQL dumps that use LOCK_ESCALATION cause an error:

$> perl sqlserver2pgsql/sqlserver2pgsql.pl -conf dbtx.conf 
Line <ALTER TABLE [dbo].[Test] SET (LOCK_ESCALATION = DISABLE)
> (1508) not understood. This is a bug at sqlserver2pgsql/sqlserver2pgsql.pl line 2124, <$file> line 1508.

Clustered indexes

Hi, thanks for releasing the source for this project - really helps get us off the ground.

I'm running into one of several issues. The first one is easy: the format of [schema_name].[table_name] is different in our dump file (perhaps because of how we did the dump, or perhaps the version); we only have [table_name], and consequently the regexes are a little wonky and rigid.

A bigger problem though, we have the following DDL:

CREATE TABLE [t_link_short](                                                                                                                                   
  [link_id] [bigint] IDENTITY(1,1) NOT NULL,                                                                                                                   
  [cust_id] [int] NULL,
  [creation_time] [datetime] NULL,
  [redirect_url] [varchar](8000) NOT NULL                                                                                                                      
) ON [PRIMARY]                                                                                                                                                 


ALTER TABLE [t_link_short] ADD  CONSTRAINT [pk_t_link_short] PRIMARY KEY CLUSTERED                                                                             
(
  [link_id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]

The script bails out on the clustered primary key creation - it looks stubbed out in the code. Even if I define the column as a primary key in the initial CREATE statement, it's ignored, and no pk is defined. Any suggestions on how to flesh out the clustered index part of the code given that we use this format?

add option to skip errors

Hey,

after massaging the MSSQL dump to work around the other small issues, the migration of a complex DB fails without mentioning any specific cause:

Table output.6 - ERROR (version 5.4.0.1-130, build 1 from 2015-06-14_12-34-55 by buildguy) : Unexpected batch update error committing the database connection.
Table output.6 - ERROR (version 5.4.0.1-130, build 1 from 2015-06-14_12-34-55 by buildguy) : org.pentaho.di.core.exception.KettleDatabaseBatchException: 
Table output.6 - Error updating batch
Table output.6 - Batch entry 2 INSERT INTO "public"."-redacted-" ("ID", "Name", "-redacted") VALUES ( '-redacted-',  NULL,  NULL) was aborted.  Call getNextException to see the cause.
Table output.6 - 
Table output.6 - 	at org.pentaho.di.core.database.Database.createKettleDatabaseBatchException(Database.java:1386)
Table output.6 - 	at org.pentaho.di.core.database.Database.emptyAndCommit(Database.java:1375)
Table output.6 - 	at org.pentaho.di.trans.steps.tableoutput.TableOutput.dispose(TableOutput.java:575)
Table output.6 - 	at org.pentaho.di.trans.step.RunThread.run(RunThread.java:96)
Table output.6 - 	at java.lang.Thread.run(Thread.java:745)
Table output.6 - Caused by: java.sql.BatchUpdateException: Batch entry 2 INSERT INTO "public"."-redacted-" ("ID", "Name", "-redacted-") VALUES ( '-redacted-',  NULL,  NULL) was aborted.  Call getNextException to see the cause.
Table output.6 - 	at org.postgresql.jdbc2.AbstractJdbc2Statement$BatchResultHandler.handleError(AbstractJdbc2Statement.java:2743)
Table output.6 - 	at org.postgresql.core.v3.QueryExecutorImpl$1.handleError(QueryExecutorImpl.java:461)
Table output.6 - 	at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1928)
Table output.6 - 	at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:405)
Table output.6 - 	at org.postgresql.jdbc2.AbstractJdbc2Statement.executeBatch(AbstractJdbc2Statement.java:2892)
Table output.6 - 	at org.pentaho.di.core.database.Database.emptyAndCommit(Database.java:1362)
Table output.6 - 	... 3 more
[dbo].[-redacted-] - ERROR (version 5.4.0.1-130, build 1 from 2015-06-14_12-34-55 by buildguy) : Errors detected!
[dbo].[-redacted-] - ERROR (version 5.4.0.1-130, build 1 from 2015-06-14_12-34-55 by buildguy) : Errors detected!
Kitchen - ERROR (version 5.4.0.1-130, build 1 from 2015-06-14_12-34-55 by buildguy) : Finished with errors

The trouble, apart from not specifying any actual cause, is that it bails out completely.

It would be nice if the behaviour could be configurable, so that the errors could be logged, but processing of other rows and other tables could continue unaffected.

If my limited research is correct, the kettle job/transformation can be set up with a general and/or specific handler for such errors, but a quick attempt at adding one to the <step_error_handling> element did not work.

Bug at sqlserver2pgsql.pl line 2085 when creating views

After getting past the issue with sparse columns, I ran into this one:

Line <create view
> (4545) not understood. This is a bug at sqlserver2pgsql.pl line 2085, <$file> line 4545.

It's a CTE view that lists all ancestors of a place.

The code for the view looks like this:

GO
/****** Object:  View [core].[PlaceAncestors]    Script Date: 2017-02-21 09:50:40 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
create view 
[core].[PlaceAncestors]
as 

with parents as 
(
  select Id BaseId, Id, ParentId, Name, TypeId
  from core.Places cp2
  where ParentId is not null
  union all 
  select p.BaseId, cp1.Id, cp1.ParentId, cp1.Name, cp1.TypeId
  from parents p
    inner join core.Places cp1 on p.ParentId = cp1.Id
      and (cp1.Id <> cp1.ParentId or cp1.ParentId is null)	    
)
select IsNull(BaseId,0) BaseId, IsNull(Id,0) Id, ParentId, IsNull(Name,'') Name, ISNULL(TypeId, 100) TypeId
from parents
where BaseId <> Id


GO
/****** Object:  View [dbo].[vwMessageHistory]    Script Date: 2017-02-21 09:50:40 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

PRIMARY XML INDEX cause the script to abort

The following definition causes the script to abort:

CREATE PRIMARY XML INDEX [IX_Table1_Data_Primary] ON [public].[Table1]
(
    [Data]
)

I'd love to hear what should be done about it. Ignore it?

MSSQL Timestamp should map to bytea

When I convert a schema from MSSQL to Postgres:

Timestamp -> timestamp

They have different meanings in MSSQL vs Postgres.

Timestamp should map to bytea.

Otherwise data migration will fail.

Great tool BTW

datetime2 kettle import character varying

I'm trying to import the data with kettle as explained in the readme but it fails on a table with a datetime2 saying that the column is of type "timestamp without time zone" but the expression uses "character varying".
I think that's something similar to #35

Column definitions containing NOT FOR REPLICATION make the script fail.

The following table definition makes the script fail:

CREATE TABLE [public].[Sample](
    [Col1] [int] IDENTITY(1,1) NOT FOR REPLICATION NOT NULL,
    [Col2] [varchar](50) NOT NULL,
) 

The reason is that the following regex doesn't support the NOT FOR REPLICATION clause:

^\t\[(.*)\] (?:\[(.*)\]\.)?\[(.*)\](\(.+?\))?( IDENTITY\(\d+,\s*\d+\))?(?: ROWGUIDCOL ?)? (NOT NULL|NULL)(?:\s+CONSTRAINT \[.*\])?(?:\s+DEFAULT \((.*)\))?(?:,|$)?

this one does:

^\t\[(.*)\] (?:\[(.*)\]\.)?\[(.*)\](\(.+?\))?( IDENTITY\(\d+,\s*\d+\))?(?: ROWGUIDCOL ?)? (?:NOT FOR REPLICATION )?(NOT NULL|NULL)(?:\s+CONSTRAINT \[.*\])?(?:\s+DEFAULT \((.*)\))?(?:,|$)?

A patch will be submitted on my next pull request.

Create postgres default value when SQL Server column not identity and has default value

Looks like default values set in SQL Server are not translating over to postgres, such as this:

[sign_in_count] [int] NOT NULL DEFAULT ((0))

In postgres, I'm getting this (missing the default):

sign_in_count | integer | not null

I've got a ton of default values to migrate over. I'd like to confirm that I'm not missing something obvious. Thanks. I'm using SQL Server 2005
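
For reference, the expected result (hand-written, with a hypothetical table name) once the default is carried over would be something like:

-- Expected column definition with the SQL Server default preserved:
CREATE TABLE "public"."users" (
    "sign_in_count" integer NOT NULL DEFAULT 0
);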

Error in generated kettle script.

I'm not able to figure out why the generated kettle script always fails for this, and only this, table.

Here is the original table:

CREATE TABLE [dbo].[DEALER_PROFILE](
    [DealerCode] [varchar](20) NOT NULL,
    [DealerName] [varchar](30) NOT NULL,
    [RegionCode] [char](2) NOT NULL,
    [SalesDistrictCode] [char](2) NULL,
    [ServiceDistrictCode] [char](2) NULL,
    [Address1] [varchar](30) NULL,
    [Address2] [varchar](30) NULL,
    [Address3] [varchar](30) NULL,
    [City] [varchar](25) NULL,
    [Province] [char](3) NULL,
    [PostalCode] [char](7) NULL,
    [OperationsStartDate] [date] NULL,
    [TerminationDate] [date] NULL,
    [StatusCode] [char](1) NULL,
    [FacilityTypeCode] [char](1) NULL,
    [ShowroomCode] [bit] NULL CONSTRAINT [DF_DEALER_PROFILE_ShowroomCode]  DEFAULT ((0)),
    [LanguageCode] [char](2) NULL,
    [Phone] [char](10) NULL,
    [Fax] [char](10) NULL,
    [IsActive] [bit] NULL,
    [DealerPrefix] [varchar](20) NULL,
    [RegionDescription] [nvarchar](50) NULL,
    [ZoneCode] [char](2) NULL,
    [ZoneDescription] [nvarchar](50) NULL,
    [ModifiedBy] [varchar](50) NOT NULL CONSTRAINT [DF_DEALER_PROFILE_ModifiedBy]  DEFAULT (user_name()),
    [ModifiedDate] [datetime] NOT NULL CONSTRAINT [DF_DEALER_PROFILE_ModifiedDate]  DEFAULT (getdate()),
    [CreatedBy] [varchar](50) NOT NULL CONSTRAINT [DF_DEALER_PROFILE_CreatedBy]  DEFAULT (user_name()),
    [CreatedDate] [datetime] NOT NULL CONSTRAINT [DF_DEALER_PROFILE_CreatedDate]  DEFAULT (getdate()),
    [DealerSalesGroupId] [int] NULL,
    [DisplayDealerName] [varchar](100) NULL
 CONSTRAINT [PK_DEALER_PROFILE] PRIMARY KEY CLUSTERED 
(
    [DealerCode] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

This is the first error I get:

2016/07/11 13:26:32 - Table output.3 - ERROR (version 6.1.0.1-196, build 1 from 2016-04-07 12.08.49 by buildguy) : Unexpected batch update error committing the database connection.

But if I copy and paste the generated SQL directly into the Query window and run it, it works without any problem.

Multiple conditions inside check constraint

We have this constraint in SQL Server:

ALTER TABLE [dbo].[H_DATOS_HIST] WITH NOCHECK ADD CONSTRAINT [CKC_KAMIKAZE_H_DATOS_] CHECK (([KAMIKAZE]>=(0) AND [KAMIKAZE]<=(1)))

The migrated version is:

ALTER TABLE "public"."h_datos_hist" ADD CONSTRAINT "ckc_kamikaze_h_datos_" CHECK (kamikaze]>=(0) and [kamikaze<=(1));

Where the middle square brackets have not been taken into account.
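
For reference, the intended translation would presumably be:

-- Hand-corrected version, with both bracketed identifiers stripped properly:
ALTER TABLE "public"."h_datos_hist"
    ADD CONSTRAINT "ckc_kamikaze_h_datos_"
    CHECK (kamikaze >= 0 AND kamikaze <= 1);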

KETTLE_EMPTY_STRING_DIFFERS_FROM_NULL=Y is tricky

Hello again !
The sqlserver2pgsql.pl script checks the Kettle conf line containing
/KETTLE_EMPTY_STRING_DIFFERS_FROM_NULL\s*=\s*Y/

But Kettle itself seems to recognize only /^KETTLE_EMPTY_STRING_DIFFERS_FROM_NULL=Y$/
I had a problem with a space after the Y: the property wasn't taken into account, and it failed with "ERREUR: une valeur NULL viole la contrainte NOT NULL de la colonne xxxx" (a NOT NULL constraint violation).
I dropped the trailing space and then it was OK.

So I suggest you modify the script to take this into account :)

Bye

Error when converting sql_variant data type column

Hi,

Just tried to run the script (table structure only). It looks like it barfed on attempting to convert a sql_variant data type

Cannot determine the PostgreSQL's datatype corresponding to sql_variant. This is a bug
at C:\Users\Will\Downloads\dalibo-sqlserver2pgsql-d1ef8b7\dalibo-sqlserver2pgsql-d1ef8b7\sqlserver2pgsql.pl line 264, <$file> line 193.
main::convert_type("sql_variant", undef, "Value", "ActualProp_EquipmentActual", undef, "public") called at C:\Users\Will\Downloads\dalibo-sqlserver2pgsql-d1ef8b7\dalibo-sqlserver2pgsql-d1ef8b7\sqlserver2pgsql.pl line 983
main::add_column_to_table("public", "ActualProp_EquipmentActual", "Value", undef, "sql_variant", undef, undef, "NULL") called at C:\Users\Will\Downloads\dalibo-sqlserver2pgsql-d1ef8b7\dalibo-sqlserver2pgsql-d1ef8b7\sqlserver2pgsql.pl line 1103
main::parse_dump() called at C:\Users\Will\Downloads\dalibo-sqlserver2pgsql-d1ef8b7\dalibo-sqlserver2pgsql-d1ef8b7\sqlserver2pgsql.pl line 2444

Not able to generate Table DDL for more than one table

Hi Team
I'm trying to generate Postgres-compatible code for the SQL Server create table script below. The input has two tables; however, the output generates DDL only for the first table. Not sure if this is an issue or I'm missing something. Please help / advise.

SQL Server input file:

CREATE TABLE [dbo].[AWBuildVersion](
[SystemInformationID] [tinyint] IDENTITY(1,1) NOT NULL,
[Database Version] [nvarchar](25) NOT NULL,
[VersionDate] [datetime] NOT NULL,
[ModifiedDate] [datetime] NOT NULL CONSTRAINT [DF_AWBuildVersion_ModifiedDate] DEFAULT (getdate()),
CONSTRAINT [PK_AWBuildVersion_SystemInformationID] PRIMARY KEY CLUSTERED
([SystemInformationID] ASC));

CREATE TABLE [dbo].[DatabaseLog](
[DatabaseLogID] [int] IDENTITY(1,1) NOT NULL,
[PostTime] [datetime] NOT NULL,
[DatabaseUser] [sysname] NOT NULL,
[Event] [sysname] NOT NULL,
[Schema] [sysname] NULL,
[Object] [sysname] NULL,
[TSQL] nvarchar NOT NULL,
[XmlEvent] [xml] NOT NULL,
[SRC_I] [int] NULL,
[ProtocolTypeID] [int] NOT NULL DEFAULT ('9'),
CONSTRAINT [PK_DatabaseLog_DatabaseLogID_ProtocolTypeID] PRIMARY KEY CLUSTERED
([DatabaseLogID] ASC,[ProtocolTypeID] ASC));

Postgres output script:

\set ON_ERROR_STOP
\set ECHO all
BEGIN;

CREATE TABLE "public"."awbuildversion"(
"systeminformationid" smallint NOT NULL,
"database version" varchar(25) NOT NULL,
"versiondate" timestamp NOT NULL,
"modifieddate" timestamp NOT NULL);

COMMIT;

  • Varun

Error while importing (This is a bug at ./sqlserver2pgsql.pl line 1897)

Hello

I'm getting this error. Can you please let me know what is the issue and fix it as well?

Is there any way I can skip it?

$ ./sqlserver2pgsql.pl -conf conf_file -f /home/deepak/work/projects/nucleus/fract-data-unicode.sql 
WARNING: the source database is set as ARITHABORT OFF.
         It means that for SQL Server, 10/0 = NULL.
         You'll probably have problems porting that to PostgreSQL.
Line <INSERT [dbo].[RegistryUpload] ([pKey], [JobStartDate], [JobEndDate], [APINumber], [StateNumber], [CountyNumber], [OperatorName], [WellName], [Latitude], [Longitude], [Projection], [TVD], [TotalBaseWaterVolume], [TotalBaseNonWaterVolume], [StateName], [CountyName], [FFVersion], [FederalWell]) VALUES (N'cdc7d9cc-1bb9-4458-8060-fd8309f78185', CAST(N'2013-01-04 00:00:00.000' AS DateTime), CAST(N'2013-01-04 00:00:00.000' AS DateTime), N'04030470660000', N'04', N'030', N'Aera Energy LLC', N'King 78WBR-19', 35.4716124, -119.7453804, N'NAD83', 1801, 77742, NULL, N'California', N'Kern', 1, 0)
> (151) not understood. This is a bug at ./sqlserver2pgsql.pl line 1897, <$file> line 151.

Time is not supported. Cannot determine the PostgreSQL's datatype corresponding to time

Cannot determine the PostgreSQL's datatype corresponding to time. This is a bug
at sqlserver2pgsql.pl line 246
main::convert_type('time', 7, 'CreatedTime', 'BackOffice.ApplicationGrou
ps', 'undef', 'public') called at sqlserver2pgsql.pl line 956
main::add_column_to_table('public', 'BackOffice.ApplicationGroups', 'CreatedTime', 'undef', 'time', '(7)', 'undef', 'NULL') called at sqlserver2pgsql.pl
line 1073
main::parse_dump() called at sqlserver2pgsql.pl line 2386

Be a little bit more clear about usage

It's far from obvious that

sqlserver2pgsql.pl -f my_sqlserver_script.txt -b name_of_before_script -a name_of_after_script -u name_of_unsure_script

The before script contains what is needed to import data (types, tables and columns). The after script contains the rest (indexes, constraints). It should be run after data is imported. The unsure script contains objects where we attempt to migrate, but cannot guarantee, such as views.

will in fact generate the before, after and unsure SQL scripts; I first thought I had to provide them!

Incorrect conversion for datetimeoffset

The following table definition:

CREATE TABLE [public].[Table1](
    [EventId] [int] IDENTITY(1,1) NOT NULL,
    [CreateDate] [datetimeoffset](7) NOT NULL,
)

generates the following PostgreSQL code:

CREATE TABLE "public"."Table1"( 
    "EventId" int NOT NULL,
    "CreateDate" timestamp with time zone(7) NOT NULL);

The code above is not valid. If I remove "(7)" then the script works.

Is this the correct thing to do?
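
For reference, PostgreSQL accepts a fractional-seconds precision of at most 6 on timestamp types, so a hand-corrected version (an assumption, not the tool's current output) would be:

-- The (7) precision is dropped or clamped to 6, the maximum PostgreSQL allows:
CREATE TABLE "public"."Table1" (
    "EventId"    int NOT NULL,
    "CreateDate" timestamp(6) with time zone NOT NULL
);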

Kettle filename takes both paths

Hello again!

I've now moved on to the kettle files for a test run of the transfers as well, and ran into this while running on Windows:

<filename>D:\postgres\D:\kettlefiles\publicweb-Sections.ktr</filename>

It should omit the D:\postgres\ part of that path, but I can't really understand how to solve it.

I did run sqlserver2pgsql.pl in my D:\postgres folder, and had set the -k parameter to -k "D:\kettlefiles"

It might work if I do -k "..\kettlefiles"

Or can the migration.kjb find the files even if they are relative to it?

DateTimeOffset

Hi,

I tried the script on my sql server dump and this error is output:

Cannot determine the PostgreSQL's datatype corresponding to datetimeoffset. This is a bug
at ./sqlserver2pgsql.pl line 247
main::convert_type('datetimeoffset', 7, 'CreatoIl', 'SYS_Stati', undef, 'public') called at ./sqlserver2pgsql.pl line 957
main::add_column_to_table('public', 'SYS_Stati', 'CreatoIl', undef, 'datetimeoffset', '(7)', undef, 'NULL') called at ./sqlserver2pgsql.pl line 1074
main::parse_dump() called at ./sqlserver2pgsql.pl line 2389

Starting db is on a SQL2008R2

Thanks

Option to specify SSL for postgres connection

Hi Team
Can you please allow / enable an option for SSL on incoming connections from the kettle jobs (*.KTR) when doing the data transfer from SQL Server to PostgreSQL?

If this is not a doable option in the main codebase, can you please share a workaround?

Thanks, Varun

Xml Schema Collections cause the script to abort

Given the following table definition:

CREATE TABLE [sample].[EventQueue](
    [EventId] [int] IDENTITY(1,1) NOT NULL,
    [Data] [xml](CONTENT [sample].[Event]) NULL
 CONSTRAINT [PK_EventQueue] PRIMARY KEY CLUSTERED 
 (
    [EventId] ASC
 ) ON [PRIMARY]
) 

The script aborts with the following error:

Cannot parse colqual <(CONTENT [sample].[Event])>

I'll try to come up with a solution but any suggestion will be greatly appreciated.

Correctly transform filtered indexes

I have a couple of filtered indexes in my MSSQL database which the script doesn't translate correctly.
For example:

CREATE UNIQUE NONCLUSTERED INDEX [UC_OnePartBase] ON [obj].[Parts]
(
	[ObjectId] ASC
)
WHERE ([ParentId] IS NULL)

This translates into:
CREATE UNIQUE INDEX "UC_OnePartBase" ON "obj"."Parts" ("ObjectId" ASC);

But it should be:
CREATE UNIQUE INDEX "UC_OnePartBase" ON "obj"."Parts" ("ObjectId" ASC) WHERE ("ParentId" is null);

Following
https://www.postgresql.org/docs/current/static/indexes-partial.html
And
http://stackoverflow.com/a/16236566/5863847

If this isn't possible, can they be moved to the "unsure" file instead of the "post" one?

Error while import

Hi,
I've tried converting a schema from MS SQL 2000:
schema.zip

perl ../sqlserver2pgsql/sqlserver2pgsql.pl -b before.sql -a after.sql -u unsure.sql -f s.sql
Line <if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[FK_ConcContractH_ConsContract]') and OBJECTPROPERTY(id, N'IsForeignKey') = 1)

(1) not understood. This is a bug at ../sqlserver2pgsql/sqlserver2pgsql.pl line 1897, <$file> line 1.

Columns on Table types not parsed correctly if type and length not separated by one space

The following table causes the script to abort:

CREATE TYPE [sample.[IDValueTable] AS TABLE(
    [ID] [uniqueidentifier] NULL,
    [Key] [varchar](255) NULL
)

The problem is on the following regex:

^\t\[(.*)\] \[(.*)\](?: \((\d+(?:,\d+)?)\))?(?: (?:NOT )?NULL),?$

which expects exactly one space between the column type and the length.

In my case, the sql server script was generated with no space between the type and the length.

I propose we use the following regex instead:

^\t\[(.*)\] \[(.*)\](?:\s*?\((\d+(?:,\d+)?)\))?(?:\s+?(?:NOT\s+?)?NULL),?$

I have tested this change and it's working for me.

I'll include this fix on my next pull request.

target a database schema

Hello,

Amazing script you have written.

I would like to target a single Postgres database with multiple schemas, so that I can load, say, database "A" from MS SQL into a master Postgres database containing multiple schemas (in this case, into schema A), and then the same for B, C..., so that I end up with a master Postgres database with schemas A, B, C representing the various databases I have loaded from various MS SQL databases.

I notice the target is a database; how would I specify a database and a schema here?
-pd : postgresql database

I did run into some other issues (which I would love help with),
but the one above is the most important

  1. some columns in the MS SQL database are called "table". Yeah a reserved keyword, and amazingly bad form to name as a column.
  2. During load I get the following messages
    2014/03/27 01:46:54 - Table output.0 - ERROR (version 5.0.1-stable, build 1 from 2013-11-15_16-08-58 by buildguy) : An error occurred intialising this step:
    2014/03/27 01:46:54 - Table output.0 - Couldn't execute SQL: TRUNCATE TABLE "public".CountyDim
    2014/03/27 01:46:54 - Table output.0 -
    2014/03/27 01:46:54 - Table output.0 - ERROR: relation "public.countydim" does not exist
  3. And then the script.sql I export from MS SQL does not like lines of the following form
    EXEC sys.sp_addextendedproperty @name=N'Dictionary', @value=N'Mast table of IDNs. Integrated Delivery Network, A network of facilities and providers that work together to offer a continuum of care to a specific geographic area or market. ' , @level0type=N'SCHEMA',@level0name=N'dbo', @level1type=N'TABLE',@level1name=N'IDNDim'
    rem GO

Thanks for the help!

Handle SPARSE columns

First, thank you for a really nice script and effort on this!

I have run into a problem with the script and SPARSE columns:
https://msdn.microsoft.com/en-us/library/cc280604.aspx
which I use in some tables.

Cannot understand [DecimalValue] [decimal](18, 2) SPARSE NULL

It was easily solved by removing SPARSE from the column declaration in the base sql file

But I reported it just in case :)

add the ability to denormalize varchar/text (instead of using citext extension)?

Hello,

To avoid the use of the citext extension (see the FAQ), I suggest adding an option that provides denormalizing functionality for varchar/text columns.

Here's an implementation example:
For a source column named column_a, you would end up with a denormalized column (uppercased, for example) with the same name (column_a) and the original data stored in column_a_initial_col.
This also implies adding a trigger on insert into column_a (insert the true value into column_a_initial_col and the uppercased value into column_a), and likewise on update and on delete.

Do you think it's something that should be added?

Error "Cannot understand this comment" should probably be a Warning

Cannot understand this comment: EXEC sys.sp_addextendedproperty @name=N'MS_Description', @value=N'Records the user that issued the checkblock' , @level0type=N'SCHEMA',@level0name=N'dbo', @level1type=N'TABLE',@level1name=N'data_entry_check_block', @level2type=N'CONSTRAINT',@level2name=N'FK_data_entry_check_block_issued_by'
 at sqlserver2pgsql/sqlserver2pgsql.pl line 2035, <$file> line 1508.
        main::parse_dump() called at sqlserver2pgsql/sqlserver2pgsql.pl line 2831

I'm not sure if this error would always be issued only for this sort of thing (that is, extended properties should be able to be safely dropped), but when it says "Cannot understand this comment", I would think it should be safe to just issue a warning. Alternatively, if the same message could be issued for things of import, perhaps it shouldn't say "comment"--perhaps instruction would be a better wording.

Kettle script fails for XML columns

The kettle script is failing for XML columns with the following error:

2016-07-14 13:56:07 EDT ERROR:  column "Xml" is of type xml but expression is of type character varying at character 220
2016-07-14 13:56:07 EDT HINT:  You will need to rewrite or cast the expression.

Since this error was similar to the ones experienced with uuid and date I decided to do a custom casting for XML as well. I changed the script to select XML columns from SQL Server like this:

convert(varchar(max), [Xml]) AS "Xml"

Didn't work. I then added a cast for the XML type as we are doing with the other types. This time I got the following error:

2016-07-14 14:21:56 EDT ERROR:  cannot drop cast from character varying to xml because it is required by the database system
2016-07-14 14:21:56 EDT STATEMENT:  DROP CAST IF EXISTS (varchar as xml)

Any ideas about how to address this issue?
