SQL::Translator (SQLFairy)
Home Page: http://sqlfairy.sourceforge.net/
From the documentation:
Finally, we should mention that a foreign key must reference columns that either are a primary key or form a unique constraint.
--- a/lib/SQL/Translator/Producer/PostgreSQL.pm
+++ b/lib/SQL/Translator/Producer/PostgreSQL.pm
@@ -569,10 +569,21 @@ sub create_index
= $index->name
|| join('_', $table_name, 'idx', ++$index_name{ $table_name });
- my $type = $index->type || NORMAL;
my @fields = $index->fields;
return unless @fields;
+ my $idx = join '', @fields;
+ my $constraints = $index->table->get_constraints;
+ for my $c ( @$constraints ) {
+ my $udx = join '', map{ ref $_? $_->name : $_ } $c->field_names;
+ if( $idx eq $udx ) {
+ $index->type( 'UNIQUE' );
+ last;
+ }
+ }
+ my $type = $index->type || NORMAL;
+
+
my $index_using;
my $index_where;
for my $opt ( $index->options ) {
This patch forces an index to be UNIQUE
if it is part of a foreign key constraint. The deployment script becomes:
"provider" smallint NOT NULL,
PRIMARY KEY ("id")
);
-CREATE INDEX "contractor_idx_contractor_type_id" on "contractor" ("contractor_type_id");
+CREATE UNIQUE INDEX "contractor_idx_contractor_type_id" on "contractor" ("contractor_type_id");
;
--
Without this patch we get error:
ERROR: there is no unique constraint matching given keys for referenced table "contractor_type"
$(which dbic-migration) --schema_class HyperMouse::Schema --database PostgreSQL -Ilib install
Since this database is not versioned, we will assume version 2
Reading configurations from /home/kes/work/projects/tucha/monkeyman/share/fixtures/2/conf
failed to run SQL in /home/kes/work/projects/tucha/monkeyman/share/migrations/PostgreSQL/deploy/2/001-auto.sql: DBIx::Class::DeploymentHandler::DeployMethod::SQL::Translator::try {...} (): DBI Exception: DBD::Pg::db do failed: ERROR: there is no unique constraint matching given keys for referenced table "contractor_type" at inline delegation in DBIx::Class::DeploymentHandler for deploy_method->deploy (attribute declared in /home/kes/work/projects/tucha/monkeyman/local/lib/perl5/DBIx/Class/DeploymentHandler/WithApplicatorDumple.pm at line 51) line 18
(running line 'ALTER TABLE "contractor" ADD CONSTRAINT "contractor_fk_contractor_type_id" FOREIGN KEY ("contractor_type_id") REFERENCES "contractor_type" ("contractor_type_id") ON DELETE RESTRICT ON UPDATE RESTRICT DEFERRABLE') at /home/kes/work/projects/tucha/monkeyman/local/lib/perl5/DBIx/Class/DeploymentHandler/DeployMethod/SQL/Translator.pm line 248.
DBIx::Class::Storage::TxnScopeGuard::DESTROY(): A DBIx::Class::Storage::TxnScopeGuard went out of scope without explicit commit or error. Rolling back. at /home/kes/work/projects/tucha/monkeyman/local/bin/dbic-migration line 0
DBIx::Class::Storage::TxnScopeGuard::DESTROY(): A DBIx::Class::Storage::TxnScopeGuard went out of scope without explicit commit or error. Rolling back. at /home/kes/work/projects/tucha/monkeyman/local/bin/dbic-migration line 0
Makefile:123: recipe for target 'dbdeploy' failed
make: *** [dbdeploy] Error 255
This may be applied after #82.
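The documentation rule quoted at the top of this report can be reproduced outside PostgreSQL as well. Below is a minimal Python sketch using in-memory SQLite (an assumption for portability; the original failure is PostgreSQL-specific), showing that a foreign key pointing at a column with no PRIMARY KEY or UNIQUE constraint cannot be enforced. Only the table names mirror the log output; the schema is invented for the demo.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")

# The referenced column carries no PRIMARY KEY or UNIQUE constraint...
conn.execute("CREATE TABLE contractor_type (contractor_type_id integer)")
conn.execute("""
    CREATE TABLE contractor (
        id integer PRIMARY KEY,
        contractor_type_id integer REFERENCES contractor_type (contractor_type_id)
    )
""")
conn.execute("INSERT INTO contractor_type VALUES (1)")

# ...so the foreign key cannot be enforced ("foreign key mismatch").
try:
    conn.execute("INSERT INTO contractor VALUES (1, 1)")
    failed = False
except sqlite3.Error:
    failed = True
```

In PostgreSQL the equivalent fix is to make sure the referenced columns carry a unique index or constraint, which is what the deployment error above is complaining about.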
DBG>$from_field
SQL::Translator::Schema::Field {
_ERROR => ,
comments => [],
data_type => timestamp with time zone,
default_value => \'9999-12-31 23:59:59'::timestamp with time zone,
extra => {},
is_auto_increment => 0,
is_nullable => 0,
is_primary_key => 0,
name => known_till,
order => 4,
size => [
0,
],
table => SQL::Translator::Schema::Table person,
}
DBG>$to_field
SQL::Translator::Schema::Field {
_ERROR => ,
comments => [],
data_type => timestamp with time zone,
default_value => 9999-12-31 23:59:59,
extra => {},
is_auto_increment => 0,
is_nullable => 0,
is_primary_key => 0,
name => known_till,
order => 4,
size => [
0,
],
sql_data_type => 0,
table => SQL::Translator::Schema::Table person,
}
generates the correct upgrade SQL:
ALTER TABLE person ALTER COLUMN known_till SET DEFAULT '9999-12-31 23:59:59'::timestamp with time zone;
and a wrong downgrade:
ALTER TABLE person ALTER COLUMN known_till SET DEFAULT 9999-12-31 23:59:59;
My PostgreSQL database has domains:
CREATE DOMAIN tkol AS NUMERIC(10,3)
The domains are not created when I run:
$(which dbic-migration) --force --schema_class App::Schema --database PostgreSQL -Ilib prepare
Currently I manually copy 001-a_domains.sql between deployment migrations after each prepare.
This command line:
hobbes@metalbaby:~$ sqlt -f DBI -t GraphViz --db-user username --db-password '...' \
--dsn 'dbi:JDBC:hostname=localhost:9001;url=jdbc:sqlserver://10.0.0.5'
returns:
Error: translate: Error with parser 'SQL::Translator::Parser::DBI': JDBC not supported
at /usr/share/perl5/SQL/Translator/Parser/DBI.pm line 154.
If you're not familiar with DBD::JDBC, here is my synopsis: the thing has at least two components: a standalone Java application which speaks JDBC on one side and listens for incoming connections on the other. It acts as a proxy... The Perl part of DBD::JDBC talks to the little server. So, a variety of RDBMSs are made available to Perl's DBI via this bridge. (I guess DBI is to Perl as JDBC is to Java?)
So I looked around a little and determined that I should implement SQL::Translator::Parser::DBI::JDBC. I inspected SQL::Translator::Parser::DBI::SQLServer for guidance. I quit when I saw the following in the DBD::JDBC README file:
NOT YET IMPLEMENTED
DBI-defined methods, including DBI->data_sources('JDBC'), $dbh->data_sources; the metadata methods $dbh->table_info, $dbh->tables, $dbh->type_info_all, $dbh->type_info, $dbh->column_info, $dbh->primary_key_info, $dbh->primary_key, $dbh->foreign_key_info
Meanwhile, I'll return to trying to use unixODBC
to talk to MS SQL Server.
I have downloaded ZIP of the release v0.11024 from GitHub, unpacked and executed the following command:
perl Makefile.PL PREFIX=/usr/local/install/sql-translator/sql-translator-0.11024
Execution resulted in the following output:
include /home/andrius/src/sql-translator-0.11024/inc/Module/Install.pm
String found where operator expected at Makefile.PL line 58, near "readme_from 'lib/SQL/Translator.pm'"
(Do you need to predeclare readme_from?)
syntax error at Makefile.PL line 58, near "readme_from 'lib/SQL/Translator.pm'"
BEGIN not safe after errors--compilation aborted at Makefile.PL line 61.
After commenting out line 58 in Makefile.PL (non-crucial, I suppose), I was able to proceed with the installation.
I use DBIC DeploymentHandler to generate DDL files for upgrading my database during schema changes. The generated SQL is valid but could be improved. Currently I use SQLite for development, but this might apply to other DBMSs as well.
My issue:
1. Add a field to a Result class which is not nullable and has no default value.
2. Bump $Schema::VERSION and call App::DH with the command write_ddl.
3. SQL::Translator will simply generate ALTER TABLE foo ADD COLUMN bar, but could do better by generating the fallback style: create a temporary table, copy the data, recreate the original table, and insert the data back.
My reasoning:
- A plain ADD COLUMN will fail because the NOT NULL constraint is violated.
- With the fallback, NULL could be inserted, which has the same end result, but I could edit the SQL much more easily and just replace NULL with any reasonable default value.
- Alternatively a placeholder such as 1 could be inserted, which would make the SQL actually work in many cases.
Before I really understood the issue I talked this through on IRC with ribasushi, and he came up with this solution:
so I think what you actually want
is an {extra} field of 'initially_populated_from_column'
which is handled just like 'renamed_from' for columns themselves
https://metacpan.org/source/ILMARI/SQL-Translator-0.11021/lib/SQL/Translator/Diff.pm#L390
then the boilerplate can literally generate what you want without any hand editing
and remains usable outside of your particular case as well ( it is a useful feature in general )
probably just 'initially_populated_from' - takes both a scalar ( a column name ) and a scalarref ( a literal default )
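The trade-off described above can be sketched concretely. The following Python snippet (using in-memory SQLite; table and column names are invented for the demo) shows both the failing plain ADD COLUMN and the copy-and-recreate fallback with a placeholder default:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (id integer PRIMARY KEY)")
conn.execute("INSERT INTO foo (id) VALUES (1)")

# Strategy 1: plain ADD COLUMN. SQLite rejects a NOT NULL column
# without a default.
try:
    conn.execute("ALTER TABLE foo ADD COLUMN bar integer NOT NULL")
    add_column_failed = False
except sqlite3.OperationalError:
    add_column_failed = True

# Strategy 2: the copy-and-recreate fallback. The placeholder value 0
# stands in for the hand-edited default mentioned in the issue.
conn.executescript("""
    CREATE TABLE foo_new (id integer PRIMARY KEY, bar integer NOT NULL);
    INSERT INTO foo_new (id, bar) SELECT id, 0 FROM foo;
    DROP TABLE foo;
    ALTER TABLE foo_new RENAME TO foo;
""")
rows = conn.execute("SELECT id, bar FROM foo").fetchall()
```

In the generated boilerplate, the placeholder value would be the single spot to hand-edit, which is the point of the proposal.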
relates to Homebrew/homebrew-core#40047
I saw there is a build config change in this commit, 32be849#diff-5d3ba18294715d9415e9e732852bfec6.
I am not quite familiar with the Perl build system, so I might need some help upgrading the brew formula. :)
In create_trigger, PostgreSQL and DB2 use the trigger action as-is, which matches the documentation for Trigger::action, but SQLite and MySQL wrap the trigger action in BEGIN ... END, causing an error if the action already includes BEGIN ... END.
The SQL standard defines domains, which are user defined types that come with validity checks. These (as far as I saw) are supported in various parsers, but not by any producers.
It would be nice to support these.
How to reproduce: put a space before the type.
--- a/lib/Schema/Result/Document.pm
+++ b/lib/Schema/Result/Document.pm
@@ -21,7 +21,7 @@ $X->add_columns(
is_nullable => 1,
},
document_type_id => {
- data_type => 'integer',
+ data_type => ' integer',
},
docn => {
data_type => 'varchar',
Generated migration script (notice the extra space before integer for the document_type_id column):
CREATE TABLE "document" (
"id" serial NOT NULL,
"owner_id" integer,
"document_type_id" integer NOT NULL,
PRIMARY KEY ("id")
);
I do not know how long this wrong data type has been lurking in production code, but we noticed it when we started to use 'Mojolicious::Plugin::GraphQL', which issues a 'document_type_id' unknown data type: integer error.
PS. In theory we can create { data_type => 'integer NOT NULL' } and this will work.
Should we put quotes around the data type to be safer?
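One way to catch this class of typo early would be a producer-side sanity check on data_type. The helper below is hypothetical (not part of SQL::Translator) and written in Python purely as an illustration of the idea:

```python
import re

# Hypothetical guard: accept only a bare word, optionally with a size
# spec such as varchar(255) or numeric(10,3). Leading whitespace and
# smuggled clauses like 'integer NOT NULL' are rejected. Multi-word
# types such as 'timestamp with time zone' would need a whitelist on top.
def is_plain_data_type(data_type: str) -> bool:
    return re.fullmatch(r"\w+(\(\s*\d+(\s*,\s*\d+)?\s*\))?", data_type) is not None

ok = is_plain_data_type("integer")               # clean type
leading_space = is_plain_data_type(" integer")   # the bug in this report
smuggled = is_plain_data_type("integer NOT NULL")
sized = is_plain_data_type("varchar(255)")
```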
$sqlt_table->add_index(
name => "idx",
fields => ["foo(100)"], # size required for text columns in MySQL
);
Related SO question: https://stackoverflow.com/questions/6859955/how-do-you-specify-index-length-when-using-dbixclass
This is from DBIx::Class::Migration:
What I see is that in deploy/001-auto.sql it produces invalid SQL:
INDEX `custom_field_map_values_idx_value_text` (`value_text(1000)`),
But in upgrade/001-auto.sql, it looks valid:
ALTER TABLE custom_field_map_values DROP INDEX custom_field_map_values_idx_value_text,
DROP INDEX custom_field_map_values_idx_value_options,
ADD INDEX custom_field_map_values_idx_value_text (value_text(1000)),
ADD INDEX custom_field_map_values_idx_value_options (value_options(1000));
I needed to support functions & triggers in schemas generated from DBIx::Class, so I started to add these features in this branch.
- t/46xml-to-pg.t, with some clarification for the procedure parameter sql
- t/60roundtrip.t
- CREATE FUNCTION statement from the Schema object in diff, after clarifying the meaning of sql (IMHO it should contain the complete SQL statement for creating the function; every database has its own specs)
In the deployment script I have:
CREATE TABLE "saldoanal" (
"nyear" smallint NOT NULL,
"nmonth" smallint NOT NULL,
...
CONSTRAINT "saldoanal_kol_sum" CHECK (balance_ingerity_check( .... ))
);
then after the table statements the function is created:
CREATE FUNCTION "balance_ingerity_check" (....)
RETURNS boolean
.....
And of course trying to deploy the schema causes an error:
ERROR: function balance_ingerity_check(tbuhschet, tkol, tmoney, tkol, tmoney) does not exist
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
Full error message:
failed to run SQL in /home/kes/work/projects/safevpn/repo2/share/migrations/PostgreSQL/deploy/76/001-auto.sql: DBIx::Class::DeploymentHandler::DeployMethod::SQL::Translator::try {...} (): DBI Exception: DBD::Pg::db do failed: ERROR: function balance_ingerity_check(tbuhschet, tkol, tmoney, tkol, tmoney) does not exist
HINT: No function matches the given name and argument types. You might need to add explicit type casts. at inline delegation in DBIx::Class::DeploymentHandler for deploy_method->deploy (attribute declared in /home/kes/work/projects/safevpn/repo2/local/lib/perl5/DBIx/Class/DeploymentHandler/WithApplicatorDumple.pm at line 51) line 18
(running line 'CREATE TABLE "saldoanal" ( "nyear" smallint NOT NULL, "nmonth" smallint NOT NULL, "schet" tbuhschet NOT NULL, "analitid1" integer DEFAULT 0 NOT NULL, "analitid2" integer DEFAULT 0 NOT NULL, "koldeb" tkol DEFAULT '0' NOT NULL, "sumdeb" tmoney DEFAULT '0' NOT NULL, "kolkred" tkol DEFAULT '0' NOT NULL, "sumkred" tmoney DEFAULT '0' NOT NULL, PRIMARY KEY ("nyear", "nmonth", "schet", "analitid1", "analitid2"), CONSTRAINT "saldoanal_kol_sum" CHECK (balance_ingerity_check(Schet, KolDeb, SumDeb, KolKred, SumKred)) )') at /home/kes/work/projects/safevpn/repo2/local/lib/perl5/DBIx/Class/DeploymentHandler/DeployMethod/SQL/Translator.pm line 248.
DBIx::Class::Storage::TxnScopeGuard::DESTROY(): A DBIx::Class::Storage::TxnScopeGuard went out of scope without explicit commit or error. Rolling back. at /home/kes/work/projects/safevpn/repo2/local/bin/dbic-migration line 0
DBIx::Class::Storage::TxnScopeGuard::DESTROY(): A DBIx::Class::Storage::TxnScopeGuard went out of scope without explicit commit or error. Rolling back. at /home/kes/work/projects/safevpn/repo2/local/bin/dbic-migration line 0
Makefile:148: recipe for target 'dbdeploy' failed
make: *** [dbdeploy] Error 255
The independent objects must be created first.
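The ordering requirement can be illustrated with in-memory SQLite, where a CHECK constraint may only call a function that already exists when the table is created. Function registration here stands in for PostgreSQL's CREATE FUNCTION; table, column, and function names are invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Creating the table first fails: the CHECK expression references a
# function that does not exist yet.
try:
    conn.execute("CREATE TABLE saldo (kol integer, CHECK (kol_check(kol)))")
    create_failed = False
except sqlite3.OperationalError:
    create_failed = True

# Registering the function first (SQLite's analogue of CREATE FUNCTION)
# lets the same DDL succeed.
conn.create_function("kol_check", 1, lambda v: 1 if v >= 0 else 0)
conn.execute("CREATE TABLE saldo (kol integer, CHECK (kol_check(kol)))")
conn.execute("INSERT INTO saldo VALUES (1)")
saldo_rows = conn.execute("SELECT kol FROM saldo").fetchall()
```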
I tend to use PK columns which are named after the table, e.g. table 'foos' has PK 'foos_id'. When renaming tables via ::Diff with ->extra( renamed_from => 'foos' ), I get one of two possible breakages:
Can't alter field in another table at .../SQL/Translator/Producer/PostgreSQL.pm line 769.
I'm fine with a warning, but it should probably only print once per process? Or maybe once per Translator instance? Maybe a new setting warn_unimplemented, with values once (default), each, or 0?
Somewhat related, in my local wrapper object I prevent all DROP statements universally, replacing them with a SQL comment and a warning, just to make sure I never deploy a production disaster. It might be nice to make that into a standard feature that all producers are aware of.
Originally posted by @nrdvana in #161 (comment)
I use App::DH and DBIC-DeploymentHandler to create migration files, currently for SQLite and PostgreSQL. After I made my application enforce PRAGMA foreign_keys = on for SQLite, I found that the generated SQL is invalid. For complex table changes a temporary table is created like this:
CREATE TEMPORARY TABLE mytable_temp_alter (
-- copy columns
FOREIGN KEY ( mycolumn_id ) REFERENCES othertable(id)
);
This is invalid SQL.
For SQLite I see no other solution than to just skip the FKs for the temporary table. The new main table will have FKs again, and if PRAGMA foreign_keys is on they will be checked during insertion.
I filed this bug before at DBIC-DeploymentHandler. For more details and research see frioux/DBIx-Class-DeploymentHandler#76
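The failure mode is easy to reproduce directly with Python's sqlite3 module (table names taken from the snippet above; everything else is invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE othertable (id integer PRIMARY KEY)")
conn.execute("INSERT INTO othertable VALUES (1)")

# A TEMPORARY table lives in the separate 'temp' schema, so its foreign
# key cannot resolve a parent table in 'main' and enforcement fails.
try:
    conn.execute("""
        CREATE TEMPORARY TABLE mytable_temp_alter (
            mycolumn_id integer,
            FOREIGN KEY (mycolumn_id) REFERENCES othertable (id)
        )
    """)
    conn.execute("INSERT INTO mytable_temp_alter VALUES (1)")
    fk_failed = False
except sqlite3.Error:
    fk_failed = True
```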
Right now, when generating a schema from your database, we only pull foreign key constraints. It's not that hard to pull the other types of constraints, too (you search pg_catalog.pg_constraint on the different contype values, where 'f' is foreign key, 'x' is exclusion, 'u' is unique, 'c' is check, etc.). Then you can use pg_get_constraintdef to recover the DDL for generating it, which we SHOULD be able to just parse using the regular parser.
Since SQLite doesn't support altering existing columns, sql-translator achieves this by creating a new table with the correct schema and copying the data over. It first copies the data into a temporary table, though. If the table includes a foreign key constraint it cannot be satisfied because the temporary table is in a different SQLite schema. Should sql-translator create the new table in the main database so that foreign keys work or do applications need to disable (or just not enable to begin with) foreign key constraints for sql translator?
on fedora 19 with mariadb:
yum install perl-SQL-Translator
[username@hostname ~] mysqldump -u root -pmysql_root_password database_name > example.sql
[username@hostname ~] sqlt-graph -f MySQL -o example.png -t png example.sql
ERROR (line 36): Invalid statement: Was expecting comment, or use, or set, or drop, or create, or alter, or insert, or delimiter, or empty statement
Error: translate: Error with parser 'SQL::Translator::Parser::MySQL':
no results at /usr/bin/sqlt-graph line 195.
https://gist.github.com/1152817 is a short program that takes 2 simple MySQL tables and outputs invalid SQL when trying to diff them.
Hi,
Is there any particular reason no one has added the new JSON and HStore types to the PostgreSQL parser?
The PostgreSQL producer works with array types, however, the parser does not seem to. Running the following snippet will not produce any output:
use SQL::Translator;
my $sql = <<END;
CREATE TABLE "test" (
"array" text[] NOT NULL
);
END
my $t = SQL::Translator->new;
$t->parser('PostgreSQL');
$t->producer('PostgreSQL');
print $t->translate( { data => $sql } ) or die $t->error;
If I remove the brackets, it works. Running version 0.11020.
Thanks.
SQL/Translator/Producer/PostgreSQL.pm has this line:
'CURRENT_TIMESTAMP' => 'CURRENT_TIMESTAMP',
It causes the string to be quoted regardless of whether a ref or a string is passed to default_value. A reference must be generated by the _apply_default_value function, so the line should be changed to:
'CURRENT_TIMESTAMP' => \'CURRENT_TIMESTAMP',
In fact, the 'now()' replacement should also be a ref, and the replacement should cover the other Pg datetime variants: CURRENT_TIME, CURRENT_DATE, LOCALTIME, LOCALTIMESTAMP. There might even be other Pg magic functions that should be added.
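The ref-versus-string distinction the report relies on can be sketched language-neutrally. The Python stand-in below models a Perl scalar ref as a one-element tuple; all names are illustrative, not SQL::Translator API:

```python
# A plain string default gets quoted; a "ref" (modelled here as a
# one-element tuple) is emitted verbatim, like \'CURRENT_TIMESTAMP'
# in the proposed fix.
def render_default(value):
    if isinstance(value, tuple):       # scalar ref: literal SQL
        return f"DEFAULT {value[0]}"
    return f"DEFAULT '{value}'"        # plain string: quote it

quoted = render_default("CURRENT_TIMESTAMP")       # the current, wrong output
literal = render_default(("CURRENT_TIMESTAMP",))   # what the change produces
```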
described here: jjn1056/DBIx-Class-Migration#108
This statement:
sqlt-graph -f MySQL -o fail.png -t png fail.sql
produces the following error:
ERROR (line 3): Invalid statement: Was expecting comment, or use, or
set, or drop, or create, or alter, or insert, or
delimiter, or empty statement
Error: translate: Error with parser 'SQL::Translator::Parser::MySQL': no results at ~/perl5/perlbrew/perls/perl-5.22.4/bin/sqlt-graph line 195.
The following SQL was used to produce the error:
SET NAMES 'utf8';
CREATE DATABASE example CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
USE example;
CREATE TABLE articel (
id INTEGER NOT NULL AUTO_INCREMENT,
name VARCHAR(255) NOT NULL
) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
When I remove CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci from the create statement, it parses just fine.
Note: The character set for the table does not produce an error.
According to https://dev.mysql.com/doc/refman/5.7/en/charset-database.html the used syntax for a character set in CREATE DATABASE should be valid.
$ sqlt -t Sybase -f DBI --dsn "dbi:Sybase:server=..." --db-user=... --db-password=... > sybase.sql
Apparently we have a table without any indexes...
Object does not have any indexes.
DBD::Sybase::db selectall_hashref failed: Field 'INDEX_NAME' does not exist (not one of COL(1)) at /home/musgrom/perl5/lib/perl5/SQL/Translator/Parser/DBI/Sybase.pm line 232.
Error: translate: Error with parser 'SQL::Translator::Parser::DBI': DBD::Sybase::db selectall_hashref failed: Field 'INDEX_NAME' does not exist (not one of COL(1)) at /home/musgrom/perl5/lib/perl5/SQL/Translator/Parser/DBI/Sybase.pm line 232.
On line 137 of SQL::Translator::Parser::Oracle, the grammar rule:
drop : /drop/i WORD(s) NAME WORD(s?) ';'
fails to match if table names aren't surrounded with quotes, because WORD(s) matches to the end and NAME has nothing left to match.
@rabbiveesh As you design that, an interesting unit test for postgres to really test the boundaries would be
create table test (name varchar(50) not null);
create index ix_test2 on test (substr(name, 2, 3) desc, substr(name, 5, 3) asc);
:-)
Originally posted by @nrdvana in #68 (comment)
There are a few more comments in that PR's discussion (I will paste them back here).
Another thought: when parsing the fields, sqlt helpfully splits strings on commas, so we're actually going to have to support that only via a hashref arg (as mentioned there).
ref #68 (comment)
and #68 (comment)
$ ~/perl5/bin/sqlt-graph --from=Sybase -o test.png test.sql
ERROR (line 1): Invalid statement: Was expecting create table, or
create procedure, or create index, or create
constraint, or comment, or use, or setuser, or if, or
print, or grant, or exec
Error: translate: Error with parser 'SQL::Translator::Parser::Sybase': no results at /home/musgrom/perl5/bin/sqlt-graph line 195.
$ perl -MSQL::Translator -e 'print $SQL::Translator::VERSION'
0.11021_01
Line 128 of SQL::Translator::Producer::Oracle turns timestamp columns into date, but Oracle has timestamp fields with fractional second precision.
The PostgreSQL parser does not read the default --schema-only output from pg_dump. It fails in my dump at line two, which is an ALTER SCHEMA ... statement. Removing the statement succeeds.
t/data/roundtrip_autogen.yaml is missing in t/60roundtrip.t on line 91. I did not find the file in the git history. Maybe it is still on the author's machine?
I noticed that for the latest release, the artifact is not on CPAN. Just wanted to call it out. Thanks!
kind of relates to Homebrew/homebrew-core#61453
YAML 1.31 recommends using something else, such as YAML::PP.
I have experimented with switching to YAML::PP and YAML::Tiny, but the tests fail due to subtle differences in serialisation. Notably, some numbers are turned into strings, e.g. version: '1.64' instead of version: 1.64.
In experiments YAML::Tiny seems to work better, as it preserves decimal places, e.g. 0.00 is not turned into 0.
(On closer inspection, YAML::Tiny does not support all of the required features.)
Hi,
I'm looking to translate 500k lines of early-'90s Sybase/T-SQL proc and trigger code. It seems the goto flow-control construct is not supported; it's treated as a syntax error.
I'm also having difficulty locating flow-control constructs in the schema.
Did I miss something? And, if needed, what's the best way to add "goto" and other flow-control support?
Cheers
For a better user experience we should change the link from RT issues to GH issues.
I'm adding a boolean column to an existing table and creating a different table with a boolean column:
"flagged_for_deletion",
{ data_type => "boolean",
is_nullable => 0,
default_value => 0,
},
and generating the DDL via DBIx::Class:
$schema->create_ddl_dir( ['MySQL'], undef, './sql/ddl', $opt->pre_version,
{ producer_args => { mysql_version => 5 } });
The full schema is correctly generated, both tables have "flagged_for_deletion boolean NOT NULL DEFAULT '0'" which is exactly what I expected. The diff however:
CREATE TABLE `created` (
-- snip
`flagged_for_deletion` enum('0','1') NOT NULL DEFAULT '0',
-- snip
) ENGINE=InnoDB DEFAULT CHARACTER SET utf8;
ALTER TABLE altered -- snip
ADD COLUMN flagged_for_deletion boolean NOT NULL DEFAULT '0',
-- snip
The CREATE TABLE statement has the backtick quoting (that the full schema doesn't have) and uses MySQL 3-style boolean emulation with enums. From all the way over here it looks like it's been generated by a different producer with different options, so it might be a DBIC issue instead?
A lot of the translation and diffing functionality is missing from the Oracle producer, such as drop_field, drop_table, alter_drop_index, etc.
code example:
$sqlt->add_procedure(
name => 'order_total_suma',
parameters => [
{ argmode => 'in', name => '_deep', type => 'tstzrange', default => 1 },
],
...
expected SQL:
CREATE FUNCTION "order_total_suma" (in _deep tstzrange default 1) ...
jjn1056/DBIx-Class-Migration#145 is a bug report for DBICM, where upgrade scripts work correctly, but downgrade scripts don't use the short version.
@KES777 could you share a snipped version of the autogenerated YAML files from the 2 versions? I believe that's how DBICM makes the diff. Once I have that, we can work on diagnosing and/or fixing the problem.
This issue relates to #82.
When the body of a function is changed but other objects depend on the function, we get the following error:
Exception: DBD::Pg::db do failed: ERROR: cannot drop function make_prow() because other objects depend on it
The produced upgrade/downgrade SQL is like:
DROP FUNCTION make_prow ();
CREATE FUNCTION "make_prow" ();
According to the documentation:
If you drop and then recreate a function, the new function is not the same entity as the old; you will have to drop existing rules, views, triggers, etc. that refer to the old function. Use CREATE OR REPLACE FUNCTION to change a function definition without breaking objects that refer to the function. Also, ALTER FUNCTION can be used to change most of the auxiliary properties of an existing function.
So here, instead of DROP/CREATE, we should use CREATE OR REPLACE FUNCTION when the {add_drop_procedure} option is supplied.
The patch:
--- a/lib/SQL/Translator/Producer/PostgreSQL.pm
+++ b/lib/SQL/Translator/Producer/PostgreSQL.pm
@@ -713,10 +713,7 @@ sub create_procedure {
my @statements;
- push @statements, drop_procedure( $procedure )
- if $options->{add_drop_procedure};
-
- my $sql = 'CREATE FUNCTION ';
+ my $sql = 'CREATE '. ($options->{add_drop_procedure} ? 'OR REPLACE ' : '') .'FUNCTION ';
$sql .= $generator->quote($procedure->name);
$sql .= ' (';
my @args = ();
When applying is_auto_increment to an already existing field
--- a/lib/HyperMouse/Schema/Result/Language.pm
+++ b/lib/HyperMouse/Schema/Result/Language.pm
@@ -33,6 +33,7 @@ $Z->add_columns(
},
language_id => {
data_type => "integer",
+ is_auto_increment => 1,
extra => { unsigned => 1 },
},
known_from => {
the generated upgrade script is:
ALTER TABLE language ALTER COLUMN language_id TYPE serial;
which fails with the error ERROR: type "serial" does not exist:
$(which dbic-migration) --schema_class HyperMouse::Schema --database PostgreSQL -Ilib upgrade
Reading configurations from /home/kes/work/projects/tucha/monkeyman/share/fixtures/13/conf
failed to run SQL in /home/kes/work/projects/tucha/monkeyman/share/migrations/PostgreSQL/upgrade/13-14/001-auto.sql: DBIx::Class::DeploymentHandler::DeployMethod::SQL::Translator::try {...} (): DBI Exception: DBD::Pg::db do failed: ERROR: type "serial" does not exist at inline delegation in DBIx::Class::DeploymentHandler for deploy_method->upgrade_single_step (attribute declared in /home/kes/work/projects/tucha/monkeyman/local/lib/perl5/DBIx/Class/DeploymentHandler/WithApplicatorDumple.pm at line 51) line 18
(running line 'ALTER TABLE language ALTER COLUMN language_id TYPE serial') at /home/kes/work/projects/tucha/monkeyman/local/lib/perl5/DBIx/Class/DeploymentHandler/DeployMethod/SQL/Translator.pm line 248.
DBIx::Class::Storage::TxnScopeGuard::DESTROY(): A DBIx::Class::Storage::TxnScopeGuard went out of scope without explicit commit or error. Rolling back. at /home/kes/work/projects/tucha/monkeyman/local/bin/dbic-migration line 0
DBIx::Class::Storage::TxnScopeGuard::DESTROY(): A DBIx::Class::Storage::TxnScopeGuard went out of scope without explicit commit or error. Rolling back. at /home/kes/work/projects/tucha/monkeyman/local/bin/dbic-migration line 0
Makefile:132: recipe for target 'dbup' failed
make: *** [dbup] Error 255
For an already existing field there should instead be a few commands:
CREATE SEQUENCE foo_a_seq OWNED BY foo.a;
SELECT setval('foo_a_seq', coalesce(max(a), 0)) FROM foo;
ALTER TABLE foo ALTER COLUMN a SET DEFAULT nextval('foo_a_seq');
You cannot add an index on an expression (aka a functional index) using a function that takes more than one argument, as shown in the following test addition:
git diff t/47postgres-producer.t
diff --git a/t/47postgres-producer.t b/t/47postgres-producer.t
index 9c50db7..c2db844 100644
--- a/t/47postgres-producer.t
+++ b/t/47postgres-producer.t
@@ -686,6 +686,14 @@ is($view2_sql1, $view2_sql_replace, 'correct "CREATE OR REPLACE VIEW" SQL 2');
($def) = SQL::Translator::Producer::PostgreSQL::create_index($index, $quote);
is($def, 'CREATE INDEX "myindex" on "foobar" USING hash ("bar", lower(foo)) WHERE upper(foo) = \'bar\' AND bar = \'foo\'', 'index using & where created w/ quotes');
}
+
+ {
+ my $index = $table->add_index(name => 'myindex', fields => ['coalesce(foo, 0)']);
+ my ($def) = SQL::Translator::Producer::PostgreSQL::create_index($index);
+ is($def, "CREATE INDEX myindex on foobar (coalesce(foo, 0))", 'index created');
+ ($def) = SQL::Translator::Producer::PostgreSQL::create_index($index, $quote);
+ is($def, 'CREATE INDEX "myindex" on "foobar" (coalesce(foo, 0))', 'index created w/ quotes');
+ }
}
my $drop_view_opts1 = { add_drop_view => 1, no_comments => 1, postgres_version => 8.001 };
The SQL produced for any table with an identity column will be invalid.
[1] fake.fake.1> sp_help TestProducer;
Name Owner Object_type Object_status Create_date
------------------------ ---------- ---------------------- -------------------------- --------------------------------------
TestProducer dbo user table -- none -- Jul 13 2017 12:42PM
(1 row affected)
Column_name Type Length Prec Scale Nulls Not_compressed Default_name Rule_name Access_Rule_name Computed_Column_object Identity
---------------------- -------------- ------------ -------- ---------- ---------- ---------------------------- ------------------------ ------------------ -------------------------------- -------------------------------------------- --------------------
TestID numeric 5 9 0 0 0 NULL NULL NULL NULL 1
TestVar varchar 2 NULL NULL 0 0 NULL NULL NULL NULL 0
TestChr char 1 NULL NULL 1 0 NULL NULL NULL NULL 0
Object does not have any indexes.
No defined keys for this object.
If I have SQL::Translator::Producer::Sybase dump out the SQL for this table then I get this (comments and blank lines removed):
CREATE TABLE TestProducer (
TestID IDENTITY numeric(9,0) NOT NULL,
TestVar varchar(2) NOT NULL,
TestChr char(1) NULL
);
Notice that IDENTITY falls before the datatype, which is invalid, will not load in Sybase, and rightly causes SQL::Translator::Parser::Sybase to error out.
When attempting to parse Oracle SQL where a column definition has a default value of CURRENT_TIMESTAMP the parser will fail with the following error:
ERROR (line 1): Invalid statement: Was expecting remark, or run, or
prompt, or create, or table comment, or comment on
table, or comment on column, or alter, or drop
translate: Error with parser 'SQL::Translator::Parser::Oracle': Parse failed.
You can recreate the error with the following script:
#!/usr/bin/env perl
use strict;
use warnings;
use FindBin;
use SQL::Translator;
my $translator = SQL::Translator->new(
# Print debug info
debug => 1,
# Print Parse::RecDescent trace
trace => 1,
# Don't include comments in output
no_comments => 0,
# Print name mutations, conflicts
show_warnings => 1,
# Add "drop table" statements
add_drop_table => 1,
# To quote or not to quote, that's the question
quote_identifiers => 1,
# Validate schema object
validate => 1,
# Make all table names CAPS in producers which support this option
format_table_name => sub {my $tablename = shift; return uc($tablename)},
# Null-op formatting, only here for documentation's sake
format_package_name => sub {return shift},
format_fk_name => sub {return shift},
format_pk_name => sub {return shift},
);
my $output = $translator->translate(
from => 'Oracle',
to => 'MySQL',
# Or an arrayref of filenames, i.e. [ $file1, $file2, $file3 ]
filename => "$FindBin::Bin/../db_versions/just_person_test.sql",
) or die $translator->error;
print $output;
just_person_test.sql contains the following:
CREATE TABLE person (
id varchar2(32) NOT NULL,
added date DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (id)
);
If you remove DEFAULT CURRENT_TIMESTAMP, it will parse just fine.
Hi veesh, I have two XRefs to fail reports that have a connection to the release 1.64:
It would probably be helpful to the downstream users if you could chime in on those issues and explain what they have to fix.
Thanks!
Postgres has a weird feature for its indexes where you can specify an "opclass" on the fields of the index definition. SQL::Translator currently doesn't have a place to store this information, in addition to not being able to round-trip it.
Here's an example from the trigram module:
CREATE INDEX trgm_idx ON test_trgm USING GIN (t gin_trgm_ops);
I now have two projects using trigram indexes, so the itch to fix it is growing. I discovered that the DDL generator already has a special case to not quote field names with parentheses in them, so I was able to work around the problem for generating DDL with:
->add_index({
name => 'trgm_idx',
fields => [ '(t) gin_trgm_ops' ],
options => { using => "GIN" }
})
because Postgres allows arbitrary parentheses around the field name.
It seems a bit hacky. In most other places of DBIC when we want literal SQL we can use a scalar ref. Would that be the right thing to do here?
The next question is how to round-trip this. If I add Postgres Parser support for detecting trigram indices, should I construct index objects like above? (with the parentheses around the column name) or should there be a new scalar-ref feature first and then use that? On the same topic, I don't see a good way to put the "ASC" or "DESC" flags on the fields either, such as used in
CREATE INDEX IF NOT EXISTS x ON y (a DESC, b DESC, c ASC);
As a final consideration, it might be counter-productive to add SQL into the fields because code that wants to introspect a table to find out which columns are indexed would not find a match between sql fields and column names. Maybe there should be field objects that stringify to the field name and contain more descriptive attributes to generate the sql?