terraform-google-modules / terraform-google-sql-db

Creates a Cloud SQL database instance

Home Page: https://registry.terraform.io/modules/terraform-google-modules/sql-db/google

License: Apache License 2.0

Languages: HCL 66.14%, Makefile 1.19%, Python 4.88%, Go 27.80%
Topics: cft-terraform, databases

terraform-google-sql-db's Introduction

terraform-google-sql

terraform-google-sql makes it easy to create a Google Cloud SQL instance and implement high availability settings. This module is composed of several submodules, including mysql, postgresql, mssql, and private_service_access.

See more details in each module's README.

Compatibility

This module is meant for use with Terraform 1.3+ and is tested using Terraform 1.6+. If you find incompatibilities using Terraform >= 1.3, please open an issue.

Upgrading

The current version is 22.X. The following guides are available to assist with upgrades:

Root module

The root module has been deprecated. Please switch to using one of the submodules.

Requirements

Installation Dependencies

Configure a Service Account

To execute this module you must have a Service Account with the following (a sketch of granting the roles follows the list):

Roles

  • Cloud SQL Admin: roles/cloudsql.admin
  • Compute Network Admin: roles/compute.networkAdmin
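For illustration, here is a minimal sketch of granting those roles with Terraform. The project ID and service account email variables are assumptions, not inputs of this module:

# Hypothetical sketch: grant the required roles to an existing service account.
# var.project_id and var.terraform_sa_email are assumed inputs.
resource "google_project_iam_member" "cloudsql_admin" {
  project = var.project_id
  role    = "roles/cloudsql.admin"
  member  = "serviceAccount:${var.terraform_sa_email}"
}

resource "google_project_iam_member" "network_admin" {
  project = var.project_id
  role    = "roles/compute.networkAdmin"
  member  = "serviceAccount:${var.terraform_sa_email}"
}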

Enable APIs

In order to operate with the Service Account you must activate the following APIs on the project where the Service Account was created:

  • Cloud SQL Admin API: sqladmin.googleapis.com

To use Private Service Access, which is required for Private IPs, you must activate the following APIs on the project where your VPC resides (a sketch of enabling them follows the list):

  • Cloud SQL Admin API: sqladmin.googleapis.com
  • Compute Engine API: compute.googleapis.com
  • Service Networking API: servicenetworking.googleapis.com
  • Cloud Resource Manager API: cloudresourcemanager.googleapis.com
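As a hedged sketch, the APIs above can be enabled with google_project_service resources; var.project_id is an assumed input:

# Hypothetical sketch: enable the required APIs on the project that hosts the VPC.
resource "google_project_service" "required" {
  for_each = toset([
    "sqladmin.googleapis.com",
    "compute.googleapis.com",
    "servicenetworking.googleapis.com",
    "cloudresourcemanager.googleapis.com",
  ])

  project            = var.project_id
  service            = each.value
  disable_on_destroy = false  # keep the APIs enabled when this config is destroyed
}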

Service Account Credentials

You can pass the service account credentials into this module by setting the following environment variables:

  • GOOGLE_CREDENTIALS
  • GOOGLE_CLOUD_KEYFILE_JSON
  • GCLOUD_KEYFILE_JSON

See more details.

Provision Instructions

This module has no root configuration, so it cannot be used directly; use one of the submodules instead.

Copy the relevant block into your Terraform configuration, set the variables, and run terraform init:

For MySQL:

module "sql-db" {
  source  = "GoogleCloudPlatform/sql-db/google//modules/mysql"
  version = "~> 22.0"
}

or for PostgreSQL:

module "sql-db" {
  source  = "GoogleCloudPlatform/sql-db/google//modules/postgresql"
  version = "~> 22.0"
}

or for SQL Server:

module "sql-db" {
  source  = "GoogleCloudPlatform/sql-db/google//modules/mssql"
  version = "~> 22.0"
}

Contributing

Refer to the contribution guidelines for information on contributing to this module.


terraform-google-sql-db's Issues

Update variables

availability_type, crash_safe_replication, and user_labels need to be added as inputs.
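A minimal sketch of how these inputs might be declared in variables.tf; only the names come from the issue, while the types, descriptions, and defaults below are assumptions:

variable "availability_type" {
  description = "Availability type for the master instance (ZONAL or REGIONAL)."
  type        = string
  default     = "ZONAL"
}

variable "crash_safe_replication" {
  description = "Whether crash-safe replication flags are configured on the master instance."
  type        = bool
  default     = false
}

variable "user_labels" {
  description = "Key/value labels to attach to the instance."
  type        = map(string)
  default     = {}
}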

No connect for SQL DB from GKE cluster

I want to create a GKE cluster with a Cloud SQL instance and connect to MySQL from the cluster over a private IP. I cannot connect to MySQL and get the following error. What could be wrong?

Illuminate/Database/QueryException with message 'SQLSTATE[HY000] [1045] Access denied for user 'trade'@'192.168.1.15' (using password: YES) (SQL: select 1)'

But the user exists and the password is correct.


Version

Terraform v0.12.13
+ provider.google v2.18.1
+ provider.google-beta v3.0.0-beta.1
+ provider.kubernetes v1.10.0
+ provider.null v2.1.2
+ provider.random v2.2.1

Terraform configs


locals {
  cluster_type = "node-pool"
}

module "gcp-network" {
  source       = "terraform-google-modules/network/google"
  version      = "~> 1.4.0"
  project_id   = var.project_id
  network_name = var.network

  subnets = [
    {
      subnet_name   = var.subnetwork
      subnet_ip     = "10.0.0.0/17"
      subnet_region = var.region
    },
  ]

  secondary_ranges = {
    "${var.subnetwork}" = [
      {
        range_name    = var.ip_range_pods_name
        ip_cidr_range = "192.168.0.0/18"
      },
      {
        range_name    = var.ip_range_services_name
        ip_cidr_range = "192.168.64.0/18"
      },
    ]
  }
}

module "gke" {
  source                   = "terraform-google-modules/kubernetes-engine/google"
  version                  = "5.1.1"
  project_id               = var.project_id
  name                     = "${local.cluster_type}-${var.cluster_name_suffix}-cluster"
  regional                 = true
  region                   = var.region
  zones                    = var.zones
  network                  = module.gcp-network.network_name
  subnetwork               = module.gcp-network.subnets_names[0]
  ip_range_pods            = var.ip_range_pods_name
  ip_range_services        = var.ip_range_services_name
  create_service_account   = false
  remove_default_node_pool = true

  node_pools = [
    {
      name               = "default-node-pool"
      machine_type       = "n1-standard-2"
      min_count          = 2
      max_count          = 5
      disk_size_gb       = 30
      disk_type          = "pd-standard"
      image_type         = "COS"
      auto_repair        = true
      auto_upgrade       = false
      service_account    = var.compute_engine_service_account
      preemptible        = false
      initial_node_count = 2
    },
  ]
}

data "google_client_config" "default" {
}


resource "google_compute_subnetwork" "mysql" {
  project                  = var.project_id
  name                     = "mysql"
  ip_cidr_range            = "10.127.0.0/20"
  network                  = module.gcp-network.network_self_link
  region                   = var.region
  private_ip_google_access = true
}

module "sql_db_mysql" {
  source           = "GoogleCloudPlatform/sql-db/google//modules/mysql"
  version          = "2.0.0"
  project_id       = var.project_id
  region           = var.region
  zone             = "c"
  name             = var.sql_db_name
  db_name          = var.db_name
  database_version = "MYSQL_5_7"
  user_name        = var.user_name
  user_host        = var.user_host
  user_password    = var.user_password
  disk_size        = var.disk_size
  disk_type        = var.disk_type

  ip_configuration = {
    ipv4_enabled        = true
    require_ssl         = false
    private_network     = module.gcp-network.network_self_link
    authorized_networks = []
  }

  database_flags = [
    {
      name  = "log_bin_trust_function_creators"
      value = "on"
    },
  ]
}

module "private-service-access" {
  source      = "GoogleCloudPlatform/sql-db/google//modules/private_service_access"
  version     = "2.0.0"
  project_id  = var.project_id
  vpc_network = module.gcp-network.network_name
}


Update tests to exercise examples

To ensure that the examples remain functionally correct, the test fixtures should be updated to invoke the example modules rather than directly invoking the functional submodules. The examples can be split to separate MySQL from PostgreSQL to simplify the responsibility of each fixture.

output empty

I am not getting any outputs from the module.

module "mysql-db" {
  source              = "GoogleCloudPlatform/sql-db/google"
  version             = "1.0.0"
  name                = "${var.deploy_name}"
  database_version    = "MYSQL_5_7"
  disk_autoresize     = true
  region              = "${var.region}"
  location_preference = "${var.location_preference}"
}

output "instance_name" {
  description = "The name of the database instance"
  value       = "${module.mysql-db.instance_name}"
}

Support for private networking

Currently, attempting to use private networking errors out because a Service Networking connection and a private IP range are not established.
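For context, a hedged sketch of the resources that establish that connection; the network reference and naming below are assumptions (the repo's private_service_access submodule wraps this logic):

# Hypothetical sketch: reserve a private range and create the Service Networking
# connection that Cloud SQL private IP requires.
resource "google_compute_global_address" "private_ip_range" {
  project       = var.project_id
  name          = "google-managed-services-range"
  purpose       = "VPC_PEERING"
  address_type  = "INTERNAL"
  prefix_length = 16
  network       = google_compute_network.default.self_link
}

resource "google_service_networking_connection" "private_vpc_connection" {
  network                 = google_compute_network.default.self_link
  service                 = "servicenetworking.googleapis.com"
  reserved_peering_ranges = [google_compute_global_address.private_ip_range.name]
}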

How to promote a read replica

Hello.

  1. We did a replica promotion because we saw strange behaviour on the master.
  2. We removed the master from state.

How can I make the promoted instance the master in state and create a new read replica? I imported the instance into state as the master, but it has "-replica0" in its name and would need to be recreated.

Problems with interpolation in "ip_configuration" variable

I'm trying to deploy a private PostgreSQL instance using the module.

My module definition looks like this:

module "postgres_foo" {
  source  = "GoogleCloudPlatform/sql-db/google//modules/postgresql"
  version = "1.1.1"

  name       = "postgres-foo-${random_id.name.hex}"
  project_id = "${var.project_id}"
  region     = "${var.region}"
  zone       = "${var.zone_suffix}"
  database_version = "POSTGRES_9_6"

  ip_configuration = {
    ipv4_enabled = false
    private_network = "${google_compute_network.main.self_link}"
  }
}

Planning this results in an error:

Error: module.postgres_foo.google_sql_database_instance.default: settings.0.ip_configuration: should be a list

However, if I run terraform state show google_compute_network.main and copy the value of self_link directly, terraform plan works fine:

module "postgres_foo" {
  source  = "GoogleCloudPlatform/sql-db/google//modules/postgresql"
  version = "1.1.1"

  name       = "postgres-foo-${random_id.name.hex}"
  project_id = "${var.project_id}"
  region     = "${var.region}"
  zone       = "${var.zone_suffix}"
  database_version = "POSTGRES_9_6"

  ip_configuration = {
    ipv4_enabled = false
    private_network = "https://www.googleapis.com/compute/v1/projects/my-project/global/networks/main"
  }
}

I suppose it somehow relates to interpolation issues in module variables of complex types (e.g. maps), but I'm not sure. The only related issue I've found is this: hashicorp/terraform#7241

Destruction of postgres module causes error.

When using the following config, I get an error when using terraform destroy: Error, failure waiting for deletion of db-user in my-db

module "postgres" {
  source  = "GoogleCloudPlatform/sql-db/google//modules/postgresql"
  version = "3.1.0"

  project_id = var.project
  region  = var.region
  name    = local.instance_name
  db_name = var.db_name

  database_version = var.postgres_version
  tier = var.machine_type

  user_password = random_password.password.result
  user_name     = var.user_name

  ip_configuration = {
    require_ssl = true
    ipv4_enabled = true
    authorized_networks = []
    private_network = null
  }
  backup_configuration = {
    enabled = true
    binary_log_enabled = false
    start_time         = "04:00"
  }
  
  maintenance_window_day = 7
  maintenance_window_hour = 7
  maintenance_window_update_track = "stable"

  database_flags = [
    {
      name  = "autovacuum_naptime"
      value = "2"
    },
  ]

  zone = var.zone
}

Config
Terraform v0.12.21

MySQL Read Replica IP Configuration not applied

Scenario

When providing read replica configuration via the variable read_replica_ip_configuration, the information being passed in is not used when creating the read replica.

Expected Behavior

Read replica should use the IP configuration from the variable read_replica_ip_configuration

Actual Behavior

Read replica uses the IP configuration from the main database (via the variable ip_configuration)

Steps to Reproduce

Create a master database with a read replica and supply different information in the variables ip_configuration and read_replica_ip_configuration.

Proposed Solution

In read_replica.tf, update the locals block:

locals {
  primary_zone       = "${var.zone}"
  read_replica_zones = ["${compact(split(",", var.read_replica_zones))}"]

  zone_mapping = {
    enabled  = ["${local.read_replica_zones}"]
    disabled = "${list(local.primary_zone)}"
  }

  zones_enabled = "${length(local.read_replica_zones) > 0}"
  mod_by        = "${local.zones_enabled ? length(local.read_replica_zones) : 1}"

  zones = "${local.zone_mapping["${local.zones_enabled ? "enabled" : "disabled"}"]}"

   # ************************************************
   # Append the following:
   # ************************************************
  read_replica_ip_configuration_enabled = "${length(keys(var.read_replica_ip_configuration)) > 0 ? true : false}"
  read_replica_ip_configurations = {
    enabled  = "${var.read_replica_ip_configuration}"
    disabled = "${map()}"
  }
   # ************************************************

}

Then at line 38:

    ip_configuration            = ["${local.read_replica_ip_configurations["${local.read_replica_ip_configuration_enabled ? "enabled" : "disabled"}"]}"]

The same change will be required for both mysql and postgresql modules.

Private subnet allocation gives error.

Running the example with a private subnet gives issues; even with a different CIDR in the 172.x range I get the same issue!

google_sql_database_instance.default: Error, failed to create instance example-postgresql-a6df: googleapi: Error 400: Invalid request: Non-routable or private authorized network (10.127.0.0/20).., invalid

-Thanks Amit

Invalid sql instance name because of project name containing capital letters

google_sql_database_instance.master: Creating...
  connection_name:                   "" => "<computed>"
  database_version:                  "" => "MYSQL_5_6"
  first_ip_address:                  "" => "<computed>"
  ip_address.#:                      "" => "<computed>"
  master_instance_name:              "" => "<computed>"
  name:                              "" => "master-instance"
  private_ip_address:                "" => "<computed>"
  project:                           "" => "Villagetravelsproject-193803"
  public_ip_address:                 "" => "<computed>"
  region:                            "" => "us-central"
  replica_configuration.#:           "" => "<computed>"
  self_link:                         "" => "<computed>"
  server_ca_cert.#:                  "" => "<computed>"
  service_account_email_address:     "" => "<computed>"
  settings.#:                        "" => "1"
  settings.0.activation_policy:      "" => "<computed>"
  settings.0.backup_configuration.#: "" => "<computed>"
  settings.0.crash_safe_replication: "" => "<computed>"
  settings.0.disk_size:              "" => "<computed>"
  settings.0.disk_type:              "" => "<computed>"
  settings.0.ip_configuration.#:     "" => "<computed>"
  settings.0.location_preference.#:  "" => "<computed>"
  settings.0.pricing_plan:           "" => "PER_USE"
  settings.0.replication_type:       "" => "SYNCHRONOUS"
  settings.0.tier:                   "" => "D0"
  settings.0.version:                "" => "<computed>"

Error: Error applying plan:

1 error(s) occurred:

* google_sql_database_instance.master: 1 error(s) occurred:

* google_sql_database_instance.master: Error, failed to create instance master-instance: googleapi: Error 400: Invalid request: Invalid full instance name (Villagetravelsproject-193803:master-instance).., invalid

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

Unable to set maintenance windows for replicas

According to the Maintenance Windows section of Google's docs, replicas of 2nd gen instances do not allow setting one explicitly:

Read replicas do not support the maintenance window setting; they can experience a disruptive upgrade at any time. Failover replicas have the same maintenance window as the primary instance, and are updated immediately before the primary.

I seem to be unable to set up replicas without specifying a maintenance window, and applying the default config results in an error.

  ~ module.cloud_sql_prd.google_sql_database_instance.failover-replica
      settings.0.maintenance_window.#:                               "0" => "1"
      settings.0.maintenance_window.0.day:                           "" => "6"
      settings.0.maintenance_window.0.hour:                          "" => "3"
      settings.0.maintenance_window.0.update_track:                  "" => "stable"

  ~ module.cloud_sql_prd.google_sql_database_instance.replicas
      settings.0.maintenance_window.#:                               "0" => "1"
      settings.0.maintenance_window.0.day:                           "" => "1"
      settings.0.maintenance_window.0.hour:                          "" => "3"
      settings.0.maintenance_window.0.update_track:                  "" => "canary"

Is the error in my code (is there a way to bypass this config), or should this be handled by the module?

Thanks!
--Ariel

backup_configuration.binary_log_enabled is always set to true in TFState even after changing to false

I spun up Cloud SQL MySQL with backup_configuration.binary_log_enabled set to true. Now I am trying to set it to false.


The terraform plan said:

        backup_configuration {
          ~ binary_log_enabled = true -> false
            enabled            = false
            start_time         = "00:05"
        }

The terraform apply was successful saying

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

However, when I plan and apply again, it shows the same change, as if the previous apply had not happened.


I also looked into the state file, and it still shows the following, even though the plan/apply steps reported that backup_configuration.binary_log_enabled was changed to false:

"backup_configuration": [
    {
        "binary_log_enabled": true,
        "enabled": false,
        "location": "",
        "start_time": "00:05"
    }
],

Move replica/users/databases to for_each

count is being used for

  • Databases var.additional_databases
  • Users var.additional_users
  • Replica instances var.read_replica_size

We should move these to for_each, keeping the same input type (list) and constructing the map/set internally, similar to other modules.
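A hedged sketch of the proposed pattern, keeping the list-typed input and keying the resources by name instead of by index; the attribute names on each user object are assumptions:

# Hypothetical sketch: build a map from the existing list input, then use for_each.
locals {
  users_map = { for u in var.additional_users : u.name => u }
}

resource "google_sql_user" "additional_users" {
  for_each = local.users_map

  project  = var.project_id
  instance = google_sql_database_instance.default.name
  name     = each.value.name
  password = each.value.password
  host     = each.value.host
}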

SQL Server Module

Are there plans or a timeline for a Cloud SQL module for SQL Server?
Thanks

backup_configuration error for postgres

"settings.0.backup_configuration.0.start_time": one of settings.0.backup_configuration.0.binary_log_enabled,settings.0.backup_configuration.0.enabled,settings.0.backup_configuration.0.location,settings.0.backup_configuration.0.start_time must be specified

One of these values needs to be set: https://github.com/terraform-google-modules/terraform-google-sql-db/blob/master/modules/postgresql/main.tf#L40

https://github.com/terraform-google-modules/terraform-google-sql-db/blob/master/modules/postgresql/variables.tf#L127

https://www.terraform.io/docs/providers/google/guides/version_3_upgrade.html#at-least-one-of-binary_log_enabled-enabled-start_time-or-location-is-now-required-on-google_sql_database_instance-settings-backup_configuration

It seems that, in the example, you should pin the provider version to:

google = "~> 2.13"
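As a hedged example of satisfying the provider constraint, pass at least one backup_configuration attribute to the submodule; the version pin and values below are illustrative only:

module "postgresql-db" {
  source  = "GoogleCloudPlatform/sql-db/google//modules/postgresql"
  version = "~> 3.0"  # assumption; pin to the release you actually use

  project_id       = var.project_id
  region           = var.region
  zone             = "c"
  name             = "example-postgresql"
  database_version = "POSTGRES_9_6"

  # At least one of these attributes must be set with provider 3.x.
  backup_configuration = {
    enabled    = true
    start_time = "02:00"
  }
}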

"Error in function call"

I'm running into problems when attempting to use the module:

Error: Error in function call

  on .terraform/modules/postgresql-db/GoogleCloudPlatform-terraform-google-sql-db-9c7727a/modules/postgresql/main.tf line 22, in locals:
  22:     disabled = "${map()}"

Call to function "map" failed: map requires an even number of two or more
arguments, got 0.

See hashicorp/terraform#20372 for more info

Cannot provision Postgres database

New project in GCP with zero provisioned databases. Timeout after 20 minutes with:

Error: Error, failed to create instance dev-1608296: googleapi: Error 409: The instance or operation is not in an appropriate state to handle the request., invalidState

  on ../../modules/postgres/main.tf line 10, in resource "google_sql_database_instance" "default":
  10: resource "google_sql_database_instance" "default" {

My config:

resource "google_sql_database_instance" "default" {
  project          = "${var.project_id}"
  name             = "${var.db_instance_name}"
  database_version = "${var.db_version}"
  region           = "${var.region}"

  settings {
    tier = "${var.db_instance_tier}"
  }
}

resource "google_sql_database" "default" {
  name       = "${var.db_name}"
  project    = "${var.project_id}"
  instance   = "${google_sql_database_instance.default.name}"
  depends_on = ["google_sql_database_instance.default"]
}

Add module_depends_on resource to private_service_access

  • The private_service_access module depends on the Service Networking API and on vpc_network, so it would be very helpful to have module_depends_on in private_service_access (a sketch of the requested usage follows this list).
  • Also, the description for private_service_access should be updated; it is useful not only for MySQL but also for PostgreSQL databases.
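A hypothetical sketch of the requested usage; module_depends_on does not exist in the submodule yet, and the google_project_service reference is an assumption:

module "private-service-access" {
  source      = "GoogleCloudPlatform/sql-db/google//modules/private_service_access"
  project_id  = var.project_id
  vpc_network = module.gcp-network.network_name

  # Requested input: defer creation until the API and the network are ready.
  module_depends_on = [google_project_service.servicenetworking.id]
}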

Unknown resource 'random_id.name'

I'm using the module; however, when I run terraform init the following error is displayed:

Downloading modules...
Get: https://api.github.com/repos/GoogleCloudPlatform/terraform-google-sql-db/tarball/1.0.1?archive=tar.gz
Error getting plugins: module root: 1 error(s) occurred:

* module 'mysql-db': unknown resource 'random_id.name' referenced in variable random_id.name.hex

postgres issues

Hi,
I found two issues in the postgres module:

  1. Terraform always adds the user even if it already exists (I think it's because of the host "%"), unlike the MySQL module.
  2. The Terraform output shows the password, unlike the MySQL module.

some variables are not used (disk_size, disk_type, pricing_plan, replication_type)

Hi

thank you for your module. I noticed that some variables are not used:

  • disk_size
  • disk_type
  • pricing_plan
  • replication_type

I really care about disk_size, but other people may need the others.

As a new year's resolution, I will try to send you a pull request for these, and also for adding support for a failover instance.

Thanks!

authorized_networks

Hi, can you give an example of how to pass a properly formatted value to ip_configuration?
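Based on the MySQL configuration further down this page, here is a hedged example of the expected shape: authorized_networks is a list of objects with a name and a publicly routable CIDR value. The version pin and surrounding values are illustrative:

module "sql-db" {
  source  = "GoogleCloudPlatform/sql-db/google//modules/mysql"
  version = "~> 3.0"  # assumption

  project_id       = var.project_id
  region           = var.region
  zone             = "c"
  name             = "example-mysql"
  database_version = "MYSQL_5_7"
  user_name        = var.user_name
  user_password    = var.user_password

  ip_configuration = {
    ipv4_enabled    = true
    require_ssl     = false
    private_network = null

    authorized_networks = [
      {
        name  = "office"
        value = "203.0.113.0/24"  # must be publicly routable, not an RFC 1918 range
      },
    ]
  }
}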

How to create a SQL DB with a private IP

I want to create a GKE cluster with a Cloud SQL instance and connect to MySQL from the cluster over a private IP.
I get an error. What is the problem?


locals {
  cluster_type = "node-pool"
}

module "gcp-network" {
  source       = "terraform-google-modules/network/google"
  version      = "~> 1.4.0"
  project_id   = var.project_id
  network_name = var.network

  subnets = [
    {
      subnet_name   = var.subnetwork
      subnet_ip     = "10.0.0.0/17"
      subnet_region = var.region
    },
  ]

  secondary_ranges = {
    "${var.subnetwork}" = [
      {
        range_name    = var.ip_range_pods_name
        ip_cidr_range = "192.168.0.0/18"
      },
      {
        range_name    = var.ip_range_services_name
        ip_cidr_range = "192.168.64.0/18"
      },
    ]
  }
}

module "gke" {
  source                   = "terraform-google-modules/kubernetes-engine/google"
  version                  = "5.1.1"
  project_id               = var.project_id
  name                     = "${local.cluster_type}-${var.cluster_name_suffix}-cluster"
  regional                 = true
  region                   = var.region
  zones                    = var.zones
  network                  = module.gcp-network.network_name
  subnetwork               = module.gcp-network.subnets_names[0]
  ip_range_pods            = var.ip_range_pods_name
  ip_range_services        = var.ip_range_services_name
  create_service_account   = false
  remove_default_node_pool = true

  node_pools = [
    {
      name               = "default-node-pool"
      machine_type       = "n1-standard-2"
      min_count          = 2
      max_count          = 5
      disk_size_gb       = 30
      disk_type          = "pd-standard"
      image_type         = "COS"
      auto_repair        = true
      auto_upgrade       = true
      service_account    = var.compute_engine_service_account
      preemptible        = false
      initial_node_count = 2
    },
  ]
}

data "google_client_config" "default" {
}


resource "google_compute_subnetwork" "mysql" {
  project                  = var.project_id
  name                     = "mysql"
  ip_cidr_range            = "10.127.0.0/20"
  network                  = module.gcp-network.network_self_link
  region                   = var.region
  private_ip_google_access = true
}

module "sql_db_mysql" {
  source           = "GoogleCloudPlatform/sql-db/google//modules/mysql"
  version          = "2.0.0"
  project_id       = var.project_id
  region           = var.region
  zone             = "c"
  name             = var.sql_db_name
  db_name          = var.db_name
  database_version = "MYSQL_5_7"
  user_name        = var.user_name
  user_host        = var.user_host
  user_password    = var.user_password
  disk_size        = var.disk_size
  disk_type        = var.disk_type

  ip_configuration = {
    ipv4_enabled        = true
    require_ssl         = true
    private_network     = null
    authorized_networks = [
      {
        name  = module.gcp-network.network_name
        value = google_compute_subnetwork.mysql.ip_cidr_range
      },
    ]
  }

  database_flags = [
    {
      name  = "log_bin_trust_function_creators"
      value = "on"
    },
  ]
}

Error: Error, failed to create instance trade: googleapi: Error 400: Invalid request: Non-routable or private authorized network (10.127.0.0/20).., invalid

  on .terraform/modules/sql_db_mysql/terraform-google-modules-terraform-google-sql-db-5f780d0/modules/mysql/main.tf line 27, in resource "google_sql_database_instance" "default":
  27: resource "google_sql_database_instance" "default" {

Disk Resize and Disk Size are defaulted to conflicting values

Unless I'm mistaken, the default values are incorrect for disk_size. It should not be supplied at all. The documentation default examples are also wrong. We are on version 1.2.0 but I think this applies to all versions.

Documentation: https://www.terraform.io/docs/providers/google/r/sql_database_instance.html
Relevant section (emphasis added):

disk_autoresize - (Optional, Second Generation, Default: true) Configuration to increase storage size automatically. Note that future terraform apply calls will attempt to resize the disk to the value specified in disk_size -if this is set, do not set disk_size.

disk_size - (Optional, Second Generation, Default: 10) The size of data disk, in GB. Size of a running instance cannot be reduced but can be increased.

This repo follows the defaults given on the above documentation. The problem occurs when your DB autoscales higher than your disk_size. If terraform apply runs again, it will try to recreate the DB at the smaller size (defaulted) and cause outage/data loss. We need a way to not supply the disk_size variable at all when disk_autoresize=true (default value).
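Until the module handles this, a hedged user-side workaround (not the module's official fix) is to turn autoresize off and manage disk_size explicitly, so a later apply cannot shrink a disk that has grown; values mirror the examples elsewhere on this page:

module "sql-db" {
  source  = "GoogleCloudPlatform/sql-db/google//modules/mysql"
  version = "~> 1.2"  # version mentioned in the issue; adjust to your release

  project_id       = var.project_id
  region           = var.region
  zone             = "c"
  name             = "example-mysql"
  database_version = "MYSQL_5_7"
  user_name        = var.user_name
  user_password    = var.user_password

  disk_autoresize = false
  disk_size       = 50  # keep this at or above the instance's current disk size
}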

Standardize integration testing

The integration testing for this module should be standardized based on the module template. This should include obviating the need for manual steps beyond running kitchen.

Unable to boot the Cloud SQL PostgreSQL instance inside the VPC

Hi,
I am trying to boot my Cloud SQL PostgreSQL instance inside a VPC using Terraform. I followed the above two examples and also the other examples in the repo, but haven't had any luck.

ip_configuration {
        ipv4_enabled = true
        private_network = "10.10.22.0/24"
        }

Earlier, I had tried:

ip_configuration {
  ipv4_enabled = true
  authorized_networks = [{
    name  = "${var.network_name}"
    value = "10.10.22.0/24"
  }]
}

Can you help me here or guide me on how I can bring this instance into the VPC?
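Based on the private-IP examples elsewhere on this page, here is a hedged sketch of the usual wiring: peer the VPC with the private_service_access submodule first, then pass the VPC self link (not a CIDR) as private_network. Version pins and variable names are assumptions:

module "private-service-access" {
  source      = "GoogleCloudPlatform/sql-db/google//modules/private_service_access"
  version     = "~> 2.0"  # assumption
  project_id  = var.project_id
  vpc_network = var.network_name
}

module "postgresql-db" {
  source  = "GoogleCloudPlatform/sql-db/google//modules/postgresql"
  version = "~> 2.0"  # assumption

  project_id       = var.project_id
  region           = var.region
  zone             = "c"
  name             = "example-postgresql"
  database_version = "POSTGRES_9_6"

  ip_configuration = {
    ipv4_enabled        = false
    require_ssl         = true
    private_network     = var.network_self_link  # VPC self link, not an IP range
    authorized_networks = []
  }
}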

Name conflict when recreating failover database

Scenario

A MySQL database restore is done on a database with an existing failover replica, requiring deletion of the failover to complete the restore. Attempting to recreate the failover database using Terraform results in a name collision error.

Expected Behavior

Terraform apply should determine the replica is not present and successfully create it.

Actual Behavior

Name collision error message as below:

* google_sql_database_instance.failover-replica: Error, failed to create instance test-investigate-instance-failover with error code 409: googleapi: Error 409: The Cloud SQL instance already exists., instanceAlreadyExists. This may be due to a name collision - SQL instance names cannot be reused within a week.

Steps to Reproduce

  1. Using terraform, create a MySQL database with a failover replica
  2. Manually restore the database which requires deletion of the failover replica.
  3. Run terraform apply to recreate the failover replica

Proposed Solution

Create a failover database name suffix that defaults to an empty string to allow the issue above to be resolved and not affect any existing code.

In variables.tf add the following code:

variable "failover_replica_name_suffix" {
  description = "An optional suffix to add to the failover database name"
  default     = ""
}

In 'failover_replica.tf' (line 20):

name                  = "${var.name}-failover${(var.failover_replica_name_suffix=="" ? "" : "-")}${var.failover_replica_name_suffix}"

The conditional expression in the name above gracefully handles inserting a dash before the suffix if it is provided.

The suffix can now be updated after step 2 above resulting in successful execution of terraform apply in step 3.

Cannot create two mysql database instances with different parameters. Error 400: The incoming request contained invalid data., invalidRequest

I am trying to create two database instances using this module and I am getting the following error

Error: Error, failed to create instance live-api-30-15: googleapi: Error 400: The incoming request contained invalid data., invalidRequest

  on .terraform/modules/live_api_db/terraform-google-modules-terraform-google-sql-db-2d7c0eb/modules/mysql/main.tf line 27, in resource "google_sql_database_instance" "default":
  27: resource "google_sql_database_instance" "default" {

My terraform code

terraform {
  backend "gcs" {
    prefix = "databases/stag"
    bucket = "flo-terraform-state"
  }
}

locals {
  app_env = "stag"
  gcp_region = "us-central1"
  gcp_project = "flosports-174016"
  core_cluster_name = "core-api-30-15"
  live_cluster_name = "live-api-30-15"
}

provider "google" {
  project = local.gcp_project
  region = local.gcp_region
}

resource "random_string" "live_password" {
  length = 12
  special = false
}

resource "random_string" "core_password" {
  length = 12
  special = false
}

module "core_api_db" {
  source  = "GoogleCloudPlatform/sql-db/google//modules/mysql"
  version = "3.1.0"
  tier = "db-f1-micro"
  database_version = "MYSQL_5_7"
  name = local.core_cluster_name
  project_id = local.gcp_project
  region = local.gcp_region
  zone = "a"
  backup_configuration = {
    "binary_log_enabled": true,
    "enabled": true,
    "start_time": null
  }
  db_charset = "utf8mb4"
  db_collation = "utf8mb4_bin"
  db_name = "core"
  disk_autoresize = true
  disk_size = 50

  user_labels = {
    app = "core-api-30"
    env = local.app_env
  }
  user_name = "root"
  user_password = random_string.core_password.result
}

module "live_api_db" {
  source  = "GoogleCloudPlatform/sql-db/google//modules/mysql"
  version = "3.1.0"
  tier = "db-f1-micro"
  database_version = "MYSQL_5_7"
  name = local.live_cluster_name
  project_id = local.gcp_project
  region = local.gcp_region
  zone = "a"
  backup_configuration = {
    "binary_log_enabled": true,
    "enabled": true,
    "start_time": null
  }
  db_charset = "utf8mb4"
  db_collation = "utf8mb4_bin"
  db_name = "live"
  disk_autoresize = true
  disk_size = 5

  user_labels = {
    app = "live-api-30"
    env = local.app_env
  }
  user_name = "root"
  user_password = random_string.live_password.result
}

location_preference.zone string contains region?

Hi:)

I can't understand why this is hardcoded to "${var.region}-${var.zone}".

   location_preference {
      zone = "${var.region}-${var.zone}"
    }

I have some CloudSQL instances already created and I'm trying to bring them under terraform provisioning.

Every time I do terraform plan, I get

 ~ module.example.google_sql_database_instance.default
    settings.0.location_preference.0.zone:    "europe-west1-d" => "europe-west1-europe-west1-d"

Is this a bug or feature?:)
Thanks!

Improve master authorized networks example

Currently, the example config is invalid (see #20) as it attempts to set up a master authorized network using the internal CIDR block.

We should improve this situation by:

  1. Moving the regular (non-private) example into its own separate example
  2. Accepting a user input (variable) for the master authorized network, with a narrow default (see the sketch below).
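A hypothetical sketch of the proposed variable for the example; the name and default below are assumptions:

variable "authorized_network_cidr" {
  description = "A publicly routable CIDR allowed to reach the master instance."
  type        = string
  default     = "203.0.113.0/32"  # narrow default from the documentation range
}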

Error using module.private-service-access based on the example

Hi,

In the example for the module.private_service_access in v1.1.2, you are using the network's self_link property, which results in an error:

* google_compute_global_address.google-managed-services-range: Error creating GlobalAddress: googleapi: Error 400: Invalid value for field 'resource.name': 'google-managed-services-https://www.googleapis.com/compute/v1/projects/my-project/global/networks/main'. Must be a match of regex '(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?)', invalid

If I am not mistaken, the module documentation states: vpc_network | Name of the VPC network to peer.

So it should be:

  module "private-service-access" {
    source      = "../../modules/private_service_access"
    project_id  = "${var.project_id}"
-   vpc_network = "${google_compute_network.default.self_link}"
+   vpc_network = "${google_compute_network.default.name}"
  }

Once fixed, the module works as intended.

Remove deprecated failover instance for HA MySQL

The usage of a failover instance for MySQL has been deprecated in favour of the newer availability_type = REGIONAL method that Postgres is already using.

https://cloud.google.com/sql/docs/mysql/high-availability#legacy_mysql_high_availability_option

Until April 2020, you have the option of using the legacy process for adding high availability to MySQL instances, which uses a failover replica. The legacy functionality is not available in the Cloud Console. Instead, use gcloud or cURL commands. See Legacy configuration: Creating a new instance configured for high availability or Legacy configuration: Configuring an existing instance for high availability.

Related Issue on using the new HA field: #71
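For reference, a hedged sketch of the newer approach: request a regional (HA) instance via availability_type instead of a failover replica. Whether and how the mysql submodule exposes this input is an assumption here:

module "mysql-ha" {
  source  = "GoogleCloudPlatform/sql-db/google//modules/mysql"
  version = "~> 3.0"  # assumption

  project_id       = var.project_id
  region           = var.region
  zone             = "c"
  name             = "example-mysql-ha"
  database_version = "MYSQL_5_7"
  user_name        = var.user_name
  user_password    = var.user_password

  availability_type = "REGIONAL"
}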

Is there support for creating multiple PostgreSQL schemas?

We have a case where we need to create a PostgreSQL Cloud SQL instance with multiple schemas. Since we are creating the database with a private IP, we can't use the "postgresql" provider.

What is the best solution for doing that while using terraform-google-sql-db?
