
electric-sql / electric


Local-first sync layer for web and mobile apps. Build reactive, realtime, local-first apps directly on Postgres.

Home Page: https://electric-sql.com

License: Apache License 2.0

Elixir 55.71% Shell 0.02% Makefile 0.23% Dockerfile 0.06% TypeScript 39.17% JavaScript 0.11% PLpgSQL 0.56% HTML 4.00% Erlang 0.14%
local-first sqlite elixir postgres sql crdts offline crdt

electric's Introduction



CI · License: Apache 2.0 · Status: Alpha · Chat: Discord

ElectricSQL

Sync for modern apps. From the inventors of CRDTs.


What is ElectricSQL?

ElectricSQL is a local-first software platform that makes it easy to develop high-quality, modern apps with instant reactivity, realtime multi-user collaboration and conflict-free offline support.

Local-first is a new development paradigm where your app code talks directly to an embedded local database and data syncs in the background via active-active database replication. Because the app code talks directly to a local database, apps feel instant. Because data syncs in the background via active-active replication, the model naturally supports multi-user collaboration and conflict-free offline working.

How do I use it?

ElectricSQL gives you instant local-first for your Postgres. Think of it like "Hasura for local-first". Drop ElectricSQL onto an existing Postgres-based system and you get instant local-first data synced into your apps.

ElectricSQL then provides a whole developer experience for you to control what data syncs where and to work with it locally in your app code. See the Introduction and the Quickstart guide to get started.

Repo structure

This is the main repository for the ElectricSQL source code. Key components include:

  • clients/typescript — TypeScript client that provides SQLite driver adapters, reactivity and a type-safe data access library
  • components/electric — Elixir sync service that manages active-active replication between Postgres and SQLite
  • generator — Prisma generator that creates the type safe data access library
  • protocol/satellite.proto — Protocol Buffers definition of the Satellite replication protocol

See the Makefiles for test and build instructions and the e2e folder for integration tests.

Team

ElectricSQL was founded by @thruflo and @balegas, under the guidance of their advisors.

See the Team and Literature pages for more details.

Contributing

See the Community Guidelines including the Guide to Contributing and Contributor License Agreement.

Support

We have an open community Discord. Come and say hello and let us know if you have any questions or need any help getting things running.

It's also super helpful if you leave the project a star here at the top of the page☝️

electric's People

Contributors

adeleke5140, alco, balegas, davidmartos96, define-null, dependabot[bot], ekkaiasmith, fooware, github-actions[bot], gregzo, icehaunter, jelligett, joshuafolorunsho, js2702, kandros, kevin-dp, klavinski, kyleamathews, magnetised, mentalgear, msfstef, ricotrevisan, robbwdoering, samuelcivalgo, samwillis, sebws, tachyonicbytes, thorwebdev, thruflo, v0idpwn


electric's Issues

[VAX-1187] Modify db:psql to connect to PG via Docker container

The current db:psql command relies on the machine's local psql binary to connect to the PG database that runs inside a Docker container, so it requires users to have Postgres installed on their machine for the psql command to be found. We can lift this requirement by modifying the db:psql command to connect to the container that runs PG and run the psql command inside it.

From SyncLinear.com | VAX-1187

deploying to fly

hey guys - awesome tech!

this was almost exactly what i was looking for (pg interface with a logically-replicated sqlite + crdt on globally distributed instances). i've been playing around with nqlite (nats as a sqlite active-active replicator + yjs), but it's not a robust solution. ideally, i'd like to quickly instantiate many (thousands of) scoped sqlite databases from pg + replicate crdt-based writes per instance during runtime.

i ran into a few issues deploying electric to fly but ran out of time before i could get things running.

  1. :epgsql.connect & ipv6: there's a check for ECTO_IPV6 for the electric endpoints, but on fly, my pg instance is also only available via ipv6. i added:
    connection_opts =
      config
      |> Keyword.fetch!(:connection)
      |> new_map_with_charlists()
      |> set_replication(replication?)
      |> Map.put_new(:tcp_opts, [:inet6])
  2. composing LOGICAL_PUBLISHER_HOST from fly envs. the ideal hostname is:
"#{env.FLY_MACHINE_ID}.vm.#{env.FLY_APP_NAME}.internal"

but this is a bit tricky to set up since neither fly config nor fly secrets supports interpolating from existing values. i couldn't use [[processes]] to inject env vars since the dockerfile sets the ENTRYPOINT. i ended up forking electric.

  3. i'm now seeing an error i don't quite understand, but this is where i left it:
:error, "55006",
:object_in_use,
"replication slot \"electric_replication_out_test\" is active for PID 22227",
[file: "slot.c", line: "516", routine: "ReplicationSlotAcquire", severity: "ERROR"]

cheers

[VAX-1068] Prefix env var DATABASE_URL

The starter template uses an env var named DATABASE_URL that sets the URL for connecting to Postgres.

The name of this variable might easily clash with env vars of other projects. The goal of this task is to prefix that variable with ELECTRIC_.

From SyncLinear.com | VAX-1068
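A minimal sketch of the migration pattern (the fallback behaviour and helper name are our assumptions, not the actual starter-template implementation): prefer the prefixed variable, but fall back to the legacy unprefixed name so existing setups keep working.

```typescript
// Hypothetical helper: resolve the Postgres URL from a prefixed env var,
// falling back to the old unprefixed DATABASE_URL (assumption: a fallback
// is desired during the transition).
function resolveDatabaseUrl(
  env: Record<string, string | undefined>,
): string | undefined {
  return env['ELECTRIC_DATABASE_URL'] ?? env['DATABASE_URL']
}
```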

Unable to replicate boolean columns

For the following prisma schema:

model Column {
  id                String        @id @default(dbgenerated("gen_random_uuid()")) @db.Uuid
  tableId           String        @map("table_id") @db.Uuid
  orderId           Float         @map("order_id")
  name              String
  type              Int
  integrationId     String?       @map("integration_id")
  autoRun           Boolean       @map("auto_run")
  integrationConfig String?       @map("integration_config")
  createdAt         Float         @map("created_at")
  electricUserId    String        @map("electric_user_id")
}

The generated client-side schema expects a number, not a boolean:

/////////////////////////////////////////
// COLUMN SCHEMA
/////////////////////////////////////////

export const ColumnSchema = z.object({
  id: z.string(),
  table_id: z.string(),
  order_id: z.number(),
  name: z.string(),
  type: z.number().int(),
  integration_id: z.string().nullable(),
  auto_run: z.number().int(),
  integration_config: z.string().nullable(),
  created_at: z.number(),
  electric_user_id: z.string(),
})

export type Column = z.infer<typeof ColumnSchema>

This is problematic because the Satellite protocol expects boolean values to be encoded as either "f" or "t", so the server crashes when the client sends a number.

23:11:55.195 pid=<0.2829.0> client_id=32d8b3f7-8d6f-4d0d-a4e6-baa3b7520447 instance_id=0cac0837-170a-4e79-9ca4-be2c47ca9670 user_id=43c4785e-51a8-4805-b17b-b483ab69d83b [debug] ws data received: %Electric.Satellite.SatOpLog{ops: [%Electric.Satellite.SatTransOp{op: {:begin, %Electric.Satellite.SatOpBegin{commit_timestamp: 1696029114724, trans_id: "", lsn: <<0, 0, 0, 40>>, origin: nil, is_migration: false}}}, %Electric.Satellite.SatTransOp{op: {:update, %Electric.Satellite.SatOpUpdate{relation_id: 0, row_data: %Electric.Satellite.SatOpRow{nulls_bitmask: "p", values: ["17cf2b1a-a1c7-4776-8a74-ff56e663c3b7", "", "", ""]}, old_row_data: nil, tags: ["32d8b3f7-8d6f-4d0d-a4e6-baa3b7520447@1696029114684"]}}}, %Electric.Satellite.SatTransOp{op: {:insert, %Electric.Satellite.SatOpInsert{relation_id: 1, row_data: %Electric.Satellite.SatOpRow{nulls_bitmask: <<5, 0>>, values: ["ffef2a0c-3e70-4ec9-8970-f4765aedaa7a", "17cf2b1a-a1c7-4776-8a74-ff56e663c3b7", "0.1", "column_ffef2a0c-3e70-4ec9-8970-f4765aedaa7a", "0", "", "0", "", "1696029114685.1", "43c4785e-51a8-4805-b17b-b483ab69d83b"]}, tags: []}}}, %Electric.Satellite.SatTransOp{op: {:update, %Electric.Satellite.SatOpUpdate{relation_id: 0, row_data: %Electric.Satellite.SatOpRow{nulls_bitmask: "p", values: ["17cf2b1a-a1c7-4776-8a74-ff56e663c3b7", "", "", ""]}, old_row_data: nil, tags: []}}}, %Electric.Satellite.SatTransOp{op: {:insert, %Electric.Satellite.SatOpInsert{relation_id: 1, row_data: %Electric.Satellite.SatOpRow{nulls_bitmask: <<5, 0>>, values: ["2615e55d-21bc-4f2c-8520-55258d123c85", "17cf2b1a-a1c7-4776-8a74-ff56e663c3b7", "1.1", "column_2615e55d-21bc-4f2c-8520-55258d123c85", "0", "", "0", "", "1696029114685.1", "43c4785e-51a8-4805-b17b-b483ab69d83b"]}, tags: []}}}, %Electric.Satellite.SatTransOp{op: {:update, %Electric.Satellite.SatOpUpdate{relation_id: 0, row_data: %Electric.Satellite.SatOpRow{nulls_bitmask: "p", values: ["17cf2b1a-a1c7-4776-8a74-ff56e663c3b7", "", "", ""]}, old_row_data: nil, tags: []}}}, 
%Electric.Satellite.SatTransOp{op: {:insert, %Electric.Satellite.SatOpInsert{relation_id: 1, row_data: %Electric.Satellite.SatOpRow{nulls_bitmask: <<5, 0>>, values: ["f3bd4ddb-86a7-4f0e-afb0-5a3b6de9e421", "17cf2b1a-a1c7-4776-8a74-ff56e663c3b7", "2.1", "column_f3bd4ddb-86a7-4f0e-afb0-5a3b6de9e421", "0", "", "0", "", "1696029114685.1", "43c4785e-51a8-4805-b17b-b483ab69d83b"]}, tags: []}}}, %Electric.Satellite.SatTransOp{op: {:update, %Electric.Satellite.SatOpUpdate{relation_id: 0, row_data: %Electric.Satellite.SatOpRow{nulls_bitmask: "p", values: ["17cf2b1a-a1c7-4776-8a74-ff56e663c3b7", "", "", ""]}, old_row_data: nil, tags: []}}}, %Electric.Satellite.SatTransOp{op: {:insert, %Electric.Satellite.SatOpInsert{relation_id: 3, row_data: %Electric.Satellite.SatOpRow{nulls_bitmask: <<0>>, values: ["59e4b3e6-f932-426e-8157-49607fef93d2", "17cf2b1a-a1c7-4776-8a74-ff56e663c3b7", "0.1", "43c4785e-51a8-4805-b17b-b483ab69d83b"]}, tags: []}}}, %Electric.Satellite.SatTransOp{op: {:update, %Electric.Satellite.SatOpUpdate{relation_id: 0, row_data: %Electric.Satellite.SatOpRow{nulls_bitmask: "p", values: ["17cf2b1a-a1c7-4776-8a74-ff56e663c3b7", "", "", ""]}, old_row_data: nil, tags: []}}}, %Electric.Satellite.SatTransOp{op: {:insert, %Electric.Satellite.SatOpInsert{relation_id: 3, row_data: %Electric.Satellite.SatOpRow{nulls_bitmask: <<0>>, values: ["958e2cce-adc0-43a6-8713-ed9f44f940db", "17cf2b1a-a1c7-4776-8a74-ff56e663c3b7", "1.1", "43c4785e-51a8-4805-b17b-b483ab69d83b"]}, tags: []}}}, %Electric.Satellite.SatTransOp{op: {:update, %Electric.Satellite.SatOpUpdate{relation_id: 0, row_data: %Electric.Satellite.SatOpRow{nulls_bitmask: "p", values: ["17cf2b1a-a1c7-4776-8a74-ff56e663c3b7", "", "", ""]}, old_row_data: nil, tags: []}}}, %Electric.Satellite.SatTransOp{op: {:insert, %Electric.Satellite.SatOpInsert{relation_id: 3, row_data: %Electric.Satellite.SatOpRow{nulls_bitmask: <<0>>, values: ["8d1d586a-c49e-4ffd-81c3-c50c48aa83f1", "17cf2b1a-a1c7-4776-8a74-ff56e663c3b7", "2.1", 
"43c4785e-51a8-4805-b17b-b483ab69d83b"]}, tags: []}}}, %Electric.Satellite.SatTransOp{op: {:update, %Electric.Satellite.SatOpUpdate{relation_id: 3, row_data: %Electric.Satellite.SatOpRow{nulls_bitmask: "p", values: ["59e4b3e6-f932-426e-8157-49607fef93d2", "", "", ""]}, old_row_data: nil, tags: []}}}, %Electric.Satellite.SatTransOp{op: {:update, %Electric.Satellite.SatOpUpdate{relation_id: 1, row_data: %Electric.Satellite.SatOpRow{nulls_bitmask: <<127, 192>>, values: ["ffef2a0c-3e70-4ec9-8970-f4765aedaa7a", "", "", "", "", "", "", "", "", ""]}, old_row_data: nil, tags: []}}}, %Electric.Satellite.SatTransOp{op: {:insert, %Electric.Satellite.SatOpInsert{relation_id: 4, row_data: %Electric.Satellite.SatOpRow{nulls_bitmask: <<16>>, values: ["dc848a30-e2ec-457a-8d8a-4e6ffb69143a", "ffef2a0c-3e70-4ec9-8970-f4765aedaa7a", "59e4b3e6-f932-426e-8157-49607fef93d2", "", "\"A\"", "0", "1696029114691.1", "43c4785e-51a8-4805-b17b-b483ab69d83b"]}, tags: []}}}, %Electric.Satellite.SatTransOp{op: {:update, %Electric.Satellite.SatOpUpdate{relation_id: 3, row_data: %Electric.Satellite.SatOpRow{nulls_bitmask: "p", values: ["958e2cce-adc0-43a6-8713-ed9f44f940db", "", "", ""]}, old_row_data: nil, tags: []}}}, %Electric.Satellite.SatTransOp{op: {:update, %Electric.Satellite.SatOpUpdate{relation_id: 1, row_data: %Electric.Satellite.SatOpRow{nulls_bitmask: <<127, 192>>, values: ["ffef2a0c-3e70-4ec9-8970-f4765aedaa7a", "", "", "", "", "", "", "", "", ""]}, old_row_data: nil, tags: []}}}, %Electric.Satellite.SatTransOp{op: {:insert, %Electric.Satellite.SatOpInsert{relation_id: 4, row_data: %Electric.Satellite.SatOpRow{nulls_bitmask: <<16>>, values: ["a9bdbee7-39fc-4f87-81fc-7cd179150c02", "ffef2a0c-3e70-4ec9-8970-f4765aedaa7a", "958e2cce-adc0-43a6-8713-ed9f44f940db", "", "\"A\"", "0", "1696029114691.1", "43c4785e-51a8-4805-b17b-b483ab69d83b"]}, tags: []}}}, %Electric.Satellite.SatTransOp{op: {:update, %Electric.Satellite.SatOpUpdate{relation_id: 3, row_data: 
%Electric.Satellite.SatOpRow{nulls_bitmask: "p", values: ["8d1d586a-c49e-4ffd-81c3-c50c48aa83f1", "", "", ""]}, old_row_data: nil, tags: []}}}, %Electric.Satellite.SatTransOp{op: {:update, %Electric.Satellite.SatOpUpdate{relation_id: 1, row_data: %Electric.Satellite.SatOpRow{nulls_bitmask: <<127, 192>>, values: ["ffef2a0c-3e70-4ec9-8970-f4765aedaa7a", "", "", "", "", "", "", "", "", ""]}, old_row_data: nil, tags: []}}}, %Electric.Satellite.SatTransOp{op: {:insert, %Electric.Satellite.SatOpInsert{relation_id: 4, row_data: %Electric.Satellite.SatOpRow{nulls_bitmask: <<16>>, values: ["cb04f540-c40a-4599-87dd-55740aca9cb1", "ffef2a0c-3e70-4ec9-8970-f4765aedaa7a", "8d1d586a-c49e-4ffd-81c3-c50c48aa83f1", "", "\"A\"", "0", "1696029114691.1", "43c4785e-51a8-4805-b17b-b483ab69d83b"]}, tags: []}}}, %Electric.Satellite.SatTransOp{op: {:update, %Electric.Satellite.SatOpUpdate{relation_id: 3, row_data: %Electric.Satellite.SatOpRow{nulls_bitmask: "p", values: ["59e4b3e6-f932-426e-8157-49607fef93d2", "", "", ""]}, old_row_data: nil, tags: []}}}, %Electric.Satellite.SatTransOp{op: {:update, %Electric.Satellite.SatOpUpdate{relation_id: 1, row_data: %Electric.Satellite.SatOpRow{nulls_bitmask: <<127, 192>>, values: ["2615e55d-21bc-4f2c-8520-55258d123c85", "", "", "", "", "", "", "", "", ""]}, old_row_data: nil, tags: []}}}, %Electric.Satellite.SatTransOp{op: {:insert, %Electric.Satellite.SatOpInsert{relation_id: 4, row_data: %Electric.Satellite.SatOpRow{nulls_bitmask: <<16>>, values: ["1e652130-012f-49bd-955e-984410644ddd", "2615e55d-21bc-4f2c-8520-55258d123c85", "59e4b3e6-f932-426e-8157-49607fef93d2", "", "\"A\"", "0", "1696029114691.1", "43c4785e-51a8-4805-b17b-b483ab69d83b"]}, tags: []}}}, %Electric.Satellite.SatTransOp{op: {:update, %Electric.Satellite.SatOpUpdate{relation_id: 3, row_data: %Electric.Satellite.SatOpRow{nulls_bitmask: "p", values: ["958e2cce-adc0-43a6-8713-ed9f44f940db", "", "", ""]}, old_row_data: nil, tags: []}}}, %Electric.Satellite.SatTransOp{op: {:update, 
%Electric.Satellite.SatOpUpdate{relation_id: 1, row_data: %Electric.Satellite.SatOpRow{nulls_bitmask: <<127, 192>>, values: ["2615e55d-21bc-4f2c-8520-55258d123c85", "", "", "", "", "", "", "", "", ""]}, old_row_data: nil, tags: []}}}, %Electric.Satellite.SatTransOp{op: {:insert, %Electric.Satellite.SatOpInsert{relation_id: 4, row_d (truncated)
23:11:55.195 pid=<0.2829.0> client_id=32d8b3f7-8d6f-4d0d-a4e6-baa3b7520447 instance_id=0cac0837-170a-4e79-9ca4-be2c47ca9670 user_id=43c4785e-51a8-4805-b17b-b483ab69d83b [error] ** (MatchError) no match of right hand side value: false
    (electric 0.6.3) lib/electric/satellite/serialization.ex:468: Electric.Satellite.Serialization.decode_column_value!/2
    (electric 0.6.3) lib/electric/satellite/serialization.ex:447: Electric.Satellite.Serialization.decode_values/4
    (electric 0.6.3) lib/electric/satellite/serialization.ex:458: Electric.Satellite.Serialization.decode_values/4
    (electric 0.6.3) lib/electric/satellite/serialization.ex:448: Electric.Satellite.Serialization.decode_values/4
    (electric 0.6.3) lib/electric/satellite/serialization.ex:438: Electric.Satellite.Serialization.decode_record!/3
    (electric 0.6.3) lib/electric/satellite/serialization.ex:391: Electric.Satellite.Serialization.op_to_change/2
    (electric 0.6.3) lib/electric/satellite/serialization.ex:383: anonymous fn/5 in Electric.Satellite.Serialization.deserialize_op_log/5
    (elixir 1.15.4) lib/enum.ex:2510: Enum."-reduce/3-lists^foldl/2-0-"/3
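For context, a sketch (our illustration, not the actual client code) of the "t"/"f" string encoding the server expects for boolean columns. The bug is that the generated z.number().int() schema lets a raw 0/1 through, which serializes as "0"/"1" and trips the server-side decoder.

```typescript
// Encode a boolean column value for the Satellite wire format, which per the
// report above expects "t" or "f". Accepting 0/1 defensively here illustrates
// how a numeric value could be normalised instead of crashing the decoder.
function encodeBooleanColumn(value: boolean | number): string {
  const truthy = typeof value === 'number' ? value !== 0 : value
  return truthy ? 't' : 'f'
}
```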

Client unable to sync up local writes to boolean or float columns

This is a known issue that will be addressed in one of the upcoming patch releases.

The problem happens for tables that are electrified and then included in the client as a bundled migration by running yarn db:migrate or npx electric-sql generate, depending on your setup.

When a new record with a boolean column is then created on the client, it will break the replication stream such that no further local writes will be able to sync up. A workaround for testing sync with boolean columns is to electrify a table in Postgres and then use electric.db.sql.raw({sql: "INSERT ... INTO ...", args: [...]}).

For float columns, the same error state is reached when trying to sync up a whole number, i.e. one with no digits after the decimal point. There's no workaround for this issue, but we're working on releasing a fix soon.

Related to VAX-842 and VAX-1078.
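The float case can be reproduced with plain JavaScript number formatting: whole numbers stringify without a decimal point. A sketch of a workaround serializer (a hypothetical helper, not the actual fix) would be:

```typescript
// JS drops the decimal point when stringifying whole floats, which is what
// triggers the validation error described above. Appending ".0" keeps the
// wire value recognisable as a float (skip exponent-notation values).
function encodeFloatColumn(value: number): string {
  const s = String(value)
  return Number.isInteger(value) && !s.includes('e') ? s + '.0' : s
}
```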

Can't delete two entries in a row

Hello,
We are experiencing an issue in the todos app.

Steps to reproduce:

  • Create 2 todos
  • Delete 1
  • Delete the remaining one

The first delete works correctly, but the second makes the item reappear instantly.
We cannot reproduce it in the wa-sqlite example. Might have to do with the example having only one column.

Maybe this is a regression from 232f7a5

It's a similar issue to #258. The delete operation is being sent without an empty tags value [].

As a side note, it might be worth including an up-to-date version of the todos example in the monorepo, or making it the default instead of the simplified items database from the wa-sqlite example. There were other instances where we encountered issues with the todos app but not with the wa-sqlite example.

Feature: Support encryption

As the title says, could you add a feature that encrypts data before syncing it to the remote server?

I believe this is a natural fit for a local-first application.

electric-sql/* Typescript Path Alias

Instead of resolving the modules by hardcoding paths in metro.config.js:

config.resolver.resolveRequest = (context, moduleName, platform) => {
  if (moduleName.startsWith("electric-sql/expo")) {
    return {
      filePath: `${__dirname}/node_modules/electric-sql/dist/drivers/expo-sqlite/index.js`,
      type: "sourceFile",
    }
  }
  if (moduleName.startsWith("electric-sql/react")) {
    return {
      filePath: `${__dirname}/node_modules/electric-sql/dist/frameworks/react/index.js`,
      type: "sourceFile",
    }
  }
  const pattern1 = /^electric-sql\/(?<package1>[-a-zA-Z0-9]+)\/(?<package2>[-a-zA-Z0-9]+)$/
  if (moduleName.match(pattern1)) {
    const { package1, package2 } = pattern1.exec(moduleName).groups
    return {
      filePath: `${__dirname}/node_modules/electric-sql/dist/${package1}/${package2}/index.js`,
      type: "sourceFile",
    }
  }
  const pattern2 = /^electric-sql\/(?<package>[-a-zA-Z0-9]+)$/
  if (moduleName.match(pattern2)) {
    const { package } = pattern2.exec(moduleName).groups
    return {
      filePath: `${__dirname}/node_modules/electric-sql/dist/${package}/index.js`,
      type: "sourceFile",
    }
  }
  // Fall back to the default resolution for everything else
  return context.resolveRequest(context, moduleName, platform)
}

Is there any reason we don't just use a typescript path alias?

{
  "extends": "expo/tsconfig.base",
  "compilerOptions": {
    "jsx": "react",
    "strict": true,
    "paths": {
      "electric-sql/*": ["../../node_modules/electric-sql/dist/*"],
    }
  },
}

And then in the client:

import { electrify } from 'electric-sql/drivers/expo-sqlite'
import { makeElectricContext, useLiveQuery } from 'electric-sql/frameworks/react/'
import { genUUID } from 'electric-sql/util'

Seems like a cleaner approach?

Preserving data integrity demo

I was able to create an inconsistent state in the "Preserving data integrity" demo.


Not sure whether it's a bug or not, but "Try it out for yourself. Can you cause an integrity violation?" seems to indicate that the demo should be resilient to these sorts of issues.

Steps to reproduce:

  1. Turn off connectivity on the left panel.
  2. On the right panel, add 1 color to tournament 1
  3. On the left panel, add 1 color to tournament 2
  4. Turn on connectivity on the left panel.
  5. Note that they don't sync to the same state.

Electric pg connection closing on large syncs

We are doing some tests with large quantities of data (10,000 to 15,000 new rows) on a table with a foreign key relation (so compensation messages are sent). What we've encountered is that sometimes the Electric service complains about the Postgres connection being closed. We've been progressively increasing the number of rows to test, and when reaching 10K it may or may not fail. When it fails, it's possible that it retries and the data then gets synced correctly. But if we keep increasing the number of rows it starts failing consistently.

The tests are on Electric server and client 0.6.4 and we ran them on a macOS machine (Docker) and on a Linux server. It happens on both of them.

Electric logs

manabox_sync-electric-1  | 08:35:29.239 pid=<0.2824.0> origin=postgres_1 pg_slot=postgres_1 [debug] Sending 60010 messages to the subscriber: from #Lsn<0/75AA9> to #Lsn<0/108291>
manabox_sync-electric-1  | 08:36:30.160 pid=<0.2824.0> origin=postgres_1 pg_slot=postgres_1 [error] GenServer #PID<0.2824.0> terminating
manabox_sync-electric-1  | ** (MatchError) no match of right hand side value: {:error, :closed}
manabox_sync-electric-1  |     (electric 0.6.4) lib/electric/replication/postgres/tcp_server.ex:620: Electric.Replication.Postgres.TcpServer.tcp_send/2
manabox_sync-electric-1  |     (elixir 1.15.4) lib/enum.ex:984: Enum."-each/2-lists^foreach/1-0-"/2
manabox_sync-electric-1  |     (electric 0.6.4) lib/electric/replication/postgres/slot_server.ex:321: Electric.Replication.Postgres.SlotServer.send_transaction/3
manabox_sync-electric-1  |     (elixir 1.15.4) lib/enum.ex:2510: Enum."-reduce/3-lists^foldl/2-0-"/3
manabox_sync-electric-1  |     (electric 0.6.4) lib/electric/replication/postgres/slot_server.ex:275: Electric.Replication.Postgres.SlotServer.handle_events/3
manabox_sync-electric-1  |     (gen_stage 1.2.1) lib/gen_stage.ex:2578: GenStage.consumer_dispatch/6
manabox_sync-electric-1  |     (stdlib 4.3.1.2) gen_server.erl:1123: :gen_server.try_dispatch/4
manabox_sync-electric-1  |     (stdlib 4.3.1.2) gen_server.erl:1200: :gen_server.handle_msg/6
... // Last Message
manabox_sync-electric-1  | 08:36:30.176 pid=<0.2905.0> origin=postgres_1 pg_slot=postgres_1 [debug] slot server started, registered as {:n, :l, {Electric.Replication.Postgres.SlotServer, "postgres_1"}} and {:n, :l, {Electric.Replication.Postgres.SlotServer, {:slot_name, "postgres_1"}}}

Postgres logs

manabox_sync-postgres-1  | 2023-10-03 08:37:04.899 GMT [184] ERROR:  could not receive data from WAL stream: server closed the connection unexpectedly
manabox_sync-postgres-1  |              This probably means the server terminated abnormally
manabox_sync-postgres-1  |              before or while processing the request.
manabox_sync-postgres-1  | 2023-10-03 08:37:04.901 GMT [1] LOG:  background worker "logical replication worker" (PID 184) exited with exit code 1
manabox_sync-postgres-1  | 2023-10-03 08:37:04.902 GMT [298] LOG:  logical replication apply worker for subscription "postgres_1" has started

Extra

Somewhat on topic: in terms of server performance, would there be any difference between one user syncing 10K oplogs and 1K users syncing 10 oplogs each?
If you know of any tool we could use to test a higher number of users, it would be great to hear about it.

Expo Web Support

In order to build the example for expo web, a couple of changes might be required:

a) should use metro for the web bundler:

    "web": {
      "favicon": "./assets/favicon.png",
      "bundler": "metro",
      "output": "static",
    },

b) resolve .mjs files in metro.config.js (Zod requires it)

config.resolver.sourceExts.push('mjs');

c) NativeModules.SourceCode.scriptURL isn't cross-platform, therefore you might need to explicitly define the hostname / url through an env variable or find a cross platform method, perhaps in expo constants.

const { hostname } = new URL(NativeModules.SourceCode.scriptURL)
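One cross-platform shape for this (names and the env-var fallback are assumptions, not an existing API): treat scriptURL as optional and fall back to a configured host on web.

```typescript
// scriptURL is only populated on native; on web it is undefined, so fall
// back to an explicitly configured host (hypothetical) or localhost.
function resolveSyncHost(scriptURL: string | undefined, envHost?: string): string {
  if (scriptURL) return new URL(scriptURL).hostname
  return envHost ?? 'localhost'
}
```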


Unfortunately, web still errors with require cycles:

Require cycle: ../../node_modules/electric-sql/dist/migrators/index.js -> ../../node_modules/electric-sql/dist/migrators/bundle.js -> ../../node_modules/electric-sql/dist/migrators/index.js

Require cycles are allowed, but can result in uninitialized values. Consider refactoring to remove the need for a cycle.
Require cycle: ../../node_modules/sqlite-parser/lib/index.js -> ../../node_modules/sqlite-parser/lib/streaming.js -> ../../node_modules/sqlite-parser/lib/index.js

Require cycles are allowed, but can result in uninitialized values. Consider refactoring to remove the need for a cycle.
Require cycle: ../../node_modules/protobufjs/src/util/minimal.js -> ../../node_modules/protobufjs/src/util/longbits.js -> ../../node_modules/protobufjs/src/util/minimal.js

Require cycles are allowed, but can result in uninitialized values. Consider refactoring to remove the need for a cycle.
Require cycle: ../../node_modules/electric-sql/dist/satellite/client.js -> ../../node_modules/electric-sql/dist/satellite/shapes/cache.js -> ../../node_modules/electric-sql/dist/satellite/client.js

Possible issues on the Typescript client

We've been developing (with @davidmartos96) a Dart implementation of the Electric client (available here: https://github.com/SkillDevs/electric_dart). We found some issues, had some concerns, and have some questions about the client implementation. We will post them all here in different sections; you can later organize them as you wish, or discard them.

Issues

Electric tables sqlite types

Some metadata tables that Electric creates (oplogTable, metaTable, etc.) use the column type STRING, which does not exist in SQLite.
The column's affinity therefore defaults to NUMERIC. We ran into issues here where we got back a number instead of a string.
This happens to work in TypeScript but might be a source of problems in the future.

Here is an example:

`CREATE TABLE ${triggersTable} (tablename STRING PRIMARY KEY, flag INTEGER);`,
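Why STRING ends up numeric follows from SQLite's type-affinity rules, sketched here (a direct transcription of the rules from the SQLite documentation, not Electric code):

```typescript
// SQLite column affinity: "INT" anywhere => INTEGER; "CHAR"/"CLOB"/"TEXT"
// => TEXT; empty or "BLOB" => BLOB; "REAL"/"FLOA"/"DOUB" => REAL; anything
// else, including "STRING", falls through to NUMERIC.
function columnAffinity(declaredType: string): string {
  const t = declaredType.toUpperCase()
  if (t.includes('INT')) return 'INTEGER'
  if (t.includes('CHAR') || t.includes('CLOB') || t.includes('TEXT')) return 'TEXT'
  if (t === '' || t.includes('BLOB')) return 'BLOB'
  if (t.includes('REAL') || t.includes('FLOA') || t.includes('DOUB')) return 'REAL'
  return 'NUMERIC'
}
```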

Union return types

Some functions in the satellite client and process have a return type of Promise<void | Error> or Promise<void | SatelliteError>. This is fine, but we found some places where the promise is rejected instead of returning the Error. It may be confusing to have a signature that returns a union type when the promise actually rejects.

The implementation method.

startReplication(lsn?: LSN): Promise<void> {

The interface method.

startReplication(lsn?: LSN): Promise<void | SatelliteError>

Unused property

_lastSnapshotTimestamp?: Date

Throttle cleanup

The following lodash throttle call never gets cleaned up; the value returned by throttle has a cancel method.

this._throttledSnapshot = throttle(

Maybe it could be instantiated in start and cleaned up in stop?
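A sketch of that lifecycle (the class and method names are illustrative, not the real implementation; the throttle here is a minimal trailing-only stand-in for lodash's):

```typescript
// Minimal throttle with a cancel method, standing in for lodash throttle.
type Throttled = { (): void; cancel(): void }

function makeThrottle(fn: () => void, ms: number): Throttled {
  let timer: ReturnType<typeof setTimeout> | null = null
  const wrapped = (() => {
    if (timer === null) {
      timer = setTimeout(() => { timer = null; fn() }, ms)
    }
  }) as Throttled
  wrapped.cancel = () => {
    if (timer !== null) { clearTimeout(timer); timer = null }
  }
  return wrapped
}

// Illustrative lifecycle: create the throttled function on start, cancel any
// pending invocation on stop so no timer leaks.
class SatelliteSketch {
  snapshots = 0
  private throttledSnapshot: Throttled | null = null

  start() {
    this.throttledSnapshot = makeThrottle(() => { this.snapshots++ }, 100)
  }

  notifyChange() { this.throttledSnapshot?.() }

  stop() {
    this.throttledSnapshot?.cancel()
    this.throttledSnapshot = null
  }
}
```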

Connectivity change subscription cleanup

This subscription never gets unsubscribed; maybe it could be unsubscribed right before subscribing again?

this._connectivityChangeSubscription =

Typo?

This probably should be refreshToken.

await this._setMeta('refreshToken', token)

Untyped tests

There are some tests we had questions about while porting them, mainly because they were not typed and mostly work with any. One example is the test 'apply empty incoming'.


Here, this function has 3 parameters, but the second and third don't allow undefined. Are undefined values for this function expected in real usage?

Also, in 'compensations: using triggers with flag ' There are no expectations, just a single t.true(true).

In:

This test is actually failing silently: when the setTimeout resolves, the test has already finished, so even though the expectation fails it doesn't throw. If we update it:

 // OLD
setTimeout(async () => {
  const lsn2 = await satellite._getMeta('lastSentRowId')
  t.is(lsn2, '2')
}, 200)

// NEW
await sleepAsync(200)
const lsn2 = await satellite._getMeta('lastSentRowId')
t.is(lsn2, '2') // This now fails

It actually fails.

Questions

  1. Here it is assumed that secondValue is not undefined, but that is never checked. Is that correct?

  2. Would it be possible to have the Docker images and the Electric web client updated to the protocol 1.x? Currently the version in npm is 0.4.3, which uses the electric protocol 0.2. It would be nice to have a version upgrade so that the Flutter Dart client can easily be tested without manually building the web client.

  3. There seems to be an issue when deleting an updated entry. We found this using the todoMVC example. It seems to be replicable on the web client. Steps to reproduce:

  • Create a TODO.
  • Mark it as done.
  • Delete this todo. (It will reappear instantly)
  • Delete this todo. (Now it works)

What sometimes happens is that right after deleting the TODO, it reappears marked as done.
Maybe related: when a new client appears and replicates from the empty state, some todos appear even though they were deleted. We think these could be the same TODOs that exhibited the previous bug.

[VAX-1194] INSERT or REPLACE throwing an unexpected Foreign Key constraint error

Here is how I can reproduce the issue:

-- CreateTable
CREATE TABLE "Vertex" (
    "id" TEXT NOT NULL,
    "noteHtml" TEXT,

    CONSTRAINT "Vertex_pkey" PRIMARY KEY ("id")
);

-- CreateTable
CREATE TABLE "Edge" (
    "id" TEXT NOT NULL,
    "fromVertexId" TEXT NOT NULL,
    
    CONSTRAINT "Edge_pkey" PRIMARY KEY ("id")
);

-- AddForeignKey
ALTER TABLE "Edge" ADD CONSTRAINT "Edge_fromVertexId_fkey" FOREIGN KEY ("fromVertexId") REFERENCES "Vertex"("id") ON DELETE RESTRICT ON UPDATE CASCADE;

/* ⚡ Electrify ⚡ */
CALL electric.electrify('Vertex');
CALL electric.electrify('Edge');

Then in the client:

await electric.adapter.runInTransaction({ sql: "INSERT OR REPLACE INTO Vertex (id, noteHtml) VALUES (?, ?)", args: [ "vertex-1", "Vertex 1 text" ]});
await electric.adapter.runInTransaction({ sql: "INSERT OR REPLACE INTO Edge (id, fromVertexId) VALUES (?, ?)", args: [ "edge-1", "vertex-1" ]});
await electric.adapter.runInTransaction({ sql: "INSERT OR REPLACE INTO Vertex (id, noteHtml) VALUES (?, ?)", args: [ "vertex-1", "Vertex 1 text changed" ]});

> sqlite-api.js:824 Uncaught Error: FOREIGN KEY constraint failed
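A likely explanation (my reading, not confirmed in the report): SQLite implements INSERT OR REPLACE as a DELETE of the conflicting row followed by a fresh INSERT, so the third statement tries to delete vertex-1 while edge-1 still references it, firing the ON DELETE RESTRICT constraint. A hedged workaround sketch, using the same adapter API as above but with an ON CONFLICT upsert that updates the row in place:

```typescript
// Workaround sketch: an upsert keeps the existing Vertex row in place
// instead of delete-and-reinsert, so the Edge foreign key is never violated.
const upsertVertex = {
  sql:
    "INSERT INTO Vertex (id, noteHtml) VALUES (?, ?) " +
    "ON CONFLICT (id) DO UPDATE SET noteHtml = excluded.noteHtml",
  args: ["vertex-1", "Vertex 1 text changed"],
};
// await electric.adapter.runInTransaction(upsertVertex);
```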

Reported by @ccapndave

From SyncLinear.com | VAX-1194

db error: ERROR: tuple concurrently updated

This is an intermittent failure that happens when Postgres and SQLite are performing disjoint sets of operations at the same time. It looks like a no-op compensation from SQLite got applied as an UPDATE while Postgres was operating on that row. The error is very rare - roughly 1 in 20 test runs.

[VAX-1006] Possible race condition in `process.subscribe()`

We are sometimes encountering an issue in the subscribe call: the subscription notifier no longer has the subscription id after client.subscribe() finishes. While debugging we noticed that when this happens, the subscription data message was received before client.subscribe() finished, so reading the promise from the subscription notifiers returns undefined.

The subscribe function

This is what triggers the websocket communication that can end up deleting the subscriptionId before finishing.

await this.client.subscribe(subId, shapeReqs)

Here is where we get the undefined exception.

synced: this.subscriptionNotifiers[subId].promise,

Here is where the subscription is being removed.

delete this.subscriptionNotifiers[subsData.subscriptionId] // GC the notifiers for this subscription ID

Would it make sense to check, after client.subscribe(), whether the subscription still exists? Or is that something that shouldn't happen?

When everything works fine, the WebSocket only receives SatSubsResp during client.subscribe(), and the rest of the subscription messages (SatSubsDataBegin, ..., SatSubsDataEnd) arrive afterwards. When it fails, these messages are received while client.subscribe() is still running.
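One way to close the window would be to create and store the notifier before awaiting client.subscribe(), so a data message that arrives mid-flight finds the promise already registered. A minimal sketch (names are illustrative, not the actual client internals):

```typescript
// Minimal model of the notifier registry; the real client's types differ.
type Notifier = { promise: Promise<void>; resolve: () => void };

const subscriptionNotifiers: Record<string, Notifier> = {};

// Register the notifier *before* the websocket round trip starts, so an
// early SatSubsDataEnd can still resolve it.
function registerNotifier(subId: string): Notifier {
  let resolve!: () => void;
  const promise = new Promise<void>((r) => (resolve = r));
  return (subscriptionNotifiers[subId] = { promise, resolve });
}

// Handler for the end of the subscription data: resolves the promise and
// GCs the notifier, mirroring the delete in the snippet above.
function onSubsDataEnd(subId: string): void {
  subscriptionNotifiers[subId]?.resolve();
  delete subscriptionNotifiers[subId];
}
```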

VAX-1006

[VAX-1078] Client write to a float column fails to validate on the server

Reported in Discord — https://discord.com/channels/933657521581858818/1079688869852753981/1154950112414535753.

The problem happens when a whole number is written into a float column, e.g. 1695431807821. It is sent to the server without the decimal point and that results in a validation failure.

The server explicitly checks that a float number has a decimal point and at least one place following it, e.g. 1695431807821.0. This was a deliberate choice to avoid conflating integers and floats and instead make them have completely disjoint sets of possible values.

The client needs to make sure that a floating point number always has a decimal point when encoded for the Satellite protocol.
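A sketch of the client-side fix (the function name is illustrative): append `.0` whenever a float value stringifies without a decimal point:

```typescript
// Hypothetical encoder sketch: ensure floats always carry a decimal point
// when serialised for the Satellite protocol.
function encodeFloat(value: number): string {
  const s = value.toString();
  // Whole numbers like 1695431807821 stringify without a decimal point,
  // which the server's float validation rejects.
  return Number.isInteger(value) && !s.includes("e") ? `${s}.0` : s;
}
```

For example, encodeFloat(1695431807821) yields "1695431807821.0", while fractional values like 1.5 pass through unchanged.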

From SyncLinear.com | VAX-1078

Electric crashes on too big migrations

Applying a chonky migration fails. Here's the postgres log:

2023-09-29 21:30:13.506 UTC [251] CONTEXT:  PL/pgSQL function electric.current_migration_version() line 15 at RAISE
	SQL statement "SELECT v.txid, v.txts, v.version
	                                            FROM electric.current_migration_version() v"
	PL/pgSQL function electric.capture_ddl(text) line 8 at SQL statement
	SQL statement "SELECT electric.capture_ddl(_create_sql)"
	PL/pgSQL function electric.electrify(text,text) line 38 at PERFORM
	SQL statement "CALL electric.electrify('public."DataTable"')"
	PL/pgSQL function inline_code_block line 12 at CALL
2023-09-29 21:30:14.283 UTC [251] WARNING:  assigning automatic migration version id: 20230929213014_278
2023-09-29 21:30:14.283 UTC [251] CONTEXT:  PL/pgSQL function electric.current_migration_version() line 15 at RAISE
	SQL statement "SELECT v.txid, v.txts, v.version
	                                            FROM electric.current_migration_version() v"
	PL/pgSQL function electric.capture_ddl(text) line 8 at SQL statement
	PL/pgSQL assignment "_trid := (SELECT electric.capture_ddl())"
	PL/pgSQL function electric.ddlx_command_end_handler() line 38 at assignment
2023-09-29 21:30:14.350 UTC [251] WARNING:  assigning automatic migration version id: 20230929213014_337
2023-09-29 21:30:14.350 UTC [251] CONTEXT:  PL/pgSQL function electric.current_migration_version() line 15 at RAISE
	SQL statement "SELECT v.txid, v.txts, v.version
	                                            FROM electric.current_migration_version() v"
	PL/pgSQL function electric.capture_ddl(text) line 8 at SQL statement
	PL/pgSQL assignment "_trid := (SELECT electric.capture_ddl())"
	PL/pgSQL function electric.ddlx_command_end_handler() line 38 at assignment
2023-09-29 21:30:14.351 UTC [251] ERROR:  index row size 2720 exceeds btree version 4 maximum 2704 for index "ddl_table_unique_migrations"
2023-09-29 21:30:14.351 UTC [251] DETAIL:  Index row references tuple (0,8) in relation "ddl_commands".
2023-09-29 21:30:14.351 UTC [251] HINT:  Values larger than 1/3 of a buffer page cannot be indexed.
	Consider a function index of an MD5 hash of the value, or use full text indexing.
2023-09-29 21:30:14.351 UTC [251] CONTEXT:  SQL statement "INSERT INTO "electric"."ddl_commands" (txid, txts, version, query) VALUES
	          (_txid, _txts, _version, _query)
	        ON CONFLICT ON CONSTRAINT ddl_table_unique_migrations DO NOTHING
	        RETURNING id"
	PL/pgSQL function electric.create_active_migration(xid8,timestamp with time zone,text,text) line 9 at SQL statement
	PL/pgSQL assignment "_trid := (SELECT electric.create_active_migration(_txid, _txts, _version, query))"
	PL/pgSQL function electric.capture_ddl(text) line 12 at assignment
	PL/pgSQL assignment "_trid := (SELECT electric.capture_ddl())"
	PL/pgSQL function electric.ddlx_command_end_handler() line 38 at assignment

The relevant line is

2023-09-29 21:30:14.351 UTC [251] ERROR:  index row size 2720 exceeds btree version 4 maximum 2704 for index "ddl_table_unique_migrations"

expo-sqlite type issue

When running the typecheck node task, I get the following error:

test/example-typing-usage/expo-sqlite.ts:14:32 - error TS2345: Argument of type 'SQLiteDatabase' is not assignable to parameter of type 'Database'.
  Type 'SQLiteDatabase' is not assignable to type 'WebSQLDatabase & { _name?: string | undefined; }'.
    Type 'SQLiteDatabase' is missing the following properties from type 'WebSQLDatabase': version, transaction, readTransaction

This happens in the expo-sqlite example because electrify expects an implementation of either Database or WebSQLDatabase interfaces from expo-sqlite.

Even if expo-sqlite does declare and export a Database interface, their openDatabase function returns an instance of SQLiteDatabase, which doesn't implement it.

I see that electrify is tested with a mock implementation of Database. Was this type mismatch known when testing this specific case?

Local-stack schema registry fetch table error.

Running the local-stack and Typescript client at protocol version Electric.Satellite.v1_0. Replication is established. When the first transaction is replicated to Electric I get no match of right hand side value: :error from Electric.Postgres.SchemaRegistry.fetch_table_info!/2, and the client then disconnects replication.

local-stack-electric_1-1  | 08:34:57.698 origin=8408a8cf-3b34-46b3-9459-dd9c874a56a9 pid=<0.2379.0> vx_producer=vaxine_1 [info] VaxineLogProducer #PID<0.2379.0> connected to Vaxine and started replication from offset 0, vx_client pid: #PID<0.2380.0>
local-stack-electric_1-1  | 08:35:04.480 pid=<0.2378.0> vx_consumer=8408a8cf-3b34-46b3-9459-dd9c874a56a9 [error] ** (MatchError) no match of right hand side value: :error
local-stack-electric_1-1  |     (electric 0.2.0) lib/electric/postgres/schema_registry.ex:116: Electric.Postgres.SchemaRegistry.fetch_table_info!/2
local-stack-electric_1-1  |     (electric 0.2.0) lib/electric/replication/changes.ex:64: Electric.Replication.Vaxine.ToVaxine.Electric.Replication.Changes.NewRecord.handle_change/2
local-stack-electric_1-1  |     (electric 0.2.0) lib/electric/replication/vaxine.ex:22: anonymous fn/3 in Electric.Replication.Vaxine.transaction_to_vaxine/2
local-stack-electric_1-1  |     (elixir 1.13.4) lib/enum.ex:4475: Enumerable.List.reduce/3
local-stack-electric_1-1  |     (elixir 1.13.4) lib/enum.ex:2442: Enum.reduce_while/3
local-stack-electric_1-1  |     (vax 0.1.0) lib/vax/adapter.ex:242: anonymous fn/3 in Vax.Adapter.transaction/3
local-stack-electric_1-1  |     (vax 0.1.0) lib/vax/adapter.ex:131: anonymous fn/3 in Vax.Adapter.checkout/3
local-stack-electric_1-1  |     (nimble_pool 0.2.6) lib/nimble_pool.ex:349: NimblePool.checkout!/4
local-stack-electric_1-1  |
local-stack-electric_1-1  | 08:35:04.481 pid=<0.2378.0> vx_consumer=8408a8cf-3b34-46b3-9459-dd9c874a56a9 [error] GenServer #PID<0.2378.0> terminating
local-stack-electric_1-1  | ** (stop) %MatchError{term: :error}
local-stack-electric_1-1  | Last message: {:"$gen_consumer", {#PID<0.2378.0>, #Reference<0.4017595076.3198943234.187219>}, [%Electric.Replication.Changes.Transaction{ack_fn: #Function<6.133325283/0 in Electric.Satellite.Serialization.deserialize_op_log/5>, changes: [%Electric.Replication.Changes.NewRecord{record: %{"value" => "453417c2-5d14-4e62-8293-e1e42ad91522"}, relation: {"public", "Item"}, tags: []}], commit_timestamp: ~U[2023-04-07 08:35:01.448Z], lsn: <<0, 0, 0, 1>>, origin: "8408a8cf-3b34-46b3-9459-dd9c874a56a9", origin_type: :satellite, publication: ""}]}
local-stack-electric_1-1  | 08:35:04.482 pid=<0.2378.0> vx_consumer=8408a8cf-3b34-46b3-9459-dd9c874a56a9 [error] Process #PID<0.2378.0> terminating
local-stack-electric_1-1  | ** (exit) %MatchError{term: :error}
local-stack-electric_1-1  |     (stdlib 3.17.1) gen_server.erl:811: :gen_server.handle_common_reply/8
local-stack-electric_1-1  |     (stdlib 3.17.1) proc_lib.erl:226: :proc_lib.init_p_do_apply/3
local-stack-electric_1-1  | Initial Call: GenStage.init/1
local-stack-electric_1-1  | Ancestors: [#PID<0.2377.0>, Electric.Replication.Connectors, Electric.Supervisor, #PID<0.1846.0>]
local-stack-electric_1-1  | 08:35:04.482 pid=<0.2377.0> [error] Child :vx_consumer of Supervisor #PID<0.2377.0> (Electric.Replication.SatelliteConnector) terminated
local-stack-electric_1-1  | ** (exit) %MatchError{term: :error}
local-stack-electric_1-1  | Pid: #PID<0.2378.0>
local-stack-electric_1-1  | Start Call: Electric.Replication.Vaxine.LogConsumer.start_link("8408a8cf-3b34-46b3-9459-dd9c874a56a9", {:via, :gproc, {:n, :l, {Electric.Satellite.WsServer, "8408a8cf-3b34-46b3-9459-dd9c874a56a9"}}})
local-stack-electric_1-1  | 08:35:04.489 pid=<0.2375.0> sq_client=172.19.0.1:64122 client_id=8408a8cf-3b34-46b3-9459-dd9c874a56a9 user_id=8408a8cf-3b34-46b3-9459-dd9c874a56a9 [info] incoming replication is not active: :paused ignore transaction

I've followed the local stack instructions and successfully applied electric sync --local. Is there some additional Postgres setup that I need to run?

Windows Compatibility Issue with 'npx electric-sql generate' script

Hello! I've run into a little snag while trying the quick start with the npx electric-sql generate script and it seems like it might not be entirely Windows-friendly.

12 |   provider = "sqlite"
13 |   url = "file:C:\civalgo\electric-test\.electric_migrations_tmp_KUMjZt\electric.db"
   |
error: Unknown escape sequence. If the value is a windows-style path, `\` must be escaped as `\\`.
  -->  schema.prisma:13

The generator emits single backslashes that Prisma rejects on my Windows machine. To fix this, I think it could generate double backslashes or forward slashes instead.
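A sketch of the suggested fix (the helper name is hypothetical): normalise the path before writing it into schema.prisma, either by doubling backslashes or by switching to forward slashes, which Prisma accepts on Windows:

```typescript
// Hypothetical helper: build a Prisma-safe file URL from a native path.
// Forward slashes work on Windows and avoid the escape-sequence error.
function toPrismaFileUrl(dbPath: string): string {
  return `file:${dbPath.replace(/\\/g, "/")}`;
}
```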


Local Only Mode

First of all: very impressive work, and we're looking forward to adding this to our tech stack. Thank you!

I'm currently building a small browser extension and, before I add any remote server sync/storage, I'd like to start small and only save data locally. Is it possible to use Electric without a server-side Postgres backend, so that it runs completely locally?

PS: would it also be possible to use pgvector with electric?

Domain type support

Hi!
In my app the Postgres database is append-only for everything, so we want to use UUIDv7 rather than UUIDv4.
To enforce in the data model that everybody uses UUIDv7, I've created a domain type based on uuid:

-- <GORBAK_CUSTOM>
CREATE OR REPLACE FUNCTION get_uuid_version(_uuid uuid)
RETURNS INTEGER
LANGUAGE SQL
IMMUTABLE PARALLEL SAFE STRICT LEAKPROOF
AS $$
  SELECT get_byte(uuid_send(_uuid), 6) & 240 >> 4;
$$;
CREATE DOMAIN uuidv7 AS uuid CHECK (get_uuid_version(VALUE) = 7);
-- </GORBAK_CUSTOM>

Unfortunately Electric doesn't account for the possibility that a column's type is a domain type with an underlying base type:

    Database error:
    ERROR: Cannot electrify \"public.\"DataTable\"\" because some of its columns have types not supported by Electric:
      \"id\" uuidv7

https://github.com/electric-sql/electric/blob/2b24fa273d9597a3e143f278d756e67221c1d848/components/electric/lib/electric/postgres/extension/functions/validate_table_column_types.sql.eex#L27C20-L27C29
I suggest replacing

    FOR _col_name, _col_type, _col_typmod, _col_type_pretty IN
        SELECT attname, typname, atttypmod, format_type(atttypid, atttypmod)
            FROM pg_attribute
            JOIN pg_type on atttypid = pg_type.oid
            WHERE attrelid = table_name::regclass AND attnum > 0 AND NOT attisdropped
            ORDER BY attnum
    LOOP

with something along the lines of

    FOR _col_name, _col_type, _col_typmod, _col_type_pretty IN
        SELECT attname, (CASE WHEN typtype = 'd' THEN (SELECT t1.typname FROM pg_type AS t1 WHERE t1.oid = pg_type.typbasetype) ELSE typname END), atttypmod, format_type(atttypid, atttypmod)
            FROM pg_attribute
            JOIN pg_type on atttypid = pg_type.oid
            WHERE attrelid = table_name::regclass AND attnum > 0 AND NOT attisdropped
            ORDER BY attnum
    LOOP

Failed migration stuck in Electric (hosted)

Exact details in: Electrifying SQL
https://youtube.com/live/-mCYURMoHro?feature=share

I had a hosted Electric service with the todo MVC example already applied.

I wrote up a new migration with some new tables using foreign keys. I probably put one of them in the wrong order, and didn't know to add the foreign key pragma.

Built the migration: no problem.
Synced the migration: it went up fine.
Checked the status of the migration: no bueno. It said Failed, with no further error information. I could hit Apply, which did not help.

Trying the migration on a local SQLite db would probably have caught this.

Crash when writing to a table

I create my schema, then write a row using:

const note = await db.note.create({
  data: {
    id: noteToWrite.id,
    html: noteToWrite.html,
    inserted_at: noteToWrite.insertedAt.toISOString(),
    updated_at: noteToWrite.updatedAt.toISOString()
  }
});

This causes the sync service to crash, the client to disconnect, the client to reconnect, the sync service to crash again, etc. Here is the error. Let me know if you need any more details.

I have deleted the Postgres data directory, re-run the migrations and cleared the browser cache (it's using wa-sqlite), but the error still occurs as soon as I try to create the row.

17:19:25.206 pid=<0.2812.0> origin=postgres_1 [info] connect: %{host: ~c"database", ssl: false, timeout: 5000, username: ~c"postgres"}
2023-09-25 17:19:25.289 UTC [39] ERROR:  subscription "postgres_1" already exists
2023-09-25 17:19:25.289 UTC [39] STATEMENT:  CREATE SUBSCRIPTION "postgres_1" CONNECTION 'host=electric port=5433 dbname=electric connect_timeout=5000' PUBLICATION "electric_publication" WITH (connect = false)
17:19:25.296 pid=<0.2812.0> origin=postgres_1 [info] Successfully initialized origin postgres_1 at extension version
17:19:25.296 pid=<0.2825.0> pg_producer=postgres_1 [info] Starting Elixir.Electric.Postgres.Extension.SchemaCache for postgres_1
17:19:25.296 pid=<0.2825.0> pg_producer=postgres_1 [warning] SchemaCache "postgres_1" registered as the global instance
2023-09-25 17:19:25.313 UTC [42] ERROR:  replication slot "electric_replication_out_test" already exists
2023-09-25 17:19:25.313 UTC [42] STATEMENT:  CREATE_REPLICATION_SLOT "electric_replication_out_test" LOGICAL pgoutput NOEXPORT_SNAPSHOT
17:19:25.316 pid=<0.2825.0> pg_producer=postgres_1 [info] CREATE TABLE acknowledged_client_lsns (...)
17:19:25.349 pid=<0.2818.0> client_id=e702374f-ca6f-4a41-b9d2-505a5bf184c9 instance_id=4d01a55a-6b4a-4500-a362-b6522b58855f user_id=dummy [error] GenServer #PID<0.2818.0> terminating
** (ArgumentError) errors were found at the given arguments:

  * 1st argument: the table identifier does not refer to an existing ETS table

    (stdlib 4.3.1.2) :ets.last(:ets_backed_cached_wal)
    (electric 0.6.3) lib/electric/postgres/cached_wal/ets_backed.ex:66: Electric.Postgres.CachedWal.EtsBacked.get_current_position/0
    (electric 0.6.3) lib/electric/satellite/ws_server.ex:172: Electric.Satellite.WebsocketServer.handle_info/2
    (bandit 1.0.0-pre.14) lib/bandit/websocket/connection.ex:187: Bandit.WebSocket.Connection.handle_info/3
    (bandit 1.0.0-pre.14) /app/deps/thousand_island/lib/thousand_island/handler.ex:94: Bandit.WebSocket.Handler.handle_info/2
    (stdlib 4.3.1.2) gen_server.erl:1123: :gen_server.try_dispatch/4
    (stdlib 4.3.1.2) gen_server.erl:1200: :gen_server.handle_msg/6
    (stdlib 4.3.1.2) proc_lib.erl:240: :proc_lib.init_p_do_apply/3
Last message: {:perform_initial_sync_and_subscribe, %Electric.Satellite.SatInStartReplicationReq{lsn: "", options: [], subscription_ids: [], schema_version: "20230925125227_910"}}

Electric fails to generate migrations on tables with CHECK constraints

In my quest to work around #524 I added a check constraint manually:

CREATE TABLE "DataTable" (
    "id" UUID NOT NULL,
    "emoji" TEXT NOT NULL,
    "name" TEXT NOT NULL,
    "created_at" DOUBLE PRECISION NOT NULL,
    "electric_user_id" TEXT NOT NULL,

    CONSTRAINT "DataTable_pkey" PRIMARY KEY ("id"),
    CONSTRAINT "DataTable_id_is_uuidv7" CHECK (public.get_uuid_version(id) = 7)
);

Electric properly electrified the table and everything works (when using old client migrations). The problem is when one tries to regenerate the migrations:

node@920e676ab3fc ~/workspace (uuidv7) [1]> yarn electric-sql:gen
yarn run v1.22.19
$ electric-sql generate --service electric:5133 --out src/generated/client && echo "// @ts-nocheck
$(cat src/generated/client/index.ts)" > src/generated/client/index.ts
generate command failed: SqliteError: no such function: get_uuid_version
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
node@920e676ab3fc ~/workspace (uuidv7) [1]> 

Looking at the .electric_migrations_tmp folder, it seems that Electric thinks the CHECK constraint is a foreign key constraint:

CREATE TABLE "DataTable" (
  "id" TEXT NOT NULL,
  "emoji" TEXT NOT NULL,
  "name" TEXT NOT NULL,
  "created_at" REAL NOT NULL,
  "electric_user_id" TEXT NOT NULL,
  CONSTRAINT "DataTable_id_is_uuidv7" CHECK ((get_uuid_version("id") = 7)),
  CONSTRAINT "DataTable_pkey" PRIMARY KEY ("id")
) WITHOUT ROWID;

Similar to #509, Electric should either filter out these constraints or forbid electrifying tables that have them.

[VAX-971] client: Leaky resources (Notifier subscriptions, EventEmitter listeners...)

Here I list some places in the client code which leak subscriptions to the notifier, as they are never cleaned up. These are the ones we found while working on the Dart client.

VAX-971

[Client] Race condition when two performSnapshot run simultaneously

We were debugging an issue in the Dart todos app where delete replication wasn't propagating: deletes only happened locally. After some tests we discovered that we were triggering potentiallyChanged in the adapter multiple times, causing the performSnapshot function from process.ts to throttle. After the throttle window elapses and the function actually executes, if it takes longer than the throttled window a second performSnapshot can start. This causes the clearTags and tags properties to be emptied while the delete operation is being sent, on top of sending two different delete operations at the same time.

Would it make sense to ignore a performSnapshot event if one is already running? This can also happen in the Typescript client, although it's trickier to trigger.

async _performSnapshot(): Promise<Date> {
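A sketch of the suggested guard (illustrative, not the actual process.ts code): keep the in-flight promise and hand it to concurrent callers, so only one snapshot runs at a time:

```typescript
// Illustrative in-flight guard; `run` stands in for the real snapshot body.
let snapshotInFlight: Promise<Date> | undefined;

function performSnapshotOnce(run: () => Promise<Date>): Promise<Date> {
  // A second call while a snapshot is running reuses the pending promise
  // instead of starting another snapshot.
  if (snapshotInFlight !== undefined) return snapshotInFlight;
  snapshotInFlight = run().finally(() => {
    snapshotInFlight = undefined;
  });
  return snapshotInFlight;
}
```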

[VAX-1004] Supported dev OSes

Given the wide usage of Ubuntu 22.04, it seems a reasonable dev system to support. Are you all working/testing on macOS or something? Several steps in the quickstart just seem to fail, and hacking around them requires installing and sourcing an emsdk, installing node_modules with PUPPETEER_SKIP_DOWNLOAD=true npm i @electric-sql/prisma-generator, running yarn install, etc. - none of which is listed in the quickstart.

I realise things are moving fast, but a quick test of the quickstart in a fresh Ubuntu 22.04 VM seems pretty reasonable to me... As it stands, it is very much not "quick" to get started, and wouldn't be possible for someone who hasn't spent a lot of time trying to get things working.

VAX-1004

Typos in two values of the SatErrorResp.ErrorCode enum

Note the same typo in two instances of MISSMATCH below

enum ErrorCode {
INTERNAL = 0;
AUTH_REQUIRED = 1;
AUTH_FAILED = 2;
REPLICATION_FAILED = 3;
INVALID_REQUEST = 4;
PROTO_VSN_MISSMATCH = 5;
SCHEMA_VSN_MISSMATCH = 6;
}

Fixing them in a backwards-compatible way should be as simple as

 enum ErrorCode {
+    option allow_alias = true;
     INTERNAL = 0;
     AUTH_REQUIRED = 1;
     AUTH_FAILED = 2;
     REPLICATION_FAILED = 3;
     INVALID_REQUEST = 4;
+    PROTO_VSN_MISMATCH = 5;
     PROTO_VSN_MISSMATCH = 5;
+    SCHEMA_VSN_MISMATCH = 6;
     SCHEMA_VSN_MISSMATCH = 6;
 }

[VAX-1069] Output info about pg connection url when running starter scripts

The Postgres connection URL can be configured using an environment variable (DATABASE_URL). When working with multiple shells to interact with Electric, it's easy to overlook setting the variable in some of the shells.

The minimal patch would be to warn when the script is not using the default DATABASE_URL. Ideally, it should display the URL details while obfuscating the password.
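A sketch of the obfuscation (the helper name is hypothetical), using the WHATWG URL parser to mask only the password:

```typescript
// Hypothetical helper: render DATABASE_URL for logging with the password
// masked, so the shell output shows which database is in use without
// leaking credentials.
function obfuscateDbUrl(raw: string): string {
  const url = new URL(raw);
  if (url.password !== "") url.password = "*****";
  return url.toString();
}
```

For example, this turns a URL with user:secret credentials into one showing user:***** while keeping host, port and database name visible.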

Note: Issue #480 is concerned with renaming DATABASE_URL to ELECTRIC_DATABASE_URL

From SyncLinear.com | VAX-1069

[VAX-1003] create-electric-app with asdf node

anton@zub:~/dev/ntc$ node --version
v18.17.0 
--> same problem with 20.6.1
anton@zub:~/dev/ntc$ which node
/home/anton/.asdf/shims/node
anton@zub:~/dev/ntc$ npx create-electric-app ques
/usr/bin/env: ‘node --no-warnings’: No such file or directory
/usr/bin/env: use -[v]S to pass options in shebang lines

Should this work?

VAX-1003

Problems with nested creates in the Prisma client

Hi! I am having some problems with relational data and the generated Prisma client. I'm trying to model a graph structure with vertices and edges, so I've started with this Prisma migration:

model Vertex {
  id   String @id

  edgesIn  Edge[] @relation("toVertex")
  edgesOut Edge[] @relation("fromVertex")
}

model Edge {
  id         String @id

  fromVertex   Vertex @relation("fromVertex", fields: [fromVertexId], references: [id])
  fromVertexId String

  toVertex   Vertex @relation("toVertex", fields: [toVertexId], references: [id])
  toVertexId String
}

which generates the following SQL:

-- CreateTable
CREATE TABLE "Vertex" (
    "id" TEXT NOT NULL,
    CONSTRAINT "Vertex_pkey" PRIMARY KEY ("id")
);

-- CreateTable
CREATE TABLE "Edge" (
    "id" TEXT NOT NULL,
    "fromVertexId" TEXT NOT NULL,
    "toVertexId" TEXT NOT NULL,
    CONSTRAINT "Edge_pkey" PRIMARY KEY ("id")
);

-- AddForeignKey
ALTER TABLE "Edge" ADD CONSTRAINT "Edge_fromVertexId_fkey" FOREIGN KEY ("fromVertexId") REFERENCES "Vertex"("id") ON DELETE RESTRICT ON UPDATE CASCADE;

-- AddForeignKey
ALTER TABLE "Edge" ADD CONSTRAINT "Edge_toVertexId_fkey" FOREIGN KEY ("toVertexId") REFERENCES "Vertex"("id") ON DELETE RESTRICT ON UPDATE CASCADE;


/* ⚡ Electrify (added manually) ⚡ */
CALL electric.electrify('Vertex');
CALL electric.electrify('Edge');

I then call npx electric-sql generate to generate the Prisma client. I can take a look in node_modules/.prisma/client/schema.prisma to see what schema was inferred, and it came up with the following:

model Vertex {
  @@map("Vertex")
  id                             String @id
  Edge_Edge_toVertexIdToVertex   Edge[] @relation("Edge_toVertexIdToVertex")
  Edge_Edge_fromVertexIdToVertex Edge[] @relation("Edge_fromVertexIdToVertex")
}

model Edge {
  @@map("Edge")
  id                               String @id
  fromVertexId                     String
  toVertexId                       String
  Vertex_Edge_toVertexIdToVertex   Vertex @relation("Edge_toVertexIdToVertex", fields: [toVertexId], references: [id])
  Vertex_Edge_fromVertexIdToVertex Vertex @relation("Edge_fromVertexIdToVertex", fields: [fromVertexId], references: [id])
}

So the generator has made the following name changes compared to the original schema:

  • Vertex.edgesIn => Vertex.Edge_Edge_toVertexIdToVertex
  • Vertex.edgesOut => Vertex.Edge_Edge_fromVertexIdToVertex
  • Edge.fromVertex => Edge.Vertex_Edge_fromVertexIdToVertex
  • Edge.toVertex => Edge.Vertex_Edge_toVertexIdToVertex

Now that I have a client, I am trying to work with the Prisma API in a few different situations:

Create a vertex linked to another vertex with an edge

Here I'm trying to do a nested create starting with the root vertex, and creating an edge linked to another vertex:

await electric.db.Vertex.create({
  data: {
    id: "vertex-1",
    Edge_Edge_fromVertexIdToVertex: {
      create: {
        id: "edge-1",
        Vertex_Edge_toVertexIdToVertex: {
          create: {
            id: "vertex-2",
          }
        }
      }
    }
  }
});

This fails with the following zod error:

index.mjs:538 Uncaught (in promise) ZodError: [
  {
    "code": "invalid_union",
    "unionErrors": [
      {
        "issues": [
          {
            "code": "invalid_type",
            "expected": "object",
            "received": "undefined",
            "path": [
              "data",
              "Vertex_Edge_fromVertexIdToVertex"
            ],
            "message": "Required"
          },
          {
            "code": "unrecognized_keys",
            "keys": [
              "fromVertexId"
            ],
            "path": [
              "data"
            ],
            "message": "Unrecognized key(s) in object: 'fromVertexId'"
          }
        ],
        "name": "ZodError"
      },
      {
        "issues": [
          {
            "code": "invalid_type",
            "expected": "string",
            "received": "undefined",
            "path": [
              "data",
              "toVertexId"
            ],
            "message": "Required"
          },
          {
            "code": "unrecognized_keys",
            "keys": [
              "Vertex_Edge_toVertexIdToVertex"
            ],
            "path": [
              "data"
            ],
            "message": "Unrecognized key(s) in object: 'Vertex_Edge_toVertexIdToVertex'"
          }
        ],
        "name": "ZodError"
      }
    ],
    "path": [
      "data"
    ],
    "message": "Invalid input"
  }
]

From the look of the error, it seems that zod is unhappy both that Vertex_Edge_fromVertexIdToVertex hasn't been explicitly specified, and about Vertex_Edge_toVertexIdToVertex in the nested vertex. As far as I can tell, the Prisma docs suggest that this should work.

Note that if I do the same thing, but starting from the edge it works correctly:

await electric.db.Edge.create({
  data: {
    id: "edge-1",
    Vertex_Edge_fromVertexIdToVertex: {
      create: {
        id: "vertex-1",
      }
    },
    Vertex_Edge_toVertexIdToVertex: {
      create: {
        id: "vertex-2",
      }
    }
  }
});

Create a vertex linked to an existing vertex with an edge

Now that I have two vertices linked with an edge, I want to attach a new vertex via a new edge.

await electric.db.Edge.create({
  data: {
    id: "edge-2",
    fromVertexId: "vertex-2",
    Vertex_Edge_toVertexIdToVertex: {
      create: {
        id: "vertex-3",
      }
    }
  }
});

This fails with the same error as my create query.

Uncaught ZodError: [
  {
    "code": "invalid_union",
    "unionErrors": [
      {
        "issues": [
          {
            "code": "invalid_type",
            "expected": "object",
            "received": "undefined",
            "path": [
              "data",
              "Vertex_Edge_fromVertexIdToVertex"
            ],
            "message": "Required"
          },
          {
            "code": "unrecognized_keys",
            "keys": [
              "fromVertexId"
            ],
            "path": [
              "data"
            ],
            "message": "Unrecognized key(s) in object: 'fromVertexId'"
          }
        ],
        "name": "ZodError"
      },
      {
        "issues": [
          {
            "code": "invalid_type",
            "expected": "string",
            "received": "undefined",
            "path": [
              "data",
              "toVertexId"
            ],
            "message": "Required"
          },
          {
            "code": "unrecognized_keys",
            "keys": [
              "Vertex_Edge_toVertexIdToVertex"
            ],
            "path": [
              "data"
            ],
            "message": "Unrecognized key(s) in object: 'Vertex_Edge_toVertexIdToVertex'"
          }
        ],
        "name": "ZodError"
      }
    ],
    "path": [
      "data"
    ],
    "message": "Invalid input"
  }
]

Other attempts to do this also failed.

Apologies for the wall of text, and any advice would be very gratefully received!

PRAGMA foreign_keys assumed to be ON in Electric

The Electric Satellite logic assumes that foreign_keys is ON, but it never actually sets the PRAGMA. For example, it relies on it when defer_foreign_keys is set to ON and in the error handling during transactions.

The tests work correctly by default when using the better-sqlite3 adapter because that dependency changes the SQLite foreign keys default to ON, but an "untouched" SQLite driver has foreign keys off by default. For instance, when querying PRAGMA foreign_keys in the example app from the monorepo, which uses wa-sqlite, it returns OFF.

Does it make sense to enable the PRAGMA internally in Electric, or is it something that the user has to set?
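If it were enabled internally, a minimal sketch (the adapter interface here is illustrative, not the real Electric API) would be to run the pragma once at start-up:

```typescript
// Illustrative adapter shape; the real Electric adapter API may differ.
interface DbAdapter {
  run(stmt: { sql: string }): Promise<void>;
}

// An untouched SQLite connection has foreign_keys OFF by default, so set it
// explicitly instead of relying on the driver (better-sqlite3 flips the
// default to ON; wa-sqlite does not).
async function enableForeignKeys(adapter: DbAdapter): Promise<void> {
  await adapter.run({ sql: "PRAGMA foreign_keys = ON;" });
}
```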

Compensations update rows even when they didn't change

When operations get executed on a row on a client then triggers in sqlite create corresponding oplog entries for the affected row and all relations reachable from the given row(compensations). Each compensation oplog is an UPDATE statement which does nothing(updates the PK to the same value). Those compensations get synced to the corresponding shadow table in postgres and when an insert happens into the shadow table a trigger in postgres copies the data into the original table.

The problem is that those no-op compensations get applied as a postgres upsert even when there is nothing to change. My app is mostly append only and there are some tables where a trigger sanity checks that updates on the table are disallowed. When compensations get applied those triggers properly protect the table from updates but this breaks electric. As I workaround I will now check in those sanity-check triggers whether the data in the row actually changed.

I think Electric should do nothing if a compensation won't change the data, so that update triggers don't fire.
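The underlying behaviour is easy to reproduce (stdlib sqlite3 used here for illustration): an UPDATE that sets a column to its existing value still fires UPDATE triggers, because SQLite (like Postgres) fires row triggers based on the statement, not on whether any data actually changed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE items (id INTEGER PRIMARY KEY, val TEXT);
    CREATE TABLE audit (fired INTEGER);
    CREATE TRIGGER items_upd AFTER UPDATE ON items
    BEGIN
        INSERT INTO audit VALUES (1);
    END;
    INSERT INTO items VALUES (1, 'a');
""")

# No-op update, analogous to a compensation (PK set to its own value):
conn.execute("UPDATE items SET id = id WHERE id = 1")

fired = conn.execute("SELECT count(*) FROM audit").fetchone()[0]
assert fired == 1  # the trigger fired even though nothing changed
```

This is why skipping the write entirely when the values are unchanged is the only way to keep user-defined update triggers from firing.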

Can't use the healthcheck when SSL is activated

I'm trying to start an electric-sql server on AWS, connected to an RDS PostgreSQL 15.3 instance.

When I try to launch the service without SSL, I get this error:

[error] initialization for postgresql failed with reason: {:error, :invalid_authorization_specification}
[info] connect: %{database: ~c"database_web", host: ~c"rds.web.com", port: 5432, ssl: false, timeout: 5000, username: ~c"username"}
[debug] Attempting to initialize postgres_1: [email protected]:5432
[info] Running Electric.Plug.Router with Bandit 1.0.0-pre.14 at 0.0.0.0:5133 (http)
[notice] :alarm_handler: {:set, {:system_memory_high_watermark, []}}

So I added DATABASE_REQUIRE_SSL=true, and then I got a new error:

(electric 0.6.4) lib/electric/replication/postgres_manager.ex:92: Electric.Replication.PostgresConnectorMng.handle_continue/2
** (MatchError) no match of right hand side value: {:error, {{:shutdown, {:failed_to_start_child, :postgres_producer, {:bad_return_value, {:error, {:error, :error, "55006", :object_in_use, "replication slot \"electric_replication_out_database_web\" is active for PID 32436", [file: "slot.c", line: "518", routine: "ReplicationSlotAcquire", severity: "ERROR"]}}}}}, {:child, :undefined, :sup, {Electric.Replication.PostgresConnectorSup, :start_link, [[origin: "postgres_1", producer: Electric.Replication.Postgres.LogicalReplicationProducer, connection: [replication: "database", ssl: true, host: "rds.web.com", database: "database_web", port: 5432, username: "username", password: "*****", timeout: 5000], replication: [electric_connection: [host: "electric-tcp.web.com", port: 5433, dbname: "electric", connect_timeout: 5000]]]]}, :temporary, false, :infinity, :supervisor, [Electric.Replication.PostgresConnectorSup]}}}
11:39:21.217 pid=<0.2811.0> origin=postgres_1 [error] GenServer #PID<0.2811.0> terminating
11:39:21.217 pid=<0.2822.0> [debug] Terminating idle db connection #PID<0.2825.0>
11:39:21.216 pid=<0.2824.0> [debug] Elixir.Electric.Replication.Postgres.Client start_replication: slot: 'electric_replication_out_database_web', publication: 'electric_publication'
11:39:21.212 pid=<0.2821.0> pg_producer=postgres_1 [info] CREATE TABLE acknowledged_client_lsns (...)
11:39:21.210 pid=<0.2824.0> [debug] Elixir.Electric.Replication.Postgres.Client: CREATE_REPLICATION_SLOT "electric_replication_out_database_web" LOGICAL pgoutput NOEXPORT_SNAPSHOT
Reason: ~c"The option {verify, verify_peer} and one of the options 'cacertfile' or 'cacerts' are required to enable this."
11:39:21.198 pid=<0.2826.0> [warning] Description: ~c"Server authenticity is not verified since certificate path validation is not enabled"
Reason: ~c"The option {verify, verify_peer} and one of the options 'cacertfile' or 'cacerts' are required to enable this."
11:39:21.197 pid=<0.2825.0> [warning] Description: ~c"Server authenticity is not verified since certificate path validation is not enabled"
11:39:21.194 pid=<0.2824.0> [debug] Elixir.Electric.Replication.Postgres.LogicalReplicationProducer init:: publication: 'electric_publication', slot: 'electric_replication_out_database_web'
11:39:21.194 pid=<0.2822.0> [debug] Starting SchemaLoader pg connection: [origin: "postgres_1", producer: Electric.Replication.Postgres.LogicalReplicationProducer, connection: [replication: "database", ssl: true, host: "rds.web.com", database: "database_web", port: 5432, username: "username", password: "*****", timeout: 5000], replication: [electric_connection: [host: "electric-tcp.web.com", port: 5433, dbname: "electric", connect_timeout: 5000]]]
11:39:21.194 pid=<0.2821.0> pg_producer=postgres_1 [warning] SchemaCache "postgres_1" registered as the global instance
11:39:21.194 pid=<0.2821.0> pg_producer=postgres_1 [info] Starting Elixir.Electric.Postgres.Extension.SchemaCache for postgres_1
11:39:21.194 pid=<0.2811.0> origin=postgres_1 [info] Successfully initialized origin postgres_1 at extension version
ORDER BY oid
WHERE typtype != 'c'
JOIN pg_namespace ON typnamespace = pg_namespace.oid
FROM pg_type
11:39:21.190 pid=<0.2811.0> origin=postgres_1 [debug] Elixir.Electric.Replication.Postgres.Client: SELECT nspname, typname, pg_type.oid, typarray, typelem, typlen, typtype, typbasetype, typrelid, EXISTS(SELECT 1 FROM pg_type as t WHERE pg_type.oid = t.typarray) as is_array
11:39:21.189 pid=<0.2811.0> origin=postgres_1 [debug] Elixir.Electric.Replication.Postgres.Client: CREATE SUBSCRIPTION "postgres_1" CONNECTION 'host=electric-tcp.web.com port=5433 dbname=electric connect_timeout=5000' PUBLICATION "electric_publication" WITH (connect = false)
Reason: ~c"The option {verify, verify_peer} and one of the options 'cacertfile' or 'cacerts' are required to enable this."
11:39:21.168 pid=<0.2813.0> [warning] Description: ~c"Server authenticity is not verified since certificate path validation is not enabled"
11:39:21.161 pid=<0.2811.0> origin=postgres_1 [info] connect: %{database: ~c"database_web", host: ~c"rds.web.com", port: 5432, ssl: true, timeout: 5000, username: ~c"username"}
11:39:21.161 pid=<0.2811.0> origin=postgres_1 [debug] Attempting to initialize postgres_1: [email protected]:5432
11:39:21.161 pid=<0.2195.0> [info] Running Electric.Plug.Router with Bandit 1.0.0-pre.14 at 0.0.0.0:5133 (http)

If I then try the healthcheck, I get a new error:

Last message: {:continue, :handle_connection}
(electric 0.6.4) deps/plug/lib/plug/router.ex:246: anonymous fn/4 in Electric.Plug.Router.dispatch/2
(plug 1.14.2) lib/plug.ex:168: Plug.forward/4
(electric 0.6.4) lib/electric/plug/status.ex:1: Electric.Plug.Status.plug_builder_call/2
(electric 0.6.4) deps/plug/lib/plug/router.ex:242: Electric.Plug.Status.dispatch/2
(telemetry 1.2.1) /app/deps/telemetry/src/telemetry.erl:321: :telemetry.span/3
(electric 0.6.4) deps/plug/lib/plug/router.ex:246: anonymous fn/4 in Electric.Plug.Status.dispatch/2
(electric 0.6.4) lib/electric/plug/status.ex:15: anonymous fn/2 in Electric.Plug.Status.do_match/4
(elixir 1.15.4) lib/gen_server.ex:1074: GenServer.call/3
** (EXIT) time out
** (stop) exited in: GenServer.call({:via, :gproc, {:n, :l, {Electric.Replication.PostgresConnectorMng, "postgres_1"}}}, :status, 5000)
16:20:31.009 pid=<0.2845.0> [error] GenServer #PID<0.2845.0> terminating
16:20:26.008 pid=<0.2845.0> [info] GET /api/status


run.sh file

DATABASE_URL=postgresql://${RDS_USERNAME}:${RDS_PASSWORD}@${RDS_HOSTNAME_ELEC}:${RDS_PORT}/${RDS_DB_NAME} /app/bin/entrypoint start

task definition

{
    "taskDefinitionArn": "arn:aws:ecs:eu-west-1:xxxx:task-definition/electric-service-develop:31",
    "containerDefinitions": [
        {
            "name": "electric",
            "image": "xxxx.dkr.ecr.eu-west-1.amazonaws.com/develop/electric:10-10-2023_13-16-31",
            "cpu": 0,
            "memoryReservation": 50,
            "portMappings": [
                {
                    "containerPort": 5133,
                    "hostPort": 0,
                    "protocol": "tcp"
                },
                {
                    "containerPort": 5433,
                    "hostPort": 0,
                    "protocol": "tcp"
                }
            ],
            "essential": true,
            "entryPoint": [
                "/bin/bash",
                "/home/src/electric-service/scripts/run.sh"
            ],
            "command": [
                "scripts/run.sh"
            ],
            "environment": [
                {
                    "name": "LOGICAL_PUBLISHER_HOST",
                    "value": "electric-tcp.web.com"
                },
                {
                    "name": "ENV_TYPE",
                    "value": "stage"
                }
            ],
            "mountPoints": [],
            "volumesFrom": [],
            "workingDirectory": "/home/src/electric-service",
            "dockerLabels": {
                "traefik.enable": "true",
                "traefik.http.routers.electric.rule": "Host(`electric.stage.web.com`)",
                "traefik.http.routers.electric.service": "electric",
                "traefik.http.services.electric.loadbalancer.server.port": "5133"
            }
        }
    ],
    "family": "electric-service-develop",
    "taskRoleArn": "arn:aws:iam::xxx:role/electric-role",
    "revision": 31,
    "volumes": [],
    "status": "ACTIVE",
    "requiresAttributes": [
        {
            "name": "com.amazonaws.ecs.capability.ecr-auth"
        },
        {
            "name": "com.amazonaws.ecs.capability.docker-remote-api.1.17"
        },
        {
            "name": "com.amazonaws.ecs.capability.docker-remote-api.1.21"
        },
        {
            "name": "com.amazonaws.ecs.capability.task-iam-role"
        },
        {
            "name": "com.amazonaws.ecs.capability.docker-remote-api.1.18"
        }
    ],
    "placementConstraints": [],
    "compatibilities": [
        "EXTERNAL",
        "EC2"
    ],
    "registeredAt": "2023-10-10T13:18:43.050Z"
}
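For what it's worth, the `:object_in_use` error in the log above means the logical replication slot is still held by another backend, typically a previous Electric instance that hasn't released it. A diagnostic sketch (run in psql against the RDS database; the slot name is taken from the log, and whether it is safe to drop depends on your setup):

```sql
-- Check whether the slot is still marked active, and by which backend:
SELECT slot_name, active, active_pid
FROM pg_replication_slots
WHERE slot_name = 'electric_replication_out_database_web';

-- If the holding backend is stale, terminate it and drop the slot,
-- then restart Electric so it can recreate the slot cleanly:
-- SELECT pg_terminate_backend(<active_pid>);
-- SELECT pg_drop_replication_slot('electric_replication_out_database_web');
```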

[VAX-965] Modify generator to introspect PG DB

Currently, the generator fetches all SQLite migrations from Electric, constructs a SQLite DB, and introspects it in order to generate a Prisma schema and Zod schemas. However, by going from a PG DB to a SQLite DB we lose information about the original types of columns in the PG DB.

In order not to lose information about the original PG schema, we need to modify the generator to introspect the PG database or a copy of it such that the generated Prisma schema and Zod schemas reflect the original PG types. Then, the DAL can do the conversion of PG values to SQLite values when storing them and do the opposite conversion from SQLite values to PG values when reading.
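A minimal sketch of the conversion layer the DAL would need (all names here are illustrative, not Electric's actual API): a mapping from original PG column types to their SQLite storage encoding, plus symmetric read/write converters.

```python
from datetime import datetime, timezone

# Hypothetical mapping from PG types to SQLite storage classes,
# preserved by introspecting the PG schema directly:
PG_TO_SQLITE = {
    "int4": "INTEGER",
    "int8": "INTEGER",
    "text": "TEXT",
    "timestamptz": "TEXT",   # stored as an ISO-8601 string in SQLite
    "bool": "INTEGER",       # stored as 0/1 in SQLite
}

def pg_to_sqlite_value(pg_type: str, value):
    """Writing path: convert a PG value to its SQLite representation."""
    if value is None:
        return None
    if pg_type == "timestamptz":
        return value.isoformat()
    if pg_type == "bool":
        return 1 if value else 0
    return value

def sqlite_to_pg_value(pg_type: str, value):
    """Reading path: convert a SQLite value back to the original PG type."""
    if value is None:
        return None
    if pg_type == "timestamptz":
        return datetime.fromisoformat(value)
    if pg_type == "bool":
        return bool(value)
    return value

# Round-trip check: the original PG value survives a write + read cycle.
ts = datetime(2023, 10, 10, 13, 16, 31, tzinfo=timezone.utc)
assert sqlite_to_pg_value("timestamptz", pg_to_sqlite_value("timestamptz", ts)) == ts
assert pg_to_sqlite_value("bool", True) == 1
```

The key property is that the mapping is keyed on the *original* PG type, which is exactly the information lost when introspecting an intermediate SQLite DB.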

From SyncLinear.com | VAX-965
