Comments (86)
@Akash187 use npx instead, or install using brew install supabase/tap/supabase-beta (beta comes first and is then pushed to the main Homebrew tap).
Looks like this is working now so we can go ahead and close this one! Thanks to everyone who helped to debug this and pushed this!
from supabase.
I've reverted studio to the last known working version in cli. You can use it via the beta release channel.
npx supabase@beta start
I will keep this issue open until the root cause is resolved in studio.
and it's finally running!
I was able to overcome this issue by installing [email protected].
This will force you to use supabase/studio:20240422-5cf8f30 instead of supabase/studio:20240506-2976cd6.
As a quick check I also installed the latest CLI version ([email protected]), which has the same issue.
When I inspect the Docker logs for supabase/logflare:1.4.0, I get the following error:
API Gateway Logs
20:57:03.237 [error] GenServer {Logflare.Endpoints.Cache, 1, %{"iso_timestamp_start" => "2024-05-27T19:00:00.000Z", "project" => "default", "project_tier" => "ENTERPRISE", "ref" => "default", "sql" => "select id, identifier, timestamp, event_message, request.method, request.path, response.status_code\n from edge_logs\n cross join unnest(metadata) as m\n cross join unnest(m.request) as request\n cross join unnest(m.response) as response\n \n order by timestamp desc\n limit 100\n "}} terminating
** (Postgrex.Error) ERROR 42703 (undefined_column) column "body" does not exist
query: WITH retention AS (SELECT (CASE WHEN $1::text = 'FREE' THEN current_timestamp - INTERVAL '1 day' WHEN $2::text = 'PRO' THEN current_timestamp - INTERVAL '7 day' WHEN ($3::text = 'PAYG' OR $4::text = 'ENTERPRISE') THEN current_timestamp - INTERVAL '90 day' ELSE current_timestamp - INTERVAL '1 day' END) AS date), edge_logs AS (SELECT (t.body -> 'timestamp') AS timestamp, (t.body -> 'id') AS id, (t.body -> 'event_message') AS event_message, (t.body -> 'metadata') AS metadata FROM retention, "_analytics"."log_events_c9901b6d_4e07_4ae8_a11f_e3a57eec0dca" AS t WHERE (t.body ->> 'project') = $5::text AND CASE WHEN COALESCE($6::text, '') = '' THEN true ELSE CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) > CAST($7::text AS TIMESTAMP) END AND CASE WHEN COALESCE($8::text, '') = '' THEN true ELSE CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) <= CAST($9::text AS TIMESTAMP) END AND CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) > retention.date ORDER BY CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) DESC), postgres_logs AS (SELECT (t.body -> 'timestamp') AS timestamp, (t.body -> 'id') AS id, (t.body -> 'event_message') AS event_message, (t.body -> 'metadata') AS metadata FROM retention, "_analytics"."log_events_12a6b45f_96c0_4512_95a4_d408173d3ccb" AS t WHERE (t.body ->> 'project') = $10::text AND CASE WHEN COALESCE($11::text, '') = '' THEN true ELSE CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) > CAST($12::text AS TIMESTAMP) END AND CASE WHEN COALESCE($13::text, '') = '' THEN true ELSE CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) <= CAST($14::text AS TIMESTAMP) END AND CAST((to_timestamp(CAST((t.body ->> 
'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) > retention.date ORDER BY CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) DESC), function_edge_logs AS (SELECT (t.body -> 'timestamp') AS timestamp, (t.body -> 'id') AS id, (t.body -> 'event_message') AS event_message, (t.body -> 'metadata') AS metadata FROM retention, "_analytics"."log_events_71139d09_53d4_4ab0_aa0b_55f15a50b0dd" AS t WHERE CASE WHEN COALESCE($15::text, '') = '' THEN true ELSE CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) > CAST($16::text AS TIMESTAMP) END AND CASE WHEN COALESCE($17::text, '') = '' THEN true ELSE CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) <= CAST($18::text AS TIMESTAMP) END AND (body #>> '{metadata,project_ref}') = $19::text AND CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) > retention.date ORDER BY CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) DESC), function_logs AS (SELECT (t.body -> 'timestamp') AS timestamp, (t.body -> 'id') AS id, (t.body -> 'event_message') AS event_message, (t.body -> 'metadata') AS metadata FROM retention, "_analytics"."log_events_7a917400_0149_426e_97fd_d0a4791b4523" AS t WHERE (body #>> '{metadata,project_ref}') = $20::text AND CASE WHEN COALESCE($21::text, '') = '' THEN true ELSE CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) > CAST($22::text AS TIMESTAMP) END AND CASE WHEN COALESCE($23::text, '') = '' THEN true ELSE CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) <= CAST($24::text AS TIMESTAMP) END AND CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) > 
retention.date ORDER BY CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) DESC), auth_logs AS (SELECT (t.body -> 'timestamp') AS timestamp, (t.body -> 'id') AS id, (t.body -> 'event_message') AS event_message, (t.body -> 'metadata') AS metadata FROM retention, "_analytics"."log_events_eb3f3364_b8c2_4d41_b37f_61980c8cad3e" AS t WHERE (t.body ->> 'project') = $25::text AND CASE WHEN COALESCE($26::text, '') = '' THEN true ELSE CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) > CAST($27::text AS TIMESTAMP) END AND CASE WHEN COALESCE($28::text, '') = '' THEN true ELSE CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) <= CAST($29::text AS TIMESTAMP) END AND CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) > retention.date ORDER BY CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) DESC), realtime_logs AS (SELECT (t.body -> 'timestamp') AS timestamp, (t.body -> 'id') AS id, (t.body -> 'event_message') AS event_message, (t.body -> 'metadata') AS metadata FROM retention, "_analytics"."log_events_cfec0bd5_4e11_44ee_ad20_cea9b74a4175" AS t WHERE (body #>> '{metadata,project}') = $30::text AND CASE WHEN COALESCE($31::text, '') = '' THEN true ELSE CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) > CAST($32::text AS TIMESTAMP) END AND CASE WHEN COALESCE($33::text, '') = '' THEN true ELSE CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) <= CAST($34::text AS TIMESTAMP) END AND CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) > retention.date ORDER BY CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME 
ZONE 'UTC') AS TIMESTAMP) DESC), storage_logs AS (SELECT (t.body -> 'timestamp') AS timestamp, (t.body -> 'id') AS id, (t.body -> 'event_message') AS event_message, (t.body -> 'metadata') AS metadata FROM retention, "_analytics"."log_events_30649fa8_b145_4298_8e88_f8b068311ef0" AS t WHERE (body #>> '{metadata,project}') = $35::text AND CASE WHEN COALESCE($36::text, '') = '' THEN true ELSE CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) > CAST($37::text AS TIMESTAMP) END AND CASE WHEN COALESCE($38::text, '') = '' THEN true ELSE CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) <= CAST($39::text AS TIMESTAMP) END AND CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) > retention.date ORDER BY CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) DESC), postgrest_logs AS (SELECT (t.body -> 'timestamp') AS timestamp, (t.body -> 'id') AS id, (t.body -> 'event_message') AS event_message, (t.body -> 'metadata') AS metadata FROM retention, "_analytics"."log_events_e2280f5c_f870_4465_9845_efa3091b7e6d" AS t WHERE CASE WHEN COALESCE($40::text, '') = '' THEN true ELSE CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) > CAST($41::text AS TIMESTAMP) END AND CASE WHEN COALESCE($42::text, '') = '' THEN true ELSE CAST((to_timestamp(CAST((t.body ->> 'times (truncated)
20:57:03.238 [error] Endpoint query exited for an unknown reason
Postgres Logs
20:59:07.068 [error] GenServer {Logflare.Endpoints.Cache, 1, %{"iso_timestamp_start" => "2024-05-27T19:00:00.000Z", "project" => "default", "project_tier" => "ENTERPRISE", "ref" => "default", "sql" => "select identifier, postgres_logs.timestamp, id, event_message, parsed.error_severity from postgres_logs\n cross join unnest(metadata) as m\n cross join unnest(m.parsed) as parsed\n \n order by timestamp desc\n limit 100\n "}} terminating
** (Postgrex.Error) ERROR 42703 (undefined_column) column "body" does not exist
query: WITH retention AS (SELECT (CASE WHEN $1::text = 'FREE' THEN current_timestamp - INTERVAL '1 day' WHEN $2::text = 'PRO' THEN current_timestamp - INTERVAL '7 day' WHEN ($3::text = 'PAYG' OR $4::text = 'ENTERPRISE') THEN current_timestamp - INTERVAL '90 day' ELSE current_timestamp - INTERVAL '1 day' END) AS date), edge_logs AS (SELECT (t.body -> 'timestamp') AS timestamp, (t.body -> 'id') AS id, (t.body -> 'event_message') AS event_message, (t.body -> 'metadata') AS metadata FROM retention, "_analytics"."log_events_c9901b6d_4e07_4ae8_a11f_e3a57eec0dca" AS t WHERE (t.body ->> 'project') = $5::text AND CASE WHEN COALESCE($6::text, '') = '' THEN true ELSE CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) > CAST($7::text AS TIMESTAMP) END AND CASE WHEN COALESCE($8::text, '') = '' THEN true ELSE CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) <= CAST($9::text AS TIMESTAMP) END AND CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) > retention.date ORDER BY CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) DESC), postgres_logs AS (SELECT (t.body -> 'timestamp') AS timestamp, (t.body -> 'id') AS id, (t.body -> 'event_message') AS event_message, (t.body -> 'metadata') AS metadata FROM retention, "_analytics"."log_events_12a6b45f_96c0_4512_95a4_d408173d3ccb" AS t WHERE (t.body ->> 'project') = $10::text AND CASE WHEN COALESCE($11::text, '') = '' THEN true ELSE CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) > CAST($12::text AS TIMESTAMP) END AND CASE WHEN COALESCE($13::text, '') = '' THEN true ELSE CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) <= CAST($14::text AS TIMESTAMP) END AND CAST((to_timestamp(CAST((t.body ->> 
'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) > retention.date ORDER BY CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) DESC), function_edge_logs AS (SELECT (t.body -> 'timestamp') AS timestamp, (t.body -> 'id') AS id, (t.body -> 'event_message') AS event_message, (t.body -> 'metadata') AS metadata FROM retention, "_analytics"."log_events_71139d09_53d4_4ab0_aa0b_55f15a50b0dd" AS t WHERE CASE WHEN COALESCE($15::text, '') = '' THEN true ELSE CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) > CAST($16::text AS TIMESTAMP) END AND CASE WHEN COALESCE($17::text, '') = '' THEN true ELSE CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) <= CAST($18::text AS TIMESTAMP) END AND (body #>> '{metadata,project_ref}') = $19::text AND CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) > retention.date ORDER BY CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) DESC), function_logs AS (SELECT (t.body -> 'timestamp') AS timestamp, (t.body -> 'id') AS id, (t.body -> 'event_message') AS event_message, (t.body -> 'metadata') AS metadata FROM retention, "_analytics"."log_events_7a917400_0149_426e_97fd_d0a4791b4523" AS t WHERE (body #>> '{metadata,project_ref}') = $20::text AND CASE WHEN COALESCE($21::text, '') = '' THEN true ELSE CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) > CAST($22::text AS TIMESTAMP) END AND CASE WHEN COALESCE($23::text, '') = '' THEN true ELSE CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) <= CAST($24::text AS TIMESTAMP) END AND CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) > 
retention.date ORDER BY CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) DESC), auth_logs AS (SELECT (t.body -> 'timestamp') AS timestamp, (t.body -> 'id') AS id, (t.body -> 'event_message') AS event_message, (t.body -> 'metadata') AS metadata FROM retention, "_analytics"."log_events_eb3f3364_b8c2_4d41_b37f_61980c8cad3e" AS t WHERE (t.body ->> 'project') = $25::text AND CASE WHEN COALESCE($26::text, '') = '' THEN true ELSE CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) > CAST($27::text AS TIMESTAMP) END AND CASE WHEN COALESCE($28::text, '') = '' THEN true ELSE CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) <= CAST($29::text AS TIMESTAMP) END AND CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) > retention.date ORDER BY CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) DESC), realtime_logs AS (SELECT (t.body -> 'timestamp') AS timestamp, (t.body -> 'id') AS id, (t.body -> 'event_message') AS event_message, (t.body -> 'metadata') AS metadata FROM retention, "_analytics"."log_events_cfec0bd5_4e11_44ee_ad20_cea9b74a4175" AS t WHERE (body #>> '{metadata,project}') = $30::text AND CASE WHEN COALESCE($31::text, '') = '' THEN true ELSE CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) > CAST($32::text AS TIMESTAMP) END AND CASE WHEN COALESCE($33::text, '') = '' THEN true ELSE CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) <= CAST($34::text AS TIMESTAMP) END AND CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) > retention.date ORDER BY CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME 
ZONE 'UTC') AS TIMESTAMP) DESC), storage_logs AS (SELECT (t.body -> 'timestamp') AS timestamp, (t.body -> 'id') AS id, (t.body -> 'event_message') AS event_message, (t.body -> 'metadata') AS metadata FROM retention, "_analytics"."log_events_30649fa8_b145_4298_8e88_f8b068311ef0" AS t WHERE (body #>> '{metadata,project}') = $35::text AND CASE WHEN COALESCE($36::text, '') = '' THEN true ELSE CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) > CAST($37::text AS TIMESTAMP) END AND CASE WHEN COALESCE($38::text, '') = '' THEN true ELSE CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) <= CAST($39::text AS TIMESTAMP) END AND CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) > retention.date ORDER BY CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) DESC), postgrest_logs AS (SELECT (t.body -> 'timestamp') AS timestamp, (t.body -> 'id') AS id, (t.body -> 'event_message') AS event_message, (t.body -> 'metadata') AS metadata FROM retention, "_analytics"."log_events_e2280f5c_f870_4465_9845_efa3091b7e6d" AS t WHERE CASE WHEN COALESCE($40::text, '') = '' THEN true ELSE CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTAMP) > CAST($41::text AS TIMESTAMP) END AND CASE WHEN COALESCE($42::text, '') = '' THEN true ELSE CAST((to_timestamp(CAST((t.body ->> 'timestamp') AS BIGINT) / 1000000.0) AT TIME ZONE 'UTC') AS TIMESTA (truncated)
20:59:07.069 [error] Endpoint query exited for an unknown reason
In both cases, the error message is (Postgrex.Error) ERROR 42703 (undefined_column) column "body" does not exist.
Can confirm this is occurring on the latest CLI version (1.169.5) and also in 1.168.1.
Confirmed this is in the latest CLI (1.169.6) and is as @clxyder said.
Informing CLI and Analytics teams
Thanks @encima and @clxyder, it finally started working.
Let me know if it is fixed in the latest release so I can switch to the new version.
Can confirm this is fixed in the latest CLI (1.176.4), though the default query is not.
- `npx supabase@latest init` in an empty directory (also useful to check no images are cached)
- Enable `analytics` in `supabase/config.toml`
- `npx supabase@latest start`
- Navigate to Studio -> Logs
- Select `Postgres` or `Kong` etc., OR modify the query to be: `select timestamp, event_message, metadata from edge_logs limit 5`
- Success
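For the "Enable analytics" step, supabase/config.toml needs the fragment below (copied from a later comment in this thread):

```toml
[analytics]
enabled = true
port = 54327
vector_port = 54328
# Configure one of the supported backends: `postgres`, `bigquery`.
backend = "postgres"
```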
Will leave this open while the query is fixed in the frontend
In an existing project, I ran brew upgrade supabase (upgrading to 1.176.4) and can confirm logs are showing up now! Thanks so much for the fix!
Fixed the query for local development here.
Taking a look at the other issues. Sorry for the inconvenience folks.
The bugfix is not yet out in the CLI, as we are fixing other related bugs as well so that this is fully resolved. The issue will be closed once it is verified fixed in the CLI. The Docker logs UI can still be used while we get this fix out; thanks for the patience.
This is so frustrating; I have to remind myself that updating the Supabase CLI is never a good option.
Has this logging issue already been fixed, @encima? Is the fix merged into the current CLI version?
@raman04-byte Add the lines below in config.toml and test and confirm if it works.
[analytics]
enabled = true
port = 54327
vector_port = 54328
# Configure one of the supported backends: `postgres`, `bigquery`.
backend = "postgres"
I will let you know after updating the file
Luckily the Firebase folks do not need to run Docker for local development.
You can also download the binary from the Releases tab.
The Analytics and CLI teams will investigate and provide a fix here when it is ready.
Darwin
Any update @encima @sweatybridge @Ziinc
Ok, thanks @encima; confirm with us tomorrow if everything is fine.
@tecoad please see the above comment. This issue is not yet resolved; we will comment and close once the fix is out. Sorry for the confusion and thank you for your patience!
@encima below is the suggestion.
Topic: Checking Logflare logs in a Docker container.
Information: It will cover how users can see Supabase logs plus Postgres logs, especially the RAISE statements within your PL/pgSQL functions used to log messages.
If you look at the image, no supabase was installed before I ran npx supabase@latest.
Anyway, I got it working through Homebrew.
Thanks a lot for fixing this issue; I hope it will not appear in future versions.
Let me know if you need any other information.
Hey there,
thanks for opening this; the local development experience is something we are working to improve and we do hear your feedback. Many teams are involved in this and we hope some of the already implemented improvements are noticeable.
For this, I see that you have analytics enabled and that should be fine. Do you see all of your containers as healthy when running docker ps?
Hi @encima,
I can see errors in the supabase_analytics container log. Video attached.
Please suggest some way to fix it.
supabase_analytics_error.mov
The Studio image you are using is likely not up to date, as it is using body in the query when it should not be. See #26309.
If you update your Studio image (and the others) then this should be resolved.
@encima my current studio version is 20240506-2976cd6. I even deleted all images and re-ran supabase start.
Image added.
Can you recommend any command to update to the latest studio version?
Looks like you have one of the latest versions.
To make sure they are updated (and you do not lose data), try the commands in this comment.
@encima according to Docker Hub this is not the latest version.
Link - https://hub.docker.com/r/supabase/studio/tags
Suggest any way; even a hard reset works. I am not concerned about the data right now.
The method in that comment should work. Also, removing supabase/.temp and running docker pull for the latest studio image should force it.
Doesn't seem to work. I deleted supabase/.temp and pulled the latest studio image, but upon supabase start it starts pulling the same old image.
As much as I like and praise Supabase, the developer experience sucks.
Do you think I would be debugging why the log is not working if I used Firebase?
I even deleted all the images and volumes. Restarting again doesn't fix the issue.
That is odd.
You mentioned that you do not mind losing data so could you try running "supabase init" in a new directory with the latest CLI?
Perhaps the config has something I am not aware of that is forcing the image
We do appreciate your patience and help when debugging the issue. I have tried for years to run Firebase locally but I haven't had success yet!
@encima still facing the same issue.
- I deleted all the images and volumes from Docker Desktop.
- I created a new project using `supabase init`.
- Started the new project using `supabase start`.
Can you share your Docker image versions/tags so that I can compare?
@encima I even went ahead, deleted Docker, and re-installed it. I created a fresh new project but am still getting the same error.
@Ziinc can you have a look?
@encima That means [email protected] is the safest version for now.
@clxyder I am struggling to run [email protected] on my Mac.
I installed it as a dev dependency using npm, but when I try to run it I get an error.
A dev dependency does not put a binary in the PATH.
Try running: npx [email protected] start
Thanks @encima but sadly it's not working.
Seems like it may be an npx issue there. You can try installing the CLI from Homebrew, or via a devDependency as you did; the binary will be in node_modules/.bin
I used Homebrew to install supabase, but the problem with Homebrew is that I can't install a specific version of the supabase CLI.
Can you confirm that?
@encima any update? As you said, [email protected] also has the same issue.
@encima which binary should I download for Mac?
@sweatybridge one more bug: the Load More button is not working for postgres and Kong API.
Can you confirm?
supabase_load_more_bug.mov
@Ziinc is this on your radar? I wonder if it's related to log search queries not working on studio (Postgres backend).
@sweatybridge one more bug, for any error alert: the message is undefined, without any detail about what caused the error.
Hi @Akash187 - Do you have a screenshot of the Error alert you are seeing?
Hello @Hallidayo
As I said, instead of the reason, the message I am getting is "undefined".
Example: for the screen below I should have got the message "Function in use, can't delete it", but I got "undefined".
@sweatybridge a new version of the CLI (v1.172.2) has been released; can you confirm if this issue is fixed?
I don't want to lose logs, so I can't test it myself.
Any update?
same error
The only fix for now is to use CLI version 1.170.1.
The same error occurs when I start the whole system with docker compose up. Looking forward to related updates. :)
Any update @encima @sweatybridge @Ziinc
Tested with the latest commit from master for Supabase and all works. The API gateway and Analytics work, but the frontend query for Postgres is wrong; possibly one for @supabase/frontend to check out.
Will check the latest CLI tomorrow.
supabase --version
1.183.5
Doing local dev so I ran supabase start/stop
config.toml:
[analytics]
enabled = true
port = 54327
vector_port = 54328
# Configure one of the supported backends: `postgres`, `bigquery`.
backend = "postgres"
API Gateway and Postgres logs:
I had no issue before updating my Supabase CLI this week.
Now I came across this, running the latest 1.183.5 CLI version and 20240701-05dfbec studio.
It's amazing how these issues are solved and recreated in new versions.
Any update @encima @sweatybridge @Ziinc?
I promised myself I would never update Supabase stuff again... Argh.
Such a waste of time in my development workflow.
Any update on the solution to this problem?
The update is here. This is the latest news and we will post a comment when a fix is released.
Thanks for your patience.
Dear @encima,
Can you please explain why logging works in some versions and then stops working in the next CLI version?
It is frustrating: we should be working on building a product, not worrying about broken logging. Being a developer yourself, I guess you understand the frustration we are going through.
Thank you
Yes, we can understand that. The issue is outlined in the comments of this issue and the Pull Requests to fix this are linked also. Different versions of the CLI pin to different versions of docker images for each Supabase service and bugs may exist in some of them that impact other services running. On top of that, updating older installations to the latest can be quite a change and may sometimes cause things to break.
I would recommend using docker logs alongside Supabase if the local logs are critical for your workflow.
The final issue to be fixed is the query in the frontend that will allow logs to be viewed. Note that logging is not currently broken, only the query, and they can be viewed by clicking the service links in the sidebar.
Thanks again to all for your patience, this is a bigger issue involving multiple teams and the final fix will be released soon. We will post here when that is done.
I appreciate the thorough response. I'm not very familiar with Docker, and I think many others here are in the same boat. Could you recommend some documentation or videos on using "docker logs alongside Supabase"? It would be great if your Devrel team could create a video to address this topic.
Thanks and Cheers
Can you share the links to the PRs associated with this fix?
Hey @Akash187 ,
I will suggest it to the team; what kind of information would you like to see? These are the docs for docker logs
@raman04-byte one PR is linked to this issue, you should be able to see them in the comment thread (i.e. here). You can also check the logflare repo for changes as some initial fixes were made there.
We will comment here when the fix is released with the latest CLI version
@sweatybridge could you help with the docker driver revert?
Getting the same error; also, when I have analytics set to true, the edge functions container immediately stops when serving the edge functions.
@christ-offer can you open a separate issue and provide more details for that, please? That certainly seems like a different bug and would need fixing!
I am still getting
{
"code": 502,
"errors": [],
"message": "Something went wrong! Unknown error. If this continues please contact support.",
"status": "UNKNOWN"
}
Has this been fixed?
Two months, no fixes. Sadly, this feature works and then breaks in the next release.
@encima @sweatybridge please take this issue seriously; being a developer working without any logs can be frustrating.
Also, there is no video to help us check the logs using Docker logging.
Hello all,
Thank you for your patience and your feedback with this.
As far as I am aware, the query itself has been fixed and the logflare image should be updated. I have tested this on a new and an existing project using the latest CLI (1.187.8).
To test:
- Ensure your CLI is up to date (`npx supabase@latest`)
- Ensure no other projects/images related to Supabase are running
- Ensure `analytics` is enabled in `supabase/config.toml`
- Start your project: `npx supabase@latest start`
- Ensure all images are running with `docker ps` and look for `supabase/logflare` and `supabase/vector`
- Open the studio and check the Logs tab
Please try these steps and let us know the results.
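The docker ps check in the steps above can be sketched as follows. The live command is in a comment since it needs Docker, and the container names are assumptions based on the CLI's usual service_project-id naming; the filter itself runs anywhere:

```shell
# Live version (needs Docker):
#   docker ps --format '{{.Names}}' | grep -E 'analytics|vector'
# Demonstration on sample `docker ps` output (names are assumptions):
names='supabase_studio_default
supabase_analytics_default
supabase_vector_default
supabase_db_default'
echo "$names" | grep -Ec 'analytics|vector'
# -> 2 (one analytics container, one vector container)
```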
@Akash187 if you would like to see how to check Docker logs, their documentation is the best resource.
I have tested it using the CLI and it's working fine, but I am not using the CLI for my project, so when will the next update of the repo come?
I have also updated the versions in docker-compose.yml to match those of the CLI, and it is not working.
There is no [analytics] section in the config.toml file either; I think it would still not work, and indeed it is not working.
@raman04-byte Add the lines below in config.toml
and test and confirm if it works.
[analytics]
enabled = true
port = 54327
vector_port = 54328
# Configure one of the supported backends: `postgres`, `bigquery`.
backend = "postgres"
@Akash187 it's not working.
Which .toml file do I have to update? We have something like 4 files named config.toml.
@raman04-byte The docker directory was updated a few hours ago with the new images for the self-hosted version, so this should also be working now. Please pull the latest changes and confirm (no need to update a config.toml file).
@encima you mean analytics is enabled by default and right now there is no need to manually enable it?
@raman04-byte you are required to paste the code into your supabase config.toml file.
I have just pasted the code and I am still getting the same error; it's not working.
This is my docker-compose.yml file:

```yaml
kong:
  container_name: supabase-kong
  image: kong:2.8.1
  restart: unless-stopped
  # https://unix.stackexchange.com/a/294837
  entrypoint: bash -c 'eval "echo \"$$(cat ~/temp.yml)\"" > ~/kong.yml && /docker-entrypoint.sh kong docker-start'
  ports:
    - ${KONG_HTTP_PORT}:8000/tcp
    - ${KONG_HTTPS_PORT}:8443/tcp
  depends_on:
    analytics:
      condition: service_healthy
  environment:
    KONG_DATABASE: "off"
    KONG_DECLARATIVE_CONFIG: /home/kong/kong.yml
    # https://github.com/supabase/cli/issues/14
    KONG_DNS_ORDER: LAST,A,CNAME
    KONG_PLUGINS: request-transformer,cors,key-auth,acl,basic-auth
    KONG_NGINX_PROXY_PROXY_BUFFER_SIZE: 160k
    KONG_NGINX_PROXY_PROXY_BUFFERS: 64 160k
    SUPABASE_ANON_KEY: ${ANON_KEY}
    SUPABASE_SERVICE_KEY: ${SERVICE_ROLE_KEY}
    DASHBOARD_USERNAME: ${DASHBOARD_USERNAME}
    DASHBOARD_PASSWORD: ${DASHBOARD_PASSWORD}
  volumes:
    # https://github.com/supabase/supabase/issues/12661
    - ./volumes/api/kong.yml:/home/kong/temp.yml:ro

auth:
  container_name: supabase-auth
  image: supabase/gotrue:v2.151.0
  depends_on:
    db:
      # Disable this if you are using an external Postgres database
      condition: service_healthy
    analytics:
      condition: service_healthy
  healthcheck:
    test:
      [
        "CMD",
        "wget",
        "--no-verbose",
        "--tries=1",
        "--spider",
        "http://localhost:9999/health"
      ]
    timeout: 5s
    interval: 5s
    retries: 3
  restart: unless-stopped
  environment:
    GOTRUE_API_HOST: 0.0.0.0
    GOTRUE_API_PORT: 9999
    API_EXTERNAL_URL: ${API_EXTERNAL_URL}
    GOTRUE_DB_DRIVER: postgres
    GOTRUE_DB_DATABASE_URL: postgres://supabase_auth_admin:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}
    GOTRUE_SITE_URL: ${SITE_URL}
    GOTRUE_URI_ALLOW_LIST: ${ADDITIONAL_REDIRECT_URLS}
    GOTRUE_DISABLE_SIGNUP: ${DISABLE_SIGNUP}
    GOTRUE_JWT_ADMIN_ROLES: service_role
    GOTRUE_JWT_AUD: authenticated
    GOTRUE_JWT_DEFAULT_GROUP_NAME: authenticated
    GOTRUE_JWT_EXP: ${JWT_EXPIRY}
    GOTRUE_JWT_SECRET: ${JWT_SECRET}
    GOTRUE_EXTERNAL_EMAIL_ENABLED: ${ENABLE_EMAIL_SIGNUP}
    GOTRUE_EXTERNAL_ANONYMOUS_USERS_ENABLED: ${ENABLE_ANONYMOUS_USERS}
    GOTRUE_MAILER_AUTOCONFIRM: ${ENABLE_EMAIL_AUTOCONFIRM}
    # GOTRUE_MAILER_SECURE_EMAIL_CHANGE_ENABLED: true
    # GOTRUE_SMTP_MAX_FREQUENCY: 1s
    GOTRUE_SMTP_ADMIN_EMAIL: ${SMTP_ADMIN_EMAIL}
    GOTRUE_SMTP_HOST: ${SMTP_HOST}
    GOTRUE_SMTP_PORT: ${SMTP_PORT}
    GOTRUE_SMTP_USER: ${SMTP_USER}
    GOTRUE_SMTP_PASS: ${SMTP_PASS}
    GOTRUE_SMTP_SENDER_NAME: ${SMTP_SENDER_NAME}
    GOTRUE_MAILER_URLPATHS_INVITE: ${MAILER_URLPATHS_INVITE}
    GOTRUE_MAILER_URLPATHS_CONFIRMATION: ${MAILER_URLPATHS_CONFIRMATION}
    GOTRUE_MAILER_URLPATHS_RECOVERY: ${MAILER_URLPATHS_RECOVERY}
    GOTRUE_MAILER_URLPATHS_EMAIL_CHANGE: ${MAILER_URLPATHS_EMAIL_CHANGE}
    GOTRUE_EXTERNAL_PHONE_ENABLED: ${ENABLE_PHONE_SIGNUP}
    GOTRUE_SMS_AUTOCONFIRM: ${ENABLE_PHONE_AUTOCONFIRM}
    # Uncomment to enable custom access token hook. You'll need to create a public.custom_access_token_hook function and grant necessary permissions.
    # See: https://supabase.com/docs/guides/auth/auth-hooks#hook-custom-access-token for details
    # GOTRUE_HOOK_CUSTOM_ACCESS_TOKEN_ENABLED="true"
    # GOTRUE_HOOK_CUSTOM_ACCESS_TOKEN_URI="pg-functions://postgres/public/custom_access_token_hook"
    # GOTRUE_HOOK_MFA_VERIFICATION_ATTEMPT_ENABLED="true"
    # GOTRUE_HOOK_MFA_VERIFICATION_ATTEMPT_URI="pg-functions://postgres/public/mfa_verification_attempt"
    # GOTRUE_HOOK_PASSWORD_VERIFICATION_ATTEMPT_ENABLED="true"
    # GOTRUE_HOOK_PASSWORD_VERIFICATION_ATTEMPT_URI="pg-functions://postgres/public/password_verification_attempt"

rest:
  container_name: supabase-rest
  image: postgrest/postgrest:v12.2.0
  depends_on:
    db:
      # Disable this if you are using an external Postgres database
      condition: service_healthy
    analytics:
      condition: service_healthy
  restart: unless-stopped
  environment:
    PGRST_DB_URI: postgres://authenticator:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}
    PGRST_DB_SCHEMAS: ${PGRST_DB_SCHEMAS}
    PGRST_DB_ANON_ROLE: anon
    PGRST_JWT_SECRET: ${JWT_SECRET}
    PGRST_DB_USE_LEGACY_GUCS: "false"
    PGRST_APP_SETTINGS_JWT_SECRET: ${JWT_SECRET}
    PGRST_APP_SETTINGS_JWT_EXP: ${JWT_EXPIRY}
  command: "postgrest"

realtime:
  # This container name looks inconsistent but is correct because realtime constructs tenant id by parsing the subdomain
  container_name: realtime-dev.supabase-realtime
  image: supabase/realtime:v2.29.15
  depends_on:
    db:
      # Disable this if you are using an external Postgres database
      condition: service_healthy
    analytics:
      condition: service_healthy
  healthcheck:
    test:
      [
        "CMD",
        "curl",
        "-sSfL",
        "--head",
        "-o",
        "/dev/null",
        "-H",
        "Authorization: Bearer ${ANON_KEY}",
        "http://localhost:4000/api/tenants/realtime-dev/health"
      ]
    timeout: 5s
    interval: 5s
    retries: 3
  restart: unless-stopped
  environment:
    PORT: 4000
    DB_HOST: ${POSTGRES_HOST}
    DB_PORT: ${POSTGRES_PORT}
    DB_USER: supabase_admin
    DB_PASSWORD: ${POSTGRES_PASSWORD}
    DB_NAME: ${POSTGRES_DB}
    DB_AFTER_CONNECT_QUERY: 'SET search_path TO _realtime'
    DB_ENC_KEY: supabaserealtime
    API_JWT_SECRET: ${JWT_SECRET}
    FLY_ALLOC_ID: fly123
    FLY_APP_NAME: realtime
    SECRET_KEY_BASE: UpNVntn3cDxHJpq99YMc1T1AQgQpc8kfYTuRgBiYa15BLrx8etQoXz3gZv1/u2oq
    ERL_AFLAGS: -proto_dist inet_tcp
    ENABLE_TAILSCALE: "false"
    DNS_NODES: "''"
  command: >
    sh -c "/app/bin/migrate && /app/bin/realtime eval 'Realtime.Release.seeds(Realtime.Repo)' && /app/bin/server"

# To use S3 backed storage: docker compose -f docker-compose.yml -f docker-compose.s3.yml up
storage:
  container_name: supabase-storage
  image: supabase/storage-api:v1.0.6
  depends_on:
    db:
      # Disable this if you are using an external Postgres database
      condition: service_healthy
    rest:
      condition: service_started
    imgproxy:
      condition: service_started
  healthcheck:
    test:
      [
        "CMD",
        "wget",
        "--no-verbose",
        "--tries=1",
        "--spider",
        "http://localhost:5000/status"
      ]
    timeout: 5s
    interval: 5s
    retries: 3
  restart: unless-stopped
  environment:
    ANON_KEY: ${ANON_KEY}
    SERVICE_KEY: ${SERVICE_ROLE_KEY}
    POSTGREST_URL: http://rest:3000
    PGRST_JWT_SECRET: ${JWT_SECRET}
    DATABASE_URL: postgres://supabase_storage_admin:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}
    FILE_SIZE_LIMIT: 52428800
    STORAGE_BACKEND: file
    FILE_STORAGE_BACKEND_PATH: /var/lib/storage
    TENANT_ID: stub
    # TODO: https://github.com/supabase/storage-api/issues/55
    REGION: stub
    GLOBAL_S3_BUCKET: stub
    ENABLE_IMAGE_TRANSFORMATION: "true"
    IMGPROXY_URL: http://imgproxy:5001
  volumes:
    - ./volumes/storage:/var/lib/storage:z

imgproxy:
  container_name: supabase-imgproxy
  image: darthsim/imgproxy:v3.8.0
  healthcheck:
    test: [ "CMD", "imgproxy", "health" ]
    timeout: 5s
    interval: 5s
    retries: 3
  environment:
    IMGPROXY_BIND: ":5001"
    IMGPROXY_LOCAL_FILESYSTEM_ROOT: /
    IMGPROXY_USE_ETAG: "true"
    IMGPROXY_ENABLE_WEBP_DETECTION: ${IMGPROXY_ENABLE_WEBP_DETECTION}
  volumes:
    - ./volumes/storage:/var/lib/storage:z

meta:
  container_name: supabase-meta
  image: supabase/postgres-meta:v0.83.2
  depends_on:
    db:
      # Disable this if you are using an external Postgres database
      condition: service_healthy
    analytics:
      condition: service_healthy
  restart: unless-stopped
  environment:
    PG_META_PORT: 8080
    PG_META_DB_HOST: ${POSTGRES_HOST}
    PG_META_DB_PORT: ${POSTGRES_PORT}
    PG_META_DB_NAME: ${POSTGRES_DB}
    PG_META_DB_USER: supabase_admin
    PG_META_DB_PASSWORD: ${POSTGRES_PASSWORD}

functions:
  container_name: supabase-edge-functions
  image: supabase/edge-runtime:v1.55.0
  restart: unless-stopped
  depends_on:
    analytics:
      condition: service_healthy
  environment:
    JWT_SECRET: ${JWT_SECRET}
    SUPABASE_URL: http://kong:8000
    SUPABASE_ANON_KEY: ${ANON_KEY}
    SUPABASE_SERVICE_ROLE_KEY: ${SERVICE_ROLE_KEY}
    SUPABASE_DB_URL: postgresql://postgres:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}
    # TODO: Allow configuring VERIFY_JWT per function. This PR might help: https://github.com/supabase/cli/pull/786
    VERIFY_JWT: "${FUNCTIONS_VERIFY_JWT}"
  volumes:
    - ./volumes/functions:/home/deno/functions:Z
  command:
    - start
    - --main-service
    - /home/deno/functions/main

analytics:
  container_name: supabase-analytics
  image: supabase/logflare:1.4.0
  healthcheck:
    test: [ "CMD", "curl", "http://localhost:4000/health" ]
    timeout: 5s
    interval: 5s
    retries: 10
  restart: unless-stopped
  depends_on:
    db:
      # Disable this if you are using an external Postgres database
      condition: service_healthy
  # Uncomment to use Big Query backend for analytics
  # volumes:
  #   - type: bind
  #     source: ${PWD}/gcloud.json
  #     target: /opt/app/rel/logflare/bin/gcloud.json
  #     read_only: true
  environment:
    LOGFLARE_NODE_HOST: 127.0.0.1
    DB_USERNAME: supabase_admin
    DB_DATABASE: ${POSTGRES_DB}
    DB_HOSTNAME: ${POSTGRES_HOST}
    DB_PORT: ${POSTGRES_PORT}
    DB_PASSWORD: ${POSTGRES_PASSWORD}
    DB_SCHEMA: _analytics
    LOGFLARE_API_KEY: ${LOGFLARE_API_KEY}
    LOGFLARE_SINGLE_TENANT: true
    LOGFLARE_SUPABASE_MODE: true
    LOGFLARE_MIN_CLUSTER_SIZE: 1
    # Comment variables to use Big Query backend for analytics
    POSTGRES_BACKEND_URL: postgresql://supabase_admin:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}
    POSTGRES_BACKEND_SCHEMA: _analytics
    LOGFLARE_FEATURE_FLAG_OVERRIDE: multibackend=true
    # Uncomment to use Big Query backend for analytics
    # GOOGLE_PROJECT_ID: ${GOOGLE_PROJECT_ID}
    # GOOGLE_PROJECT_NUMBER: ${GOOGLE_PROJECT_NUMBER}
  ports:
    - 4000:4000

# Comment out everything below this point if you are using an external Postgres database
db:
  container_name: supabase-db
  image: supabase/postgres:15.1.1.61
  healthcheck:
    test: pg_isready -U postgres -h localhost
    interval: 5s
    timeout: 5s
    retries: 10
  depends_on:
    vector:
      condition: service_healthy
  command:
    - postgres
    - -c
    - config_file=/etc/postgresql/postgresql.conf
    - -c
    - log_min_messages=fatal # prevents Realtime polling queries from appearing in logs
  restart: unless-stopped
  ports:
    # Pass down internal port because it's set dynamically by other services
    - ${POSTGRES_PORT}:${POSTGRES_PORT}
  environment:
    POSTGRES_HOST: /var/run/postgresql
    PGPORT: ${POSTGRES_PORT}
    POSTGRES_PORT: ${POSTGRES_PORT}
    PGPASSWORD: ${POSTGRES_PASSWORD}
    POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    PGDATABASE: ${POSTGRES_DB}
    POSTGRES_DB: ${POSTGRES_DB}
    JWT_SECRET: ${JWT_SECRET}
    JWT_EXP: ${JWT_EXPIRY}
  volumes:
    - ./volumes/db/realtime.sql:/docker-entrypoint-initdb.d/migrations/99-realtime.sql:Z
    # Must be superuser to create event trigger
    - ./volumes/db/webhooks.sql:/docker-entrypoint-initdb.d/init-scripts/98-webhooks.sql:Z
    # Must be superuser to alter reserved role
    - ./volumes/db/roles.sql:/docker-entrypoint-initdb.d/init-scripts/99-roles.sql:Z
    # Initialize the database settings with JWT_SECRET and JWT_EXP
    - ./volumes/db/jwt.sql:/docker-entrypoint-initdb.d/init-scripts/99-jwt.sql:Z
    # PGDATA directory is persisted between restarts
    - ./volumes/db/data:/var/lib/postgresql/data:Z
    # Changes required for Analytics support
    - ./volumes/db/logs.sql:/docker-entrypoint-initdb.d/migrations/99-logs.sql:Z
    # Use named volume to persist pgsodium decryption key between restarts
    - db-config:/etc/postgresql-custom

vector:
  container_name: supabase-vector
  image: timberio/vector:0.28.1-alpine
  healthcheck:
    test:
      [
        "CMD",
        "wget",
        "--no-verbose",
        "--tries=1",
        "--spider",
        "http://vector:9001/health"
      ]
    timeout: 5s
    interval: 5s
    retries: 3
  volumes:
    - ./volumes/logs/vector.yml:/etc/vector/vector.yml:ro
    - ${DOCKER_SOCKET_LOCATION}:/var/run/docker.sock:ro
  environment:
    LOGFLARE_API_KEY: ${LOGFLARE_API_KEY}
  command: [ "--config", "etc/vector/vector.yml" ]

volumes:
  db-config:
```
No, you are both running Supabase in different ways, so the advice is conflicting.

- @Akash187 For the CLI (local development):
  - use the steps here and update the `config.toml`
- @raman04-byte For self-hosted (from the `docker` folder of the Supabase repo):
  1. Bring your Supabase down (`docker compose down`)
  2. Update to the latest commit (`git pull origin master`)
  3. Remove any cached images (`docker system prune`)
  4. Restart Supabase (`docker compose up`)
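As a sketch, the self-hosted update steps run from the `docker` folder of the supabase checkout look like this (note that `docker system prune` also removes unrelated stopped containers and dangling images, so review its prompt before confirming):

```shell
# Run from the docker/ folder of the supabase repo
docker compose down      # stop and remove the running Supabase containers
git pull origin master   # fetch the latest compose file and image tags
docker system prune      # clear cached images/containers (prompts first)
docker compose up        # recreate the stack (add -d to run detached)
```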
I am testing again with a fresh self-hosted setup by downloading it again.
Thanks for the confirmation.

> No, you are both running Supabase in different ways, so the advice is conflicting.
>
> @Akash187 For the CLI (local development):
> - use the steps here and update the `config.toml`
>
> @raman04-byte For self-hosted (from the `docker` folder of the Supabase repo):
> 1. Bring your Supabase down (`docker compose down`)
> 2. Update to the latest commit (`git pull origin master`)
> 3. Remove any cached images (`docker system prune`)
> 4. Restart Supabase (`docker compose up`)
Thanks @encima, I assumed that local development and self-hosted had the same setup.
Fingers crossed it will not break in the next version.
@encima I am not able to download the latest version using `brew upgrade supabase`; it is stuck at v1.187.3.
@encima `npx` seems to not work on my device. My `npx` version is 10.8.1. Any solution?
@Akash187 Looks like a path problem or similar. Remove/replace `npx`, or install the beta through Homebrew.
Yeah, I installed the beta through Homebrew. But `npx` is working for everything else except `supabase`.
I just created a `next-js` project and it worked as expected.
How come this is a problem only with `supabase` on my device?
Possibly having both installed is impacting the path. If you have `supabase` installed through Homebrew, then you do not need to use `npx` to run it.
@encima
> (beta comes first and then it is pushed to the main homebrew tap)

Hopefully the promotion happens fast; this is critical for debugging.
Don't forget: if you are installing supabase-beta with Homebrew and supabase is already installed, you will need to run `brew unlink supabase && brew link supabase-beta`, and then `brew unlink supabase-beta && brew link supabase` to go back after the main tap updates.
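The Homebrew link/unlink dance above, as a sketch (formula names assume the `supabase/tap` formulae mentioned in this thread):

```shell
# Switch from the stable CLI to the beta build
brew install supabase/tap/supabase-beta
brew unlink supabase && brew link supabase-beta
supabase --version   # should now report the beta version

# Switch back once the fix lands in the main tap
brew unlink supabase-beta && brew link supabase
```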
Yeah, there was a `brew` message to link `supabase-beta`.