nhost / hasura-storage

Storage for Hasura built on top of S3

License: Apache License 2.0

Go 92.23% PLpgSQL 1.74% Shell 1.45% Dockerfile 0.08% Makefile 1.73% Nix 2.76%
hasura s3 storage files nhost upload

hasura-storage's Introduction

Hasura Storage

Hasura Storage is a service that adds a storage layer on top of hasura and any s3-compatible storage service. The goal is to leverage the cloud storage service while also taking advantage of hasura features like its graphql API, permissions, actions, presets, etc.

Workflows

To understand what hasura-storage does, we can look at its two main workflows: uploading and retrieving files.

Uploading files

When a user wants to upload a file, hasura-storage first checks with hasura whether the user is allowed to do so. If so, the file is uploaded to s3 and, on completion, its metadata is stored in hasura.

sequenceDiagram
    actor User
    autonumber
    User->>+hasura-storage: upload file
    hasura-storage->>+hasura: check permissions
    hasura->>-hasura-storage: return if user can upload file
    hasura-storage->>+s3: upload file
    s3->>-hasura-storage: file information
    hasura-storage->>+hasura: file metadata
    hasura->>-hasura-storage: success
    hasura-storage->>-User: file metadata
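
From the client's perspective this workflow is a single multipart POST against the storage API. Below is a minimal Go sketch of such an upload; the base URL is a placeholder, and the form field name ("file[]") and the x-nhost-bucket-id header are assumptions that should be checked against the OpenAPI definition.

package main

import (
	"bytes"
	"fmt"
	"mime/multipart"
	"net/http"
	"os"
)

// uploadFile sketches the client side of the upload workflow above.
// The form field name and bucket header are assumptions; consult the
// OpenAPI definition for the exact contract.
func uploadFile(baseURL, accessToken, bucketID, path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}

	var body bytes.Buffer
	writer := multipart.NewWriter(&body)
	part, err := writer.CreateFormFile("file[]", path)
	if err != nil {
		return err
	}
	if _, err := part.Write(data); err != nil {
		return err
	}
	if err := writer.Close(); err != nil {
		return err
	}

	req, err := http.NewRequest(http.MethodPost, baseURL+"/v1/files", &body)
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", writer.FormDataContentType())
	req.Header.Set("Authorization", "Bearer "+accessToken)
	req.Header.Set("x-nhost-bucket-id", bucketID)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("upload failed with status %d", resp.StatusCode)
	}
	return nil
}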

Retrieving files

Similarly, when retrieving a file, hasura-storage first checks with hasura whether the user has permission to retrieve it. If the user is allowed, the file is forwarded to the user:

sequenceDiagram
    actor User
    autonumber
    User->>+hasura-storage: request file
    hasura-storage->>+hasura: check permissions
    hasura->>-hasura-storage: return if user can access file
    hasura-storage->>+s3: request file
    s3->>-hasura-storage: file
    hasura-storage->>-User: file
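
On the retrieval side the client issues an authorized GET for the file; on-the-fly image manipulation (see the features below) is requested through query parameters such as w, h and q. The following Go sketch assumes a GET /v1/files/{id} endpoint with a w query parameter, as seen in the issues further down; base URL and file id are placeholders.

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadFile sketches the retrieval workflow above: an authorized GET for a
// file, optionally resized on the fly via the w= query parameter.
func downloadFile(baseURL, accessToken, fileID string, width int, dst string) error {
	url := fmt.Sprintf("%s/v1/files/%s", baseURL, fileID)
	if width > 0 {
		url = fmt.Sprintf("%s?w=%d", url, width)
	}
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", "Bearer "+accessToken)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("download failed with status %d", resp.StatusCode)
	}

	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, resp.Body)
	return err
}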

Features

The main features of the service are:

  • leverage hasura's permissions to allow users to upload/retrieve files
  • upload files to any s3-compatible service
  • download files from any s3-compatible service
  • create presigned URLs to grant temporary access
  • caching information to integrate with caches and CDNs (cache headers, etag, conditional headers, etc)
  • perform basic image manipulation on the fly
  • integration with clamav antivirus

Antivirus

Integration with the clamav antivirus relies on an external clamd service. When a file is uploaded, hasura-storage first creates the file metadata and then checks with clamd, via its TCP socket, whether the file is clean. If the file is clean, the rest of the process continues as usual. If a virus is found, details about the virus are added to the virus table and the rest of the process is aborted.

sequenceDiagram
    actor User
    User ->> storage: upload file
    storage ->>clamav: check for virus
    alt virus found
        storage-->s3: abort upload
        storage->>graphql: insert row in virus table
    else virus not found
        storage->>s3: upload
        storage->>graphql: update metadata
    end

This feature can be enabled with the flag --clamav-server string, where string is the TCP address of the clamd service.

OpenAPI

The service comes with an OpenAPI definition which you can also see online.

Using the service

The easiest way to get started is by using nhost's free tier, but if you want to self-host you can easily do that yourself as well.

Self-hosting the service

Requirements:

hasura running, which in turn needs postgres or any other supported database.
  2. An s3-compatible service. For instance, AWS S3, minio, etc...

A fully working example using docker-compose can be found here. Just remember to replace the image hasura-storage:dev with a valid docker image, for instance, nhost/hasura-storage:0.1.5.

Contributing

If you need help or want to contribute, it is recommended to read the contributing information first. In addition, if you plan to contribute code, you are also encouraged to read the development guide.

hasura-storage's People

Contributors

bernix01, bestmazzo, chrissg, dbarrosop, dependabot[bot], elephant3, elitan, github-actions[bot], jhleao, kevinrodriguez-io, nunopato, phishy, sradigan, szilarddoro, tuanalumi


hasura-storage's Issues

Self Hosting instructions

Would love to see instructions/an example for self-hosting a docker container.

I found this docker container:
https://hub.docker.com/r/nhost/hasura-storage

Though the environment variables are different from the ones here:
https://github.com/nhost/hasura-storage/blob/da427a37c97fd026d93c606beffaa572cbbde2ed/build/dev/docker/docker-compose.yaml
Or here:
https://github.com/nhost/hasura-storage/blob/da427a37c97fd026d93c606beffaa572cbbde2ed/hasura-storage.yaml

I've tried using that docker container and included all of the environment variables I've found but I only get logs like this:

2022-02-03T19:08:34.684 app[c7653d88] gru [info]hasura is not ready, retry in 3 seconds
2022-02-03T19:08:36.656 app[47c5ecf9] mia [info]hasura is not ready, retry in 3 seconds
2022-02-03T19:08:37.688 app[c7653d88] gru [info]hasura is not ready, retry in 3 seconds
2022-02-03T19:08:39.660 app[47c5ecf9] mia [info]hasura is not ready, retry in 3 seconds
2022-02-03T19:08:40.691 app[c7653d88] gru [info]hasura is not ready, retry in 3 seconds

I assume it means I have one of these env variables misconfigured:

  GRAPHQL_ENDPOINT = "https://example.com/v1/graphql"
  GRAPHQL_ENGINE_BASE_URL="https://example.com/"
  HASURA_METADATA = "true"
  HASURA_GRAPHQL_ADMIN_SECRET= "some_secret"
  HASURA_METADATA_ADMIN_SECRET = "some_secret"

Or I might be missing something altogether. Also, I want to make sure that the migrations and metadata updates run as well when needed.

Would love to see an example env. I know that it's run locally by the CLI, but I'm not sure how to find out which env variables are used.

Service unable to restart after crashing

Hi. I hope you're doing fine.

After trying to retrieve some files using the nhost.storage.getPublicUrl function, my hasura storage crashed.

(screenshot)

Since then, the server is unable to restart:

  • In log/hasura:
{
  "detail": {
    "http_info": {
      "content_encoding": null,
      "http_version": "HTTP/1.1",
      "ip": "10.110.27.195",
      "method": "POST",
      "status": 400,
      "url": "/v1/metadata"
    },
    "operation": {
      "error": {
        "code": "already-exists",
        "error": "field with name \"bucket\" already exists in table \"storage.files\"",
        "path": "$.args"
      },
      "query": { "type": "pg_create_object_relationship" },
      "request_id": "2803cfb1-7794-4151-8d2a-fb5a8204bcfa",
      "request_mode": "error",
      "response_size": 120,
      "uncompressed_response_size": 120,
      "user_vars": { "x-hasura-role": "admin" }
    },
    "request_id": "2803cfb1-7794-4151-8d2a-fb5a8204bcfa"
  },
  "level": "error",
  "timestamp": "2024-03-21T12:27:48.717+0000",
  "type": "http-log"
}
  • In log/storage:
2024-03-21 13:27:48 | hasura-storage | time="2024-03-21T12:27:48Z" level=info msg="starting server"
2024-03-21 13:27:48 | hasura-storage | time="2024-03-21T12:27:48Z" level=info msg="enabling fastly middleware"
2024-03-21 13:27:48 | hasura-storage | time="2024-03-21T12:27:48Z" level=info msg="applying hasura metadata"
2024-03-21 13:27:48 | hasura-storage | time="2024-03-21T12:27:48Z" level=info msg="applying postgres migrations"
2024-03-21 13:27:48 | hasura-storage | time="2024-03-21T12:27:48Z" level=info msg="Using static aws credentials"
2024-03-21 13:27:48 | hasura-storage | time="2024-03-21T12:27:48Z" level=info msg="storage version 0.6.0"
2024-03-21 13:27:48 | hasura-storage | problem reading : Config File "hasura-storage" Not Found in "[/]"

I didn't change anything in the storage tables, nor in the config files from nhost cli init.

Using storage 0.6.0

problem applying hasura metadata: problem executing request (Client.Timeout exceeded while awaiting headers)

At deep-foundation/hasura#19 we found a problem with nhost/hasura-storage:0.2.3:

The logs keep repeating only these lines:

time="2024-04-28T13:19:24Z" level=info msg="applying hasura metadata"
time="2024-04-28T13:19:34Z" level=error msg="problem applying hasura metadata: problem adding metadata for the buckets table: problem executing request: Post \"http://host.docker.internal:8080/v1/metadata\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
problem reading : Config File "hasura-storage" Not Found in "[/]"
time="2024-04-28T13:19:35Z" level=info msg="storage version 0.2.3"
time="2024-04-28T13:19:35Z" level=debug msg=parameters bind=":8000" debug=true hasura-endpoint="http://host.docker.internal:8080/v1" hasura-metadata=true postgres-migrations=false s3-bucket=default s3-endpoint="http://host.docker.internal:9000" s3-region=no-region s3-root-folder=default trusted-proxies="[]"

Do you have an idea why the timeout might be exceeded? And how to diagnose and fix such a problem?

Image transform sometimes malforms the resulting image

Hi, I noticed that some images get malformed when transforming them with the &w=1080 URL parameter. See this image for example; without any transformation it looks like this:
https://rklttyqwdzlikrmxlvpg.storage.eu-central-1.nhost.run/v1/files/9cb82a28-9119-425f-8e2e-5695b0e92c37
(screenshot)

But when transforming it with w=1080, the image is cut off and repeating (from the looks of it):
https://rklttyqwdzlikrmxlvpg.storage.eu-central-1.nhost.run/v1/files/9cb82a28-9119-425f-8e2e-5695b0e92c37?w=1080
(screenshot)

Any ideas what is going on here?

How to handle secret injection

Hi all,

we are running Nhost v2 in a self-hosted Kubernetes cluster and using a vault that can only inject secrets (S3_ACCESS_KEY, S3_SECRET_KEY, ...) as a file. With all other services (hasura, hasura-auth) this was easy because we can override the entrypoint, run the injected file as a script and then run the original command (example: /bin/sh /secrets/env pnpm run start).

But with Hasura Storage we have some problems. Is there any way to inject secrets (it's only one volume) and use this injected file as a script before starting the Hasura Storage server?

We also saw that there is a "-config" flag, but I can't find any more information about this flag.
Maybe this could be the way to inject a config file with all the secrets?

Additional Information

benchmark service

  • Uploading/Downloading/Resizing
  • Check memory and allocations
  • Large/Small files

Conditional migration issue

Currently, we add a FK to user.id from uploaded_by_user_id if the auth.users table exists. (link)

But I think we run into a race condition here.

If Hasura Storage starts before Hasura Auth, the FK will not be created.

At the same time, it seems like Hasura Storage is trying to add a GraphQL relationship between the columns when its metadata is being applied:

(screenshot)

Causing the Hasura metadata to be out of sync:

(screenshot)

Either both the migration and the metadata need to be conditional, or we need to let Storage wait for Auth before it starts and applies its migration and metadata (without conditions).

v0.5.0 - s3-endpoint breaking change

Hi,

On version 0.5.0, when using AWS S3 and s3-endpoint (or the S3_ENDPOINT env variable) is not defined, every operation (GET/POST file) fails with errors like:

"[problem getting object: operation error S3: GetObject, failed to resolve service endpoint, endpoint rule error, Custom endpoint `` was not a valid URI]"

The fix is to set S3_ENDPOINT to https://s3.[region].amazonaws.com along with S3_BUCKET, S3_REGION, etc.
Note that S3_ENDPOINT wasn't required in 0.4.1 and earlier. I think it should be noted as a breaking change in the changelog.

Allow admin role to create pre-signed URLs with arbitrary expirations

User roles are bound by the default bucket expiration in the storage.buckets table, but admin should be able to generate presigned URLs with longer expirations if needed.

Per Discord discussion, there are some use cases where a longer expiration is useful (e.g. in serverless functions that need to send the URL to external services but can't guarantee an immediate download), and admin should be able to override the default expiration settings when necessary.

Storage.files table should have a metadata column (like auth.users)

Since storage.files is not supposed to be edited in any way other than for permissions/relationships, it would be useful to add a metadata column, like on the auth.users table, for arbitrary data that needs to be attached while uploading and might be needed to process the file. (It looks like the files themselves don't accept object metadata during upload, like most S3 uploads do, so that may be a separate but related issue.)

Problem Executing Query

Hi!

I'm using this project for the first time. I followed this docker compose and I already have the container running; migrations were done and the metadata was applied.

However, I'm getting this error even after setting user permissions on the storage.files table.

time="2022-04-28T08:28:37Z" level=error msg="call completed with some errors" client_ip=172.20.0.1 errors="[problem processing request: problem executing query: Message: field \"bucket\" not found in type: 'query_root', Locations: []]" latency_time=161.356167ms method=POST status_code=403 url=/v1/storage/files

In the http request I am sending the following headers:

  • x-hasura-admin-secret
  • authorization

I believe the permission check is failing; what should I be aware of?

Documentation for using hasura-storage outside of nhost

Hi folks,

I'm trying to get hasura-storage working with a pre-existing hasura install on fly.io. The client is authenticated with clerk.

I've got the hasura-storage service up and running, apparently, but I'm struggling a bit to figure out how to use it standalone from the front end.

Are there any documents I could follow? Alternatively, any guidance you might have would be helpful - happy to contribute some docs when I get it up and running :-)

how to do virus scan on uploaded files

Is it possible to do a virus scan on uploaded files in hasura storage?
All files in our case are images, and I wonder whether it is possible for anyone to inject a virus into the stored files and cause a problem for other users and our server; any user can see other users' images in our use case.
Is it also possible to only allow image uploads in the Hasura storage API?

Forwarding Content-Length and Content-Type breaks upload flow

Hasura Storage normally forwards all incoming headers from the upload request to HGE when performing tasks such as initializing metadata. This causes Content-Length and Content-Type to be forwarded as well, which normally refer to the multipart form data and, more or less, the size of the uploaded file.

When sending these headers over to HGE to perform plain GraphQL operations, they become inconsistent with the actual body of the requests (i.e. a GraphQL JSON request body, not multipart form data and not of that size).

Certain webservers break with this: we were using Hasura Cloud at some point and never had a problem with this inconsistency. But when switching to self-hosted Hasura under Azure App Services, the App Services platform webserver couldn't handle the request and it wouldn't even reach HGE.

I would normally open a pull request with the fix (it's simply filtering out these headers in ForWardHeadersAuthorizer), but because of #182, I'm unable to run and test v0.4.0. We are running a custom v0.3.6 at the moment.
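
As an illustration of the fix described above (not the actual ForWardHeadersAuthorizer code), a minimal Go sketch of stripping the body-specific headers before forwarding could look like this:

package main

import "net/http"

// filterForwardedHeaders returns a copy of the incoming request headers with
// the entries that describe the multipart upload body removed, so they are
// not forwarded to HGE together with a plain JSON GraphQL request. This is
// only a sketch of the idea discussed in this issue.
func filterForwardedHeaders(in http.Header) http.Header {
	out := make(http.Header, len(in))
	for name, values := range in {
		switch http.CanonicalHeaderKey(name) {
		case "Content-Length", "Content-Type":
			continue // body-specific headers must not be forwarded
		default:
			out[name] = append([]string(nil), values...)
		}
	}
	return out
}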

cdn integration shouldn't include surrogate-key for presigned URLs

Presigned URLs are meant to stop working when they expire; by setting the Surrogate-Control header in the response we are breaking this premise.

For now I have a VCL snippet to work around this issue in production, but we should fix hasura-storage instead.

Select permission should not be needed to insert (upload) file

Internally, the storage package should use the insertFiles mutation instead of insertFile. Right now it's not possible to give a role insert permission on storage.files without a matching select permission (use case: I want to enable some users to upload files without viewing the uploaded files).
Because:

The single row mutation insert_one shares the returning type with the query field. Hence if no select permissions are defined, the insert_one field will also be omitted from the GraphQL schema.

Upgrading to v0.4.0 causes application panic

Hi. We were previously running v0.3.6 on self hosted. When attempting to upgrade to v0.4.0 the app panics when handling uploads (POST /v1/files) with the following stack trace:

[Recovery] 2023/11/01 - 02:08:08 panic recovered:

POST /v1/files HTTP/1.1
Host: [REDACTED]
Accept: */*
Client-Ip: [REDACTED]
Content-Length: 311322
Content-Type: multipart/form-data; boundary=X-INSOMNIA-BOUNDARY
Cookie: session=[REDACTED];
Disguised-Host: [REDACTED].azurewebsites.net
Max-Forwards: 10
Traceparent: 00-6dbce7be8b0dda37159d49534c2c4b3d-ba5ca4dac846e382-00
User-Agent: insomnia/2023.2.2
Via: HTTP/2.0 Azure
Was-Default-Hostname: [REDACTED].azurewebsites.net
X-Appservice-Proto: https
X-Arr-Log-Id: 7543c16f-635c-4292-917f-4b0507fe2301
X-Arr-Ssl: 2048|256|CN=Microsoft Azure TLS Issuing CA 02, O=Microsoft Corporation, C=US|CN=*.azurewebsites.net, O=Microsoft Corporation, L=Redmond, S=WA, C=US
X-Azure-Clientip: [REDACTED]
X-Azure-Fdid: [REDACTED]
X-Azure-Ref: [REDACTED]
X-Azure-Requestchainv2: hops=1
X-Azure-Socketip: [REDACTED]
X-Client-Ip: [REDACTED]
X-Client-Port: 48752
X-Forwarded-For: [REDACTED], [REDACTED]
X-Forwarded-Host: [REDACTED]
X-Forwarded-Proto: https
X-Forwarded-Tlsversion: 1.2
X-Original-Url: /v1/files
X-Site-Deployment-Id: [REDACTED]
X-Waws-Unencoded-Url: /v1/files


runtime error: invalid memory address or nil pointer dereference
runtime/panic.go:260 (0x44effc)
runtime/signal_unix.go:841 (0x44efcc)
github.com/nhost/hasura-storage/metadata/hasura.go:19 (0xaeda07)
github.com/nhost/hasura-storage/metadata/hasura.go:143 (0xaeeec7)
github.com/nhost/hasura-storage/controller/upload_file.go:107 (0x96bf2a)
github.com/nhost/hasura-storage/controller/upload_file.go:157 (0x96ca1a)
github.com/nhost/hasura-storage/controller/upload_file.go:293 (0x96dd5a)
github.com/nhost/hasura-storage/controller/upload_file.go:298 (0x96ddef)
github.com/gin-gonic/[email protected]/context.go:174 (0xaf2a37)
github.com/nhost/hasura-storage/middleware/auth/needs_admin.go:23 (0xaf2a09)
github.com/gin-gonic/[email protected]/context.go:174 (0xcab5b4)
github.com/nhost/hasura-storage/cmd/serve.go:51 (0xcab591)
github.com/gin-gonic/[email protected]/context.go:174 (0x93f101)
github.com/gin-gonic/[email protected]/recovery.go:102 (0x93f0ec)
github.com/gin-gonic/[email protected]/context.go:174 (0x93df0a)
github.com/gin-gonic/[email protected]/gin.go:620 (0x93db91)
github.com/gin-gonic/[email protected]/gin.go:576 (0x93d6bc)
net/http/server.go:2936 (0x6f1ff5)
net/http/server.go:1995 (0x6ed511)
runtime/asm_amd64.s:1598 (0x46b0a0)

I have [REDACTED] the confidential pieces of information above. This specific request was triggered from the Insomnia HTTP client for the sake of isolation, but the same happens from Chrome.

We're rolling back to v0.3.6 for now. Feel free to ask any questions that would help troubleshoot.

Thanks.

Generate multiple presigned urls from a single request

Right now I have to first get the file ids of the documents in one request, then make 100s of requests to get presigned URLs for all items.

The ideal solution would be to get the presigned URL in the first request, using some sort of trigger function, something like:

-- FUNCTION: dbname.trigger_media_validation()

-- DROP FUNCTION dbname.trigger_media_validation();

CREATE OR REPLACE FUNCTION dbname.trigger_media_validation()
    RETURNS trigger
    LANGUAGE 'plpgsql'
    COST 100
    VOLATILE NOT LEAKPROOF
AS $BODY$
	DECLARE
		-- Application logic constants
		-- AWS
		_AWS_ID   TEXT DEFAULT 'AKIAWFGHEGTRHGFHGHJN'; -- access key
		_AWS_SECRET   TEXT DEFAULT 'GaL6IdryEXkmDFbfgNfgHfhgfdHgfhngfhFdGdsf'; -- Secret key
		_AWS_REGION   TEXT DEFAULT 'us-east-1'; -- Region
		_AWS_SERVICE   TEXT DEFAULT 's3'; -- Service
		_AWS_REQUEST_TYPE   TEXT DEFAULT 'aws4_request'; -- Request type
		_AWS_BUCKET   TEXT DEFAULT 'app-media'; -- Bucket
		_AWS_DEL_PREFIX TEXT DEFAULT '000delthisitem_'; -- del this item prefix
		_AWS_MIN_SIZE   INT DEFAULT 0; -- Min upload size : 0
		_AWS_MAX_SIZE   INT DEFAULT 10000000; -- Max upload size : 10mb
		_AWS_CONTENT_TYPE_PREFIX   TEXT DEFAULT 'image/'; -- Content type prefix
		_AWS_SUCCESS_ACTION_STATUS TEXT DEFAULT '201';
		_AWS_CONTENT_CACHE   TEXT DEFAULT 'max-age=2592000, no-transform, public'; -- Content cache control
		_AWS_ACL   TEXT DEFAULT 'public-read'; -- Content cache control
		_AWS_ALGO   TEXT DEFAULT 'AWS4-HMAC-SHA256'; -- Algorithem for hashing aws offcial
		_ALGO   TEXT DEFAULT 'sha256'; -- Algorithem for hashing aws
		_EXP INT DEFAULT 3600;
		
		-- Variables		
		s3_credentials		TEXT DEFAULT NULL;		
		s3_policy	TEXT;
		s3_dateKey bytea;
		s3_dateRegionKey bytea;
		s3_dateRegionServiceKey bytea;
		s3_signingKey bytea;
		s3_signature TEXT;
		s3_date		TEXT DEFAULT REPLACE(TO_CHAR(NOW(), 'yyyymmddT HH24MISSZ'),' ', '');
		s3_shortDate	TEXT DEFAULT TO_CHAR(NOW(), 'yyyymmdd');
		s3_expire_at 	TEXT DEFAULT REPLACE(TO_CHAR((NOW() + CONCAT(_EXP, ' seconds')::interval), 'yyyy-mm-ddT HH24:MI:SSZ'),' ', '');
		base64policy TEXT;
		s3_object_key TEXT;
		
		out_json JSONB DEFAULT '{}'::JSONB;
	BEGIN
	
		s3_credentials := _AWS_ID || '/' || s3_shortDate || '/' || _AWS_REGION || '/' || _AWS_SERVICE || '/' || _AWS_REQUEST_TYPE;
		s3_object_key := _AWS_DEL_PREFIX || NEW.mid || '/file';
		s3_policy := ('{' ||
			'"expiration": "'||  s3_expire_at ||'",' ||
			'"conditions": [' ||
				'{"bucket": "'|| _AWS_BUCKET ||'"},' ||
				'{"acl": "'|| _AWS_ACL ||'"},' ||
				'{"key": "'|| s3_object_key || '"},' ||
				'["content-length-range", '||_AWS_MIN_SIZE::TEXT||',  '||_AWS_MAX_SIZE::TEXT||'],' ||
				'["starts-with", "$Content-Type", "'||_AWS_CONTENT_TYPE_PREFIX||'"],' ||
				'{"success_action_status": "201"},' ||
-- 				'{"x-amz-expires": "'||_EXP||'"},' ||
				'{"Cache-Control": "'||_AWS_CONTENT_CACHE||'"},' ||
				'{"x-amz-credential": "'||s3_credentials||'"},' ||
				'{"x-amz-date": "'|| s3_date ||'"},' ||
				'{"x-amz-algorithm": "'||_AWS_ALGO||'"}' ||
        	']' ||
		'}')::JSONB::TEXT;

		base64policy := regexp_replace(encode(CONVERT_TO(s3_policy, 'UTF-8'), 'base64'), E'[\\n\\r]+', '', 'g');
		
		s3_dateKey := hmac(CONVERT_TO(s3_shortDate, 'UTF-8'), CONVERT_TO('AWS4'||_AWS_SECRET, 'UTF-8'), _ALGO);
 		s3_dateRegionKey := hmac(CONVERT_TO(_AWS_REGION, 'UTF-8'), s3_dateKey, _ALGO);
 		s3_dateRegionServiceKey := hmac(CONVERT_TO(_AWS_SERVICE, 'UTF-8'), s3_dateRegionKey, _ALGO);
 		s3_signingKey := hmac(CONVERT_TO(_AWS_REQUEST_TYPE, 'UTF-8'), s3_dateRegionServiceKey, _ALGO);
 		s3_signature := encode(hmac(CONVERT_TO(base64policy, 'UTF-8')::bytea, s3_signingKey, _ALGO), 'hex');
		
-- 		RAISE INFO 'base64policy: %', base64policy;
-- 		RAISE INFO 's3_dateKey: %', s3_dateKey;
-- 		RAISE INFO 's3_dateRegionKey: %', s3_dateRegionKey;
-- 		RAISE INFO 's3_dateRegionServiceKey: %', s3_dateRegionServiceKey;
-- 		RAISE INFO 's3_signingKey: %', s3_signingKey;
-- 		RAISE EXCEPTION 's3_signature: %', s3_signature;
		
		out_json := ('{' ||
			'"Policy": ' || to_json(base64Policy) || ',' ||
			'"Key": "' || s3_object_key || '",' ||
			'"Bucket": "' || _AWS_BUCKET || '",' ||
			'"Acl": "' || _AWS_ACL || '",' ||
			'"Content-Type": "' || _AWS_CONTENT_TYPE_PREFIX || '",' ||
			'"Cache-Control": "' || _AWS_CONTENT_CACHE || '",' ||
			'"success_action_status": "' || _AWS_SUCCESS_ACTION_STATUS || '",' ||
            '"X-amz-credential":  "' || s3_credentials || '",' ||
            '"X-amz-algorithm":  "' || _AWS_ALGO || '",' ||
            '"X-amz-date":  "' || s3_date || '",' ||
            '"X-amz-signature":  "' || s3_signature || '"' ||
		'}')::JSONB;
		
		--out_json := jsonb_set(out_json, '{policy}', FORMAT(E'%s', base64Policy)::JSONB);
		--out_json := jsonb_set(out_json, '{X-amz-date}', FORMAT(E'%s', s3_date)::JSONB);
		
		NEW.s3_signed_data := out_json;
		
		RETURN NEW;
	END;
$BODY$;

ALTER FUNCTION dbname.trigger_media_validation()
    OWNER TO username;

The above SQL trigger is used in a standalone, different app (not an Nhost app); it is used to create pre-signed URLs for upload, not download, but something similar can be done for downloads.

At the very least, I shouldn't have to make 100s of requests to fetch pre-signed URLs when a single request could return multiple pre-signed URLs.

Startup exception in 0.2.x releases

Hello,

I'm trying to run hasura-storage on my Linux server; on docker container startup I get these errors:

runtime/cgo: pthread_create failed: Operation not permitted
SIGABRT: abort
PC=0x7f0d418b3adf m=0 sigcode=18446744073709551610

goroutine 0 [idle]:
runtime: unknown pc 0x7f0d418b3adf
stack: frame={sp:0x7ffe8c4b3ce0, fp:0x0} stack=[0x7ffe8bcb5220,0x7ffe8c4b4260)
0x00007ffe8c4b3be0:  0x0000000000e31649  0x0000000000c63cec 
0x00007ffe8c4b3bf0:  0x0000002200000003  0x000000000046915e <runtime.callCgoMmap+0x000000000000003e> 
0x00007ffe8c4b3c00:  0x00007ffe8c4b3c08  0x00007ffe8c4b3c50 
0x00007ffe8c4b3c10:  0x00007ffe8c4b3c88  0x00007ffe8c4b3c58 
0x00007ffe8c4b3c20:  0x0000000000406c6f <runtime.mmap.func1+0x000000000000004f>  0x00007f0d2c69e000 
0x00007ffe8c4b3c30:  0x0000000000001000  0x0000003200000003 
0x00007ffe8c4b3c40:  0x00000000ffffffff  0x00007f0d2c69e000 
0x00007ffe8c4b3c50:  0x00007ffe8c4b3c98  0x00007ffe8c4b3ca8 
0x00007ffe8c4b3c60:  0x00007ffe8c4b3cb8  0x0000000000400000 
0x00007ffe8c4b3c70:  0x000000c000000000  0x00007ffe8c4b3cc8 
0x00007ffe8c4b3c80:  0x00007f0d418af410  0x000000001c000004 
0x00007ffe8c4b3c90:  0x00007f0d41869100  0x0000000000000000 
0x00007ffe8c4b3ca0:  0x0000000000002031  0x0000000000000017 
0x00007ffe8c4b3cb0:  0x0000000000000000  0x0000001702030000 
0x00007ffe8c4b3cc0:  0x0000000000000007  0x0000000000000170 
0x00007ffe8c4b3cd0:  0x0000000000000160  0x00007f0d418b3ad1 
0x00007ffe8c4b3ce0: <0x0000000001dbe0b8  0x0000000000000190 
0x00007ffe8c4b3cf0:  0xffffffffffffffb0  0x00007f0d4193436d 
0x00007ffe8c4b3d00:  0x00007f0d1a1ec640  0x00007ffe8c4b3fd0 
0x00007ffe8c4b3d10:  0x00007ffe8c4b3e2e  0x0000000000000000 
0x00007ffe8c4b3d20:  0x00007ffe8c4b3e2f  0x00007f0d418b1b75 
0x00007ffe8c4b3d30:  0x000000000042cd6d <runtime.(*pageAlloc).update+0x00000000000003ad>  0x00007f0d1a1ec640 
0x00007ffe8c4b3d40:  0x00000000003d0f00  0x00007f0d1a1ec910 
0x00007ffe8c4b3d50:  0x00007f0d1a1ec910  0x00007f0d1a1ec910 
0x00007ffe8c4b3d60:  0x0000000000000006  0x00007ffe8c4b3fd0 
0x00007ffe8c4b3d70:  0x000000000116de36  0x00007ffe8c4b4090 
0x00007ffe8c4b3d80:  0x00000000016613a0  0x00007f0d41869062 
0x00007ffe8c4b3d90:  0x00007f0d40e7e840  0x00007f0d4185445c 
0x00007ffe8c4b3da0:  0x0000000000000020  0x00007ffe8c4b3fd0 
0x00007ffe8c4b3db0:  0x00007f0d199ec000  0x00007f0d1a1ec640 
0x00007ffe8c4b3dc0:  0x00007ffe8c4b3fd0  0x00007f0d418a9f0d 
0x00007ffe8c4b3dd0:  0x00000000007ffe00 <gopkg.in/yaml%2ev3.yaml_parser_parse_document_start+0x0000000000000520>  0x00007f0d41a184c0 
runtime: unknown pc 0x7f0d418b3adf
stack: frame={sp:0x7ffe8c4b3ce0, fp:0x0} stack=[0x7ffe8bcb5220,0x7ffe8c4b4260)
0x00007ffe8c4b3be0:  0x0000000000e31649  0x0000000000c63cec 
0x00007ffe8c4b3bf0:  0x0000002200000003  0x000000000046915e <runtime.callCgoMmap+0x000000000000003e> 
0x00007ffe8c4b3c00:  0x00007ffe8c4b3c08  0x00007ffe8c4b3c50 
0x00007ffe8c4b3c10:  0x00007ffe8c4b3c88  0x00007ffe8c4b3c58 
0x00007ffe8c4b3c20:  0x0000000000406c6f <runtime.mmap.func1+0x000000000000004f>  0x00007f0d2c69e000 
0x00007ffe8c4b3c30:  0x0000000000001000  0x0000003200000003 
0x00007ffe8c4b3c40:  0x00000000ffffffff  0x00007f0d2c69e000 
0x00007ffe8c4b3c50:  0x00007ffe8c4b3c98  0x00007ffe8c4b3ca8 
0x00007ffe8c4b3c60:  0x00007ffe8c4b3cb8  0x0000000000400000 
0x00007ffe8c4b3c70:  0x000000c000000000  0x00007ffe8c4b3cc8 
0x00007ffe8c4b3c80:  0x00007f0d418af410  0x000000001c000004 
0x00007ffe8c4b3c90:  0x00007f0d41869100  0x0000000000000000 
0x00007ffe8c4b3ca0:  0x0000000000002031  0x0000000000000017 
0x00007ffe8c4b3cb0:  0x0000000000000000  0x0000001702030000 
0x00007ffe8c4b3cc0:  0x0000000000000007  0x0000000000000170 
0x00007ffe8c4b3cd0:  0x0000000000000160  0x00007f0d418b3ad1 
0x00007ffe8c4b3ce0: <0x0000000001dbe0b8  0x0000000000000190 
0x00007ffe8c4b3cf0:  0xffffffffffffffb0  0x00007f0d4193436d 
0x00007ffe8c4b3d00:  0x00007f0d1a1ec640  0x00007ffe8c4b3fd0 
0x00007ffe8c4b3d10:  0x00007ffe8c4b3e2e  0x0000000000000000 
0x00007ffe8c4b3d20:  0x00007ffe8c4b3e2f  0x00007f0d418b1b75 
0x00007ffe8c4b3d30:  0x000000000042cd6d <runtime.(*pageAlloc).update+0x00000000000003ad>  0x00007f0d1a1ec640 
0x00007ffe8c4b3d40:  0x00000000003d0f00  0x00007f0d1a1ec910 
0x00007ffe8c4b3d50:  0x00007f0d1a1ec910  0x00007f0d1a1ec910 
0x00007ffe8c4b3d60:  0x0000000000000006  0x00007ffe8c4b3fd0 
0x00007ffe8c4b3d70:  0x000000000116de36  0x00007ffe8c4b4090 
0x00007ffe8c4b3d80:  0x00000000016613a0  0x00007f0d41869062 
0x00007ffe8c4b3d90:  0x00007f0d40e7e840  0x00007f0d4185445c 
0x00007ffe8c4b3da0:  0x0000000000000020  0x00007ffe8c4b3fd0 
0x00007ffe8c4b3db0:  0x00007f0d199ec000  0x00007f0d1a1ec640 
0x00007ffe8c4b3dc0:  0x00007ffe8c4b3fd0  0x00007f0d418a9f0d 
0x00007ffe8c4b3dd0:  0x00000000007ffe00 <gopkg.in/yaml%2ev3.yaml_parser_parse_document_start+0x0000000000000520>  0x00007f0d41a184c0 

goroutine 1 [running]:
runtime.systemstack_switch()
	runtime/asm_amd64.s:436 fp=0xc000068780 sp=0xc000068778 pc=0x465180
runtime.main()
	runtime/proc.go:170 +0x6d fp=0xc0000687e0 sp=0xc000068780 pc=0x43a0cd
runtime.goexit()
	runtime/asm_amd64.s:1571 +0x1 fp=0xc0000687e8 sp=0xc0000687e0 pc=0x4673a1

rax    0x0
rbx    0x1
rcx    0x7f0d418b3adf
rdx    0x6
rdi    0x1
rsi    0x1
rbp    0x7ffe8c4b3fd0
rsp    0x7ffe8c4b3ce0
r8     0x4
r9     0x7f0d419d1f40
r10    0x8
r11    0x246
r12    0x116de36
r13    0x7ffe8c4b4090
r14    0x6
r15    0x7f0d1a30d15b
rip    0x7f0d418b3adf
rflags 0x246
cs     0x33
fs     0x0
gs     0x0

When I use the 0.1.5 version it works, but it crashes with more recent releases (with the same config/env). I didn't find any change to the params/env in the release notes.
Thanks

Unable to delete file with storage.delete() using authorization header

I am trying to delete a file using storage.delete() from the nhost-js package. I noticed that I couldn't delete the file with the fileId I wanted to delete; I get a 403 "you are not authorized" error, even with an Authorization header containing an access token. So I tried the DELETE method with an HTTP client against the URL "http://localhost:1337/v1/storage/files/fileId", and I'm still getting a 403 error. But when I tried the x-hasura-admin-secret header, I successfully deleted the file.

GET file with authorization header: (screenshot)

DELETE file using authorization header: (screenshot)

DELETE with x-hasura-admin-secret header: (screenshot)

Permission on the files table (this access token has the administrator role): (screenshot)

Nhost CLI I'm using: latest, v0.6.10

File transformation not working properly and in some cases, crashes nhost project temporarily.

I have a ton of image transformations for files in my nhost app, and they have always kept their aspect ratio in the transformation. However, now they do not. If my request ends with '...?w=400', I get a 1:1 aspect ratio image instead of the aspect ratio of the stored file. In some cases where I want a thumbnail and I have something along the lines of '...?q=75&w=50', the image doesn't load at all, even though it did before this change occurred.

Update: I also just found out that if you add a height value (?h=500), it returns an error with "an internal server error occurred". If it also has a width, it won't error though.
If I just add a quality transformation to a file, it will make my hasura dashboard go 503 for a few minutes.

Here is a file you can test this on from my project storage: Test Image

Migrations database name should be configurable

Currently it seems that the name of the target database for migrations is fixed to "default"

Source: "default",

This produces the following error when there is no database named "default":

problem applying hasura metadata: problem adding metadata for the buckets table: status_code: 400\nresponse: {\"error\":\"source with name \\\"default\\\" does not exist\",\"path\":\"$.args\",\"code\":\"not-exists\"}

It should probably be possible to configure it using an environment variable.

go client should return an error if a request fails

Right now the go client only returns an error if there is a client-side error (i.e. if it can't reach the server). If the request is performed successfully but the server returns an error, the client returns the response without indicating that anything went wrong. Ideally we'd return some type of error.
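
A minimal sketch of the behaviour requested above: after a request succeeds at the transport level, treat non-2xx status codes as errors so the caller doesn't have to inspect the response manually. The function name and error shape here are illustrative, not part of the actual go client.

package main

import (
	"fmt"
	"io"
	"net/http"
)

// checkResponse converts a non-2xx HTTP response into a Go error, including a
// short prefix of the response body to help diagnose server-side failures.
func checkResponse(resp *http.Response) error {
	if resp.StatusCode >= 200 && resp.StatusCode < 300 {
		return nil
	}
	body, _ := io.ReadAll(io.LimitReader(resp.Body, 4096))
	return fmt.Errorf("server returned %d: %s", resp.StatusCode, string(body))
}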

Image orientation metadata lost when transformations applied

When I upload an image that has orientation metadata in its EXIF data, the resulting orientation differs depending on whether or not I apply any transformations.

When I don't apply any transformations it looks as it should (proper orientation taken from EXIF).
Example: https://local.storage.nhost.run/v1/files/967213bb-3f6a-4e31-911d-f7e529867865

But when I apply any transformation, the orientation is wrong.
Example: https://local.storage.nhost.run/v1/files/967213bb-3f6a-4e31-911d-f7e529867865?q=95&w=375

Here is an example image to play with:
IMG_3423

Wrong file name when downloading file, always getting 'document' as file name

I'm uploading a file my-filename.pdf with

const { fileMetadata, error: uploadError } = await storage.upload({
  file,
  name: file.name,
  bucketId: `myBucket`,
})

And then I'm retrieving the file as follows:

const getFileUrl = async (): Promise<void> => {
  const { presignedUrl, error } = await storage.getPresignedUrl({
    fileId: data?.profile?.file?.id,
  })

  if (!error) {
    window.open(presignedUrl?.url, '_blank')?.focus()
  }
}

When opening the file in a new window and downloading it, the file name is always document.fileExtension. I'd expect the original file name to be used instead of document.

Add X-Hasura-Role in CORS headers

Please add X-Hasura-Role to the list of allowed CORS headers.

We are working around this problem by placing the service used for file upload and hasura-storage behind an nginx reverse proxy.

I searched for similar issues, but it looks like most other examples of usage either set the x-hasura-admin-secret or set the x-hasura-default-role to the desired role. Our usage of hasura is to get a JWT with a list of roles and a default set, then select the role by setting X-Hasura-Role in the request headers.

discuss UploadedByUserID

Right now we rely on hasura to set this; we may want to decode the JWT and set it directly here instead.
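
A minimal sketch of what decoding the JWT could look like, assuming the standard Hasura claims namespace ("https://hasura.io/jwt/claims") and the github.com/golang-jwt/jwt/v5 library; signature verification is deliberately skipped here and would have to happen elsewhere.

package main

import (
	"fmt"

	"github.com/golang-jwt/jwt/v5"
)

// userIDFromToken extracts x-hasura-user-id from the Hasura claims namespace
// of an access token. This is an illustrative sketch, not the hasura-storage
// implementation; the token is parsed without verifying its signature.
func userIDFromToken(accessToken string) (string, error) {
	claims := jwt.MapClaims{}
	if _, _, err := jwt.NewParser().ParseUnverified(accessToken, claims); err != nil {
		return "", fmt.Errorf("parsing token: %w", err)
	}
	hasuraClaims, ok := claims["https://hasura.io/jwt/claims"].(map[string]any)
	if !ok {
		return "", fmt.Errorf("hasura claims namespace not found")
	}
	userID, ok := hasuraClaims["x-hasura-user-id"].(string)
	if !ok {
		return "", fmt.Errorf("x-hasura-user-id claim not found")
	}
	return userID, nil
}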

CORS headers not set

Hi folks. Love this!

I think I'm coming-up against an issue with CORS headers.

Situation

  • React.js app.
  • nhost is running locally using CLI v0.6.10 (nhost/Hasura Storage:0.1.5)
  • Successfully uploading file (.glb - 3D file format)
  • Load web page that pulls from nhost storage and attempts to render .glb.
  • Errors in console. App blows up.

Detail

  • Tried hot-loading a remote .glb file. Success in app! 😎
  • Downloaded said file, spun-up a small http-server. Success in app! 😎
  • Uploaded said file through my nhost + react.js app. Signed-url. Fail 💩.
  • Uploaded said file through nhost + react.js app. Unsigned-url. Fail 💩.

Notes

  • In all cases above, hitting the URL from my browser directly downloads the file – no 403 or 404 errors
  • Since the issue happens with the same .glb file, it's not a problem with parsing.
  • I see the headers returned from the server are different.

Headers

  • Github returns access-control-allow-origin: *
  • My local http-server returns access-control-allow-origin: *
  • nhost returns no specific CORS headers

I searched the codebase for CORS and found something, but they don't show up locally.

Is it set by a flag?
Is it disabled locally?
Is it disabled in dev mode?

Should I read more about CORS 😭?

Thanks

Manual migrations

Hello. I spent some time understanding everything I needed to do a manual migration (env POSTGRES_MIGRATIONS=0). In my opinion this is useful information for anyone who wants to run the hasura-storage migrations as part of their own project's migrations, and it could be in the readme. A manual migration has these stages:

  1. Creating schema and extensions
  2. hasura-storage/migrations/postgres/*.up.sql
  3. hasura metadata api track_table (files, buckets)
  4. hasura metadata set_table_customization (custom_name, custom_root_fields, custom_column_names)
  5. hasura metadata create relationships (object files.bucket_id -> bucket.id, array buckets.id => files.bucket_id[])
  6. hasura metadata create permissions (select, IUD)

If I had had this list, it would have saved me a day or two. But maybe it can be useful for the community after all =) Maybe expand the "Self-hosting the service" description?
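
As an illustration, stage 3 above boils down to a call to Hasura's metadata API (pg_track_table). A minimal Go sketch follows; the endpoint, admin secret and source name are placeholders to adapt to your own deployment.

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// trackTable sketches stage 3 of the list above: tracking a table through
// Hasura's metadata API using pg_track_table.
func trackTable(hasuraEndpoint, adminSecret, source, schema, table string) error {
	payload := map[string]any{
		"type": "pg_track_table",
		"args": map[string]any{
			"source": source,
			"table":  map[string]string{"schema": schema, "name": table},
		},
	}
	body, err := json.Marshal(payload)
	if err != nil {
		return err
	}
	req, err := http.NewRequest(http.MethodPost, hasuraEndpoint+"/v1/metadata", bytes.NewReader(body))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("X-Hasura-Admin-Secret", adminSecret)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("metadata API returned status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// Example usage with placeholder values for the storage.files table.
	if err := trackTable("http://localhost:8080", "some_secret", "default", "storage", "files"); err != nil {
		fmt.Println(err)
	}
}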

Image transformation fails on some image formats

The image transformation fails on the formats image/jpg, photo/png and photo/jpg, but succeeds on image/png and image/jpeg. When it fails, this is the error thrown:
(screenshot)

In Nhost v1 all image formats were supported, and I hope v2 can also support the same formats.

Support for additional storage providers

It would be great if hasura-storage provided support for additional storage providers.

A library could be used to add these additional providers, or a pluggable/extensible storage approach could be taken, keeping the current S3 implementation (which also covers minio and wasabi) and adding one for gcloud and firebase.

This Go library, https://gocloud.dev/howto/blob/#services, for example supports the following (a minimal usage sketch follows the list):

  • Google Cloud (will work if you are using Firebase also)
  • S3
  • Azure Blob storage
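
A minimal sketch of how such a library abstracts providers behind a single API, using gocloud.dev/blob: the same code writes to S3 or Google Cloud Storage depending only on the bucket URL. Bucket URLs and keys below are placeholders.

package main

import (
	"context"
	"fmt"
	"log"

	"gocloud.dev/blob"
	_ "gocloud.dev/blob/gcsblob" // Google Cloud Storage driver
	_ "gocloud.dev/blob/s3blob"  // S3 driver
)

// writeObject opens a bucket by URL and writes an object to it; the provider
// is selected purely by the URL scheme (s3://, gs://, ...).
func writeObject(ctx context.Context, bucketURL, key string, data []byte) error {
	bucket, err := blob.OpenBucket(ctx, bucketURL)
	if err != nil {
		return fmt.Errorf("opening bucket: %w", err)
	}
	defer bucket.Close()
	return bucket.WriteAll(ctx, key, data, nil)
}

func main() {
	ctx := context.Background()
	// "s3://my-bucket?region=us-east-1" or "gs://my-bucket" would both work here.
	if err := writeObject(ctx, "s3://my-bucket?region=us-east-1", "hello.txt", []byte("hi")); err != nil {
		log.Fatal(err)
	}
}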

Setting `bucket_id` permission causes 500

I'm setting the following permissions:

{
  "bucket_id": {
    "_eq" : "documents"
  }
}

And uploading a file like this:

    const { fileMetadata, error } = await nhost.storage.upload({
      file,
      bucketId: "documents",
    });

The request is sent with the documents header like this:

x-nhost-bucket-id: documents

I've also created a bucket called documents with the same settings as the default bucket (default).

Server logs:

time="2022-06-24T12:00:59Z" level=error msg="call completed with some errors" client_ip=172.17.0.1 errors="[problem processing request: problem initializing file metadata: Message: check constraint of an insert/update permission has failed, Locations: []]" latency_time=18.060584ms method=POST status_code=500 url=/v1/files

How to upload file using <input type=file />?

I use <input type="file" /> to get an image from the user; it returns a File object.
So, I run:

pic=inputFile[0]
nhost.storage.upload({ pic }).then((e) => {
            console.log(e);
        });

It returns an error:

{fileMetadata: null, error: AxiosError}
error: AxiosErrorcode: "ERR_BAD_RESPONSE"config: {transitional: {…}, transformRequest: Array(1), transformResponse: Array(1), timeout: 0, adapter: ƒ, …}message: "Request failed with status code 500"name: "AxiosError"request: XMLHttpRequest {onreadystatechange: null, readyState: 4, timeout: 0, withCredentials: false, upload: XMLHttpRequestUpload, …}response: {data: {…}, status: 500, statusText: '', headers: {…}, config: {…}, …}[[Prototype]]: ErrorfileMetadata: null[[Prototype]]: Object

The same happens for a text file as well.
