georgysavva / scany

Library for scanning data from a database into Go structs and more

License: MIT License

Language: Go (100%)

Topics: database, go, golang, mysql, pgx, postgresql, sql

scany's Introduction

Software engineer with seven years of professional experience.

Author of the open-source library Scany.

scany's People

Contributors

dependabot[bot], georgysavva, jfyne, kmpm, krazik-intuit, mrkagelui, nopcoder, paulforgey, talbse, vadimi, zolstein


scany's Issues

cannot scan into struct - columnToFieldIndex does not get populated with column names

Similar to the example given in the readme, I tried to tailor it to my case, using Postgres (local DB) and the pgx module v4.

Here I tried two versions of the SELECT, with 4 or just 2 columns, which should fit into the FanOutStatus struct.

package pgrds

import (
   "context"
   "fmt"
   "github.com/georgysavva/scany/pgxscan"
   "github.com/jackc/pgx/v4/pgxpool"
   "os"
)

//const GET_FANOUT_STATUS_SQL = "select requestid,requestcount,status,count(*) as totalrequest from srepfanoutstatus where requestid=$1 "+
//                     "group by requestid,requestcount,status;"

const GET_FANOUT_STATUS_SQL = "SELECT requestid,requestcount FROM srepfanoutstatus"

type FanOutStatus struct {
   RequestId string
   RequestCount int
   //status string
   //totalrequest int
}


func GetFanoutRequestStatus(request_id string) []*FanOutStatus {
   // urlExample := "postgres://username:password@localhost:5432/database_name"
   conn, err := pgxpool.Connect(context.Background(), os.Getenv("DATABASE_URL"))
   if err != nil {
      fmt.Fprintf(os.Stderr, "Unable to connect to database: %v\n", err)
      os.Exit(1)
   }

   var records []*FanOutStatus
   //var RequestId string
   //var RequestCount int
   //var Status string
   //var TotalRequest int
   //rows, err := conn.Query(context.Background(), GET_FANOUT_STATUS_SQL, "1234")
   err = pgxscan.Select(context.Background(), conn, &records, GET_FANOUT_STATUS_SQL)
   if err != nil {
      fmt.Fprintf(os.Stderr, "QueryRow failed: %v\n", err)
      os.Exit(1)
   }

   for _, rec := range records {
      fmt.Println(rec)
   }


   return records
}

The error when calling the method is:
QueryRow failed: scany: column: 'requestid': no corresponding field found, or it's unexported in pgrds.FanOutStatus

From debugging, I found that the issue triggers in dbscan.go:

func (rs *RowScanner) scanStruct(structValue reflect.Value) error {
   scans := make([]interface{}, len(rs.columns))
   for i, column := range rs.columns {
      fieldIndex, ok := rs.columnToFieldIndex[column]
...

line 294

rs (the RowScanner) is populated with all the columns, but the columnToFieldIndex map is empty.

type RowScanner struct {
   rows               Rows
   columns            []string
   columnToFieldIndex map[string][]int
   mapElementType     reflect.Type
   started            bool
   start              startScannerFunc
}
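A likely cause, assuming scany's default snake_case mapping (RequestId becomes request_id, RequestCount becomes request_count), is that the database columns requestid and requestcount simply don't match the derived names, so nothing is added to columnToFieldIndex for them. A minimal sketch of a possible fix using explicit db tags:

type FanOutStatus struct {
	RequestId    string `db:"requestid"`
	RequestCount int    `db:"requestcount"`
}

Aliasing the columns in the query (SELECT requestid AS request_id, requestcount AS request_count ...) should achieve the same mapping.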

โ“ Use array_agg

Hey. I have a problem that I described earlier at this link.

I updated my code a bit, and it now looks like the following.

package sqlstore

import (
	"context"
	"fmt"
	"log"
	"os"
	"time"

	"github.com/georgysavva/scany/pgxscan"
	"github.com/jackc/pgx/v4"
)

// Service ...
type Service struct {
	ID   int    `db:"id"`
	Name string `db:"name"`
}

// Price ...
type Price struct {
	ID        int        `db:"id"`
	Value     int        `db:"value"`
	Category  string     `db:"category"`
	Available bool       `db:"available"`
	Services  []Service  `db:"services"`
	CreatedAt time.Time  `db:"created_at"`
	DeletedAt *time.Time `db:"deleted_at"`
}

// Event ...
type Event struct {
	ID          int        `db:"id"`
	Name        string     `db:"name"`
	Description string     `db:"description"`
	Address     string     `db:"address"`
	StartDate   time.Time  `db:"start_date"`
	Duration    int        `db:"duration"`
	Prices      []Price    `db:"prices"`
	CreatedAt   time.Time  `db:"created_at"`
	DeletedAt   *time.Time `db:"deleted_at"`
}

// GetEvents ...
func GetEvents(conn *pgx.Conn) ([]Event, error) {
	var items []Event

	err := pgxscan.Select(
		context.Background(),
		conn,
		&items,
		`
			SELECT 
				e.*,
				array_agg(array[pr.*]) prices
			FROM events e LEFT JOIN
				(select
					p2.*
				from prices p2
				group by p2.id) pr
				ON e.id = pr.event_id
			WHERE e.deleted_at IS NULL GROUP BY e.id ORDER BY e.id DESC
		`,
	)
	if err != nil {
		return nil, err
	}

	return items, nil
}

func main() {

	conn, err := pgx.Connect(context.Background(), os.Getenv("DATABASE_URL"))
	if err != nil {
		log.Fatalln(err)
	}
	defer conn.Close(context.Background())

	events, err := GetEvents(conn)
	if err != nil {
		log.Fatalln(err)
	}
	fmt.Printf("%+v\n", events)

}

The question is: how do I use array_agg? I am unable to decode the array into []Price.

scany: scan row into struct fields: can't scan into dest[20]: unknown oid 2287 cannot be scanned into *[]Price

If possible, please show with this example how to decode an array from Postgres.

Thanks
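One approach that is known to work with scany (compare the "Deep Scan with JSON" issue below) is to aggregate the joined rows as JSON instead of a Postgres array, so pgx can unmarshal the column into the slice field. A sketch, assuming prices has an event_id column and that Price gets matching json tags:

SELECT
	e.*,
	COALESCE(json_agg(pr) FILTER (WHERE pr.id IS NOT NULL), '[]') AS prices
FROM events e
LEFT JOIN prices pr ON pr.event_id = e.id
WHERE e.deleted_at IS NULL
GROUP BY e.id
ORDER BY e.id DESC

With this shape the prices column arrives as json, which pgx decodes into []Price via encoding/json (add json:"..." tags alongside the db tags, since keys like created_at won't match CreatedAt without them).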

Deep Scan with JSON

type User struct {
	Name string
	Customers []Customer
}

type Customer struct {
	Name string
	Type CustomerType
}

type CustomerType struct {
	Id int
	Name string
}

sqlQuery := `
	SELECT 
		u.name AS "name", 
		COALESCE(c.customers, '[]') AS "customers"
	FROM user u
	LEFT JOIN LATERAL (
		SELECT json_agg(
			json_build_object(
				'name', customer.name,
				'type.id', ct.id,
				'type.name', ct.name
			)
		) AS customers
		FROM customer 
		INNER JOIN user_customer uc ON uc.customer_id = customer.id
		INNER JOIN customer_type ct ON ct.id = customer.customer_type
		WHERE u.id = uc.user_id
	) c ON TRUE
`

var users []*User
err := pgxscan.Select(ctx, db, &users, sqlQuery) 

Issue: customer type does not get populated. Only customer name does.
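A guess at the cause: the customers column is decoded by encoding/json, which does not understand the dotted 'type.id' / 'type.name' keys (those only work for scany's own column-to-field mapping). Nesting the object should let CustomerType get populated; a sketch of the changed aggregation:

SELECT json_agg(
	json_build_object(
		'name', customer.name,
		'type', json_build_object(
			'id', ct.id,
			'name', ct.name
		)
	)
) AS customers

encoding/json matches exported field names case-insensitively, so Customer.Type and CustomerType.Id/Name should pick these keys up without extra tags.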

pgxscan for pgx QueryRow (pgx.Row) support

I've read the source for pgxscan, and I haven't found the package supporting scanning pgx.Row (one returned from (p *pgx.Pool) QueryRow(ctx context.Context, sql string, args ...interface{}) pgx.Row).

Do you plan to add this into the library?

For now, I've only used the Scan or ScanRow functions as opposed to the "higher level" functions Select or Get. Reading further into the dbscan package, it seems to me that the various checks (like checking that there isn't more than one row) would make adding this implementation non-trivial. However, having the single-return pgx.Row and needing to check only one error would save a lot of coding.

Thank you in advance.
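As a possible workaround until then (a sketch, not part of the library): pair Query with pgxscan.ScanOne, which gives single-row semantics similar to QueryRow; pool and MyStruct below are assumptions.

rows, err := pool.Query(ctx, "SELECT ... WHERE id = $1", id)
if err != nil {
	return err
}
var dst MyStruct
if err := pgxscan.ScanOne(&dst, rows); err != nil {
	return err
}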

Scanning into a slice

Hi,

The scany documentation states "Apart from structs, support for other destination types: maps, slices, etc." but I have not been able to find anything related to this. I have a SQL procedure which returns an array of UUIDs, and I'm having trouble figuring out how to use scany to scan that result into a slice (or an array, doesn't really matter).

Would you have any pointers to put me on the right path? Note: I use pgx as my postgres driver.

Thanks in advance.
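For what it's worth, scany can scan single-column result sets into a slice of a primitive type; a sketch, assuming the function returns SETOF uuid (one uuid per row) and that my_function is a stand-in name:

var ids []string
err := pgxscan.Select(ctx, db, &ids, `SELECT * FROM my_function()`)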

Scanning postgres array columns into map[string]interface{}

Thanks a lot for this great library!

I'm trying to scan a query where some of the columns are arrays, with the destination being a map[string]interface{}. This works fine for most columns, but the array columns don't turn into slices; instead they become an object with a structure like this:

"myArray": {
    "Elements": ["my first value"],
    "Dimensions": [
      {
        "Length": 1,
        "LowerBound": 1
      }
    ],
    "Status": 2
  }

What I actually expected is:

"myArray": ["my first value"]

Is this intentional? Is there some way to get around this? If I scan into a struct with slice fields, it works fine. But for my use case, a map is needed.
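One post-processing workaround (a sketch, assuming the map values come back as pgx's pgtype array types; the Elements/Dimensions/Status shape above looks like pgtype.TextArray) is to convert them after scanning. Requires github.com/jackc/pgtype.

// flattenTextArrays is a hypothetical helper that replaces pgtype.TextArray
// values in a scanned map with plain []string slices.
func flattenTextArrays(row map[string]interface{}) {
	for key, val := range row {
		if arr, ok := val.(pgtype.TextArray); ok {
			var s []string
			if err := arr.AssignTo(&s); err == nil {
				row[key] = s
			}
		}
	}
}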

Custom enums

(screenshot: DBeaver representation of the custom enum type)

type Film struct {
	ID           string
	Name         string
	ReleaseYear  int
	Genres       []string
	CoverImageId string
}

func GetFilms() ([]*Film, error) {
	ctx := context.Background()
	var films []*Film
	err := pgxscan.Select(ctx, DBConn, &films, `SELECT * FROM film;`)
	if err != nil {
		return films, err
	}
	return films, nil
}

Error:

{
    "error": "scany: scan row into struct fields: can't scan into dest[2]: unknown oid 16397 cannot be scanned into *[]string"
}

The problem seems to be that scany doesn't work with custom enums out of the box. How can I make it work, like in https://github.com/jackc/pgx/blob/master/example_custom_type_test.go?
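One workaround that avoids registering custom types with pgx is to cast the enum array to text[] in the query, so it decodes with a built-in type. A sketch, with the other column names assumed from the struct fields:

err := pgxscan.Select(ctx, DBConn, &films,
	`SELECT id, name, release_year, genres::text[] AS genres, cover_image_id FROM film;`)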

one2many

Hi, is there support for one2many relationships?
e.g.

type User struct {}
type Book struct {
  Authors []User
}

transaction support

Hi! Thank you for your library !
Do you have any plans to support native pgx transactions (like sqlx)?
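For what it's worth, pgx.Tx exposes the same Query method as a pool or connection, so it should already satisfy pgxscan's Querier and can be passed directly; a minimal sketch (pool, User, and the query are assumptions):

tx, err := pool.Begin(ctx)
if err != nil {
	return err
}
defer tx.Rollback(ctx)

var users []User
if err := pgxscan.Select(ctx, tx, &users, `SELECT * FROM users`); err != nil {
	return err
}
return tx.Commit(ctx)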

Feature: use a pool of scans slices

Thanks for the great library.

While looking through this library I noticed that each time RowScanner.Scan was called a new interface slice was created.

Since the row scanner caches column names and field indexes I wanted to see if there could be a benefit to using a pool of slices rather than allocating a new one each scan.

I created a data struct with 1024 columns and some quick benchmarks to my fork of scany here. The benchmark data struct and the benchmarks are in two new files in my fork, bench_data_test.go and bench_test.go, if anyone wants to run the benchmarks for themselves.

Results of benchmarks:

goos: darwin
goarch: amd64
pkg: github.com/georgysavva/scany
BenchmarkStructPool
BenchmarkStructPool-8   	   16312	     84675 ns/op	      44 B/op	       1 allocs/op
BenchmarkStruct
BenchmarkStruct-8       	   13929	     81237 ns/op	   16397 B/op	       1 allocs/op
BenchmarkMapPool
BenchmarkMapPool-8      	    5966	    171132 ns/op	   57429 B/op	    2050 allocs/op
BenchmarkMap
BenchmarkMap-8          	    6478	    171839 ns/op	   73760 B/op	    2050 allocs/op
PASS

Using a pool of slices reduces memory usage by over 16000 B/op when scanning into either a struct or a map. Specifically for a struct, the bytes allocated remain constant even though there are 1024 different columns.

This is a great use of sync.Pool since, due to RowScanner's caching, the allocated slices are of the same length each time Scan is called. I think it would be useful for RowScanner to provide an option to use a pool instead of allocating a new slice.
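A minimal standalone sketch of the idea (not the actual code in the fork): a sync.Pool of []interface{} scan buffers, so repeated scans with the same column count avoid a fresh allocation each time. Requires the standard sync package.

var scansPool = sync.Pool{
	New: func() interface{} {
		s := make([]interface{}, 0, 64) // starting capacity is arbitrary
		return &s
	},
}

// getScans returns a pooled buffer resliced to exactly n elements.
func getScans(n int) (*[]interface{}, []interface{}) {
	sp := scansPool.Get().(*[]interface{})
	if cap(*sp) < n {
		*sp = make([]interface{}, n)
	}
	return sp, (*sp)[:n]
}

// putScans resets the buffer and returns it to the pool.
func putScans(sp *[]interface{}) {
	*sp = (*sp)[:0]
	scansPool.Put(sp)
}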

Support for recursive scanning

Hi, I recently stumbled across this library and was wondering if something like the following is possible.

I have a struct that looks like this

type Comment struct {
	ID       string `db:"id"`
	Text     string `db:"text"`
	ParentID string `db:"parent_id"`
	PostID   string `db:"post_id"`
	Replies  []Comment
}

With a recursive function like so for retrieving data

CREATE OR REPLACE FUNCTION get_comments(parent_comment_id varchar) returns SETOF public.comments AS $$
BEGIN
    RETURN QUERY
    WITH RECURSIVE x AS (
        -- anchor:
        SELECT * FROM comments WHERE post_id = parent_comment_id UNION ALL
        -- recursive:
        SELECT t.* FROM x INNER JOIN comments AS t ON t.parent_id = x.id
    )
    SELECT * FROM x;
END
$$ LANGUAGE plpgsql;

Can scany retrieve the results of something like SELECT * FROM get_comments('id'); into a comments []Comment or would I need to further process the results myself after retrieving them?
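As a sketch of one possible approach (an assumption, not a maintainer answer): scany should be able to scan the flat SETOF rows into []Comment, and the Replies tree can then be assembled in Go afterwards, assuming top-level comments have an empty parent_id:

// buildTree is a hypothetical post-processing helper that nests replies.
func buildTree(flat []Comment) []Comment {
	byParent := make(map[string][]Comment)
	for _, c := range flat {
		byParent[c.ParentID] = append(byParent[c.ParentID], c)
	}
	var attach func(parentID string) []Comment
	attach = func(parentID string) []Comment {
		children := byParent[parentID]
		for i := range children {
			children[i].Replies = attach(children[i].ID)
		}
		return children
	}
	return attach("")
}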

When wrapping a single anonymous structure we lose its column mapping

When you need to wrap a structure anonymously simply to implement some interfaces or custom methods, scany loses the column mapping and throws the error: scany: scan row into struct fields: can't scan into dest[1]: cannot assign ...

type UUID struct {
    pgtype.UUID
}
q := `SELECT id FROM ...`

var rows []struct {
    UUID `db:"id"`
}
if err := pgxscan.Select(ctx, pg.Client(tx), &rows, q); err != nil {
    return nil, err
}

// listing uuids
fmt.Println(rows)

I propose the following fix: #38

FR: config to ignore columns not found in destination type

Hi,
Thanks for building this!

We have an SQL query that's part static (e.g. select * from users) and part dynamic (e.g. multiple joins with user-defined tables, the number of joins is decided in runtime). We have no problem scanning the "dynamic" columns by ourselves, but we hoped to use scany to scan the static part, so for example the following would be great:

type User struct {
    ...
}

type Result struct {
    User
    DynamicStuff []DynamicData
}
...
...
scanner := dbscan.NewRowScanner(...).IgnoreUnknownColumns()
// we use per-row scanning interface so we can do our own scans for the dynamic part.
for rows.Next() {
    var user User
    scanner.Scan(&user)
    // .. our own scanning for the dynamic part ..
    d := scanDynamicStuff(&rows)
    result := Result{user, d}
}

Please let me know if this makes sense - we can do a PR as well.

Type does not implement 'Querier'

I wanted to make a fork to test some extensions like Insert etc., but it seems this build is not working. Passing a pgx connection or pool results in: Type does not implement 'Querier'

Type does not implement 'Querier'
need method: Query(ctx context.Context, sql string, args ...interface{}) (pgx.Rows, error)
have method: Query(ctx context.Context, sql string, args ...interface{}) (pgx.Rows, error)

A bit strange I must admit

map[string]interface{} mapping not working for database/sql

Hi!

I am trying to map a json object from the database to a map[string]interface{}

I am doing this because I want to reuse the struct and there can be an arbitrary number of columns queried from the database.

I managed to get it working with the pgx driver; however, with database/sql I am getting an error that I don't understand:

Scan error on column index 2, name "data": unsupported Scan, storing driver.Value type []uint8 into type *map[string]interface {}

I managed to get all the code in a single file, here it is:

package main

import (
	"context"
	"database/sql"
	"fmt"
	"github.com/georgysavva/scany/pgxscan"
	"github.com/georgysavva/scany/sqlscan"
	"github.com/jackc/pgx/v4"
	_ "github.com/lib/pq"
	"time"
)

type MetricMap struct {
	Time     time.Time
	DeviceId string
	Data     map[string]interface{}
}

func main() {
	fmt.Println("Starting timescale playground")
	ctx := context.Background()
	connStr := "postgres://postgres:postgres@localhost:5430/postgres?sslmode=disable"
	db, err := sql.Open("postgres", connStr)
	if err != nil {
		fmt.Println(err)
		return
	}
	sqlJson := `
		select time, device_id, json_build_object('ph_con_tot', ph_con_tot , 'ph_gen_tot', ph_gen_tot) as "data"
		from metrics_computer_wide
		  WHERE device_id = '9d5eaae0-421b-11ec-9949-7f0fdad2c99c' and ph_con_tot is not null and 
		  time > '2022-04-01' and time <= '2022-04-05' limit 10
		`
	//sqlJson = "SELECT time, device_id, json_build_object('ph_con_tot',ph_con_tot,'ph_gen_tot',ph_gen_tot) as data FROM metrics_base WHERE device_id = '08c210ca-7077-4907-8ea7-a98b77d4df0c' AND time >= '2022-05-02 13:13:56' AND time <= '2022-05-02 13:14:56'"
	var metrics []MetricMap

	err = sqlscan.Select(ctx, db, &metrics, sqlJson)
	if err != nil {
		fmt.Println(err)
	} else {
		for _, metric := range metrics {
			fmt.Printf("%s %s %f %f\n", metric.Time, metric.DeviceId, metric.Data["ph_con_tot"], metric.Data["ph_gen_tot"])
		}
	}

	// Now with PGX Scan
	dbPgx, err := pgx.Connect(ctx, connStr)
	if err != nil {
		fmt.Println(err)
		return
	}
	ctxPgx := context.Background()

	err = pgxscan.Select(ctxPgx, dbPgx, &metrics, sqlJson)
	if err != nil {
		fmt.Println(err)
	} else {
		for _, metric := range metrics {
			fmt.Printf("%s %s %f %f\n", metric.Time, metric.DeviceId, metric.Data["ph_con_tot"], metric.Data["ph_gen_tot"])
		}
	}

}

What am I doing wrong? I tried to debug the problem by going deeper into your library, but I still don't understand where the magic happens.

Thank you very much!
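A common workaround with database/sql (a sketch, not an official scany answer) is to give the map field a type that implements sql.Scanner, so the raw []uint8 JSON returned by lib/pq is unmarshalled explicitly; JSONMap below is a hypothetical helper type.

import (
	"encoding/json"
	"fmt"
)

type JSONMap map[string]interface{}

// Scan implements sql.Scanner by JSON-decoding the raw bytes from the driver.
func (m *JSONMap) Scan(src interface{}) error {
	switch v := src.(type) {
	case []byte:
		return json.Unmarshal(v, m)
	case string:
		return json.Unmarshal([]byte(v), m)
	case nil:
		*m = nil
		return nil
	default:
		return fmt.Errorf("JSONMap: unsupported source type %T", src)
	}
}

// The struct then uses JSONMap instead of map[string]interface{}:
// Data JSONMap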

Scanning into a pointer to string inside struct

Hello, I read through the note in the docs, but I need to implement some columns as nullable fields. Is there any workaround to scan into a pointer to string using pgxscan?

My struct is as follows (shown in an attached screenshot).

It is able to scan into a pointer in a nested struct, which is of a composite type, but it is not able to scan into a plain pointer when the value is nil. Am I missing something?

(error output shown in an attached screenshot)

Nested/Embedded structs don't seem to work correctly

I'm trying to switch from sqlx. I have nested structs for commonly used fields, and these worked with sqlx. You seem to have a function initializeNested(), but it's called after the error, so I'm not sure whether it doesn't work or is meant for something else.

Error: scany: column: 'id': no corresponding field found, or it's unexported in...

type AutoIncrIDField struct {
	ID uint32 `db:"id,primarykey,autoincrement" json:"id,omitempty"`
}
type Another struct {
	AutoIncrIDField
	SomeOtherID uint32 `db:"some_other_id"`
}

I tried to poke around to see where to fix it, but I'm not quite sure. Usually you need to recurse into the child structs.

From another part of my code where I do this

func sqlMockFieldsFromStruct(inputStruct interface{}) []sqlMockField {
	result := []sqlMockField{}
	structValue := reflect.ValueOf(inputStruct)
	structType := reflect.TypeOf(inputStruct)

	for i := 0; i < structType.NumField(); i++ {
		fieldValue := structValue.Field(i)
		if structType.Field(i).Anonymous { // <<-- HERE
			result = append(result, sqlMockFieldsFromStruct(fieldValue.Interface())...)
		} else {
			result = append(result, sqlMockField{
				Field: structType.Field(i),
				Value: fieldValue,
			})
		}
	}
	return result
}

scany thinks it's scanning into a primitive type

i have this code:

type Instance struct {
	Domain       string `sql:"domain"`
	URL          string `sql:"url"`
	ClientID     string `sql:"client_id"`
	ClientSecret string `sql:"client_secret"`
}

func GetInstance(domain string) (i *Instance, err error) {
	err = pgxscan.Get(context.Background(), pool, &i, "select * from instances where domain = $1", domain)
	if errors.Is(err, pgx.ErrNoRows) {
		err = InstanceNotFound
	}
	return
}

I get the error: scany: to scan into a primitive type, columns number must be exactly 1, got: 4
What am I doing wrong here? This error doesn't seem correct (a struct is not a primitive type), and I'm pretty sure I'm using similar code (just with a different struct) elsewhere.
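A likely cause (an educated guess): GetInstance declares i as *Instance and passes &i, so the destination is a **Instance; after one dereference scany sees a pointer rather than a struct and falls back to primitive-type scanning. A sketch of the usual shape:

func GetInstance(domain string) (*Instance, error) {
	var i Instance
	err := pgxscan.Get(context.Background(), pool, &i, "select * from instances where domain = $1", domain)
	if errors.Is(err, pgx.ErrNoRows) {
		return nil, InstanceNotFound
	}
	if err != nil {
		return nil, err
	}
	return &i, nil
}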

Cannot scan NULL into integer fields

If I have a struct with an integer field, and if the value in DB is NULL, scany returns an error like:

scany: scan row into struct fields: can't scan into dest[4]: cannot assign 0 1 into *uint64"

What would be a good workaround here? Is there any way to instruct scany to just put 0 into the field instead?
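Two common workarounds (sketches of general patterns, not scany-specific features): coalesce the NULL away in SQL, or make the field nullable on the Go side; my_count below is a hypothetical column.

type Row struct {
	MyCount uint64 // works if the query selects COALESCE(my_count, 0) AS my_count
	// or:
	// MyCount *uint64       // nil when the column is NULL
	// MyCount sql.NullInt64 // requires database/sql
}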

Support for custom implementations of Scan for models

I'm moving from sqlx to scany because of using pgx as a postgres driver.

I have a custom implementation of sql.Scanner for my model, but it doesn't seem to work with scany.

func (m *MyModel) Scan(data interface{}) error {
    // ...
}

When selecting rows, I get errors like this: scany: column: 'body': no corresponding field found, or it's unexported in model.MyModel. MyModel does not really have a body field, but a custom Scan instead.

Is it possible to add a check for whether models implement some Scanner interface?

Null sub-struct fields

Hello, thank you for making this library, it has helped me a lot!

However, I've stumbled across an edge case I believe. Either that or my design is off.

When I scan into a struct that has a pointer sub-struct, it gives an error if the fields of the sub-struct are null. Just to clarify: the reason I made the sub-struct a pointer is to be able to accept null values, so it's <nil> when any of its values cannot be scanned.

I haven't looked into it to figure out why, but JSON unmarshalling works this way, right?

Take the following example:

err = pgxscan.Select(ctx, db, &users, `
	SELECT
		a."ignore_field",
		a."address_id",
		b."ignore_field" as "address.ignore_field" -- specifically this line.
	FROM "user" a
	LEFT JOIN "address" b ON b."address_id" = a."address_id"
	`)

See, "address" in this case is pointer to struct (*Address). But when it's fields are NULL, this query returns an error: eg cannot scan null into *string.

I found a workaround using json objects, but it's not particularly tasteful:

err = pgxscan.Select(ctx, db, &users, `
	SELECT
		a."ignore_field",
		a."address_id",
		-- using JSON works fine but needs json tag for each struct field.
		json_build_object(
			'ignore_field', b."ignore_field"
		) as "address"
	FROM "user" a
	LEFT JOIN "address" b ON b."address_id" = a."address_id"
	`)

Is there something I missed, or maybe a better design where I can keep the normal select fields without using JSON?

See the full code example below:
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/jackc/pgx/v4/pgxpool"

	"github.com/georgysavva/scany/pgxscan"
)

type User struct {
	IgnoreField string   `db:"ignore_field"`
	AddressID   *uint64  `db:"address_id"`
	Address     *Address `db:"address"`
}

type Address struct {
	IgnoreField string `db:"ignore_field" json:"ignore_field"`
}

func main() {
	var err error
	ctx := context.Background()
	db, err := pgxpool.Connect(ctx, "connection-string")
	if err != nil {
		log.Fatalf("couldn't connect: %s", err)
	}

	_, err = db.Exec(ctx, `
	create table if not exists "address" (
		"address_id" integer generated always as identity primary key,
		"ignore_field" text not null
	);`)
	if err != nil {
		log.Fatalf("couldn't create address: %s", err)
	}

	_, err = db.Exec(ctx, `
	create table if not exists "user" (
		"user_id" integer generated always as identity primary key,
		"ignore_field" text not null,
		"address_id" integer constraint user_address_address_id_fk references "address"
	);`)
	if err != nil {
		log.Fatalf("couldn't create user: %s", err)
	}

	insertData(ctx, db)

	users := make([]User, 0)

	// Select 1: not working.
	err = pgxscan.Select(ctx, db, &users, `
	SELECT
		a."ignore_field",
		a."address_id",
		b."ignore_field" as "address.ignore_field"
	FROM "user" a
	LEFT JOIN "address" b ON b."address_id" = a."address_id"
	`)
	if err != nil {
		log.Fatalf("couldn't select 1: %s", err)
	}

	// Select 2: working but with json object (plus needs json tags on each field).
	err = pgxscan.Select(ctx, db, &users, `
	SELECT
		a."ignore_field",
		a."address_id",
		json_build_object(
			'ignore_field', b."ignore_field"
		) as "address"
	FROM "user" a
	LEFT JOIN "address" b ON b."address_id" = a."address_id"
	`)
	if err != nil {
		log.Fatalf("couldn't select 2: %s", err)
	}

	for _, user := range users {
		fmt.Printf("result %+v\n", user.Address)
	}
}

func insertData(ctx context.Context, db *pgxpool.Pool) {
	_, err := db.Exec(ctx, `
	insert into "address" ("ignore_field")
	values ('test address 1');`)
	if err != nil {
		log.Fatalf("couldn't insert address: %s", err)
	}

	_, err = db.Exec(ctx, `
	insert into "user" ("ignore_field", "address_id")
	values
		('test 1', 1),
		('test 2', NULL);`)
	if err != nil {
		log.Fatalf("couldn't insert user: %s", err)
	}
}

Is there a way to specify a time zone when scanning postgres timestamp without time zone fields?

I have a query which selects one row from the database. This row has date fields in it. When the dates are scanned they end up as UTC dates

I would like to be able to specify that the date is in fact in a different time zone. I don't want to convert the date into my time zone because the date is already in my time zone. I just want to set the date to that time zone.

Thanks.
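One post-scan option (a sketch, not a scany feature) is to re-stamp the scanned time with the desired zone without converting the wall-clock values; the zone name and the rec.CreatedAt field are examples:

loc, err := time.LoadLocation("Europe/Berlin")
if err != nil {
	return err
}
t := rec.CreatedAt // scanned as UTC
rec.CreatedAt = time.Date(t.Year(), t.Month(), t.Day(), t.Hour(), t.Minute(), t.Second(), t.Nanosecond(), loc)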

scanning to nested structs

Hello
I was using sqlx and I moved to pgx with scany.

I have an issue here that I didn't resolve with sqlx either; I hope there is some sort of resolution to this other than manually scanning every row.

I use PostgreSQL 12.4 and I'm executing a function that returns a table with the following:

CREATE FUNCTION ... RETURNS TABLE (
             type text,
              name text,
              based_on text[],
              source text,
              ingredients json[],
              accessories json[],
              glasses json[],
              created_at timestamptz
              )
...

I use sqlgen for GraphQL and it created the following classes:

type SearchResultRow struct {
	Name        string            `json:"name" db:"name"`
	Type        string            `json:"type" db:"type"`
	AddedBy     *string           `json:"added_by" db:"added_by"`
	Source      *string           `json:"source" db:"source"`
	Ratings     *int              `json:"ratings" db:"ratings"`
	BasedOn     pq.StringArray    `json:"based_on" db:"based_on"`
	Ingredients []*IngredientType `json:"ingredients" db:"ingredients"`
	Accessories []*AccessoryType  `json:"accessories" db:"accessories"`
	Glasses     []*GlassType      `json:"glasses" db:"glasses"`
	CreatedAt   string            `json:"created_at" db:"created_at"`
}

type IngredientType struct {
	Name       string         `json:"name" db:"name"`
	Amount     float64        `json:"amount" db:"amount"`
	AmountType pq.StringArray `json:"amount_type" db:"amount_type"`
}

type AccessoryType struct {
	Name       string         `json:"name" db:"name"`
	Amount     float64        `json:"amount" db:"amount"`
	AmountType pq.StringArray `json:"amount_type" db:"amount_type"`
}

type GlassType struct {
	Name       string         `json:"name" db:"name"`
	Amount     float64        `json:"amount" db:"amount"`
	AmountType pq.StringArray `json:"amount_type" db:"amount_type"`
}

I'm querying the database using pgxscan with the following code:

var rows []*model.SearchResultRow
err := pgxscan.Select(context.Background(), Connection, &rows, sqlQuery, query);

and the error that I got is this:
scany: scan row into struct fields: can't scan into dest[4]: unknown oid 199 cannot be scanned into *[]*model.IngredientType

Is there anything I can do to overcome this? I wouldn't mind returning different types from the database, e.g. changing it from a json array to one json object - anything that can help me resolve this.

thank you
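Since the function's return type can change, one option (a sketch, assuming you control the function definition) is to return plain json built with json_agg instead of json[]; pgx should then decode the json column into the slice field with encoding/json, provided the keys match the json tags:

CREATE FUNCTION ... RETURNS TABLE (
    ...
    ingredients json,   -- instead of json[]
    accessories json,
    glasses json,
    ...
)
-- inside the function body, aggregate with json_agg(...) rather than building json[]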

Improve quickstart documentation

Please make clear the use of the db tag on fields within structs in the README doc. It is not at all obvious. E.g.

type Simple struct {
	Email string `db:"Email"`
}

This is required where the name of the column being returned from the database is Email as opposed to email. It took me a long time to debug why dbscan was throwing scany: column: 'Email': no corresponding field found, or it's unexported, when I could clearly see "Email" as a field in my struct and a corresponding "Email" column in my select statement.

I know it's alluded to in the pkg.go.dev docs but it's not obvious even there.

Or maybe I should just do a PR?

Project depending on incorrect version of sqlite3 driver

Awesome project!

Unfortunately I'm restricted in which sqlite features I can use when introducing scany into my project.

migration.sql

CREATE TABLE processes(
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  status TEXT CHECK( status IN ('RUNNING','FINISHED','FAILED') )
)

main.go

package main

import (
	"context"
	"database/sql"
	_ "embed"
	"fmt"
	"os"

	"github.com/georgysavva/scany/sqlscan"
	_ "github.com/mattn/go-sqlite3"
	log "github.com/sirupsen/logrus"
)

//go:embed migration.sql
var migration string

type Process struct {
	Pid    int
	Status string
}

func main() {
	os.Create("./test.db")
	conn, err := sql.Open("sqlite3", "./test.db")

	if err != nil {
		log.WithError(err).Fatal("failed to establish connection")
	}

	if _, err := conn.Exec(migration); err != nil {
		log.WithError(err).Fatal("failed to run migration")
	}

	var process Process
	if err := sqlscan.Get(context.TODO(), conn, &process, "INSERT INTO processes(status) VALUES($1) RETURNING *", "RUNNING"); err != nil {
		log.WithError(err).Fatal("failed to scan")
	}

  fmt.Println(process)
}

go mod file here would look like

module example

go 1.16

require (
	github.com/georgysavva/scany v0.2.9
	github.com/mattn/go-sqlite3 v2.0.1+incompatible
	github.com/sirupsen/logrus v1.8.1
)

Running this I get a scany: query one result row: near \"RETURNING\": syntax error error, which is complaining about the RETURNING syntax in sqlite.

I tried to fix this by upgrading the go-sqlite3 driver to v1.14, which in turn supports sqlite v3.35.0, via go get...

And it just seems to remove scany altogether :/

 temp/will-remove % go get github.com/mattn/go-sqlite3@v1.14.8
go get: removed github.com/georgysavva/scany v0.2.9
go get: downgraded github.com/jinzhu/gorm v1.9.12 => v1.9.11
go get: downgraded github.com/mattn/go-sqlite3 v2.0.1+incompatible => v1.14.8

I'm sure it's because of the sentence right below https://github.com/mattn/go-sqlite3, but I want to track this down so I can use scany over sqlite3 with the RETURNING syntax.

Pgxscan uses a lot of memory

My app downloads DB rows to a CSV file.
I tried to use pprof to understand why my app uses a lot of memory.
I see this:

(pprof screenshot attached)

I tried to compare with a similar package for MySQL - github.com/blockloop/scan.
And here I have this result:

(pprof screenshot attached)

It uses about half as much memory.
Please tell me how I can optimize my app to use less memory when working with this package?
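One direction worth trying (a sketch, not a maintainer answer): pgxscan.Select and ScanAll materialize every row in memory at once, while a CSV export can stream rows one at a time with pgxscan.ScanRow and write each record immediately; pool, query, MyRow and writeCSVLine are assumptions.

rows, err := pool.Query(ctx, query)
if err != nil {
	return err
}
defer rows.Close()

var rec MyRow
for rows.Next() {
	if err := pgxscan.ScanRow(&rec, rows); err != nil {
		return err
	}
	writeCSVLine(w, rec) // write the row out instead of keeping it in memory
}
return rows.Err()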

.Get()

I have noticed that the .Get() method of scany and sqlx behaves differently when you try to scan a select now() into a var t time.Time. scany's Get() treats the &t as a struct with missing fields, while sqlx's Get() does write the value into the variable. Is this "by design"? I noticed this when switching from sqlx to scany.

Unable to use a function in my query

Hi,

I'm trying to execute the following query: "SELECT * FROM my_function()".
I run this using the pgx conn.Query() method, which returns rows that I then attempt to scan with pgxscan.ScanAll(), and I get the following error: scany: column: 'my_function': no corresponding field found, or it's unexported.

I'm a bit confused as to why this is happening, what am I doing wrong? Thanks.

Scanning into a recursive struct

Hi,

I'd like to scan into a recursive struct:

type Node struct {
	ID       string
	Parent   *Node
}
var nodes []*Node
err := sqlscan.Select(ctx, conns.PgClient, &nodes, `select id, null as parent from nodes where parent_id is null`)

This causes an infinite loop as scany introspects the struct recursively.

Any tips for achieving that? I might be missing something obvious since I'm new to Go and scany. Ideally I'd like not to touch the Node struct definition since in my real-life code it is auto-generated. Also I'd like to avoid using db:"-" on Parent since I might want to get the immediate parent with a join.

Thanks!

Any examples showing how to handle NULL db entries?

When running the Get() or ScanRow() methods of the pgxscan library, if any of the fields are NULL in the database an error is returned. Is there a way to scan while accounting for possible NULL values?

Thanks
Chris
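For reference, the usual patterns (a sketch of common options, not a scany-specific feature) are pointer fields, database/sql null types, or pgtype's null-aware types; the table and column names are examples, and the database/sql and github.com/jackc/pgtype imports are assumed.

type Account struct {
	Name    *string       // nil when the column is NULL
	Age     sql.NullInt64 // .Valid reports whether the column was non-NULL
	Comment pgtype.Text   // pgx's null-aware text type
}

var accounts []Account
err := pgxscan.Select(ctx, db, &accounts, `SELECT name, age, comment FROM accounts`)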

Functions are not supported in queries

I am performing the following query using pgx:

err := pgxscan.Select(context.Background(), db.pool, &things, `SELECT ST_AsBinary("coordinate") FROM "thing"`)

After executing it, I am getting the following error:

column: 'st_asbinary': no corresponding field found, or it's unexported in main.SafePlayer

It seems that functions are not supported in queries. Is it possible to add this functionality?
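The error suggests the result column is simply named after the function, so it never matches a struct field. Aliasing it usually restores the mapping; a sketch, assuming SafePlayer has a field that maps to coordinate and can accept the returned bytes:

err := pgxscan.Select(context.Background(), db.pool, &things,
	`SELECT ST_AsBinary("coordinate") AS "coordinate" FROM "thing"`)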

Scan to struct even if it does not contain all columns

Is your feature request related to a problem? Please describe.
The problem is the no corresponding field found error. Columns that are not yet present in the struct should be OK, because migrations come first and code comes second; a crash between the two steps is unacceptable.

Describe the solution you'd like
I want an option to skip columns that don't have corresponding fields in the destination struct.

Scanning one column into pgtype

How do I scan one column (and one or multiple rows) into a pgx type?

I'm getting errors like scany: scan row into struct fields: can't scan into dest[0]: cannot assign &{[some bytes ...] 2} into *[]pgtype.UUID

All my attempts ended in failure:

-- sql : SELECT array_agg(uuid) FROM ...
var ids pgtype.UUIDArray
if err := pgxscan.Get(ctx, pg.Client(tx), &ids, sql, args...); err != nil {
    return nil, err
}

------ or

-- sql : SELECT uuid FROM ...
var ids []pgtype.UUID
if err := pgxscan.Select(ctx, pg.Client(tx), &ids, sql, args...); err != nil {
    return nil, err
}

Can't catch `context.Canceled` error

I usually use fmt.Errorf and the %w wrap directive, which allows me to check whether a context-compliant method has exited due to a cancelled context via err == context.Canceled. It seems I can't do this with scany.

Reproducing minimal example:

package main

import (
	"context"
	"database/sql"
	"fmt"
	"os"

	"github.com/georgysavva/scany/sqlscan"

	_ "github.com/mattn/go-sqlite3" // sqlite driver
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	cancel()
	if _, err := os.Create("test.db"); err != nil {
		panic(err)
	}

	conn, err := sql.Open("sqlite3", "./data.db")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	if _, err := conn.Exec("CREATE TABLE bla(blu int)"); err != nil {
		panic(err)
	}

	bla := struct{ Blu int }{}
	if err := sqlscan.Get(ctx, conn, &bla, "INSERT INTO bla(blu) VALUES(1) RETURNING *", 1); err != nil && err != context.Canceled {
		panic(err)
	}
	fmt.Print("should reach here...")
}

accompanied go.mod:

module reproduce

go 1.16

require (
	github.com/georgysavva/scany v0.2.9
	github.com/mattn/go-sqlite3 v2.0.1+incompatible
)

Upon running this I get:

 Documents/cancelledContext % go run .
panic: scany: query one result row: context canceled

goroutine 1 [running]:
main.main()
	~/Documents/cancelledContext/main.go:30 +0x385
exit status 2
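A likely explanation (my reading, not a maintainer answer): scany wraps the underlying error, so a direct == comparison fails; if the wrapping chain supports Unwrap, checking with errors.Is should still work (requires the standard errors package):

if err := sqlscan.Get(ctx, conn, &bla, "INSERT INTO bla(blu) VALUES(1) RETURNING *", 1); err != nil && !errors.Is(err, context.Canceled) {
	panic(err)
}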

getColumnToFieldIndexMap mishandles index paths >= 3 deep

The field path index built with

index := append(traversal.IndexPrefix, field.Index...)

is not creating unique index path copies. It happens to copy when the capacity of the slice is 1 or 2 and one more element is added, but not once you get to 3, where the capacity doubles again to 4. This means append() will quit making copies at that point, and subsequently assigned fields at this level will all point to the last entry.

You can reproduce this by creating a struct like

type Level1 struct {
    Level2
}

type Level2 struct {
    Level3
}

type Level3 struct {
    Level4
}

type Level4 struct {
    One string
    Two string
    Three string
    Four string
}

The resulting map will have

"one": [0,0,0,3],
"two": [0,0,0,3],
// and so on
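The usual fix for this kind of append aliasing is to build a fresh slice for each field path instead of appending to the shared prefix; a sketch:

// Copy the prefix into a new slice so each field gets its own index path.
index := make([]int, 0, len(traversal.IndexPrefix)+len(field.Index))
index = append(index, traversal.IndexPrefix...)
index = append(index, field.Index...)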

Confusing error message for NULLable fields

Hello!

When (wrongly) trying to scan a NULL value into a non-pointer type, pgxscan gives me an error like so:
can't scan into dest[18]: cannot assign NULL to *float64

The context for the call:

type MyStruct struct {
	...
	ProblemField       float64   // 18th field
	...
}

// Called with address of a MyStruct
func GetMyStruct(ctx context.Context, dest interface{}, someID uint32) error {
	err := pgxscan.Get(ctx, db, dest,
		sqlQuery, // Query has an outer join, hence the NULLable fields
		someID,
	)
	return err
}

I believe the error message should read can't scan into dest[18]: cannot assign NULL to float64 instead since the issue would be solved by having a pointer type. (The same type of error message happens in #36.)

Thanks for taking a look!

Thank you for this library

Thank you so much for building this library, it saved me a lot of time. You rock!

Does it support omitting certain properties from a struct?

Does scany support "SELECT * <...>" queries?

Documentation says the following:

In terms of scanning and mapping abilities, scany provides all features of sqlx

However, I can't find any example of using scany with "select *" queries like in the case of sqlx:

    // Query the database, storing results in a []Person (wrapped in []interface{})
    people := []Person{}
    db.Select(&people, "SELECT * FROM person ORDER BY first_name ASC")
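For comparison, the equivalent call with scany's pgxscan looks like this (sqlscan has the same shape for database/sql):

var people []Person
err := pgxscan.Select(ctx, db, &people, "SELECT * FROM person ORDER BY first_name ASC")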

hybrid mapping struct / map[string]float

Hi,

First of all, thank you very much for contributing this library to the community. It seems to be quite easy to use.

I am trying to get arbitrary data out from a timescale database and I would like to mix structs and map[string] scanning.

Given those structs:

type Data struct {
	PhConTot float64
	PhGenTot float64
}

type Metric struct {
	Time     time.Time
	DeviceId string
	Data     Data
}

I am correctly scanning data with

	var metrics []*Metric
	if err := pgxscan.Select(ctx, conn, &metrics, `
		select time, device_id, ph_con_tot as "data.ph_con_tot", ph_gen_tot as "data.ph_gen_tot"
		from metrics_computer_wide
		  WHERE device_id = '9d5eaae0-421b-11ec-9949-7f0fdad2c99c' and ph_con_tot is not null and 
		  time > '2022-04-01' and time <= '2022-04-05'
		`); err != nil {
		fmt.Fprintf(os.Stderr, "Unable to query database: %v\n", err)
	}

But since Data fields can be a huge list of options, I would like it to be a map[string]float64. For example

type MetricMap struct {
	Time     time.Time
	DeviceId string
	Data     map[string]float64
}

And querying like

	var metricsMap []*MetricMap
	if err := pgxscan.Select(ctx, conn, &metricsMap, `
		select time, device_id, ph_con_tot as "data.ph_con_tot", ph_gen_tot as "data.ph_gen_tot"
		from metrics_computer_wide
		  WHERE device_id = '9d5eaae0-421b-11ec-9949-7f0fdad2c99c' and ph_con_tot is not null and 
		  time > '2022-04-01' and time <= '2022-04-05' limit 10
		`); err != nil {
		fmt.Fprintf(os.Stderr, "Unable to query database: %v\n", err)
	}

Then I am getting

Unable to query database: scany: column: 'data.ph_con_tot': no corresponding field found, or it's unexported in main.MetricMap

Is there any way to mix structs and map[string]? Or can I extend the scanner?

Thank you very much!

Add scanning for a single row

Hi, thanks for creating this library, it's very useful!

However, I think it'd be great to add a function (or change behaviour of ScanOne) so we can pass the row type. Right now, when implementing scany, all code that uses QueryRow has to switch to using Query because none of the functions support scanning sql's row, only rows.

Feature: cache the columnToFieldIndex map for each struct type

Thanks for your library.

Because you can't update a struct dynamically in Go, when scanning a row into a struct there isn't much reason to create a columnToFieldIndex map for the same struct more than once.

I created a test struct with 1024 fields and some quick benchmarks to see if the cache helps; you can see the benchmarks in my fork of scany here.

To implement a cache I only changed the getColumnToFieldIndexMap function, which now attempts to get a column-to-field index map from the cache and returns it if it exists. The cache can be implemented using either a map[reflect.Type] with a sync.RWMutex or a sync.Map. I tested both in the benchmarks.

The BenchmarkStruct functions reuse the same row scanner for each iteration. The BenchmarkScannerStruct functions create a new row scanner for each iteration. The benchmarks that end in MapCache use a map[reflect.Type] with a sync.RWMutex, while the SyncMapCache benchmarks use a sync.Map to store the column to field index maps. The results of the benchmarks:

goos: darwin
goarch: amd64
pkg: github.com/georgysavva/scany
BenchmarkStruct
BenchmarkStruct-8                       	   14983	     78204 ns/op	   16396 B/op	       1 allocs/op
BenchmarkStruct_MapCache
BenchmarkStruct_MapCache-8              	   14470	     88521 ns/op	   16397 B/op	       1 allocs/op
BenchmarkStruct_SyncMapCache
BenchmarkStruct_SyncMapCache-8          	   14360	     80351 ns/op	   16397 B/op	       1 allocs/op
BenchmarkScannerStruct
BenchmarkScannerStruct-8                	    2630	    462315 ns/op	  188686 B/op	    3081 allocs/op
BenchmarkScannerStruct_MapCache
BenchmarkScannerStruct_MapCache-8       	    7016	    147001 ns/op	   57477 B/op	       4 allocs/op
BenchmarkScannerStruct_SyncMapCache
BenchmarkScannerStruct_SyncMapCache-8   	    7268	    149239 ns/op	   57476 B/op	       4 allocs/op
BenchmarkMap
BenchmarkMap-8                          	    5004	    246030 ns/op	  114842 B/op	    2054 allocs/op
PASS

When reusing a row scanner there isn't much difference in performance when using a cache. The real benefit happens when you create a new row scanner each iteration: the allocs drop from 3081 to only 4, and both the bytes/op and ns/op drop by more than a factor of three.

I think this would be a useful feature to add: even though having 1024 fields in a struct is pretty extreme, the benchmarks show that getColumnToFieldIndexMap creates about 3 allocs per struct field, which can be avoided after the first call with the same struct type, without much overhead.
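A minimal sketch of the sync.Map variant of the idea (not the exact code in the fork; the getColumnToFieldIndexMap signature is assumed from the existing dbscan code):

var columnToFieldIndexCache sync.Map // reflect.Type -> map[string][]int

func cachedColumnToFieldIndexMap(structType reflect.Type) map[string][]int {
	if cached, ok := columnToFieldIndexCache.Load(structType); ok {
		return cached.(map[string][]int)
	}
	m := getColumnToFieldIndexMap(structType) // existing scany function
	columnToFieldIndexCache.Store(structType, m)
	return m
}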

Missing support for sql.Null types with aggregate functions like SUM

I just tried plugging in this library for my use case with pgx; for the most part, it works great. One bug I have run into is for a code snippet like this

	var mySum sql.NullInt64
	query := "SELECT SUM(my_column) FROM my_table"
	err := pgxscan.Get(ctx, p.pool, &mySum, query)
	if err != nil {
		logger.Errorw("Failed to fetch sum", "err", err)
	}

In this case, I would expect the SUM() aggregator to be parsed and then populate the sql.NullInt64 type. However, this query produces the error: scany: column: 'sum': no corresponding field found, or it's unexported in sql.NullInt64.

I noticed that by changing the declaration to var mySum int64, this appeared to solve the problem.

I am guessing that the bug might be that the sql.Null* types do not populate the struct correctly when aggregate functions are at play, so I would expect those to be populated appropriately rather than throwing an error.

Seeking help troubleshooting a JOIN / nested struct problem

Hi there,

I'm following along with the instructions at https://pkg.go.dev/github.com/georgysavva/scany/dbscan#hdr-Reusing_structs for JOINs / nested structs, and I'm struggling. I'm not getting any errors... I'm just not getting any columns mapped at all.

I have confirmed that with a single table query for each type, they do get mapped properly... it's when trying to join them together that it does not work.

Here are my types:

type IntegrationType struct {
	Id          int    `db:"id"`
	Name        string `db:"integration_type_name"`
	Description string `db:"description"`
}

type Integration struct {
	IntegrationType IntegrationType `db:"rit"`
	Id              int             `db:"id"`
	Name            string          `db:"integration_name"`
	LicenseId       int             `db:"license_id"`

}

And here's the query:

select integrations.id as id, integration_name, license_id, 
	      rit.id, rit.integration_type_name, rit.description
	from integrations
	join ref_integration_types rit on integrations.integration_type_id = rit.id
	where integrations.id = $1

I'd appreciate any help. Thanks!
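One thing that may explain this (a guess based on the linked docs): Postgres returns the joined columns with their plain names (id, integration_type_name, description), while the nested struct tagged db:"rit" expects columns named rit.id and so on, so nothing matches. Aliasing the joined columns should restore the mapping:

select integrations.id as id, integration_name, license_id,
       rit.id as "rit.id",
       rit.integration_type_name as "rit.integration_type_name",
       rit.description as "rit.description"
from integrations
join ref_integration_types rit on integrations.integration_type_id = rit.id
where integrations.id = $1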

pgx module needs to be updated

The version of pgx used uses an older version of pgtype which does not support nullable pointers to custom types. Later versions of pgtype >= 1.4.0 support this.

Strict correspondence between structure fields and columns from SELECT

The current implementation requires strict correspondence between the columns from the SELECT and the fields of the structure. Thus, additional columns cannot be used if they are not present in the target structure.

scany/dbscan/dbscan.go

Lines 298 to 303 in e037f94

if !ok {
	return errors.Errorf(
		"scany: column: '%s': no corresponding field found, or it's unexported in %v",
		column, structValue.Type(),
	)
}

Would you like to relax this rule? For example, this is useful when additional WHERE columns are needed, but they are not needed in the final structure.

Error handling

Hi thanks for the library.

Can you explain the best way to handle a pgx.ErrNoRows? It appears I can't just do something like

if err := pgxscan.Select(ctx, db, &dest, "select * from whatever"); err != nil {
    // note this is the standard import "errors"
    if errors.Is(err, pgx.ErrNoRows) {
        return fmt.Errorf("no results: %w", err)
    }
    return err
}

Does this mean I have to use "pkg/errors" to find the cause?

Thanks
