zippoxer / bow
Bow - Minimal embedded database powered by Badger
License: MIT License
Do you work on, or plan, a protocol buffer codec?
../../../github.com/zippoxer/bow/db.go:308:9: cannot assign 1 values to 2 variables
../../../github.com/zippoxer/bow/db.go:308:22: not enough arguments in call to item.Value
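For context, these compile errors match Badger's Item.Value signature change: older releases returned ([]byte, error), while newer ones take a callback (with ValueCopy as a copying alternative). A stand-in sketch of the two shapes as I understand them — this is not Badger's actual code, just types mirroring the signatures:

```go
package main

import "fmt"

// Item is a stand-in mirroring Badger's newer Item API (hypothetical type).
type Item struct{ val []byte }

// Value now takes a callback; the value is only valid inside it. This is why
// `data, err := item.Value()` stops compiling with the two errors above.
func (it *Item) Value(fn func(val []byte) error) error {
	return fn(it.val)
}

// ValueCopy-style alternative that returns a safe copy of the value.
func (it *Item) ValueCopy(dst []byte) ([]byte, error) {
	return append(dst[:0], it.val...), nil
}

func main() {
	it := &Item{val: []byte("hello")}
	var got []byte
	err := it.Value(func(v []byte) error {
		got = append(got[:0], v...) // copy out; v may be reused after return
		return nil
	})
	fmt.Println(string(got), err) // hello <nil>
}
```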
Hi,
Are there any plans to add TTL support?
When I am trying to open the database in read-only mode, like this:
dir := "/bow"
opts := badger.DefaultOptions
opts.ReadOnly = true
opts.Logger = nil
db, err := bow.Open(dir, bow.SetBadgerOptions(opts))
I see:
error: No sets or deletes are allowed in a read-only transaction
From badger docs:
// ErrReadOnlyTxn is returned if an update function is called on a read-only transaction.
ErrReadOnlyTxn = errors.New("No sets or deletes are allowed in a read-only transaction")
I'm not updating anything... or am I?
Since you're at the query engine design stage, maybe you should look at this:
https://github.com/vilterp/treesql
It's got some amazing features and is built on boltdb.
Its query engine is impressive.
It does live queries.
It does GraphQL, which is a great match for KV stores.
Anyway, have a look.
If your key is int or uint, it fails in Put() with the following error: binary.Write failed: binary.Write: invalid type int (https://play.golang.org/p/5U3qCrozRNJ). This is because encoding/binary only handles fixed-size data.
package main

import (
	"encoding/json"
	"fmt"
	"time"

	"github.com/dgraph-io/badger"
	"github.com/zippoxer/bow"
)

type TS struct {
	Key   string `bow:"key"`
	Value string
}

func main() {
	fmt.Println("START MAIN")
	dbDir := "./testdb"
	r := TS{"key1", "value1"}
	db, _ := bow.Open(dbDir)
	defer db.Close()
	fmt.Println("Putting ", r, " in bucket test")
	db.Bucket("test").Put(&r)
	time.Sleep(1 * time.Second)
	go func() {
		fmt.Println("--------START ROUTINE-------------")
		opts := badger.DefaultOptions
		opts.ReadOnly = true
		t := time.Now().Unix() + 8
		// NOTE: the error from Open is discarded; if Open fails,
		// db is nil and the db.Bucket call below panics.
		db, _ = bow.Open(dbDir, bow.SetBadgerOptions(opts))
		for time.Now().Unix() < t {
			fmt.Println("DOT")
			eo := db.Bucket("test2").Get("key", &r)
			if eo != nil {
				fmt.Println(eo.Error())
			} else {
				txt, _ := json.Marshal(r)
				fmt.Println(string(txt))
			}
			time.Sleep(100 * time.Millisecond)
		}
		fmt.Println("----------END OF ROUTINE------------")
	}()
	time.Sleep(10 * time.Second) // keep main alive while the goroutine runs
}
Output of dbro-put2:
START MAIN
badger 2019/05/26 14:42:52 INFO: All 1 tables opened in 0s
badger 2019/05/26 14:42:52 DEBUG: Value Log Discard stats: map[]
badger 2019/05/26 14:42:52 INFO: Replaying file id: 0 at offset: 2691
badger 2019/05/26 14:42:52 INFO: Replay took: 104.161µs
Putting {key1 value1} in bucket test
--------START ROUTINE-------------
DOT
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x7cf78c]
goroutine 53 [running]:
sync.(*RWMutex).RLock(...)
/usr/local/go/src/sync/rwmutex.go:48
github.com/zippoxer/bow.(*DB).bucket(0x0, 0x8b28d1, 0x5, 0x0, 0x0)
/opt/test/vendor/src/github.com/zippoxer/bow/db.go:276 +0x2c
github.com/zippoxer/bow.(*DB).Bucket(0x0, 0x8b28d1, 0x5, 0x1)
/opt/test/vendor/src/github.com/zippoxer/bow/db.go:232 +0x43
main.main.func1(0x8b4998, 0x8, 0xc00000e0b8, 0xc00000cf60)
/opt/test/dbro/dbro-put2.go:37 +0x29a
created by main.main
/opt/test/dbro/dbro-put2.go:29 +0x2fe
I am still not sure whether this is a bug or undocumented incorrect usage, but so far I think this should not happen. Any help appreciated.
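For what it's worth, the 0x0 receiver in the stack trace suggests the second bow.Open returned an error that the snippet discards, leaving db nil; calling a method that locks a mutex field on a nil pointer then segfaults exactly as shown. A stand-in sketch of that failure mode — the DB/Open types here are hypothetical, not bow's code:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// DB is a stand-in whose method reads a field through a mutex,
// like the bucket lookup in the stack trace.
type DB struct {
	mu      sync.RWMutex
	buckets map[string]bool
}

// Open mimics a constructor returning (*DB, error); on failure the *DB is nil.
func Open(fail bool) (*DB, error) {
	if fail {
		return nil, errors.New("read-only open failed")
	}
	return &DB{buckets: map[string]bool{}}, nil
}

func (d *DB) Bucket(name string) bool {
	d.mu.RLock() // panics when d is nil, matching sync.(*RWMutex).RLock in the trace
	defer d.mu.RUnlock()
	return d.buckets[name]
}

func main() {
	db, err := Open(true)
	if err != nil {
		fmt.Println("open failed:", err) // without this check, db.Bucket would panic
		return
	}
	db.Bucket("test")
}
```

So checking the error from the second bow.Open should at least turn the panic into a diagnosable failure.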
For example, if I have a struct like this:
type SomeEntry struct {
	Id    string `bow:"key"`
	Names map[string]string
}
... // Here we do all inits and stuff (skipped)
func (se *SomeEntry) AddName(name string, explanation string) {
	se.Names[name] = explanation
}
So then I want to accumulate the Names map by re-visiting the Bucket: fetching an object by its Id, adding another key/value pair to its Names map, then putting the object back into the Bucket. Pretty straightforward. If sometimes one name and sometimes two names are related to an entry, then each object in the bucket should reflect that accordingly. Right?
To fetch the object I do this:
var entry SomeEntry
iterator := db.Bucket(someName).Iter()
defer iterator.Close()
for iterator.Next(&entry) {
// do something with the entry, e.g. display Names
}
So then myobj.Names should hold that map, and in that loop I expect one map per entry.
The problem is that the map is always filled with other "garbage": entries that already appeared elsewhere and do not belong to this particular entry. What I found is that the map is effectively shared across iterator.Next calls; it is not "flushed" before being re-filled with the original data from the database, so the new data is written over the old.
To work around this, I've added the following method to my SomeEntry:
func (se *SomeEntry) Flush() {
	*se = SomeEntry{}
}
This basically wipes all the data inside the current struct instance. Then, while iterating and fetching, I call that method at the end of each loop iteration so that Next() can re-fill the content properly.
Is this a bug or a feature? 😉
In case it is a bug, Next should either reset the destination internally or require a Flush (or reset, or whatever) method on the structs. Not sure though...