
Boom Filters


Boom Filters are probabilistic data structures for processing continuous, unbounded streams. This includes Stable Bloom Filters, Scalable Bloom Filters, Counting Bloom Filters, Inverse Bloom Filters, Cuckoo Filters, several variants of traditional Bloom filters, HyperLogLog, Count-Min Sketch, and MinHash.

Classic Bloom filters generally require a priori knowledge of the data set in order to allocate an appropriately sized bit array. This works well for offline processing, but online processing typically involves unbounded data streams. With enough data, a traditional Bloom filter "fills up", after which it has a false-positive probability of 1.

Boom Filters are useful for situations where the size of the data set isn't known ahead of time. For example, a Stable Bloom Filter can be used to deduplicate events from an unbounded event stream with a specified upper bound on false positives and minimal false negatives. Alternatively, an Inverse Bloom Filter is ideal for deduplicating a stream where duplicate events are relatively close together. This results in no false positives and, depending on how close together duplicates are, a small probability of false negatives. Scalable Bloom Filters place a tight upper bound on false positives while avoiding false negatives but require allocating memory proportional to the size of the data set. Counting Bloom Filters and Cuckoo Filters are useful for cases which require adding and removing elements to and from a set.

For large or unbounded data sets, calculating the exact cardinality is impractical. HyperLogLog uses a fraction of the memory while providing an accurate approximation. Similarly, Count-Min Sketch provides an efficient way to estimate event frequency for data streams, while Top-K tracks the top-k most frequent elements.

MinHash is a probabilistic algorithm to approximate the similarity between two sets. This can be used to cluster or compare documents by splitting the corpus into a bag of words.

Installation

$ go get github.com/tylertreat/BoomFilters

Stable Bloom Filter

This is an implementation of Stable Bloom Filters as described by Deng and Rafiei in Approximately Detecting Duplicates for Streaming Data using Stable Bloom Filters.

A Stable Bloom Filter (SBF) continuously evicts stale information so that it has room for more recent elements. Like traditional Bloom filters, an SBF has a non-zero probability of false positives, which is controlled by several parameters. Unlike the classic Bloom filter, an SBF has a tight upper bound on the rate of false positives while introducing a non-zero rate of false negatives. The false-positive rate of a classic Bloom filter eventually reaches 1, after which all queries result in a false positive. The stable-point property of an SBF means the false-positive rate asymptotically approaches a configurable fixed constant. A classic Bloom filter is actually a special case of SBF where the eviction rate is zero and the cell size is one, so this provides support for them as well (in addition to bitset-based Bloom filters).

Stable Bloom Filters are useful for cases where the size of the data set isn't known a priori and memory is bounded. For example, an SBF can be used to deduplicate events from an unbounded event stream with a specified upper bound on false positives and minimal false negatives.
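
To make the eviction mechanism concrete, here is a minimal toy sketch of the idea (illustrative only, not this package's implementation, and with arbitrary parameter choices): each Add first decrements a few randomly chosen cells, then sets the element's cells to the maximum value, so cells that are never refreshed drain to zero over time.

package main

import (
    "fmt"
    "hash/fnv"
    "math/rand"
)

// sbf is a toy Stable Bloom Filter: cells hold small counters, and each
// Add decays p random cells before setting the element's k cells to max.
type sbf struct {
    cells []uint8
    k, p  int
    max   uint8
}

func (f *sbf) indexes(data []byte) []int {
    h := fnv.New64()
    h.Write(data)
    sum := h.Sum64()
    lower, upper := uint(uint32(sum)), uint(uint32(sum>>32))
    idx := make([]int, f.k)
    for i := range idx {
        idx[i] = int((lower + upper*uint(i)) % uint(len(f.cells)))
    }
    return idx
}

func (f *sbf) Add(data []byte) {
    for i := 0; i < f.p; i++ { // evict: decrement p randomly chosen cells
        if c := rand.Intn(len(f.cells)); f.cells[c] > 0 {
            f.cells[c]--
        }
    }
    for _, i := range f.indexes(data) { // then refresh this element's cells
        f.cells[i] = f.max
    }
}

func (f *sbf) Test(data []byte) bool {
    for _, i := range f.indexes(data) {
        if f.cells[i] == 0 {
            return false
        }
    }
    return true
}

func main() {
    f := &sbf{cells: make([]uint8, 1024), k: 3, p: 4, max: 3}
    f.Add([]byte("a"))
    fmt.Println("contains a:", f.Test([]byte("a")))
}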

Usage

package main

import (
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

func main() {
    sbf := boom.NewDefaultStableBloomFilter(10000, 0.01)
    fmt.Println("stable point", sbf.StablePoint())
    
    sbf.Add([]byte(`a`))
    if sbf.Test([]byte(`a`)) {
        fmt.Println("contains a")
    }
    
    if !sbf.TestAndAdd([]byte(`b`)) {
        fmt.Println("doesn't contain b")
    }
    
    if sbf.Test([]byte(`b`)) {
        fmt.Println("now it contains b!")
    }
    
    // Restore to initial state.
    sbf.Reset()
}

Scalable Bloom Filter

This is an implementation of a Scalable Bloom Filter as described by Almeida, Baquero, Preguica, and Hutchison in Scalable Bloom Filters.

A Scalable Bloom Filter (SBF) dynamically adapts to the size of the data set while enforcing a tight upper bound on the rate of false positives and a false-negative probability of zero. It works by adding new Bloom filters with geometrically decreasing false-positive rates as existing filters fill up. A tightening ratio, r, controls the growth, and the compounded false-positive probability over the whole series converges to the target value even as the number of filters grows without bound.
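
To see why the bound holds, note that if filter i is created with false-positive rate P0 * r^i, the compounded probability across all filters is bounded by the geometric series P0 * (1 + r + r^2 + ...) = P0 / (1 - r). A small sketch of that arithmetic, with illustrative values rather than this package's defaults:

package main

import "fmt"

func main() {
    // Illustrative parameters: initial rate p0 and tightening ratio r.
    // Filter i is created with rate p0 * r^i, so the compounded bound
    // is the geometric series p0 * (1 + r + r^2 + ...) = p0 / (1 - r).
    p0, r := 0.005, 0.5
    bound, term := 0.0, p0
    for i := 0; i < 64; i++ { // partial sums approach the limit quickly
        bound += term
        term *= r
    }
    fmt.Printf("compounded bound ~= %.6f, limit = %.6f\n", bound, p0/(1-r))
}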

Scalable Bloom Filters are useful for cases where the size of the data set isn't known a priori and memory constraints aren't of particular concern. For situations where memory is bounded, consider using Inverse or Stable Bloom Filters.

The core parts of this implementation were originally written by Jian Zhen as discussed in Benchmarking Bloom Filters and Hash Functions in Go.

Usage

package main

import (
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

func main() {
    sbf := boom.NewDefaultScalableBloomFilter(0.01)
    
    sbf.Add([]byte(`a`))
    if sbf.Test([]byte(`a`)) {
        fmt.Println("contains a")
    }
    
    if !sbf.TestAndAdd([]byte(`b`)) {
        fmt.Println("doesn't contain b")
    }
    
    if sbf.Test([]byte(`b`)) {
        fmt.Println("now it contains b!")
    }
    
    // Restore to initial state.
    sbf.Reset()
}

Inverse Bloom Filter

An Inverse Bloom Filter, or "the opposite of a Bloom filter", is a concurrent, probabilistic data structure used to test whether an item has been observed or not. This implementation, originally described and written by Jeff Hodges, replaces the use of MD5 hashing with a non-cryptographic FNV-1 function.

The Inverse Bloom Filter may report a false negative but can never report a false positive. That is, it may report that an item has not been seen when it actually has, but it will never report an item as seen which it hasn't come across. This behaves in a similar manner to a fixed-size hashmap which does not handle conflicts.

This structure is particularly well-suited to streams in which duplicates are relatively close together. It uses a CAS-style approach, which makes it thread-safe.
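
The mechanism is easy to sketch. The toy version below (not this package's code, and assuming Go 1.19+ for atomic.Pointer) shows the idea: each slot remembers only the last item hashed to it, an atomic swap keeps concurrent writers safe, and a colliding item simply overwrites the slot, which is where false negatives come from.

package main

import (
    "fmt"
    "hash/fnv"
    "sync/atomic"
)

// inverseFilter is a toy "opposite of a Bloom filter": a fixed-size table
// where each slot holds only the most recent item hashed to it.
type inverseFilter struct {
    slots []atomic.Pointer[[]byte]
}

func newInverseFilter(size int) *inverseFilter {
    return &inverseFilter{slots: make([]atomic.Pointer[[]byte], size)}
}

func (f *inverseFilter) index(data []byte) int {
    h := fnv.New32()
    h.Write(data)
    return int(h.Sum32() % uint32(len(f.slots)))
}

// Observe records data and reports whether it was already in its slot,
// using an atomic swap so concurrent callers don't race.
func (f *inverseFilter) Observe(data []byte) bool {
    d := append([]byte(nil), data...) // copy so callers can reuse data
    old := f.slots[f.index(data)].Swap(&d)
    return old != nil && string(*old) == string(data)
}

func main() {
    f := newInverseFilter(10000)
    fmt.Println(f.Observe([]byte("a"))) // false: first sighting
    fmt.Println(f.Observe([]byte("a"))) // true: duplicate detected
}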

Usage

package main

import (
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

func main() {
    ibf := boom.NewInverseBloomFilter(10000)
    
    ibf.Add([]byte(`a`))
    if ibf.Test([]byte(`a`)) {
        fmt.Println("contains a")
    }
    
    if !ibf.TestAndAdd([]byte(`b`)) {
        fmt.Println("doesn't contain b")
    }
    
    if ibf.Test([]byte(`b`)) {
        fmt.Println("now it contains b!")
    }
}

Counting Bloom Filter

This is an implementation of a Counting Bloom Filter as described by Fan, Cao, Almeida, and Broder in Summary Cache: A Scalable Wide-Area Web Cache Sharing Protocol.

A Counting Bloom Filter (CBF) provides a way to remove elements by using an array of n-bit buckets. When an element is added, the respective buckets are incremented. To remove an element, the respective buckets are decremented. A query checks that each of the respective buckets is non-zero. Because CBFs allow elements to be removed, they introduce a non-zero probability of false negatives in addition to the possibility of false positives.
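
As a concrete illustration of the bucket arithmetic, here is a toy counting filter (a sketch with arbitrary parameters, not this package's implementation):

package main

import (
    "fmt"
    "hash/fnv"
)

// Toy counting Bloom filter: k counters per element. Add increments,
// remove decrements, and test requires every counter to be non-zero.
const m, k = 64, 3

func indexes(data []byte) [k]int {
    h := fnv.New64()
    h.Write(data)
    sum := h.Sum64()
    lo, hi := uint(uint32(sum)), uint(uint32(sum>>32))
    var idx [k]int
    for i := range idx {
        idx[i] = int((lo + hi*uint(i)) % m)
    }
    return idx
}

func main() {
    counters := make([]uint8, m)
    add := func(d []byte) {
        for _, i := range indexes(d) {
            counters[i]++
        }
    }
    remove := func(d []byte) {
        for _, i := range indexes(d) {
            counters[i]--
        }
    }
    test := func(d []byte) bool {
        for _, i := range indexes(d) {
            if counters[i] == 0 {
                return false
            }
        }
        return true
    }
    add([]byte("a"))
    fmt.Println(test([]byte("a"))) // true
    remove([]byte("a"))
    fmt.Println(test([]byte("a"))) // false
    // If "a" and "b" shared a bucket, removing "a" could zero one of
    // "b"'s counters: that is the false-negative risk described above.
}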

Counting Bloom Filters are useful for cases where elements are both added and removed from the data set. Since they use n-bit buckets, CBFs use roughly n times more memory than traditional Bloom filters.

See Deletable Bloom Filter for an alternative which avoids false negatives.

Usage

package main

import (
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

func main() {
    bf := boom.NewDefaultCountingBloomFilter(1000, 0.01)
    
    bf.Add([]byte(`a`))
    if bf.Test([]byte(`a`)) {
        fmt.Println("contains a")
    }
    
    if !bf.TestAndAdd([]byte(`b`)) {
        fmt.Println("doesn't contain b")
    }
    
    if bf.TestAndRemove([]byte(`b`)) {
        fmt.Println("removed b")
    }
    
    // Restore to initial state.
    bf.Reset()
}

Cuckoo Filter

This is an implementation of a Cuckoo Filter as described by Fan, Andersen, Kaminsky, and Mitzenmacher in Cuckoo Filter: Practically Better Than Bloom. The Cuckoo Filter is similar to the Counting Bloom Filter in that it supports adding and removing elements, but it does so in a way that doesn't significantly degrade space and performance.

It works by using a cuckoo hashing scheme for inserting items. Instead of storing the elements themselves, it stores their fingerprints, which also allows items to be removed without introducing false negatives (provided you don't attempt to remove an item that was never added to the filter).
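
The key trick is that an item's alternate bucket is computed from its current bucket index and its fingerprint alone, so entries can be relocated without the original item. A minimal sketch of that partial-key scheme, with illustrative parameters (not this package's code):

package main

import (
    "fmt"
    "hash/fnv"
)

func hash32(b []byte) uint32 {
    h := fnv.New32()
    h.Write(b)
    return h.Sum32()
}

func main() {
    const numBuckets = 1 << 10 // power of two so XOR stays in range
    item := []byte("a")

    fp := byte(hash32(item)) // short fingerprint stored in the table
    i1 := hash32(item) % numBuckets
    i2 := (i1 ^ hash32([]byte{fp})) % numBuckets

    // The symmetry i1 = i2 ^ hash(fp) is what enables eviction chains:
    // a displaced fingerprint can always compute its other bucket.
    fmt.Println(i1, i2, (i2^hash32([]byte{fp}))%numBuckets == i1)
}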

For applications that store many items and target moderately low false-positive rates, cuckoo filters have lower space overhead than space-optimized Bloom filters.

Usage

package main

import (
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

func main() {
    cf := boom.NewCuckooFilter(1000, 0.01)
    
    cf.Add([]byte(`a`))
    if cf.Test([]byte(`a`)) {
        fmt.Println("contains a")
    }
    
    if contains, _ := cf.TestAndAdd([]byte(`b`)); !contains {
        fmt.Println("doesn't contain b")
    }
    
    if cf.TestAndRemove([]byte(`b`)) {
        fmt.Println("removed b")
    }
    
    // Restore to initial state.
    cf.Reset()
}

Classic Bloom Filter

A classic Bloom filter is a special case of a Stable Bloom Filter whose eviction rate is zero and whose cell size is one; this package calls that special case an Unstable Bloom Filter. Because counter cells carry more memory overhead, this package also provides two bitset-based Bloom filter variations. The first is the traditional implementation consisting of a single bit array. The second is a partitioned approach which uniformly distributes the probability of false positives across all elements.
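
For intuition, the partitioned layout can be sketched in a few lines (illustrative only, not this package's code): the bit array is split into k equal partitions, and hash function i only sets bits inside partition i, so every element occupies exactly one bit per partition.

package main

import (
    "fmt"
    "hash/fnv"
)

const k, partitionSize = 3, 512

var partitions [k][partitionSize]bool

func add(data []byte) {
    h := fnv.New64()
    h.Write(data)
    sum := h.Sum64()
    lo, hi := uint(uint32(sum)), uint(uint32(sum>>32))
    for i := 0; i < k; i++ {
        // Hash function i lands only in partition i.
        partitions[i][(lo+hi*uint(i))%partitionSize] = true
    }
}

func main() {
    add([]byte("a"))
    fmt.Println("one bit set in each of", k, "partitions")
}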

Bloom filters have a limited capacity, depending on the configured size. Once all bits are set, the probability of a false positive is 1. However, traditional Bloom filters cannot return a false negative.

A Bloom filter is ideal for cases where the data set is known a priori because the false-positive rate can be configured by the size and number of hash functions.

Usage

package main

import (
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

func main() {
    // We could also use boom.NewUnstableBloomFilter or boom.NewPartitionedBloomFilter.
    bf := boom.NewBloomFilter(1000, 0.01)
    
    bf.Add([]byte(`a`))
    if bf.Test([]byte(`a`)) {
        fmt.Println("contains a")
    }
    
    if !bf.TestAndAdd([]byte(`b`)) {
        fmt.Println("doesn't contain b")
    }
    
    if bf.Test([]byte(`b`)) {
        fmt.Println("now it contains b!")
    }
    
    // Restore to initial state.
    bf.Reset()
}

Count-Min Sketch

This is an implementation of a Count-Min Sketch as described by Cormode and Muthukrishnan in An Improved Data Stream Summary: The Count-Min Sketch and its Applications.

A Count-Min Sketch (CMS) is a probabilistic data structure which approximates the frequency of events in a data stream. Unlike a hash map, a CMS uses sub-linear space at the expense of a configurable error factor. Similar to Counting Bloom filters, items are hashed to a series of buckets, and each bucket's counter is incremented. The frequency of an item is estimated by taking the minimum of the item's respective counter values.
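
The core of the structure fits in a few lines. Here is a toy sketch with assumed parameters d and w (not the boom implementation):

package main

import (
    "fmt"
    "hash/fnv"
)

// d rows of w counters; Add increments one counter per row and Count
// takes the row-wise minimum.
const d, w = 4, 1 << 10

var counters [d][w]uint64

func row(data []byte, r int) int {
    h := fnv.New64a()
    h.Write(data)
    h.Write([]byte{byte(r)}) // derive a per-row hash from one base hash
    return int(h.Sum64() % w)
}

func add(data []byte) {
    for r := 0; r < d; r++ {
        counters[r][row(data, r)]++
    }
}

func count(data []byte) uint64 {
    min := uint64(0)
    for r := 0; r < d; r++ {
        c := counters[r][row(data, r)]
        if r == 0 || c < min {
            min = c // collisions only inflate counters, so min is best
        }
    }
    return min
}

func main() {
    add([]byte("bob"))
    add([]byte("bob"))
    add([]byte("alice"))
    fmt.Println(count([]byte("bob")), count([]byte("alice"))) // likely 2 1
}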

Count-Min Sketches are useful for counting the frequency of events in massive data sets or unbounded streams online. In these situations, storing the entire data set or allocating counters for every event in memory is impractical. It may be possible for offline processing, but real-time processing requires fast, space-efficient solutions like the CMS. For approximating set cardinality, refer to the HyperLogLog.

Usage

package main

import (
    "bytes"
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

func main() {
    cms := boom.NewCountMinSketch(0.001, 0.99)
    
    cms.Add([]byte(`alice`)).Add([]byte(`bob`)).Add([]byte(`bob`)).Add([]byte(`frank`))
    fmt.Println("frequency of alice", cms.Count([]byte(`alice`)))
    fmt.Println("frequency of bob", cms.Count([]byte(`bob`)))
    fmt.Println("frequency of frank", cms.Count([]byte(`frank`)))
    

    // Serialization example
    buf := new(bytes.Buffer)
    n, err := cms.WriteDataTo(buf)
    if err != nil {
       fmt.Println(err, n)
    }

    // Restore to initial state.
    cms.Reset()

    newCMS := boom.NewCountMinSketch(0.001, 0.99)
    n, err = newCMS.ReadDataFrom(buf)
    if err != nil {
       fmt.Println(err, n)
    }

    fmt.Println("frequency of frank", newCMS.Count([]byte(`frank`)))

   
}

Top-K

Top-K uses a Count-Min Sketch and a min-heap to track the top-k most frequent elements in a stream.
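
For intuition, here is a toy sketch of the heap bookkeeping (illustrative only; it uses an exact map where the real structure uses a Count-Min Sketch for the frequency estimates):

package main

import (
    "container/heap"
    "fmt"
)

type entry struct {
    data string
    freq int
}

type minHeap []entry

func (h minHeap) Len() int            { return len(h) }
func (h minHeap) Less(i, j int) bool  { return h[i].freq < h[j].freq }
func (h minHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
func (h *minHeap) Push(x interface{}) { *h = append(*h, x.(entry)) }
func (h *minHeap) Pop() interface{} {
    old := *h
    e := old[len(old)-1]
    *h = old[:len(old)-1]
    return e
}

func main() {
    const k = 2
    counts := map[string]int{} // stand-in for a Count-Min Sketch
    h := &minHeap{}

    for _, d := range []string{"bob", "bob", "tyler", "bob", "sara", "tyler"} {
        counts[d]++
        // Remove a stale copy of d, if present, then re-insert it with
        // its updated frequency estimate.
        for i, e := range *h {
            if e.data == d {
                heap.Remove(h, i)
                break
            }
        }
        heap.Push(h, entry{d, counts[d]})
        if h.Len() > k { // evict the current minimum to keep only k
            heap.Pop(h)
        }
    }
    for _, e := range *h { // the k leaders, in no particular order
        fmt.Println(e.data, e.freq)
    }
}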

Usage

package main

import (
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

func main() {
	topk := boom.NewTopK(0.001, 0.99, 5)

	topk.Add([]byte(`bob`)).Add([]byte(`bob`)).Add([]byte(`bob`))
	topk.Add([]byte(`tyler`)).Add([]byte(`tyler`)).Add([]byte(`tyler`)).Add([]byte(`tyler`))
	topk.Add([]byte(`fred`))
	topk.Add([]byte(`alice`)).Add([]byte(`alice`)).Add([]byte(`alice`)).Add([]byte(`alice`))
	topk.Add([]byte(`james`))
	topk.Add([]byte(`fred`))
	topk.Add([]byte(`sara`)).Add([]byte(`sara`))
	topk.Add([]byte(`bill`))

	for i, element := range topk.Elements() {
		fmt.Println(i, string(element.Data), element.Freq)
	}
	
	// Restore to initial state.
	topk.Reset()
}

HyperLogLog

This is an implementation of HyperLogLog as described by Flajolet, Fusy, Gandouet, and Meunier in HyperLogLog: the analysis of a near-optimal cardinality estimation algorithm.

HyperLogLog is a probabilistic algorithm which approximates the number of distinct elements in a multiset. It works by hashing values and calculating the maximum number of leading zeros in the binary representation of each hash. If the maximum number of leading zeros observed is n, the estimated number of distinct elements in the set is roughly 2^n. To minimize variance, the multiset is split across a configurable number of registers, the maximum number of leading zeros is tracked within each register, and a harmonic mean combines the per-register estimates.
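
The core observation can be sketched directly (illustrative only, not the boom implementation); a single register's estimate is crude, which is exactly what the many-registers-plus-harmonic-mean refinement fixes.

package main

import (
    "fmt"
    "hash/fnv"
    "math/bits"
)

func main() {
    maxRank := 0
    for i := 0; i < 100000; i++ {
        h := fnv.New64a()
        fmt.Fprintf(h, "element-%d", i)
        // rank = position of the first 1 bit = leading zeros + 1.
        rank := bits.LeadingZeros64(h.Sum64()) + 1
        if rank > maxRank {
            maxRank = rank
        }
    }
    // A single register like this has huge variance; real HyperLogLog
    // splits the stream into many registers and combines them with a
    // harmonic mean.
    fmt.Printf("max rank %d => crude estimate 2^%d = %d\n",
        maxRank, maxRank, uint64(1)<<uint(maxRank))
}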

For large or unbounded data sets, calculating the exact cardinality is impractical. HyperLogLog uses a fraction of the memory while providing an accurate approximation.

This implementation was originally written by Eric Lesh. Some small changes and additions have been made, including a way to construct a HyperLogLog optimized for a particular relative accuracy and adding FNV hashing. For counting element frequency, refer to the Count-Min Sketch.

Usage

package main

import (
    "bytes"
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

func main() {
    hll, err := boom.NewDefaultHyperLogLog(0.1)
    if err != nil {
        panic(err)
    }
    
    hll.Add([]byte(`alice`)).Add([]byte(`bob`)).Add([]byte(`bob`)).Add([]byte(`frank`))
    fmt.Println("count", hll.Count())

    // Serialization example
    buf := new(bytes.Buffer)
    _, err = hll.WriteDataTo(buf)
    if err != nil {
        fmt.Println(err)
    }

    // Restore to initial state.
    hll.Reset()

    newHll, err := boom.NewDefaultHyperLogLog(0.1)
    if err != nil {
        fmt.Println(err)
    }

    _, err = newHll.ReadDataFrom(buf)
    if err != nil {
        fmt.Println(err)
    }
    fmt.Println("count", newHll.Count())
}

MinHash

This is a variation of the technique for estimating similarity between two sets as presented by Broder in On the resemblance and containment of documents.

MinHash is a probabilistic algorithm which can be used to cluster or compare documents by splitting the corpus into a bag of words. MinHash returns the approximated similarity ratio of the two bags. The similarity is less accurate for very small bags of words.
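
A toy sketch of the signature scheme (illustrative, not the boom implementation): each of k seeded hash functions contributes the minimum hash over a bag, and the fraction of matching signature positions approximates the Jaccard similarity. Run on the bags from the usage example below, it should print a value near the true Jaccard similarity of 5/7.

package main

import (
    "fmt"
    "hash/fnv"
)

const k = 128

func signature(bag []string) [k]uint64 {
    var sig [k]uint64
    for i := range sig {
        sig[i] = ^uint64(0)
        for _, w := range bag {
            h := fnv.New64a()
            h.Write([]byte{byte(i), byte(i >> 8)}) // seed per hash function
            h.Write([]byte(w))
            if v := h.Sum64(); v < sig[i] {
                sig[i] = v // keep the minimum hash for this function
            }
        }
    }
    return sig
}

func main() {
    bag1 := []string{"bill", "alice", "frank", "bob", "sara", "tyler", "james"}
    bag2 := []string{"bill", "alice", "frank", "bob", "sara"}

    s1, s2 := signature(bag1), signature(bag2)
    matches := 0
    for i := 0; i < k; i++ {
        if s1[i] == s2[i] {
            matches++
        }
    }
    fmt.Printf("approximate similarity: %.2f\n", float64(matches)/k)
}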

Usage

package main

import (
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

func main() {
    bag1 := []string{"bill", "alice", "frank", "bob", "sara", "tyler", "james"}
    bag2 := []string{"bill", "alice", "frank", "bob", "sara"}

    fmt.Println("similarity", boom.MinHash(bag1, bag2))
}


Issues

Buckets.max truncation in buckets.go

Buckets.max can never be larger than 255 even though the math allows for larger numbers. This is because the type of Buckets.max is set to uint8. Obviously, if this behavior is intended, feel free to close this.

The math for setting Buckets.max is:

(1 << bucketSize) - 1

where bucketSize is a uint8.

(1 << bucketSize) - 1 will have these outputs given a uint8:

for i := 0; i < 255; i++ {
    fmt.Println(uint64((1 << uint8(i)) - 1))
}
| input | output |
| ----- | ------ |
| 0 | 0 |
| 1 | 1 |
| 2 | 3 |
| 3 | 7 |
| 4 | 15 |
| 5 | 31 |
| 6 | 63 |
| 7 | 127 |
| 8 | 255 |
| ... | ... |
| 62 | 4611686018427387903 |
| 63 | 9223372036854775807 |
| 64 | 18446744073709551615 |

Anything past 64 would overflow a uint64.

I'm not sure what the best solution would be, or whether capping Buckets.max at 255 is even a problem. Should Buckets.max be changed to a uint? A uint64? I'm not familiar with what maximum, if any, should be enforced on Buckets.max.

Manifest file

Can a manifest file be made for this project so security scanning can take place?
Thank you

CountMinSketch remove?

Shouldn't there be a Remove / TestAndRemove API in CountMinSketch?

I'm willing to submit a PR, just want to make sure first that there's no hole in my reasoning.

Pseudocode:

Compute k hashes h1, h2, ... hk
let count1, count2, ... countk = the cells h1...hk point to
min := Minimum(count1 ... countk)
if min > 0
    conclude element exists
    decrement count1 ... countk by min
else
    conclude element doesn't exist
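
For concreteness, a minimal Go translation of that pseudocode on a toy counter matrix (hypothetical code; the package does not expose its counters this way, and the comments note the potential hole):

package main

import (
    "fmt"
    "hash/fnv"
)

const d, w = 4, 1 << 10

var counters [d][w]uint64

func row(data []byte, r int) int {
    h := fnv.New64a()
    h.Write(data)
    h.Write([]byte{byte(r)})
    return int(h.Sum64() % w)
}

func add(data []byte) {
    for r := 0; r < d; r++ {
        counters[r][row(data, r)]++
    }
}

// testAndRemove decrements every cell by the current minimum when the
// minimum is positive, per the pseudocode above. Note that collisions
// can make the minimum overestimate an element's true count, so this
// may also subtract counts that belong to other elements.
func testAndRemove(data []byte) bool {
    min := counters[0][row(data, 0)]
    for r := 1; r < d; r++ {
        if c := counters[r][row(data, r)]; c < min {
            min = c
        }
    }
    if min == 0 {
        return false
    }
    for r := 0; r < d; r++ {
        counters[r][row(data, r)] -= min
    }
    return true
}

func main() {
    add([]byte("x"))
    fmt.Println(testAndRemove([]byte("x"))) // true
    fmt.Println(testAndRemove([]byte("x"))) // false
}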

CuckooFilter has no Encode/Decode

Hi,

  1. I found that other implementations have an Encode/Decode interface; why doesn't CuckooFilter have one?
    Is it not yet implemented, or does CuckooFilter not support Encode?
  2. Does CuckooFilter use more memory than map[string]struct{}? keys: 5,600,000, memory: 7 GB

Scalable Bloom Filter panics after ~31 million insertions

Hey @tylertreat! I noticed that NewDefaultScalableBloomFilter(0.01) panics after exactly 31,536,466 insertions. Here is a test case:

func TestScalableBloomLarge(t *testing.T) {
	total := uint32(40000000) // 40 million
	f := NewDefaultScalableBloomFilter(0.01)

	i := uint32(0)

	defer func() {
		if err := recover(); err != nil {
			fmt.Printf("panicked after %d iterations: %s\n", i, err)
			panic(err)
		}
	}()

	for ; i < total; i++ {
		if i%1000000 == 0 {
			fmt.Printf("  i: %d\n", i)
		}
		key := make([]byte, 8)
		binary.BigEndian.PutUint32(key, i)
		f.Add(key)
	}
}

and here is the test output:

=== RUN   TestScalableBloomLarge
  i: 0
  i: 1000000
  i: 2000000
# ...
  i: 29000000
  i: 30000000
  i: 31000000
panicked after 31536466 iterations: runtime error: makeslice: len out of range
--- FAIL: TestScalableBloomLarge (215.81s)
panic: runtime error: makeslice: len out of range [recovered]
	panic: runtime error: makeslice: len out of range [recovered]
	panic: runtime error: makeslice: len out of range

goroutine 132 [running]:
testing.tRunner.func1(0xc000100f00)
	/usr/local/Cellar/go/1.12.1/libexec/src/testing/testing.go:830 +0x392
panic(0x1174160, 0x11d3170)
	/usr/local/Cellar/go/1.12.1/libexec/src/runtime/panic.go:522 +0x1b5
github.com/tylertreat/BoomFilters.TestScalableBloomLarge.func1(0xc00007cf7c)
	/Users/jameshageman/stripe/BoomFilters/scalable_test.go:179 +0x118
panic(0x1174160, 0x11d3170)
	/usr/local/Cellar/go/1.12.1/libexec/src/runtime/panic.go:522 +0x1b5
github.com/tylertreat/BoomFilters.NewPartitionedBloomFilter(0x2710, 0x357fb461c091b, 0x54e5e2762f38eb)
	/Users/jameshageman/stripe/BoomFilters/partitioned.go:52 +0xb1
github.com/tylertreat/BoomFilters.(*ScalableBloomFilter).addFilter(0xc00028a000)
	/Users/jameshageman/stripe/BoomFilters/scalable.go:148 +0x6a
github.com/tylertreat/BoomFilters.(*ScalableBloomFilter).Add(0xc00028a000, 0xc0d18e4858, 0x8, 0x8, 0x11d8120, 0xc00028a000)
	/Users/jameshageman/stripe/BoomFilters/scalable.go:120 +0x112
github.com/tylertreat/BoomFilters.TestScalableBloomLarge(0xc000100f00)
	/Users/jameshageman/stripe/BoomFilters/scalable_test.go:189 +0xc6
testing.tRunner(0xc000100f00, 0x11ad908)
	/usr/local/Cellar/go/1.12.1/libexec/src/testing/testing.go:865 +0xc0
created by testing.(*T).Run
	/usr/local/Cellar/go/1.12.1/libexec/src/testing/testing.go:916 +0x35a
FAIL	github.com/tylertreat/BoomFilters	218.857s

The culprit appears to be this line:

partitions = make([]*Buckets, k)

which confuses me because k should be a totally reasonable size, and is only a function of fpr and not n.

making index for bloom

Hi team, I have integrated a Bloom filter in my app, and in my case the app groups/indexes the Bloom data. Does this library support that use case? If yes, how can I use this feature?

Thanks

Calculated regionSize for DeletableBloomFilter is wrong, causes out-of-bounds panic

The way regionSize is calculated for DeletableBloomFilter does not mesh with the size of collisions. If a value hashes to a high index (m - 1, for example) and there is a collision at that index, the index into collisions is out of bounds (equal to r). It seems like integer truncation isn't factored in during the calculation of regionSize or during idx/d.regionSize. (For illustration: with m = 125 and r = 4 regions, integer division gives regionSize = 31, and index 124 maps to region 124/31 = 4, one past the last valid region index 3.) If not that, then collisions should have r + 1 buckets.

This was discovered in cayleygraph/cayley using this panic:

panic: runtime error: index out of range

goroutine 2752066 [running]:
github.com/tylertreat/BoomFilters.(*Buckets).setBits(0xc42016e4e0, 0x100000078, 0xc400000001)
        /home/kevin/go/src/github.com/tylertreat/BoomFilters/buckets.go:101 +0x128
github.com/tylertreat/BoomFilters.(*Buckets).Set(0xc42016e4e0, 0x78, 0x1, 0xc42016e4b0)
        /home/kevin/go/src/github.com/tylertreat/BoomFilters/buckets.go:62 +0x53
github.com/tylertreat/BoomFilters.(*DeletableBloomFilter).Add(0xc42001e240, 0xc4200243c0, 0x18, 0x18, 0x8, 0x0)
        /home/kevin/go/src/github.com/tylertreat/BoomFilters/deletable.go:96 +0x106
github.com/cayleygraph/cayley/graph/kv.(*QuadStore).bloomAdd(0xc4200dc0a0, 0xc44090cea0)
        /home/kevin/go/src/github.com/cayleygraph/cayley/graph/kv/indexing.go:943 +0xca
github.com/cayleygraph/cayley/graph/kv.(*QuadStore).indexLink(0xc4200dc0a0, 0x8d1e20, 0xc429580d60, 0xc44090cea0, 0x0, 0x0)
        /home/kevin/go/src/github.com/cayleygraph/cayley/graph/kv/indexing.go:527 +0x22d
github.com/cayleygraph/cayley/graph/kv.(*QuadStore).indexLinks(0xc4200dc0a0, 0x8d1b60, 0xc420022138, 0x8d1e20, 0xc429580d60, 0xc6558cc000, 0x66d63, 0x66d63, 0x0, 0xc48977d8e7086225)
        /home/kevin/go/src/github.com/cayleygraph/cayley/graph/kv/indexing.go:510 +0x116
github.com/cayleygraph/cayley/graph/kv.(*QuadStore).ApplyDeltas(0xc4200dc0a0, 0xc608642000, 0x66d63, 0x75b8e, 0x8c0001, 0x0, 0x0)
        /home/kevin/go/src/github.com/cayleygraph/cayley/graph/kv/indexing.go:400 +0xa54
github.com/cayleygraph/cayley/writer.(*Single).ApplyTransaction(0xc4262440a0, 0xc48c656020, 0xc50ef3d4c0, 0x8cfc00)
        /home/kevin/go/src/github.com/cayleygraph/cayley/writer/single.go:121 +0x5b

TopK panics if fewer distinct elements are added than the declared k

I created a TopK with k = 10 but added only 7 distinct elements. TopK panics when retrieving the elements.

package main

import (
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

const NUM_TOPK_ELEMENTS = 10

func main() {
    topk := boom.NewTopK(0.001, 0.99, NUM_TOPK_ELEMENTS) 

    topk.Add([]byte(`bob`)).Add([]byte(`bob`)).Add([]byte(`bob`))
    topk.Add([]byte(`tyler`)).Add([]byte(`tyler`)).Add([]byte(`tyler`)).Add([]byte(`tyler`))
    topk.Add([]byte(`fred`))
    topk.Add([]byte(`alice`)).Add([]byte(`alice`)).Add([]byte(`alice`)).Add([]byte(`alice`))
    topk.Add([]byte(`james`))
    topk.Add([]byte(`fred`))
    topk.Add([]byte(`sara`)).Add([]byte(`sara`))
    topk.Add([]byte(`bill`))

    for i, element := range topk.Elements() {
        fmt.Println(i, string(element))
    }   

    // Restore to initial state.
    topk.Reset()
}
root@server ~/go # go run /tmp/test_tk.go 
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x18 pc=0x4507d1]

goroutine 1 [running]:
github.com/tylertreat/BoomFilters.(*elementHeap).Less(0xc20801e180, 0x9, 0x4, 0xa)
        <autogenerated>:8 +0xb1
container/heap.down(0x7f1db00a6910, 0xc20801e180, 0x4, 0xa)
        /root/.gvm/gos/go1.4/src/container/heap/heap.go:111 +0xb4
container/heap.Init(0x7f1db00a6910, 0xc20801e180)
        /root/.gvm/gos/go1.4/src/container/heap/heap.go:45 +0x81
github.com/tylertreat/BoomFilters.(*TopK).Elements(0xc20801e080, 0x0, 0x0, 0x0)
        /root/.gvm/pkgsets/go1.4/global/src/github.com/tylertreat/BoomFilters/topk.go:76 +0x211
main.main()
        /tmp/test_tk.go:20 +0x4c0

goroutine 2 [runnable]:
runtime.forcegchelper()
        /root/.gvm/gos/go1.4/src/runtime/proc.go:90
runtime.goexit()
        /root/.gvm/gos/go1.4/src/runtime/asm_amd64.s:2232 +0x1

goroutine 3 [runnable]:
runtime.bgsweep()
        /root/.gvm/gos/go1.4/src/runtime/mgc0.go:82
runtime.goexit()
        /root/.gvm/gos/go1.4/src/runtime/asm_amd64.s:2232 +0x1

goroutine 4 [runnable]:
runtime.runfinq()
        /root/.gvm/gos/go1.4/src/runtime/malloc.go:712
runtime.goexit()
        /root/.gvm/gos/go1.4/src/runtime/asm_amd64.s:2232 +0x1
exit status 2

About PartitionedBloomFilter.GobEncode With bytes.Buffer

PartitionedBloomFilter.GobEncode collects the data from its partitions, but bytes.Buffer doubles its capacity whenever it needs to grow, so the program's memory can spike. You could compute the total length of the data first, grow the buffer once, and avoid the unnecessary allocations.

Inverse Bloom Filter: wrong result after ReadFrom gzipped i/o stream

Hi, Thanks for writing this useful package, I like it very much.

However, I found a problem, as described in the title, when using InverseBloomFilter.

I edited the test code (attached below) to write and read a gzipped I/O stream in order to save disk space, but the result said

Expected both filters to Test the same for 0

When I switch off gzip, i.e., use no compression, everything is OK.

I also tested ScalableBloomFilter; there's no such problem.

Please help, thanks for your time.


Test code:

https://gist.github.com/shenwei356/659cffae84f9cb2365ba70ac3866af49

Is the Test function safe for concurrent use?

In the Test function, the shared hash state is modified via hash.Write(data). So is it safe for concurrent use?

func (b *BloomFilter) Test(data []byte) bool {
	lower, upper := hashKernel(data, b.hash)

	// If any of the K bits are not set, then it's not a member.
	for i := uint(0); i < b.k; i++ {
		if b.buckets.Get((uint(lower)+uint(upper)*i)%b.m) == 0 {
			return false
		}
	}

	return true
}

func hashKernel(data []byte, hash hash.Hash64) (uint32, uint32) {
	hash.Write(data)
	sum := hash.Sum(nil)
	hash.Reset()
	return binary.BigEndian.Uint32(sum[4:8]), binary.BigEndian.Uint32(sum[0:4])
}
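
It appears not: hashKernel writes to and then resets the shared hash.Hash64, so concurrent Test calls race on that state. A minimal sketch of one workaround, serializing access with a mutex around the public API shown in this README:

package main

import (
    "sync"

    "github.com/tylertreat/BoomFilters"
)

// safeBloom guards a BloomFilter with a mutex, since Test and Add both
// touch the filter's shared hash state.
type safeBloom struct {
    mu sync.Mutex
    bf *boom.BloomFilter
}

func (s *safeBloom) Add(data []byte) {
    s.mu.Lock()
    defer s.mu.Unlock()
    s.bf.Add(data)
}

func (s *safeBloom) Test(data []byte) bool {
    s.mu.Lock()
    defer s.mu.Unlock()
    return s.bf.Test(data)
}

func main() {
    s := &safeBloom{bf: boom.NewBloomFilter(1000, 0.01)}
    s.Add([]byte("a"))
    _ = s.Test([]byte("a"))
}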
