Bolt is an embedded key/value database for Go. The goal of the project is to provide a simple, fast, and reliable database for projects that don't require a full database server such as Postgres or MySQL.

Since Bolt is meant to be used as such a low-level piece of functionality, simplicity is key. The API will be small and only focus on getting values and setting values. That's it.

Project Status

Bolt is stable and the API is fixed. Full unit test coverage and randomized black box testing are used to ensure database consistency and thread safety. Bolt is currently in high-load production environments serving databases as large as 1TB. Many companies such as Shopify and Heroku use Bolt-backed services every day.

Getting Started

Installing

To start using Bolt, install Go and run go get:

$ go get github.com/boltdb/bolt/...

This will retrieve the library and install the bolt command-line utility into your $GOBIN path.

Opening a database

The top-level object in Bolt is a DB. It is represented as a single file on your disk and represents a consistent snapshot of your data.

To open your database, simply use the bolt.Open() function:
package main

import (
    "log"

    "github.com/boltdb/bolt"
)

func main() {
    // Open the my.db data file in your current directory.
    // It will be created if it doesn't exist.
    db, err := bolt.Open("my.db", 0600, nil)
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    ...
}
Please note that Bolt obtains a file lock on the data file, so multiple processes cannot open the same database at the same time. Opening an already open Bolt database will cause it to hang until the other process closes it. To prevent an indefinite wait you can pass a timeout option to the Open() function:
db, err := bolt.Open("my.db", 0600, &bolt.Options{Timeout: 1 * time.Second})

Transactions

Bolt allows only one read-write transaction at a time but allows as many read-only transactions as you want at a time. Each transaction has a consistent view of the data as it existed when the transaction started.

Individual transactions and all objects created from them (e.g. buckets, keys) are not thread safe. To work with data in multiple goroutines you must start a transaction for each one or use locking to ensure only one goroutine accesses a transaction at a time. Creating transactions from the DB is thread safe.

Read-only transactions and read-write transactions should not depend on one another and generally shouldn't be opened simultaneously in the same goroutine. This can cause a deadlock as the read-write transaction needs to periodically re-map the data file but it cannot do so while a read-only transaction is open.

Read-write transactions

To start a read-write transaction, you can use the DB.Update() function:
err := db.Update(func(tx *bolt.Tx) error {
    ...
    return nil
})
Inside the closure, you have a consistent view of the database. You commit the transaction by returning nil at the end. You can also roll back the transaction at any point by returning an error. All database operations are allowed inside a read-write transaction.

Always check the return error as it will report any disk failures that can cause your transaction to not complete. If you return an error within your closure it will be passed through.
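For example, a minimal sketch (assuming a bucket named "MyBucket" already exists; buckets are covered below) where returning an error from the closure rolls the transaction back and surfaces the error to the caller:

err := db.Update(func(tx *bolt.Tx) error {
    b := tx.Bucket([]byte("MyBucket"))
    if b == nil {
        // Returning a non-nil error aborts the transaction; nothing is committed.
        return errors.New("bucket MyBucket does not exist")
    }
    return b.Put([]byte("key"), []byte("value"))
})
if err != nil {
    // Either the error returned above or a disk failure during commit.
    log.Fatal(err)
}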

Read-only transactions

To start a read-only transaction, you can use the DB.View() function:
err := db.View(func(tx *bolt.Tx) error {
    ...
    return nil
})

You also get a consistent view of the database within this closure, however, no mutating operations are allowed within a read-only transaction. You can only retrieve buckets, retrieve values, and copy the database within a read-only transaction.

Batch read-write transactions

Each DB.Update() waits for the disk to commit the writes. This overhead can be minimized by combining multiple updates with the DB.Batch() function:
err := db.Batch(func(tx *bolt.Tx) error {
    ...
    return nil
})

Concurrent Batch calls are opportunistically combined into larger transactions. Batch is only useful when there are multiple goroutines calling it.

The trade-off is that Batch can call the given function multiple times if parts of the transaction fail. The function must be idempotent and side effects must only take effect after a successful return from DB.Batch().

For example: don't display messages from inside the function; instead, set variables in the enclosing scope:

var id uint64
err := db.Batch(func(tx *bolt.Tx) error {
    // Find last key in bucket, decode as bigendian uint64, increment
    // by one, encode back to []byte, and add new key.
    ...
    id = newValue
    return nil
})
if err != nil {
    return ...
}
fmt.Printf("Allocated ID %d\n", id)

Managing transactions manually

The DB.View() and DB.Update() functions are wrappers around the DB.Begin() function. These helper functions will start the transaction, execute a function, and then safely close the transaction if an error is returned. This is the recommended way to use Bolt transactions.

However, sometimes you may want to manually start and end your transactions. You can use the DB.Begin() function directly, but please be sure to close the transaction:
// Start a writable transaction.
tx, err := db.Begin(true)
if err != nil {
    return err
}
defer tx.Rollback()

// Use the transaction...
_, err = tx.CreateBucket([]byte("MyBucket"))
if err != nil {
    return err
}

// Commit the transaction and check for error.
if err := tx.Commit(); err != nil {
    return err
}
The first argument to DB.Begin() is a boolean stating whether the transaction should be writable.

Using buckets

Buckets are collections of key/value pairs within the database. All keys in a bucket must be unique. You can create a bucket using the Tx.CreateBucket() function:
db.Update(func(tx *bolt.Tx) error {
    b, err := tx.CreateBucket([]byte("MyBucket"))
    if err != nil {
        return fmt.Errorf("create bucket: %s", err)
    }
    return nil
})
You can also create a bucket only if it doesn't exist by using the Tx.CreateBucketIfNotExists() function. It's a common pattern to call this function for all your top-level buckets after you open your database so you can guarantee that they exist for future transactions.

To delete a bucket, simply call the Tx.DeleteBucket() function.
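As a sketch (reusing the MyBucket name from above), ensuring a bucket exists and later deleting it might look like this:

db.Update(func(tx *bolt.Tx) error {
    // Create the bucket if it is missing; the existing bucket is returned otherwise.
    _, err := tx.CreateBucketIfNotExists([]byte("MyBucket"))
    return err
})

db.Update(func(tx *bolt.Tx) error {
    // Delete the bucket and every key in it.
    return tx.DeleteBucket([]byte("MyBucket"))
})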

Using key/value pairs

To save a key/value pair to a bucket, use the Bucket.Put() function:
db.Update(func(tx *bolt.Tx) error {
    b := tx.Bucket([]byte("MyBucket"))
    err := b.Put([]byte("answer"), []byte("42"))
    return err
})
"answer""42"MyBucketBucket.Get()
db.View(func(tx *bolt.Tx) error {
    b := tx.Bucket([]byte("MyBucket"))
    v := b.Get([]byte("answer"))
    fmt.Printf("The answer is: %s\n", v)
    return nil
})
The Get() function does not return an error because its operation is guaranteed to work (unless there is some kind of system failure). If the key exists then it will return its byte slice value. If it doesn't exist then it will return nil. It's important to note that you can have a zero-length value set to a key, which is different from the key not existing.

Use the Bucket.Delete() function to delete a key from the bucket.

Please note that values returned from Get() are only valid while the transaction is open. If you need to use a value outside of the transaction then you must use copy() to copy it to another byte slice.
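For example, a minimal sketch of copying a value out of a transaction and of deleting a key:

// Copy the value before using it outside the transaction.
var answer []byte
db.View(func(tx *bolt.Tx) error {
    if v := tx.Bucket([]byte("MyBucket")).Get([]byte("answer")); v != nil {
        answer = make([]byte, len(v))
        copy(answer, v)
    }
    return nil
})

// Delete the "answer" key from MyBucket.
db.Update(func(tx *bolt.Tx) error {
    return tx.Bucket([]byte("MyBucket")).Delete([]byte("answer"))
})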

Autoincrementing integer for the bucket

By using the NextSequence() function, you can let Bolt determine a sequence which can be used as the unique identifier for your key/value pairs. See the example below.
// CreateUser saves u to the store. The new user ID is set on u once the data is persisted.
func (s *Store) CreateUser(u *User) error {
    return s.db.Update(func(tx *bolt.Tx) error {
        // Retrieve the users bucket.
        // This should be created when the DB is first opened.
        b := tx.Bucket([]byte("users"))

        // Generate ID for the user.
        // This returns an error only if the Tx is closed or not writeable.
        // That can't happen in an Update() call so I ignore the error check.
        id, _ := b.NextSequence()
        u.ID = int(id)

        // Marshal user data into bytes.
        buf, err := json.Marshal(u)
        if err != nil {
            return err
        }

        // Persist bytes to users bucket.
        return b.Put(itob(u.ID), buf)
    })
}

// itob returns an 8-byte big endian representation of v.
func itob(v int) []byte {
    b := make([]byte, 8)
    binary.BigEndian.PutUint64(b, uint64(v))
    return b
}

type User struct {
    ID int
    ...
}

Iterating over keys

Bolt stores its keys in byte-sorted order within a bucket. This makes sequential iteration over these keys extremely fast. To iterate over keys we'll use a Cursor:
db.View(func(tx *bolt.Tx) error {
    // Assume bucket exists and has keys
    b := tx.Bucket([]byte("MyBucket"))

    c := b.Cursor()

    for k, v := c.First(); k != nil; k, v = c.Next() {
        fmt.Printf("key=%s, value=%s\n", k, v)
    }

    return nil
})

The cursor allows you to move to a specific point in the list of keys and move forward or backward through the keys one at a time.

The following functions are available on the cursor:

First()  Move to the first key.
Last()   Move to the last key.
Seek()   Move to a specific key.
Next()   Move to the next key.
Prev()   Move to the previous key.
Each of those functions has a return signature of (key []byte, value []byte). When you have iterated to the end of the cursor, Next() will return a nil key. You must seek to a position using First(), Last(), or Seek() before calling Next() or Prev(). If you do not seek to a position, these functions will return a nil key.

During iteration, if the key is non-nil but the value is nil, the key refers to a bucket rather than a value. Use Bucket.Bucket() to access the sub-bucket.

Prefix scans

To iterate over a key prefix, you can combine Seek() and bytes.HasPrefix():
db.View(func(tx *bolt.Tx) error {
    // Assume bucket exists and has keys
    c := tx.Bucket([]byte("MyBucket")).Cursor()

    prefix := []byte("1234")
    for k, v := c.Seek(prefix); bytes.HasPrefix(k, prefix); k, v = c.Next() {
        fmt.Printf("key=%s, value=%s\n", k, v)
    }

    return nil
})

Range scans

Another common use case is scanning over a range such as a time range. If you use a sortable time encoding such as RFC3339 then you can query a specific date range like this:

db.View(func(tx *bolt.Tx) error {
    // Assume our events bucket exists and has RFC3339 encoded time keys.
    c := tx.Bucket([]byte("Events")).Cursor()

    // Our time range spans the 90's decade.
    min := []byte("1990-01-01T00:00:00Z")
    max := []byte("2000-01-01T00:00:00Z")

    // Iterate over the 90's.
    for k, v := c.Seek(min); k != nil && bytes.Compare(k, max) <= 0; k, v = c.Next() {
        fmt.Printf("%s: %s\n", k, v)
    }

    return nil
})

ForEach()

You can also use the function ForEach() if you know you'll be iterating over all the keys in a bucket:
db.View(func(tx *bolt.Tx) error {
    // Assume bucket exists and has keys
    b := tx.Bucket([]byte("MyBucket"))

    b.ForEach(func(k, v []byte) error {
        fmt.Printf("key=%s, value=%s\n", k, v)
        return nil
    })
    return nil
})

Nested buckets

You can also store a bucket in a key to create nested buckets. The API is the same as the bucket management API on the DB object:
func (*Bucket) CreateBucket(key []byte) (*Bucket, error)
func (*Bucket) CreateBucketIfNotExists(key []byte) (*Bucket, error)
func (*Bucket) DeleteBucket(key []byte) error
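As a sketch with hypothetical bucket names, creating a bucket nested inside another and reading it back with Bucket.Bucket():

db.Update(func(tx *bolt.Tx) error {
    // Create (or reuse) the parent bucket.
    parent, err := tx.CreateBucketIfNotExists([]byte("Accounts"))
    if err != nil {
        return err
    }

    // Create a bucket nested inside the parent.
    child, err := parent.CreateBucketIfNotExists([]byte("Settings"))
    if err != nil {
        return err
    }
    return child.Put([]byte("theme"), []byte("dark"))
})

db.View(func(tx *bolt.Tx) error {
    // Nested buckets are retrieved with Bucket.Bucket().
    child := tx.Bucket([]byte("Accounts")).Bucket([]byte("Settings"))
    fmt.Printf("theme=%s\n", child.Get([]byte("theme")))
    return nil
})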

Database backups

Bolt is a single file so it's easy to back up. You can use the Tx.WriteTo() function to write a consistent view of the database to a writer. If you call this from a read-only transaction, it will perform a hot backup and not block your other database reads and writes.

By default, it will use a regular file handle, which will utilize the operating system's page cache. See the Tx documentation for information about optimizing for larger-than-RAM datasets.

One common use case is to back up over HTTP so you can use a tool like cURL to do database backups:
func BackupHandleFunc(w http.ResponseWriter, req *http.Request) {
    err := db.View(func(tx *bolt.Tx) error {
        w.Header().Set("Content-Type", "application/octet-stream")
        w.Header().Set("Content-Disposition", `attachment; filename="my.db"`)
        w.Header().Set("Content-Length", strconv.Itoa(int(tx.Size())))
        _, err := tx.WriteTo(w)
        return err
    })
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
    }
}

Then you can backup using this command:

$ curl http://localhost/backup > my.db
Or you can open your browser to http://localhost/backup and it will download automatically.

If you want to back up to another file you can use the Tx.CopyFile() helper function.
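A minimal sketch of Tx.CopyFile() (the backup path here is just an example):

err := db.View(func(tx *bolt.Tx) error {
    // Write the entire database to the given path with the given file mode.
    return tx.CopyFile("/path/to/backup.db", 0600)
})
if err != nil {
    log.Fatal(err)
}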

Statistics

The database keeps a running count of many of the internal operations it performs so you can better understand what's going on. By grabbing a snapshot of these stats at two points in time we can see what operations were performed in that time range.

For example, we could start a goroutine to log stats every 10 seconds:

go func() {
    // Grab the initial stats.
    prev := db.Stats()

    for {
        // Wait for 10s.
        time.Sleep(10 * time.Second)

        // Grab the current stats and diff them.
        stats := db.Stats()
        diff := stats.Sub(&prev)

        // Encode stats to JSON and print to STDERR.
        json.NewEncoder(os.Stderr).Encode(diff)

        // Save stats for the next loop.
        prev = stats
    }
}()

It's also useful to pipe these stats to a service such as statsd for monitoring or to provide an HTTP endpoint that will perform a fixed-length sample.
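As a sketch of the HTTP-endpoint idea (the handler name and route are made up for illustration), you could expose the current stats snapshot as JSON:

// StatsHandleFunc returns the current database statistics as JSON.
func StatsHandleFunc(w http.ResponseWriter, req *http.Request) {
    w.Header().Set("Content-Type", "application/json")
    if err := json.NewEncoder(w).Encode(db.Stats()); err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
    }
}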

Read-Only Mode

Sometimes it is useful to create a shared, read-only Bolt database. To this end, you can set the Options.ReadOnly flag when opening your database. Read-only mode uses a shared lock to allow multiple processes to read from the database, but it will block any process from opening the database in read-write mode.
db, err := bolt.Open("my.db", 0666, &bolt.Options{ReadOnly: true})
if err != nil {
    log.Fatal(err)
}

Mobile Use (iOS/Android)

Bolt is able to run on mobile devices by leveraging the binding feature of the gomobile tool. Create a struct that will contain your database logic and a reference to a *bolt.DB, with an initializing constructor that takes in a filepath where the database file will be stored. Neither Android nor iOS require extra permissions or cleanup from using this method.

func NewBoltDB(filepath string) *BoltDB {
    db, err := bolt.Open(filepath+"/demo.db", 0600, nil)
    if err != nil {
        log.Fatal(err)
    }

    return &BoltDB{db}
}

type BoltDB struct {
    db *bolt.DB
    ...
}

func (b *BoltDB) Path() string {
    return b.db.Path()
}

func (b *BoltDB) Close() {
    b.db.Close()
}

Database logic should be defined as methods on this wrapper struct.
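For example, a sketch of such methods (the bucket name and method names are made up; gomobile bindings work best with basic types such as string):

// Put stores value under key in a hypothetical "data" bucket.
func (b *BoltDB) Put(key, value string) error {
    return b.db.Update(func(tx *bolt.Tx) error {
        bkt, err := tx.CreateBucketIfNotExists([]byte("data"))
        if err != nil {
            return err
        }
        return bkt.Put([]byte(key), []byte(value))
    })
}

// Get returns the value stored under key, or an empty string if it is missing.
func (b *BoltDB) Get(key string) (string, error) {
    var value string
    err := b.db.View(func(tx *bolt.Tx) error {
        bkt := tx.Bucket([]byte("data"))
        if bkt == nil {
            return nil
        }
        value = string(bkt.Get([]byte(key)))
        return nil
    })
    return value, err
}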

To initialize this struct from the native language (both platforms now sync their local storage to the cloud, so these snippets disable that functionality for the database file):

Android

String path;
if (android.os.Build.VERSION.SDK_INT >= android.os.Build.VERSION_CODES.LOLLIPOP) {
    path = getNoBackupFilesDir().getAbsolutePath();
} else {
    path = getFilesDir().getAbsolutePath();
}
Boltmobiledemo.BoltDB boltDB = Boltmobiledemo.NewBoltDB(path);

iOS

- (void)demo {
    NSString* path = [NSSearchPathForDirectoriesInDomains(NSLibraryDirectory,
                                                          NSUserDomainMask,
                                                          YES) objectAtIndex:0];
    GoBoltmobiledemoBoltDB * demo = GoBoltmobiledemoNewBoltDB(path);
    [self addSkipBackupAttributeToItemAtPath:demo.path];
    //Some DB Logic would go here
    [demo close];
}

- (BOOL)addSkipBackupAttributeToItemAtPath:(NSString *) filePathString
{
    NSURL* URL= [NSURL fileURLWithPath: filePathString];
    assert([[NSFileManager defaultManager] fileExistsAtPath: [URL path]]);

    NSError *error = nil;
    BOOL success = [URL setResourceValue: [NSNumber numberWithBool: YES]
                                  forKey: NSURLIsExcludedFromBackupKey error: &error];
    if(!success){
        NSLog(@"Error excluding %@ from backup %@", [URL lastPathComponent], error);
    }
    return success;
}

Resources

For more information on getting started with Bolt, check out the following articles:

Comparison with other databases

Postgres, MySQL, & other relational databases

Relational databases structure data into rows and are only accessible through the use of SQL. This approach provides flexibility in how you store and query your data but also incurs overhead in parsing and planning SQL statements. Bolt accesses all data by a byte slice key. This makes Bolt fast to read and write data by key but provides no built-in support for joining values together.

Most relational databases (with the exception of SQLite) are standalone servers that run separately from your application. This gives your systems flexibility to connect multiple application servers to a single database server but also adds overhead in serializing and transporting data over the network. Bolt runs as a library included in your application so all data access has to go through your application's process. This brings data closer to your application but limits multi-process access to the data.

LevelDB, RocksDB

LevelDB and its derivatives (RocksDB, HyperLevelDB) are similar to Bolt in that they are libraries bundled into the application, however, their underlying structure is a log-structured merge-tree (LSM tree). An LSM tree optimizes random writes by using a write ahead log and multi-tiered, sorted files called SSTables. Bolt uses a B+tree internally and only a single file. Both approaches have trade-offs.

If you require a high random write throughput (>10,000 w/sec) or you need to use spinning disks then LevelDB could be a good choice. If your application is read-heavy or does a lot of range scans then Bolt could be a good choice.

One other important consideration is that LevelDB does not have transactions. It supports batch writing of key/values pairs and it supports read snapshots but it will not give you the ability to do a compare-and-swap operation safely. Bolt supports fully serializable ACID transactions.
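To illustrate the compare-and-swap point, here is a sketch (assuming MyBucket exists; the key and values are hypothetical) of reading and conditionally updating a value inside a single serializable transaction:

err := db.Update(func(tx *bolt.Tx) error {
    b := tx.Bucket([]byte("MyBucket"))
    // Read and conditionally write inside the same transaction.
    if !bytes.Equal(b.Get([]byte("state")), []byte("old")) {
        return fmt.Errorf("compare failed: state was modified")
    }
    return b.Put([]byte("state"), []byte("new"))
})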

LMDB

Bolt was originally a port of LMDB so it is architecturally similar. Both use a B+tree, have ACID semantics with fully serializable transactions, and support lock-free MVCC using a single writer and multiple readers.

The two projects have somewhat diverged. LMDB heavily focuses on raw performance while Bolt has focused on simplicity and ease of use. For example, LMDB allows several unsafe actions such as direct writes for the sake of performance. Bolt opts to disallow actions which can leave the database in a corrupted state. The only exception to this in Bolt is DB.NoSync.

There are also a few differences in API. LMDB requires a maximum mmap size when opening an mdb_env whereas Bolt will handle incremental mmap resizing automatically. LMDB overloads the getter and setter functions with multiple flags whereas Bolt splits these specialized cases into their own functions.
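A sketch of the DB.NoSync trade-off (only appropriate for bulk loads that can be restarted from scratch after a crash):

db, err := bolt.Open("my.db", 0600, nil)
if err != nil {
    log.Fatal(err)
}
// Skip the fsync() after each commit. Bulk loading becomes much faster,
// but a system crash can leave the database corrupted or missing recent writes.
db.NoSync = true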

Caveats & Limitations

It's important to pick the right tool for the job and Bolt is no exception. Here are a few things to note when evaluating and using Bolt:

Bolt is good for read-intensive workloads. Sequential write performance is also fast, but random writes can be slow. You can use DB.Batch() or add a write-ahead log to help mitigate this issue.

Byte slices returned from Bolt are only valid during a transaction. Once the transaction has been committed or rolled back, the memory they point to can be reused by a new page or unmapped from virtual memory, and you'll see an unexpected fault address panic when accessing it.

Be careful when using Bucket.FillPercent. Setting a high fill percent for buckets that have random inserts will cause your database to have very poor page utilization.

Reading the Source

Bolt is a relatively small code base (<3KLOC) for an embedded, serializable, transactional key/value database so it can be a good starting point for people interested in how databases work.

The best places to start are the main entry points into Bolt:

Open() - Initializes the reference to the database. It's responsible for creating the database if it doesn't exist, obtaining an exclusive lock on the file, reading the meta pages, and memory-mapping the file.

DB.Begin() - Starts a read-only or read-write transaction depending on the value of the writable argument. Only one read-write transaction can exist at a time.

Bucket.Put() - Writes a key/value pair into a bucket. After validating the arguments, a cursor traverses the B+tree to the position where the key and value will be written; the affected pages are materialized in memory as nodes and flushed to disk during commit.

Bucket.Get() - Retrieves a key/value pair from a bucket. It uses a cursor to move to the page and position of the pair; in a read-only transaction the data is returned as a direct reference into the memory-mapped file so there is no allocation overhead.

Cursor - Traverses the B+tree of on-disk pages and in-memory nodes. It can seek to a specific key, move to the first or last value, or move forward and backward.

Tx.Commit() - Converts the in-memory dirty nodes and the list of free pages into pages to be written to disk. Writing occurs in two phases: first the dirty pages are written and an fsync() occurs, then a new meta page with an incremented transaction ID is written and another fsync() occurs. This two-phase write ensures that partially written data pages are ignored in the event of a crash.

If you have additional notes that could be helpful for others, please submit them via pull request.

Other Projects Using Bolt

Below is a list of public, open source projects that use Bolt:

If you are using Bolt in a project please send a pull request to add it to the list.