# hyperkv

p2p key/value store over a hyperlog using a multi-value register conflict strategy
# example

``` js
var hyperkv = require('hyperkv')
var hyperlog = require('hyperlog')
var sub = require('subleveldown')
var level = require('level')

var db = level('/tmp/kv.db')
var kv = hyperkv({
  log: hyperlog(sub(db, 'log'), { valueEncoding: 'json' }),
  db: sub(db, 'kv')
})

var key = process.argv[2]
var value = process.argv[3]
kv.put(key, value, function (err, node) {
  if (err) console.error(err)
  else console.log(node.key)
})
```

```
$ node example/put.js greeting hello
1f1b819cd3f7d379914037b473a855b7867f71c76126e379c91cbb31df2a859b
$ node example/put.js greeting 'beep boop'
eadb22a224313d5fb5b811e50915f16491e7714dd32b83503c1e1a1db2bd9e9b
```
``` js
var hyperkv = require('hyperkv')
var hyperlog = require('hyperlog')
var sub = require('subleveldown')
var level = require('level')

var db = level('/tmp/kv.db')
var kv = hyperkv({
  log: hyperlog(sub(db, 'log'), { valueEncoding: 'json' }),
  db: sub(db, 'kv')
})

var key = process.argv[2]
kv.get(key, function (err, values) {
  if (err) console.error(err)
  else console.log(values)
})
```

```
$ node example/get.js greeting
{ eadb22a224313d5fb5b811e50915f16491e7714dd32b83503c1e1a1db2bd9e9b: { value: 'beep boop' } }
```
# api

``` js
var hyperkv = require('hyperkv')
```
## kv.put(key, value, opts={}, cb)

Set `key` to a json `value`.

If `opts.links` is set, refer to previously set keys. Otherwise, the key will refer to the current "head" key hashes.

If `opts.fields` is set, merge the object properties of `opts.fields` into the raw document that is stored in the db alongside the `k` and `v` properties.

`cb(err, node)` fires from the underlying `log.add()` call.
## kv.get(key, opts={}, cb)

Get the current values for `key` as `cb(err, values)`, where `values` maps hyperlog hashes to set values.

Each value is an object of one of two forms:

* `{ value: ... }` - the value of the key
* `{ deleted: true }` - a tombstone indicating the key has been deleted

It is possible to receive both forms for the same key; it is left up to the API consumer to decide how the data is best interpreted.

If there are no known values for `key`, `values` will be `{}`.

If `opts.fields` is true, include the raw document as each value instead of individual values.
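Since a `values` map can mix live values and tombstones, consumers need a policy for interpreting it. One possible policy (illustrative only, not part of hyperkv; `interpret` and the hashes are made up) is to treat the key as deleted only when every head is a tombstone:

``` js
// Interpret a kv.get() style values map: keep live values, and consider
// the key deleted only when every entry is a tombstone.
function interpret (values) {
  var hashes = Object.keys(values)
  var live = {}
  hashes.forEach(function (h) {
    if (!values[h].deleted) live[h] = values[h].value
  })
  if (hashes.length === 0) return { state: 'missing' }
  if (Object.keys(live).length === 0) return { state: 'deleted' }
  return { state: 'present', values: live }
}

// e.g. a fork where one side deleted the key and the other updated it:
var mixed = {
  'eadb22a2': { value: 'beep boop' },
  'f00dcafe': { deleted: true }
}
console.log(interpret(mixed)) // { state: 'present', values: { eadb22a2: 'beep boop' } }
console.log(interpret({}))    // { state: 'missing' }
```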
## kv.del(key, opts={}, cb)

Remove `key`.

If `opts.links` is set, refer to previously set keys. Otherwise, the key will refer to the current "head" key hashes.

Note that keys are only removed with respect to `opts.links`, not globally, and that edits made in forks may cause deleted keys to "reappear". This is by design.

`cb(err, node)` fires from the underlying `log.add()` call.

If `opts.fields` is set, merge the object properties of `opts.fields` into the raw document that is stored in the db alongside the `d` property.
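The "reappearing key" behavior falls out of deletes being relative to their links. A rough sketch (toy heads bookkeeping with a naive union merge, not hyperkv code; `advance` and the version names are made up):

``` js
// A new node (put or del) replaces exactly the heads it links to.
function advance (heads, links, hash) {
  return heads.filter(function (h) { return links.indexOf(h) < 0 }).concat(hash)
}

var heads = advance([], [], 'v1')           // put: heads are ['v1']
var fork = advance(heads, ['v1'], 'v2')     // peer A edits: A's heads are ['v2']
// peer B, unaware of v2, deletes relative to v1:
var afterDel = advance(heads, ['v1'], 'd1') // B's heads are ['d1']
// after replication the head sets merge, so v2 survives the delete:
var merged = afterDel
  .filter(function (h) { return fork.indexOf(h) < 0 })
  .concat(fork)
console.log(merged) // [ 'd1', 'v2' ]: a tombstone and a live value coexist
```

The delete `d1` only removed `v1`; peer A's concurrent edit `v2` was never linked by the delete, so the key is live again once the peers replicate.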
## kv.batch(rows, opts={}, cb)

Insert an array of documents `rows` atomically into the database.

Each `row` object in the `rows` array should have:

* `row.type` - required, one of `'put'` or `'del'`
* `row.key` - required key string
* `row.value` - value, required when `row.type === 'put'`
* `row.links` - optional array of ancestor hashes; defaults to the most recent heads for the key

`cb(err, nodes)` fires from the underlying `log.batch()` call.
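The row requirements above can be checked up front before handing rows to `kv.batch()`. A sketch of such a check (`validateRows` is a made-up helper, not part of hyperkv):

``` js
// Validate batch rows against the shape described above.
function validateRows (rows) {
  rows.forEach(function (row, i) {
    if (row.type !== 'put' && row.type !== 'del') {
      throw new Error('row ' + i + ': type must be "put" or "del"')
    }
    if (typeof row.key !== 'string') {
      throw new Error('row ' + i + ': key must be a string')
    }
    if (row.type === 'put' && row.value === undefined) {
      throw new Error('row ' + i + ': value is required for puts')
    }
    if (row.links !== undefined && !Array.isArray(row.links)) {
      throw new Error('row ' + i + ': links must be an array of hashes')
    }
  })
  return rows
}

validateRows([
  { type: 'put', key: 'greeting', value: 'hello' },
  { type: 'del', key: 'name' } // no value needed; links default to current heads
])
```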
## var stream = kv.createReadStream(opts)

Create a readable object-mode `stream` with a row for each key/values pair in the store.

Each object `row` has:

* `row.key` - the key set with `.put()`
* `row.links` - array of hashes that are the current holders for the key
* `row.values` - object mapping hashes to values

Optionally:

* `opts.values` - set to `false` to skip populating `row.values`, which requires an extra lookup in the implementation
* `opts.fields` - when true, include the full document instead of the value
* `opts.live` - when true, keep the stream open and add additional matching results as they are written to the db
* `opts.limit` - if specified, close after this many documents
## var stream = kv.createHistoryStream(key, opts)

Create a readable object-mode `stream` with the history of `key`.

Each `row` object in the output stream has:

* `row.key` - the key (as in key/value) of the document
* `row.link` - the hyperlog key (version hash) of the current document
* `row.links` - array of version hashes that are ancestors of this document
* `row.value` - value associated with this document

Documents always appear before their ancestors, but documents in a fork have an undefined ordering relative to each other, so you may want to topologically sort the output before displaying it.
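A sketch of such a sort over collected history rows, walking `row.links` so ancestors come out first (`oldestFirst` and the hashes are made up for illustration):

``` js
// Reorder history rows oldest-first by visiting each row's ancestors
// (row.links) before the row itself: a depth-first topological sort.
function oldestFirst (rows) {
  var byLink = {}
  rows.forEach(function (row) { byLink[row.link] = row })
  var ordered = []
  var visited = {}
  function visit (link) {
    if (!byLink[link] || visited[link]) return
    visited[link] = true
    byLink[link].links.forEach(visit) // ancestors first
    ordered.push(byLink[link])
  }
  rows.forEach(function (row) { visit(row.link) })
  return ordered
}

// rows as emitted by a history stream: descendants before ancestors
var rows = [
  { key: 'greeting', link: 'c3', links: ['b2'], value: 'howdy' },
  { key: 'greeting', link: 'b2', links: ['a1'], value: 'beep boop' },
  { key: 'greeting', link: 'a1', links: [], value: 'hello' }
]
console.log(oldestFirst(rows).map(function (r) { return r.link }))
// [ 'a1', 'b2', 'c3' ]
```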
## kv.on('put', function (key, node) {})

Whenever a node is put, this event fires.

## kv.on('update', function (key, node) {})

Whenever the indexes update through a put or replication, this event fires with the underlying `node` object from the hyperlog.
# usage

This package ships with a `hyperkv` command:

```
hyperkv put KEY VALUE {OPTIONS}

  Insert a json VALUE at KEY.

  --links  Comma-separated list of ancestor hashes.

hyperkv get KEY

  Print a json object for the values at KEY, mapping hashes to values.

hyperkv list

  Print a list of keys and values as json, one per line.

hyperkv push
hyperkv pull
hyperkv sync

  Replicate with another hyperkv using stdin and stdout.
```
# install

```
npm install hyperkv
```

To get the `hyperkv` command, install globally:

```
npm install -g hyperkv
```
# license

BSD