Commit f980471f authored by Kirill Smelkov

wcfs: zdata: ΔFtail

ΔFtail builds on ΔBtail and provides ZBigFile-level history that WCFS
will use to compute which blocks of a ZBigFile need to be invalidated in
the OS file cache, given raw ZODB changes from a ZODB invalidation message.

It will also be used by WCFS to implement the isolation protocol, where on
every FUSE READ request WCFS will query ΔFtail to find out the revision of
the corresponding file block.

Quoting ΔFtail documentation:

---- 8< ----

ΔFtail provides ZBigFile-level history tail.

It translates ZODB object-level changes to information about which blocks of
which ZBigFile were modified, and provides a service to query that information.

ΔFtail class documentation
~~~~~~~~~~~~~~~~~~~~~~~~~~

ΔFtail represents tail of revisional changes to files.

It semantically consists of

    []δF			; rev ∈ (tail, head]

where δF represents a change in files space

    δF:
    	.rev↑
    	{} file ->  {}blk | EPOCH

Only files and blocks explicitly requested to be tracked are guaranteed to
be present. In particular, a block that was not explicitly requested to be
tracked, even if it was changed in δZ, is not guaranteed to be present in δF.

After a file epoch (file creation, deletion, or any other change to the file
object) previous track requests for that file become forgotten and have no
further effect.

ΔFtail provides the following operations:

  .Track(file, blk, path, zblk)	- add file and block reached via BTree path to tracked set.

  .Update(δZ) -> δF				- update files δ tail given raw ZODB changes
  .ForgetPast(revCut)			- forget changes ≤ revCut
  .SliceByRev(lo, hi) -> []δF		- query for all file changes with rev ∈ (lo, hi]
  .SliceByFileRev(file, lo, hi) -> []δfile	- query for changes of a file with rev ∈ (lo, hi]
  .BlkRevAt(file, #blk, at) -> blkrev	- query for the last revision that changed
    					  file[#blk] as of @at database state.

where δfile represents a change to one file

    δfile:
    	.rev↑
    	{}blk | EPOCH

See also zodb.ΔTail and xbtree.ΔBtail

Concurrency

ΔFtail is safe to use in single-writer / multiple-readers mode. That is, at
any time there should be either only a sole writer, or potentially several
simultaneous readers. The table below classifies operations:

    Writers:  Update, ForgetPast
    Readers:  Track + all queries (SliceByRev, SliceByFileRev, BlkRevAt)

Note that, in particular, it is correct to run multiple Track and query
requests simultaneously.

ΔFtail organization
~~~~~~~~~~~~~~~~~~~

ΔFtail leverages:

    - ΔBtail to track changes to ZBigFile.blktab BTree, and
    - ΔZtail to track changes to ZBlk objects and to ZBigFile object itself.

Then every query merges ΔBtail and ΔZtail data on the fly to provide a
ZBigFile-level result.

Merging on the fly, as opposed to computing and maintaining vδF data, is done
to avoid the complexity of recomputing vδF when the tracking set changes. Most
of ΔFtail complexity is, thus, located in ΔBtail, which implements BTree diff
and handles the complexity of recomputing vδB when the set of tracked blocks
changes after new track requests.

Changes to the ZBigFile object itself indicate epochs. An epoch can be:

    - file creation or deletion,
    - change of ZBigFile.blksize,
    - change of ZBigFile.blktab to point to another BTree.

Epochs represent major changes to file history where the file is assumed to
change so dramatically that, practically, it can be considered a
"whole" change. In particular, WCFS, upon seeing a ZBigFile epoch,
invalidates all data in the corresponding OS-level cache for the file.

The only historical data that ΔFtail maintains by itself is the history of
epochs. That history does not need to be recomputed when more blocks become
tracked and is thus easy to maintain. It also can be maintained only in
ΔFtail because ΔBtail and ΔZtail do not "know" anything about ZBigFile.

Concurrency

In order to allow multiple Track and query requests to be served in
parallel, ΔFtail bases its concurrency promise on ΔBtail guarantees +
snapshot-style access for vδE and ztrackInBlk in queries:

1. Track calls ΔBtail.Track and quickly updates .byFile, .byRoot and
   _RootTrack indices under a lock.

2. BlkRevAt queries ΔBtail.GetAt and then combines retrieved information
   about zblk with vδE and δZ.

3. SliceByFileRev queries ΔBtail.SliceByRootRev and then merges retrieved
   vδT data with vδZ, vδE and ztrackInBlk.

4. In queries vδE is retrieved/built in snapshot style similarly to how vδT
   is built in ΔBtail. Note that vδE needs to be built only the first time,
   and does not need to be further rebuilt, so the logic in ΔFtail is simpler
   compared to ΔBtail.

5. For ztrackInBlk - which is used by the SliceByFileRev query - an atomic
   snapshot is retrieved for the objects of interest. This allows holding the
   δFtail.mu lock for a relatively brief time without blocking other parallel
   Track/query requests for long.

Combined, this organization allows non-overlapping queries/track-requests
to run simultaneously. (This property is essential to WCFS because otherwise
WCFS would not be able to serve several non-overlapping READ requests to one
file in parallel.)

See also "Concurrency" in ΔBtail organization for more details.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Some preliminary history:

ef74aebc    X ΔFtail: Keep reference to ZBigFile via Oid, not via *ZBigFile
bf9a7405    X No longer rely on ZODB cache invariant for invalidations
46340069    X found by Random
e7b598c6    X start of ΔFtail.SliceByFileRev rework to function via merging δB and δZ histories on the fly
59c83009    X ΔFtail.SliceByFileRoot tests started to work draftly after "on-the-fly" rework
210e9b07    X Fix ΔBtail.SliceByRootRev (lo,hi] handling
bf3ace66    X ΔFtail: Rebuild vδE after first track
46624787    X ΔFtail: `go test -failfast -short -v -run Random -randseed=1626793016249041295` discovered problems
786dd336    X Size no longer tracks [0,∞) since we start tracking when zfile is non-empty
4f707117    X test that shows problem of SliceByRootRev where untracked blocks are not added uniformly into whole history
c0b7e4c3    X ΔFtail.SliceByFileRev: Fix untracked entries to be present uniformly in result
aac37c11    X zdata: Introduce T to start removing duplication in tests
bf411aa9    X zdata: Deduplicate zfile loading
b74dda09    X Start switching Track from Track(key) to Track(keycov)
aa0288ce    X Switch SliceByRootRev to vδTSnapForTracked
588a512a    X zdata: Switch SliceByFileRev not to clone Zinblk
8b5d8523    X Move tracking of which blocks were accessed from wcfs to ΔFtail
30f5ddc7    ΔFtail += .Epoch in δf
22f5f096    X Rework ΔFtail so that BlkRevAt works with ZBigFile checkout from any at ∈ (tail, head]
0853cc9f    X ΔFtail + tests
124688f9    X ΔFtail fixes
d85bb82c    ΔFtail concurrency
parent 2ab4be93
@@ -19,7 +19,7 @@
 // Package zdata provides access for wendelin.core in-ZODB data.
 //
-// ZBlk* + ZBigFile.
+// ZBlk* + ZBigFile + ΔFtail for ZBigFile-level ZODB history.
 package zdata
 // module: "wendelin.bigfile.file_zodb"
// Copyright (C) 2019-2021 Nexedi SA and Contributors.
// Kirill Smelkov <kirr@nexedi.com>
//
// This program is free software: you can Use, Study, Modify and Redistribute
// it under the terms of the GNU General Public License version 3, or (at your
// option) any later version, as published by the Free Software Foundation.
//
// You can also Link and Combine this program with other software covered by
// the terms of any of the Free Software licenses or any of the Open Source
// Initiative approved licenses and Convey the resulting work. Corresponding
// source of such a combination shall include the source code for all other
// software used.
//
// This program is distributed WITHOUT ANY WARRANTY; without even the implied
// warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
//
// See COPYING file for full licensing terms.
// See https://www.nexedi.com/licensing for rationale and options.
package zdata
// ΔFtail provides ZBigFile-level history tail.
//
// It translates ZODB object-level changes to information about which blocks of
// which ZBigFile were modified, and provides a service to query that information.
// See ΔFtail class documentation for usage details.
//
//
// ΔFtail organization
//
// ΔFtail leverages:
//
// - ΔBtail to track changes to ZBigFile.blktab BTree, and
// - ΔZtail to track changes to ZBlk objects and to ZBigFile object itself.
//
// Then every query merges ΔBtail and ΔZtail data on the fly to provide a
// ZBigFile-level result.
//
// Merging on the fly, as opposed to computing and maintaining vδF data, is done
// to avoid the complexity of recomputing vδF when the tracking set changes. Most
// of ΔFtail complexity is, thus, located in ΔBtail, which implements BTree diff
// and handles the complexity of recomputing vδB when the set of tracked blocks
// changes after new track requests.
//
// Changes to the ZBigFile object itself indicate epochs. An epoch can be:
//
// - file creation or deletion,
// - change of ZBigFile.blksize,
// - change of ZBigFile.blktab to point to another BTree.
//
// Epochs represent major changes to file history where the file is assumed to
// change so dramatically that, practically, it can be considered a
// "whole" change. In particular, WCFS, upon seeing a ZBigFile epoch,
// invalidates all data in the corresponding OS-level cache for the file.
//
// The only historical data that ΔFtail maintains by itself is the history of
// epochs. That history does not need to be recomputed when more blocks become
// tracked and is thus easy to maintain. It also can be maintained only in
// ΔFtail because ΔBtail and ΔZtail do not "know" anything about ZBigFile.
//
//
// Concurrency
//
// In order to allow multiple Track and query requests to be served in
// parallel, ΔFtail bases its concurrency promise on ΔBtail guarantees +
// snapshot-style access for vδE and ztrackInBlk in queries:
//
// 1. Track calls ΔBtail.Track and quickly updates .byFile, .byRoot and
// _RootTrack indices under a lock.
//
// 2. BlkRevAt queries ΔBtail.GetAt and then combines retrieved information
// about zblk with vδE and δZ.
//
// 3. SliceByFileRev queries ΔBtail.SliceByRootRev and then merges retrieved
// vδT data with vδZ, vδE and ztrackInBlk.
//
// 4. In queries vδE is retrieved/built in snapshot style similarly to how vδT
// is built in ΔBtail. Note that vδE needs to be built only the first time,
// and does not need to be further rebuilt, so the logic in ΔFtail is simpler
// compared to ΔBtail.
//
// 5. For ztrackInBlk - which is used by the SliceByFileRev query - an atomic
// snapshot is retrieved for the objects of interest. This allows holding the
// δFtail.mu lock for a relatively brief time without blocking other parallel
// Track/query requests for long.
//
// Combined, this organization allows non-overlapping queries/track-requests
// to run simultaneously.
//
// See also "Concurrency" in ΔBtail organization for more details.
import (
"context"
"fmt"
"sort"
"sync"
"lab.nexedi.com/kirr/go123/xerr"
"lab.nexedi.com/kirr/neo/go/transaction"
"lab.nexedi.com/kirr/neo/go/zodb"
"lab.nexedi.com/kirr/neo/go/zodb/btree"
"lab.nexedi.com/nexedi/wendelin.core/wcfs/internal/set"
"lab.nexedi.com/nexedi/wendelin.core/wcfs/internal/xbtree"
"lab.nexedi.com/nexedi/wendelin.core/wcfs/internal/xtail"
"lab.nexedi.com/nexedi/wendelin.core/wcfs/internal/xzodb"
)
type setI64 = set.I64
type setOid = set.Oid
// ΔFtail represents tail of revisional changes to files.
//
// It semantically consists of
//
// []δF ; rev ∈ (tail, head]
//
// where δF represents a change in files space
//
// δF:
// .rev↑
// {} file -> {}blk | EPOCH
//
// Only files and blocks explicitly requested to be tracked are guaranteed to
// be present. In particular, a block that was not explicitly requested to be
// tracked, even if it was changed in δZ, is not guaranteed to be present in δF.
//
// After a file epoch (file creation, deletion, or any other change to the file
// object) previous track requests for that file become forgotten and have no
// further effect.
//
// ΔFtail provides the following operations:
//
// .Track(file, blk, path, zblk) - add file and block reached via BTree path to tracked set.
//
// .Update(δZ) -> δF - update files δ tail given raw ZODB changes
// .ForgetPast(revCut) - forget changes ≤ revCut
// .SliceByRev(lo, hi) -> []δF - query for all file changes with rev ∈ (lo, hi]
// .SliceByFileRev(file, lo, hi) -> []δfile - query for changes of a file with rev ∈ (lo, hi]
// .BlkRevAt(file, #blk, at) -> blkrev - query for the last revision that changed
// file[#blk] as of @at database state.
//
// where δfile represents a change to one file
//
// δfile:
// .rev↑
// {}blk | EPOCH
//
// See also zodb.ΔTail and xbtree.ΔBtail
//
//
// Concurrency
//
// ΔFtail is safe to use in single-writer / multiple-readers mode. That is, at
// any time there should be either only a sole writer, or potentially several
// simultaneous readers. The table below classifies operations:
//
// Writers: Update, ForgetPast
// Readers: Track + all queries (SliceByRev, SliceByFileRev, BlkRevAt)
//
// Note that, in particular, it is correct to run multiple Track and query
// requests simultaneously.
type ΔFtail struct {
// ΔFtail merges ΔBtail with history of ZBlk
δBtail *xbtree.ΔBtail
// mu protects ΔFtail data _and_ all _ΔFileTail/_RootTrack data for all files and roots.
//
// NOTE: even though this lock is global it is used only for brief periods of time. In
// particular working with retrieved vδE and Zinblk snapshot does not need to hold the lock.
mu sync.Mutex
byFile map[zodb.Oid]*_ΔFileTail // file -> vδf tail
byRoot map[zodb.Oid]*_RootTrack // tree-root -> ({foid}, Zinblk) as of @head
// set of files, which are newly tracked and for which byFile[foid].vδE was not yet rebuilt
ftrackNew setOid // {}foid
// set of tracked ZBlk objects mapped to tree roots as of @head
ztrackInRoot map[zodb.Oid]setOid // {} zblk -> {}root
}
// _RootTrack represents tracking information about one particular tree as of @head.
type _RootTrack struct {
ftrackSet setOid // {}foid which ZBigFiles refer to this tree
ztrackInBlk map[zodb.Oid]setI64 // {} zblk -> {}blk which blocks map to zblk
}
// _ΔFileTail represents tail of revisional changes to one file.
type _ΔFileTail struct {
root zodb.Oid // .blktab as of @head
vδE []_ΔFileEpoch // epochs (changes to ZBigFile object itself) ; nil if not yet rebuilt
rebuildJob *_RebuildJob // !nil if vδE rebuild is currently in-progress
btrackReqSet setI64 // set of blocks explicitly requested to be tracked in this file
}
// _ΔFileEpoch represent a change to ZBigFile object.
type _ΔFileEpoch struct {
Rev zodb.Tid
oldRoot zodb.Oid // .blktab was pointing to oldRoot before ; VDEL if ZBigFile deleted
newRoot zodb.Oid // .blktab was changed to point to newRoot ; ----//----
oldBlkSize int64 // .blksize was oldBlkSize ; -1 if ZBigFile deleted
newBlkSize int64 // .blksize was changed to newBlkSize ; ----//----
// snapshot of ztrackInBlk for this file right before this epoch
oldZinblk map[zodb.Oid]setI64 // {} zblk -> {}blk
}
// _RebuildJob represents currently in-progress vδE rebuilding job.
type _RebuildJob struct {
ready chan struct{} // closed when job completes
err error
}
// ΔF represents a change in files space.
type ΔF struct {
Rev zodb.Tid
ByFile map[zodb.Oid]*ΔFile // foid -> δfile
}
// ΔFile represents a change to one file.
type ΔFile struct {
Rev zodb.Tid
Epoch bool // whether file changed completely
Blocks setI64 // changed blocks
Size bool // whether file size changed
}
// NewΔFtail creates new ΔFtail object.
//
// Initial tracked set is empty.
// Initial coverage of created ΔFtail is (at₀, at₀].
//
// db will be used by ΔFtail to open database connections to load data from
// ZODB when needed.
func NewΔFtail(at0 zodb.Tid, db *zodb.DB) *ΔFtail {
return &ΔFtail{
δBtail: xbtree.NewΔBtail(at0, db),
byFile: map[zodb.Oid]*_ΔFileTail{},
byRoot: map[zodb.Oid]*_RootTrack{},
ftrackNew: setOid{},
ztrackInRoot: map[zodb.Oid]setOid{},
}
}
// (tail, head] coverage
func (δFtail *ΔFtail) Head() zodb.Tid { return δFtail.δBtail.Head() }
func (δFtail *ΔFtail) Tail() zodb.Tid { return δFtail.δBtail.Tail() }
// ---- Track/rebuild/Update/Forget ----
// Track associates file[blk]@head with zblk object and file[blkcov]@head with tree path.
//
// Path root becomes associated with the file, and the path and zblk object become tracked.
// One root can be associated with several files (each provided on different Track calls).
//
// zblk can be nil, which represents a hole.
// blk can be < 0, which requests not to establish file[blk] -> zblk
// association. zblk must be nil in this case.
//
// Objects in path and zblk must have .PJar().At() == δFtail.head
func (δFtail *ΔFtail) Track(file *ZBigFile, blk int64, path []btree.LONode, blkcov btree.LKeyRange, zblk ZBlk) {
head := δFtail.Head()
fileAt := file.PJar().At()
if fileAt != head {
panicf("file.at (@%s) != δFtail.head (@%s)", fileAt, head)
}
if zblk != nil {
zblkAt := zblk.PJar().At()
if zblkAt != head {
panicf("zblk.at (@%s) != δFtail.head (@%s)", zblkAt, head)
}
}
// path.at == head is verified by ΔBtail.Track
foid := file.POid()
δFtail.δBtail.Track(path, blkcov)
rootObj := path[0].(*btree.LOBTree)
root := rootObj.POid()
δFtail.mu.Lock()
defer δFtail.mu.Unlock()
rt, ok := δFtail.byRoot[root]
if !ok {
rt = &_RootTrack{
ftrackSet: setOid{},
ztrackInBlk: map[zodb.Oid]setI64{},
}
δFtail.byRoot[root] = rt
}
rt.ftrackSet.Add(foid)
δftail, ok := δFtail.byFile[foid]
if !ok {
δftail = &_ΔFileTail{
root: root,
vδE: nil /*will need to be rebuilt to past till tail*/,
btrackReqSet: setI64{},
}
δFtail.byFile[foid] = δftail
δFtail.ftrackNew.Add(foid)
}
if δftail.root != root {
// .root can change during epochs, but in between them it must be stable
panicf("BUG: zfile<%s> root mutated from %s -> %s", foid, δftail.root, root)
}
if blk >= 0 {
δftail.btrackReqSet.Add(blk)
}
// associate zblk with root, if it is not a hole
if zblk != nil {
if blk < 0 {
panicf("BUG: zfile<%s>: blk=%d, but zblk != nil", foid, blk)
}
zoid := zblk.POid()
inroot, ok := δFtail.ztrackInRoot[zoid]
if !ok {
inroot = make(setOid, 1)
δFtail.ztrackInRoot[zoid] = inroot
}
inroot.Add(root)
inblk, ok := rt.ztrackInBlk[zoid]
if !ok {
inblk = make(setI64, 1)
rt.ztrackInBlk[zoid] = inblk
}
inblk.Add(blk)
}
}
// vδEForFile returns vδE and current root for specified file.
//
// It builds vδE for that file if there is such need.
// The only case when vδE actually needs to be built is when the file just started to be tracked.
//
// It also returns δftail for convenience.
// NOTE access to returned δftail must be protected via δFtail.mu.
func (δFtail *ΔFtail) vδEForFile(foid zodb.Oid) (vδE []_ΔFileEpoch, headRoot zodb.Oid, δftail *_ΔFileTail, err error) {
δFtail.mu.Lock() // TODO verify that there is no in-progress writers
defer δFtail.mu.Unlock()
δftail = δFtail.byFile[foid]
root := δftail.root
vδE = δftail.vδE
if vδE != nil {
return vδE, root, δftail, nil
}
// vδE needs to be built
job := δftail.rebuildJob
// rebuild is currently in-progress -> wait for corresponding job to complete
if job != nil {
δFtail.mu.Unlock()
<-job.ready
δFtail.mu.Lock() // relock unconditionally - the deferred Unlock expects mu to be held on return
if job.err == nil {
vδE = δftail.vδE
}
return vδE, root, δftail, job.err
}
// we become responsible to build vδE
// release the lock while building to allow simultaneous access to other files
job = &_RebuildJob{ready: make(chan struct{})}
δftail.rebuildJob = job
δFtail.ftrackNew.Del(foid)
δBtail := δFtail.δBtail
δFtail.mu.Unlock()
vδE, err = vδEBuild(foid, δBtail.ΔZtail(), δBtail.DB())
δFtail.mu.Lock()
if err == nil {
δftail.vδE = vδE
} else {
δFtail.ftrackNew.Add(foid)
}
δftail.rebuildJob = nil
job.err = err
close(job.ready)
return vδE, root, δftail, err
}
// _rebuildAll rebuilds vδE for all files from ftrackNew requests.
//
// must be called with δFtail.mu locked.
func (δFtail *ΔFtail) _rebuildAll() (err error) {
defer xerr.Contextf(&err, "ΔFtail rebuildAll")
δBtail := δFtail.δBtail
δZtail := δBtail.ΔZtail()
db := δBtail.DB()
for foid := range δFtail.ftrackNew {
δFtail.ftrackNew.Del(foid)
δftail := δFtail.byFile[foid]
// no need to set δftail.rebuildJob - we are under lock
δftail.vδE, err = vδEBuild(foid, δZtail, db)
if err != nil {
δFtail.ftrackNew.Add(foid)
return err
}
}
return nil
}
// Update updates δFtail given raw ZODB changes.
//
// It returns change in files space that corresponds to δZ.
//
// δZ should include all objects changed by ZODB transaction.
func (δFtail *ΔFtail) Update(δZ *zodb.EventCommit) (_ ΔF, err error) {
headOld := δFtail.Head()
defer xerr.Contextf(&err, "ΔFtail update %s -> %s", headOld, δZ.Tid)
δFtail.mu.Lock()
defer δFtail.mu.Unlock()
// TODO verify that there is no in-progress readers/writers
// rebuild vδE for newly tracked files
err = δFtail._rebuildAll()
if err != nil {
return ΔF{}, err
}
δB, err := δFtail.δBtail.Update(δZ)
if err != nil {
return ΔF{}, err
}
δF := ΔF{Rev: δB.Rev, ByFile: make(map[zodb.Oid]*ΔFile)}
// take ZBigFile changes into account
δzfile := map[zodb.Oid]*_ΔZBigFile{} // which tracked ZBigFiles are changed
for _, oid := range δZ.Changev {
δftail, ok := δFtail.byFile[oid]
if !ok {
continue // not ZBigFile or file is not tracked
}
δ, err := zfilediff(δFtail.δBtail.DB(), oid, headOld, δZ.Tid)
if err != nil {
return ΔF{}, err
}
//fmt.Printf("zfile<%s> diff %s..%s -> δ: %v\n", oid, headOld, δZ.Tid, δ)
if δ != nil {
δzfile[oid] = δ
δE := _ΔFileEpoch{
Rev: δZ.Tid,
oldRoot: δ.blktabOld,
newRoot: δ.blktabNew,
oldBlkSize: δ.blksizeOld,
newBlkSize: δ.blksizeNew,
oldZinblk: map[zodb.Oid]setI64{},
}
rt, ok := δFtail.byRoot[δftail.root]
if ok {
for zoid, inblk := range rt.ztrackInBlk {
δE.oldZinblk[zoid] = inblk.Clone()
inroot, ok := δFtail.ztrackInRoot[zoid]
if ok {
inroot.Del(δftail.root)
if len(inroot) == 0 {
delete(δFtail.ztrackInRoot, zoid)
}
}
}
}
δftail.root = δE.newRoot
// NOTE no need to clone vδE: we are writer, vδE is never returned to
// outside, append does not invalidate previous vδE retrievals.
δftail.vδE = append(δftail.vδE, δE)
δftail.btrackReqSet = setI64{}
}
}
// take btree changes into account
//fmt.Printf("δB.ByRoot: %v\n", δB.ByRoot)
for root, δt := range δB.ByRoot {
//fmt.Printf("root: %v δt: %v\n", root, δt)
rt, ok := δFtail.byRoot[root]
// NOTE rt might be nil e.g. if a zfile was tracked, then
// deleted, but the tree referenced by zfile.blktab is still
// not-deleted, remains tracked and is changed.
if !ok {
continue
}
for file := range rt.ftrackSet {
δfile, ok := δF.ByFile[file]
if !ok {
δfile = &ΔFile{Rev: δF.Rev, Blocks: make(setI64)}
δF.ByFile[file] = δfile
}
for blk /*, δzblk*/ := range δt {
δfile.Blocks.Add(blk)
}
// TODO invalidate .size only if key >= maxkey was changed (size increase),
// or if on the other hand maxkey was deleted (size decrease).
//
// XXX currently we invalidate size on any topology change.
δfile.Size = true
}
// update ztrackInBlk according to btree changes
for blk, δzblk := range δt {
if δzblk.Old != xbtree.VDEL {
inblk, ok := rt.ztrackInBlk[δzblk.Old]
if ok {
inblk.Del(blk)
if len(inblk) == 0 {
delete(rt.ztrackInBlk, δzblk.Old)
inroot := δFtail.ztrackInRoot[δzblk.Old]
inroot.Del(root)
if len(inroot) == 0 {
delete(δFtail.ztrackInRoot, δzblk.Old)
}
}
}
}
if δzblk.New != xbtree.VDEL {
inblk, ok := rt.ztrackInBlk[δzblk.New]
if !ok {
inblk = make(setI64, 1)
rt.ztrackInBlk[δzblk.New] = inblk
inroot, ok := δFtail.ztrackInRoot[δzblk.New]
if !ok {
inroot = make(setOid, 1)
δFtail.ztrackInRoot[δzblk.New] = inroot
}
inroot.Add(root)
}
inblk.Add(blk)
}
}
}
// take zblk changes into account
for _, oid := range δZ.Changev {
inroot, ok := δFtail.ztrackInRoot[oid]
if !ok {
continue // not tracked
}
for root := range inroot {
rt := δFtail.byRoot[root] // must be there
inblk, ok := rt.ztrackInBlk[oid]
if !ok || len(inblk) == 0 {
continue
}
//fmt.Printf("root: %v inblk: %v\n", root, inblk)
for file := range rt.ftrackSet {
δfile, ok := δF.ByFile[file]
if !ok {
δfile = &ΔFile{Rev: δF.Rev, Blocks: make(setI64)}
δF.ByFile[file] = δfile
}
δfile.Blocks.Update(inblk)
}
}
}
// if ZBigFile object is changed - it starts new epoch for that file
for foid, δ := range δzfile {
δfile, ok := δF.ByFile[foid]
if !ok {
δfile = &ΔFile{Rev: δF.Rev}
δF.ByFile[foid] = δfile
}
δfile.Epoch = true
δfile.Blocks = nil
δfile.Size = false
//fmt.Printf("δZBigFile: %v\n", δ)
// update .byRoot
if δ.blktabOld != xbtree.VDEL {
rt, ok := δFtail.byRoot[δ.blktabOld]
if ok {
rt.ftrackSet.Del(foid)
if len(rt.ftrackSet) == 0 {
delete(δFtail.byRoot, δ.blktabOld)
// Zinroot -= δ.blktabNew
for zoid := range rt.ztrackInBlk {
inroot, ok := δFtail.ztrackInRoot[zoid]
if ok {
inroot.Del(δ.blktabOld)
if len(inroot) == 0 {
delete(δFtail.ztrackInRoot, zoid)
}
}
}
}
}
}
if δ.blktabNew != xbtree.VDEL {
rt, ok := δFtail.byRoot[δ.blktabNew]
if !ok {
rt = &_RootTrack{
ftrackSet: setOid{},
ztrackInBlk: map[zodb.Oid]setI64{},
}
δFtail.byRoot[δ.blktabNew] = rt
}
rt.ftrackSet.Add(foid)
}
}
//fmt.Printf("-> δF: %v\n", δF)
return δF, nil
}
// ForgetPast discards all δFtail entries with rev ≤ revCut.
func (δFtail *ΔFtail) ForgetPast(revCut zodb.Tid) {
δFtail.mu.Lock()
defer δFtail.mu.Unlock()
// TODO verify that there is no in-progress readers/writers
δFtail.δBtail.ForgetPast(revCut)
// TODO keep an index of which file changed epoch where (similarly to ΔBtail),
// and, instead of scanning all files, trim vδE only on files where that is really necessary.
for _, δftail := range δFtail.byFile {
δftail._forgetPast(revCut)
}
}
func (δftail *_ΔFileTail) _forgetPast(revCut zodb.Tid) {
icut := 0
for ; icut < len(δftail.vδE); icut++ {
if δftail.vδE[icut].Rev > revCut {
break
}
}
// vδE[:icut] should be forgotten
if icut > 0 { // XXX workaround for ΔFtail.ForgetPast calling forgetPast on all files
δftail.vδE = append([]_ΔFileEpoch{}, δftail.vδE[icut:]...)
}
}
// ---- queries ----
// TODO if needed
// func (δFtail *ΔFtail) SliceByRev(lo, hi zodb.Tid) /*readonly*/ []ΔF
// _ZinblkOverlay is used by SliceByFileRev.
// It combines read-only Zinblk base with read-write adjustment.
// It provides the following operations:
//
// - Get(zblk) -> {blk},
// - AddBlk(zblk, blk),
// - DelBlk(zblk, blk)
type _ZinblkOverlay struct {
Base map[zodb.Oid]setI64 // taken from _RootTrack.ztrackInBlk or _ΔFileEpoch.oldZinblk
Adj map[zodb.Oid]setI64 // adjustment over base; blk<0 represents whiteout
}
// SliceByFileRev returns history of file changes in (lo, hi] range.
//
// it must be called with the following condition:
//
// tail ≤ lo ≤ hi ≤ head
//
// the caller must not modify returned slice.
//
// Only tracked blocks are guaranteed to be present.
//
// Note: contrary to regular go slicing, low is exclusive while high is inclusive.
func (δFtail *ΔFtail) SliceByFileRev(zfile *ZBigFile, lo, hi zodb.Tid) (/*readonly*/[]*ΔFile, error) {
return δFtail.SliceByFileRevEx(zfile, lo, hi, QueryOptions{})
}
// SliceByFileRevEx is extended version of SliceByFileRev with options.
func (δFtail *ΔFtail) SliceByFileRevEx(zfile *ZBigFile, lo, hi zodb.Tid, opt QueryOptions) (/*readonly*/[]*ΔFile, error) {
foid := zfile.POid()
//fmt.Printf("\nslice f<%s> (@%s,@%s]\n", foid, lo, hi)
vδf, err := δFtail._SliceByFileRev(foid, lo, hi, opt)
if err != nil {
err = fmt.Errorf("slice f<%s> (@%s,@%s]: %w", foid, lo, hi, err)
}
return vδf, err
}
// QueryOptions represents options for SliceBy* queries.
type QueryOptions struct {
// OnlyExplicitlyTracked requests that only blocks that were
// explicitly tracked are included in the result.
//
// By default SliceBy* return information about both blocks that
// were explicitly tracked, and blocks that became tracked due to being
// adjacent to a tracked block in a BTree bucket.
OnlyExplicitlyTracked bool
}
func (δFtail *ΔFtail) _SliceByFileRev(foid zodb.Oid, lo, hi zodb.Tid, opt QueryOptions) (/*readonly*/[]*ΔFile, error) {
xtail.AssertSlice(δFtail, lo, hi)
// query .δBtail.SliceByRootRev(file.blktab, lo, hi) +
// merge δZBlk history with that.
// merging tree (δT) and Zblk (δZblk) histories into file history (δFile):
//
// δT ────────·──────────────·─────────────────·────────────
// │ │
// ↓ │
// δZblk₁ ────────────────o───────────────────o─────────────────
// |
// ↓
// δZblk₂ ────────────x────────────────x────────────────────────
//
//
// δFile ────────o───────o──────x─────x────────────────────────
vδE, headRoot, δftail, err := δFtail.vδEForFile(foid)
if err != nil {
return nil, err
}
var vδf []*ΔFile
// vδfTail returns or creates vδf entry for revision tail
// tail must be <= all vδf revisions
vδfTail := func(tail zodb.Tid) *ΔFile {
if l := len(vδf); l > 0 {
δfTail := vδf[l-1]
if δfTail.Rev == tail {
return δfTail
}
if !(tail < δfTail.Rev) {
panic("BUG: tail not ↓")
}
}
δfTail := &ΔFile{Rev: tail, Blocks: setI64{}}
vδf = append(vδf, δfTail)
return δfTail
}
vδZ := δFtail.δBtail.ΔZtail().SliceByRev(lo, hi)
iz := len(vδZ) - 1
// find epoch that covers hi
le := len(vδE)
ie := sort.Search(le, func(i int) bool {
return hi < vδE[i].Rev
})
// vδE[ie] is next epoch
// vδE[ie-1] is epoch that covers hi
// loop through all epochs from hi till lo
for lastEpoch := false; !lastEpoch ; {
// current epoch
var epoch zodb.Tid
ie--
if ie < 0 {
epoch = δFtail.Tail()
} else {
epoch = vδE[ie].Rev
}
if epoch <= lo {
epoch = lo
lastEpoch = true
}
var root zodb.Oid // root of blktab in current epoch
var head zodb.Tid // head] of current epoch coverage
// state of Zinblk as we are scanning ← current epoch
// initially corresponds to head of the epoch (= @head for latest epoch)
Zinblk := _ZinblkOverlay{} // zblk -> which #blk refers to it
var ZinblkAt zodb.Tid // Zinblk covers [ZinblkAt,<next δT>)
if ie+1 == le {
// head
root = headRoot
head = δFtail.Head()
// take atomic Zinblk snapshot that covers vδZ
//
// - the reason we take an atomic snapshot is because simultaneous Track
// requests might change Zinblk concurrently, and without snapshotting
// this might result in changes to a block being not uniformly present in
// the returned vδf (some revision indicates a change to that block, while
// another one - where the block is actually changed too - does not
// indicate a change to that block).
//
// - the reason we limit the snapshot to vδZ is to reduce the amount of
// under-lock copying, because the original Zinblk is potentially very large.
//
// NOTE the other approach could be to keep blocks in _RootTrack.Zinblk with
// serial (!= zodb serial), and work with that _RootTrack.Zinblk snapshot by
// ignoring all blocks with serial > serial of snapshot view. Do not kill
// _ZinblkOverlay yet because we keep this approach in mind for the future.
ZinblkSnap := map[zodb.Oid]setI64{}
δZAllOid := setOid{}
for _, δZ := range vδZ {
for _, oid := range δZ.Changev {
δZAllOid.Add(oid)
}
}
δFtail.mu.Lock()
rt, ok := δFtail.byRoot[root]
if ok {
for oid := range δZAllOid {
inblk, ok := rt.ztrackInBlk[oid]
if ok {
ZinblkSnap[oid] = inblk.Clone()
}
}
}
δFtail.mu.Unlock()
Zinblk.Base = ZinblkSnap
} else {
δE := vδE[ie+1]
root = δE.oldRoot
head = δE.Rev - 1 // TODO better set to exact revision coming before δE.Rev
Zinblk.Base = δE.oldZinblk
}
//fmt.Printf("Zinblk: %v\n", Zinblk)
// vδT for current epoch
var vδT []xbtree.ΔTree
if root != xbtree.VDEL {
vδT, err = δFtail.δBtail.SliceByRootRev(root, epoch, head) // NOTE @head, not hi
if err != nil {
return nil, err
}
}
it := len(vδT) - 1
if it >= 0 {
ZinblkAt = vδT[it].Rev
} else {
ZinblkAt = epoch
}
// merge cumulative vδT(epoch,head] update to Zinblk, so that
// changes to blocks that were not explicitly requested to be
// tracked, are present in resulting slice uniformly.
//
// For example on
//
// at1 T/B0:a,1:b,2:c δDø δ{0,1,2}
// at2 δT{0:d,1:e} δD{c} δ{0,1,2}
// at3 δTø δD{c,d,e} δ{0,1,2}
// at4 δTø δD{c,e} δ{ 1,2}
//
// if tracked={0}, for (at1,at4] query, changes to 1 should be
// also all present @at2, @at3 and @at4 - because @at2 both 0
// and 1 are changed in the same tracked bucket. Note that
// changes to 2 should not be present at all.
Zinblk.Adj = map[zodb.Oid]setI64{}
for _, δT := range vδT {
for blk, δzblk := range δT.KV {
if δzblk.Old != xbtree.VDEL {
inblk, ok := Zinblk.Adj[δzblk.Old]
if ok {
inblk.Del(blk)
}
}
if δzblk.New != xbtree.VDEL {
inblk, ok := Zinblk.Adj[δzblk.New]
if !ok {
inblk = setI64{}
Zinblk.Adj[δzblk.New] = inblk
}
inblk.Add(blk)
}
}
}
// merge vδZ and vδT of current epoch
for ((iz >= 0 && vδZ[iz].Rev > epoch) || it >= 0) {
// δZ that is covered by current Zinblk
// -> update δf
if iz >= 0 && vδZ[iz].Rev > epoch {
δZ := vδZ[iz]
if ZinblkAt <= δZ.Rev {
//fmt.Printf("δZ @%s\n", δZ.Rev)
for _, oid := range δZ.Changev {
inblk, ok := Zinblk.Get_(oid)
if ok && len(inblk) != 0 {
δf := vδfTail(δZ.Rev)
δf.Blocks.Update(inblk)
}
}
iz--
continue
}
}
// δT -> adjust Zinblk + update δf
if it >= 0 {
δT := vδT[it]
//fmt.Printf("δT @%s %v\n", δT.Rev, δT.KV)
for blk, δzblk := range δT.KV {
// apply in reverse as we go ←
if δzblk.New != xbtree.VDEL {
Zinblk.DelBlk(δzblk.New, blk)
}
if δzblk.Old != xbtree.VDEL {
Zinblk.AddBlk(δzblk.Old, blk)
}
if δT.Rev <= hi {
δf := vδfTail(δT.Rev)
δf.Blocks.Add(blk)
δf.Size = true // see Update
}
}
it--
if it >= 0 {
ZinblkAt = vδT[it].Rev
} else {
ZinblkAt = epoch
}
}
}
// emit epoch δf
if ie >= 0 {
epoch := vδE[ie].Rev
if epoch > lo { // it could be <=
δf := vδfTail(epoch)
δf.Epoch = true
δf.Blocks = nil // should be already nil
δf.Size = false // should be already false
}
}
}
// vδf was built in reverse order
// invert the order before returning
for i,j := 0, len(vδf)-1; i<j; i,j = i+1,j-1 {
vδf[i], vδf[j] = vδf[j], vδf[i]
}
// take opt.OnlyExplicitlyTracked into account
// XXX epochs not handled (currently ok as epochs are rejected by wcfs)
if opt.OnlyExplicitlyTracked {
δblk := setI64{}
for _, δf := range vδf {
δblk.Update(δf.Blocks)
}
δFtail.mu.Lock()
for blk := range δblk {
if !δftail.btrackReqSet.Has(blk) {
δblk.Del(blk)
}
}
δFtail.mu.Unlock()
for i := len(vδf)-1; i >= 0; i-- {
δf := vδf[i]
if δf.Epoch {
continue
}
for blk := range δf.Blocks {
if !δblk.Has(blk) {
δf.Blocks.Del(blk)
}
}
if len(δf.Blocks) == 0 {
// delete @i
copy(vδf[i:], vδf[i+1:])
vδf = vδf[:len(vδf)-1]
}
}
}
return vδf, nil
}
// ---- _ZinblkOverlay ----
// Get_ returns set(blk) for o[zoid].
func (o *_ZinblkOverlay) Get_(zoid zodb.Oid) (inblk /*readonly*/setI64, ok bool) {
base, bok := o.Base[zoid]
adj, aok := o.Adj[zoid]
if !aok {
return base, bok
}
// combine base + adj
if bok {
inblk = base.Clone()
} else {
inblk = make(setI64, len(adj))
}
for blk := range adj {
if blk < 0 { // whiteout
inblk.Del(flipsign(blk))
} else {
inblk.Add(blk)
}
}
if len(inblk) == 0 {
return nil, false
}
return inblk, true
}
// DelBlk removes blk from o[zoid].
func (o *_ZinblkOverlay) DelBlk(zoid zodb.Oid, blk int64) {
if blk < 0 {
panic("blk < 0")
}
o._AddBlk(zoid, flipsign(blk))
}
// AddBlk adds blk to o[zoid].
func (o *_ZinblkOverlay) AddBlk(zoid zodb.Oid, blk int64) {
if blk < 0 {
panic("blk < 0")
}
o._AddBlk(zoid, blk)
}
func (o *_ZinblkOverlay) _AddBlk(zoid zodb.Oid, blk int64) {
adj, ok := o.Adj[zoid]
if !ok {
adj = make(setI64, 1)
o.Adj[zoid] = adj
}
adj.Add(blk)
adj.Del(flipsign(blk))
}
// flipsign returns x with sign bit flipped.
func flipsign(x int64) int64 {
return int64(uint64(x) ^ (1<<63))
}
// BlkRevAt returns the last revision that changed file[blk] as of @at database state.
//
// If exact=false, what is returned is only an upper bound for the last block revision.
//
// zfile must be any checkout from (tail, head]
// at must ∈ (tail, head]
// blk must be tracked
func (δFtail *ΔFtail) BlkRevAt(ctx context.Context, zfile *ZBigFile, blk int64, at zodb.Tid) (_ zodb.Tid, exact bool, err error) {
foid := zfile.POid()
defer xerr.Contextf(&err, "blkrev f<%s> #%d @%s", foid, blk, at)
//fmt.Printf("\nblkrev #%d @%s\n", blk, at)
// assert at ∈ (tail, head]
tail := δFtail.Tail()
head := δFtail.Head()
if !(tail < at && at <= head) {
panicf("at out of bounds: at: @%s, (tail, head] = (@%s, @%s]", at, tail, head)
}
// assert zfile.at ∈ (tail, head]
zconn := zfile.PJar()
zconnAt := zconn.At()
if !(tail < zconnAt && zconnAt <= head) {
panicf("zconn.at out of bounds: zconn.at: @%s, (tail, head] = (@%s, @%s]", zconnAt, tail, head)
}
vδE, headRoot, _, err := δFtail.vδEForFile(foid)
if err != nil {
return zodb.InvalidTid, false, err
}
// find epoch that covers at and associated blktab root/object
//fmt.Printf(" vδE: %v\n", vδE)
l := len(vδE)
i := sort.Search(l, func(i int) bool {
return at < vδE[i].Rev
})
// vδE[i] is next epoch
// vδE[i-1] is epoch that covers at
// root
var root zodb.Oid
if i == l {
root = headRoot
} else {
root = vδE[i].oldRoot
}
// epoch
var epoch zodb.Tid
i--
if i < 0 {
// i<0 - first epoch (no explicit start) - use δFtail.tail as lo
epoch = tail
} else {
epoch = vδE[i].Rev
}
//fmt.Printf(" epoch: @%s root: %s\n", epoch, root)
if root == xbtree.VDEL {
return epoch, true, nil
}
zblk, tabRev, zblkExact, tabRevExact, err := δFtail.δBtail.GetAt(root, blk, at)
//fmt.Printf(" GetAt #%d @%s -> %s(%v), @%s(%v)\n", blk, at, zblk, zblkExact, tabRev, tabRevExact)
if err != nil {
return zodb.InvalidTid, false, err
}
if tabRev < epoch {
tabRev = epoch
tabRevExact = true
}
// if δBtail does not have entry that covers root[blk] - get it
// through any zconn with .at ∈ (tail, head].
if !zblkExact {
xblktab, err := zconn.Get(ctx, root)
if err != nil {
return zodb.InvalidTid, false, err
}
blktab, err := vBlktab(xblktab)
if err != nil {
return zodb.InvalidTid, false, err
}
xzblkObj, ok, err := blktab.Get(ctx, blk)
if err != nil {
return zodb.InvalidTid, false, err
}
if !ok {
zblk = xbtree.VDEL
} else {
zblkObj, err := vZBlk(xzblkObj)
if err != nil {
return zodb.InvalidTid, false, fmt.Errorf("blktab<%s>[#%d]: %s", root, blk, err)
}
zblk = zblkObj.POid()
}
}
// block was removed
if zblk == xbtree.VDEL {
return tabRev, tabRevExact, nil
}
// blktab[blk] was changed to point to a zblk @tabRev.
// blk revision is the max of tabRev and the last revision at which zblk changed in (tabRev, at] range.
zblkRev, zblkRevExact := δFtail.δBtail.ΔZtail().LastRevOf(zblk, at)
//fmt.Printf(" ZRevOf %s @%s -> @%s, %v\n", zblk, at, zblkRev, zblkRevExact)
if zblkRev > tabRev {
return zblkRev, zblkRevExact, nil
} else {
return tabRev, tabRevExact, nil
}
}
// ---- vδEBuild (vδE rebuild core) ----
// vδEBuild builds vδE for file from vδZ.
func vδEBuild(foid zodb.Oid, δZtail *zodb.ΔTail, db *zodb.DB) (vδE []_ΔFileEpoch, err error) {
defer xerr.Contextf(&err, "file<%s>: build vδE", foid)
vδE = []_ΔFileEpoch{}
vδZ := δZtail.Data()
atPrev := δZtail.Tail()
for i := 0; i < len(vδZ); i++ {
δZ := vδZ[i]
fchanged := false
for _, oid := range δZ.Changev {
if oid == foid {
fchanged = true
break
}
}
if !fchanged {
continue
}
δ, err := zfilediff(db, foid, atPrev, δZ.Rev)
if err != nil {
return nil, err
}
if δ != nil {
δE := _ΔFileEpoch{
Rev: δZ.Rev,
oldRoot: δ.blktabOld,
newRoot: δ.blktabNew,
oldBlkSize: δ.blksizeOld,
newBlkSize: δ.blksizeNew,
oldZinblk: nil, // nothing was tracked
}
vδE = append(vδE, δE)
}
atPrev = δZ.Rev
}
return vδE, nil
}
// _ΔZBigFile describes a change to the ZBigFile object itself.
type _ΔZBigFile struct {
blksizeOld, blksizeNew int64
blktabOld, blktabNew zodb.Oid
}
// zfilediff returns direct difference for ZBigFile<foid> old..new .
func zfilediff(db *zodb.DB, foid zodb.Oid, old, new zodb.Tid) (δ *_ΔZBigFile, err error) {
txn, ctx := transaction.New(context.TODO()) // TODO - merge in cancel via ctx arg
defer txn.Abort()
zconnOld, err := db.Open(ctx, &zodb.ConnOptions{At: old})
if err != nil {
return nil, err
}
zconnNew, err := db.Open(ctx, &zodb.ConnOptions{At: new})
if err != nil {
return nil, err
}
a, err1 := zgetFileOrNil(ctx, zconnOld, foid)
b, err2 := zgetFileOrNil(ctx, zconnNew, foid)
err = xerr.Merge(err1, err2)
if err != nil {
return nil, err
}
return diffF(ctx, a, b)
}
// diffF returns direct difference in between two ZBigFile objects.
func diffF(ctx context.Context, a, b *ZBigFile) (δ *_ΔZBigFile, err error) {
defer xerr.Contextf(&err, "diffF %s %s", xzodb.XidOf(a), xzodb.XidOf(b))
δ = &_ΔZBigFile{}
if a == nil {
δ.blksizeOld = -1
δ.blktabOld = xbtree.VDEL
} else {
err = a.PActivate(ctx); if err != nil { return nil, err }
defer a.PDeactivate()
δ.blksizeOld = a.blksize
δ.blktabOld = a.blktab.POid()
}
if b == nil {
δ.blksizeNew = -1
δ.blktabNew = xbtree.VDEL
} else {
err = b.PActivate(ctx); if err != nil { return nil, err }
defer b.PDeactivate()
δ.blksizeNew = b.blksize
δ.blktabNew = b.blktab.POid()
}
// return δ=nil if no change
if δ.blksizeOld == δ.blksizeNew && δ.blktabOld == δ.blktabNew {
δ = nil
}
return δ, nil
}
// zgetFileOrNil returns ZBigFile corresponding to zconn.Get(oid) .
// if the file does not exist, (nil, nil) is returned.
func zgetFileOrNil(ctx context.Context, zconn *zodb.Connection, oid zodb.Oid) (zfile *ZBigFile, err error) {
defer xerr.Contextf(&err, "getfile %s@%s", oid, zconn.At())
xfile, err := xzodb.ZGetOrNil(ctx, zconn, oid)
if xfile == nil || err != nil {
return nil, err
}
zfile, ok := xfile.(*ZBigFile)
if !ok {
return nil, fmt.Errorf("unexpected type: %s", zodb.ClassOf(xfile))
}
return zfile, nil
}
// Copyright (C) 2019-2021 Nexedi SA and Contributors.
// Kirill Smelkov <kirr@nexedi.com>
//
// This program is free software: you can Use, Study, Modify and Redistribute
// it under the terms of the GNU General Public License version 3, or (at your
// option) any later version, as published by the Free Software Foundation.
//
// You can also Link and Combine this program with other software covered by
// the terms of any of the Free Software licenses or any of the Open Source
// Initiative approved licenses and Convey the resulting work. Corresponding
// source of such a combination shall include the source code for all other
// software used.
//
// This program is distributed WITHOUT ANY WARRANTY; without even the implied
// warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
//
// See COPYING file for full licensing terms.
// See https://www.nexedi.com/licensing for rationale and options.
package zdata
// tests for δftail.go
//
// These are the main tests for ΔFtail functionality. The primary testing
// concern is to verify how ΔFtail merges ΔBtail and ΔZtail histories on Update
// and queries.
//
// We assume that ΔBtail works correctly (this is covered by ΔBtail tests)
// -> no need to exercise many different topologies and tracking sets here.
//
// Since ΔFtail does not recompute anything by itself when tracking set
// changes, and only merges δBtail and δZtail histories on queries, there is no
// need to exercise many different tracking sets(*). Once again we assume that
// ΔBtail works correctly and verify δFtail only with track=[-∞,∞).
//
// There are 2 testing approaches:
//
// a) transition a ZBigFile in ZODB through particular .blktab and ZBlk
// states and feed ΔFtail through created database transactions.
// b) transition a ZBigFile in ZODB through random .blktab and ZBlk
// states and feed ΔFtail through created database transactions.
//
// TestΔFtail and TestΔFtailRandom implement approaches "a" and "b" correspondingly.
//
// (*) except one small place in SliceByFileRev which handles tracked vs
// untracked set differences and is verified by TestΔFtailSliceUntrackedUniform.
import (
"context"
"fmt"
"reflect"
"sort"
"strings"
"testing"
"lab.nexedi.com/kirr/go123/exc"
"lab.nexedi.com/kirr/neo/go/transaction"
"lab.nexedi.com/kirr/neo/go/zodb"
"lab.nexedi.com/nexedi/wendelin.core/wcfs/internal/set"
"lab.nexedi.com/nexedi/wendelin.core/wcfs/internal/xbtree/xbtreetest"
)
type setStr = set.Str
const ø = "ø"
// T is environment for doing ΔFtail tests.
//
// it is based on xbtreetest.T .
type T struct {
*xbtreetest.T
foid zodb.Oid // oid of zfile
}
// ΔFTestEntry represents one entry in ΔFtail tests.
type ΔFTestEntry struct {
δblkTab map[int64]string // changes in tree part {} #blk -> ZBlk<name>
δdataTab setStr // changes to ZBlk objects {ZBlk<name>}
}
// TestΔFtail runs ΔFtail tests on set of concrete prepared testcases.
func TestΔFtail(t *testing.T) {
// δT is shorthand to create δblkTab.
type δT = map[int64]string
// δD is shorthand to create δdataTab.
δD := func(zblkv ...string) setStr {
δ := setStr{}
for _, zblk := range zblkv {
δ.Add(zblk)
}
return δ
}
const a,b,c,d,e,f,g,h,i,j = "a","b","c","d","e","f","g","h","i","j"
testv := []ΔFTestEntry{
{δT{1:a,2:b,3:ø}, δD(a)},
{δT{}, δD(c)},
{δT{2:c}, δD(a,b)},
// clear the tree
{δT{1:ø,2:ø}, δD()},
// i is first associated with the file, but later unlinked from it
// then i is changed -> the file should not be in δF
{δT{5:i}, δD()},
{δT{5:e}, δD()},
{δT{}, δD(i)},
// delete the file
{nil, nil},
// ---- found by TestΔFtailRandom ----
{δT{1:a,6:i,7:d,8:e}, δD(a,c,e,f,g,h,i,j)},
// was including ≤ lo entries in SliceByFileRev
{δT{0:b,2:j,3:i,5:f,6:b,7:i,8:d}, δD(a,b,c,d,e,g,i,j)},
{δT{0:e,2:h,4:d,9:b}, δD(a,h,i)},
{δT{0:j,1:i,3:g,5:a,6:e,7:j,8:f,9:d}, δD()},
{δT{0:b,1:f,2:h,4:b,8:b}, δD(b,d,i)},
{δT{1:a,3:d,6:j}, δD(b,c,d,f,g,h,i,j)},
{δT{0:i,1:f,4:e,5:e,7:d,8:h}, δD(d,j)},
{δT{}, δD(a,b,c,e,f,g,h,i,j)},
// 0 was missing in δf
{nil, nil},
{δT{0:a}, δD()},
{δT{2:i,3:c,5:d,9:c}, δD(a,b,c,d,e,f,g,h,i)},
{δT{0:j,1:d,2:h,5:g,6:h,7:c,9:h}, δD(d,e,f,h,j)},
}
testq := make(chan ΔFTestEntry)
go func() {
defer close(testq)
for _, test := range testv {
testq <- test
}
}()
testΔFtail(t, testq)
}
// TestΔFtailRandom runs ΔFtail tests on randomly-generated file changes.
func TestΔFtailRandom(t *testing.T) {
n := xbtreetest.N(1E3, 1E4, 1E5)
nblk := xbtreetest.N(1E1, 2E1, 1E2) // keeps failure details small on -short
// random-number generator
rng, seed := xbtreetest.NewRand()
t.Logf("# n=%d seed=%d", n, seed)
vv := "abcdefghij"
randv := func() string {
i := rng.Intn(len(vv))
return vv[i:i+1]
}
testq := make(chan ΔFTestEntry)
go func() {
defer close(testq)
for i := 0; i < n; i++ {
nδblkTab := rng.Intn(nblk)
nδdataTab := rng.Intn(len(vv))
δblkTab := map[int64]string{}
δdataTab := setStr{}
blkv := rng.Perm(nblk)
for j := 0; j < nδblkTab; j++ {
blk := blkv[j]
zblk := randv()
δblkTab[int64(blk)] = zblk
}
vv_ := rng.Perm(len(vv))
for j := 0; j < nδdataTab; j++ {
k := vv_[j]
v := vv[k:k+1]
δdataTab.Add(v)
}
testq <- ΔFTestEntry{δblkTab, δdataTab}
}
}()
testΔFtail(t, testq)
}
// testΔFtail verifies ΔFtail on sequence of testcases coming from testq.
func testΔFtail(t_ *testing.T, testq chan ΔFTestEntry) {
t := newT(t_)
X := exc.Raiseif
// data built via applying changes from testq
epochv := []zodb.Tid{} // rev↑
vδf := []*ΔFile{} // (rev↑, {}blk)
vδE := []_ΔFileEpoch{} // (rev↑, EPOCH)
blkTab := map[int64]string{} // #blk -> ZBlk<name>
Zinblk := map[string]setI64{} // ZBlk<name> -> which #blk refer to it
blkRevAt := map[zodb.Tid]map[int64]zodb.Tid{} // {} at -> {} #blk -> rev
// load dataTab
dataTab := map[string]string{} // ZBlk<name> -> data
for /*oid*/_, zblki := range t.Head().ZBlkTab {
dataTab[zblki.Name] = zblki.Data
}
// start δFtail when zfile does not yet exist
// this way we'll verify how ΔFtail rebuilds vδE for started-to-be-tracked file
t0 := t.Commit("øf")
t.Logf("# @%s (%s)", t0.AtSymb(), t0.At)
epochv = append(epochv, t0.At)
δFtail := NewΔFtail(t.Head().At, t.DB)
// create zfile, but do not track it yet
// vδf + friends will be updated after "load zfile"
δt1 := map[int64]string{0:"a"}
t1 := t.Commit(fmt.Sprintf("t%s D%s", xbtreetest.KVTxt(δt1), dataTabTxt(dataTab)))
δblk1 := setI64{}
for blk := range δt1 {
δblk1.Add(blk)
}
t.Logf("# → @%s (%s) δT%s δD{} ; %s\tδ%s *not-yet-tracked", t1.AtSymb(), t1.At, xbtreetest.KVTxt(δt1), t1.Tree, δblk1)
δF, err := δFtail.Update(t1.ΔZ); X(err)
if !(δF.Rev == t1.At && len(δF.ByFile) == 0) {
t.Errorf("wrong δF:\nhave {%s, %v}\nwant: {%s, ø}", δF.Rev, δF.ByFile, t1.At)
}
// load zfile via root['treegen/file']
txn, ctx := transaction.New(context.Background())
defer func() {
txn.Abort()
}()
zconn, err := t.DB.Open(ctx, &zodb.ConnOptions{At: t.Head().At, NoPool: true}); X(err)
zfile, blksize := t.XLoadZFile(ctx, zconn)
foid := zfile.POid()
// update vδf + co for t1
vδf = append(vδf, &ΔFile{Rev: t1.At, Epoch: true})
vδE = append(vδE, _ΔFileEpoch{
Rev: t1.At,
oldRoot: zodb.InvalidOid,
newRoot: t.Root(),
oldBlkSize: -1,
newBlkSize: blksize,
oldZinblk: nil,
})
epochv = append(epochv, t1.At)
for blk, zblk := range δt1 {
blkTab[blk] = zblk
inblk, ok := Zinblk[zblk]
if !ok {
inblk = setI64{}
Zinblk[zblk] = inblk
}
inblk.Add(blk)
}
// start tracking zfile[-∞,∞) from the beginning
// this should make ΔFtail see all zfile changes
// ( later retrackAll should be called after new epoch to track zfile[-∞,∞) again )
retrackAll := func() {
for blk := range blkTab {
_, path, blkcov, zblk, _, err := zfile.LoadBlk(ctx, blk); X(err)
δFtail.Track(zfile, blk, path, blkcov, zblk)
}
}
retrackAll()
i := 1 // matches t1
delfilePrev := false
for test := range testq {
i++
δblk := setI64{}
δtree := false
delfile := false
// {nil,nil} commands to delete zfile
if test.δblkTab == nil && test.δdataTab == nil {
delfile = true
}
// new epoch starts when file is deleted or recreated
newEpoch := delfile || (!delfile && delfile != delfilePrev)
delfilePrev = delfile
ZinblkPrev := map[string]setI64{}
for zblk, inblk := range Zinblk {
ZinblkPrev[zblk] = inblk.Clone()
}
// newEpoch -> reset
if newEpoch {
blkTab = map[int64]string{}
Zinblk = map[string]setI64{}
δblk = nil
} else {
// rebuild blkTab/Zinblk
for blk, zblk := range test.δblkTab {
zprev, ok := blkTab[blk]
if ok {
inblk := Zinblk[zprev]
inblk.Del(blk)
if len(inblk) == 0 {
delete(Zinblk, zprev)
}
} else {
zprev = ø
}
if zblk != ø {
blkTab[blk] = zblk
inblk, ok := Zinblk[zblk]
if !ok {
inblk = setI64{}
Zinblk[zblk] = inblk
}
inblk.Add(blk)
} else {
delete(blkTab, blk)
}
// update δblk due to change in blkTab
if zblk != zprev {
δblk.Add(blk)
δtree = true
}
}
// rebuild dataTab
for zblk := range test.δdataTab {
data, ok := dataTab[zblk] // e.g. a -> a2
if !ok {
t.Fatalf("BUG: zblk %s not in dataTab\ndataTab: %v", zblk, dataTab)
}
data = fmt.Sprintf("%s%d", data[:1], i) // e.g. a4
dataTab[zblk] = data
// update δblk due to change in ZBlk data
for blk := range Zinblk[zblk] {
δblk.Add(blk)
}
}
}
// commit updated zfile / blkTab + dataTab
var req string
if delfile {
req = "øf"
} else {
req = fmt.Sprintf("t%s D%s", xbtreetest.KVTxt(blkTab), dataTabTxt(dataTab))
}
commit := t.Commit(req)
if newEpoch {
epochv = append(epochv, commit.At)
}
flags := ""
if newEpoch {
flags += "\tEPOCH"
}
t.Logf("# → @%s (%s) δT%s δD%s\t; %s\tδ%s%s", commit.AtSymb(), commit.At, xbtreetest.KVTxt(test.δblkTab), test.δdataTab, commit.Tree, δblk, flags)
//t.Logf("# vδf: %s", vδfstr(vδf))
// update blkRevAt
var blkRevPrev map[int64]zodb.Tid
if i != 0 {
blkRevPrev = blkRevAt[δFtail.Head()]
}
blkRev := map[int64]zodb.Tid{}
for blk, rev := range blkRevPrev {
if newEpoch {
blkRev[blk] = commit.At
} else {
blkRev[blk] = rev
}
}
for blk := range δblk {
blkRev[blk] = commit.At
}
blkRevAt[commit.At] = blkRev
/*
fmt.Printf("blkRevAt[@%s]:\n", commit.AtSymb())
blkv := []int64{}
for blk := range blkRev {
blkv = append(blkv, blk)
}
sort.Slice(blkv, func(i, j int) bool {
return blkv[i] < blkv[j]
})
for _, blk := range blkv {
fmt.Printf(" #%d: %v\n", blk, blkRev[blk])
}
*/
// update zfile
txn.Abort()
txn, ctx = transaction.New(context.Background())
err = zconn.Resync(ctx, commit.At); X(err)
var δfok *ΔFile
if newEpoch || len(δblk) != 0 {
δfok = &ΔFile{
Rev: commit.At,
Epoch: newEpoch,
Blocks: δblk,
Size: δtree, // not strictly ok, but matches current ΔFtail code
}
vδf = append(vδf, δfok)
}
if newEpoch {
δE := _ΔFileEpoch{Rev: commit.At}
if delfile {
δE.oldRoot = t.Root()
δE.newRoot = zodb.InvalidOid
δE.oldBlkSize = blksize
δE.newBlkSize = -1
} else {
δE.oldRoot = zodb.InvalidOid
δE.newRoot = t.Root()
δE.oldBlkSize = -1
δE.newBlkSize = blksize
}
oldZinblk := map[zodb.Oid]setI64{}
for zblk, inblk := range ZinblkPrev {
oid, _ := commit.XGetBlkByName(zblk)
oldZinblk[oid] = inblk
}
δE.oldZinblk = oldZinblk
vδE = append(vδE, δE)
}
//fmt.Printf("Zinblk: %v\n", Zinblk)
// update δFtail
δF, err := δFtail.Update(commit.ΔZ); X(err)
// assert δF matches δfok
t.assertΔF(δF, commit.At, δfok)
// track whole zfile again if new epoch was started
if newEpoch {
retrackAll()
}
// verify byRoot
trackRfiles := map[zodb.Oid]setOid{}
for root, rt := range δFtail.byRoot {
trackRfiles[root] = rt.ftrackSet
}
filesOK := setOid{}
if !delfile {
filesOK.Add(foid)
}
RfilesOK := map[zodb.Oid]setOid{}
if len(filesOK) != 0 {
RfilesOK[t.Root()] = filesOK
}
if !reflect.DeepEqual(trackRfiles, RfilesOK) {
t.Errorf("Rfiles:\nhave: %v\nwant: %v", trackRfiles, RfilesOK)
}
// verify Zinroot
trackZinroot := map[string]setOid{}
for zoid, inroot := range δFtail.ztrackInRoot {
zblki := commit.ZBlkTab[zoid]
trackZinroot[zblki.Name] = inroot
}
Zinroot := map[string]setOid{}
for zblk := range Zinblk {
inroot := setOid{}; inroot.Add(t.Root())
Zinroot[zblk] = inroot
}
if !reflect.DeepEqual(trackZinroot, Zinroot) {
t.Errorf("Zinroot:\nhave: %v\nwant: %v", trackZinroot, Zinroot)
}
// verify Zinblk
trackZinblk := map[string]setI64{}
switch {
case len(δFtail.byRoot) == 0:
// ok
case len(δFtail.byRoot) == 1:
rt, ok := δFtail.byRoot[t.Root()]
if !ok {
t.Errorf(".byRoot points to unexpected blktab")
} else {
for zoid, inblk := range rt.ztrackInBlk {
zblki := commit.ZBlkTab[zoid]
trackZinblk[zblki.Name] = inblk
}
}
default:
t.Errorf("len(.byRoot) != (0,1) ; byRoot: %v", δFtail.byRoot)
}
if !reflect.DeepEqual(trackZinblk, Zinblk) {
t.Errorf("Zinblk:\nhave: %v\nwant: %v", trackZinblk, Zinblk)
}
// ForgetPast configured threshold
const ncut = 5
if len(vδf) >= ncut {
revcut := vδf[0].Rev
t.Logf("# forget ≤ @%s", t.AtSymb(revcut))
δFtail.ForgetPast(revcut)
vδf = vδf[1:]
//t.Logf("# vδf: %s", vδfstr(vδf))
//t.Logf("# vδt: %s", vδfstr(δFtail.SliceByFileRev(zfile, δFtail.Tail(), δFtail.Head())))
icut := 0
for ; icut < len(vδE); icut++ {
if vδE[icut].Rev > revcut {
break
}
}
vδE = vδE[icut:]
}
// verify δftail.root
δftail := δFtail.byFile[foid]
rootOK := t.Root()
if delfile {
rootOK = zodb.InvalidOid
}
if δftail.root != rootOK {
t.Errorf(".root: have %s ; want %s", δftail.root, rootOK)
}
// verify vδE
if !reflect.DeepEqual(δftail.vδE, vδE) {
t.Errorf("vδE:\nhave: %v\nwant: %v", δftail.vδE, vδE)
}
// SliceByFileRev
for j := 0; j < len(vδf); j++ {
for k := j; k < len(vδf); k++ {
var lo zodb.Tid
if j == 0 {
lo = vδf[0].Rev - 1
} else {
lo = vδf[j-1].Rev
}
hi := vδf[k].Rev
vδf_ok := vδf[j:k+1] // [j,k]
vδf_, err := δFtail.SliceByFileRev(zfile, lo, hi); X(err)
if !reflect.DeepEqual(vδf_, vδf_ok) {
t.Errorf("slice (@%s,@%s]:\nhave: %v\nwant: %v", t.AtSymb(lo), t.AtSymb(hi), t.vδfstr(vδf_), t.vδfstr(vδf_ok))
}
}
}
// BlkRevAt
blkv := []int64{} // all blocks
if l := len(vδf); l > 0 {
for blk := range blkRevAt[vδf[l-1].Rev] {
blkv = append(blkv, blk)
}
}
blkv = append(blkv, 1E4 /*this block is always a hole*/)
sort.Slice(blkv, func(i, j int) bool {
return blkv[i] < blkv[j]
})
for j := 0; j < len(vδf); j++ {
at := vδf[j].Rev
blkRev := blkRevAt[at]
for _, blk := range blkv {
rev, exact, err := δFtail.BlkRevAt(ctx, zfile, blk, at); X(err)
revOK, ok := blkRev[blk]
if !ok {
k := len(epochv) - 1
for ; k >= 0; k-- {
if epochv[k] <= at {
break
}
}
revOK = epochv[k]
}
exactOK := true
if revOK <= δFtail.Tail() {
revOK, exactOK = δFtail.Tail(), false
}
if !(rev == revOK && exact == exactOK) {
t.Errorf("blkrev #%d @%s:\nhave: @%s, %v\nwant: @%s, %v", blk, t.AtSymb(at), t.AtSymb(rev), exact, t.AtSymb(revOK), exactOK)
}
}
}
}
}
// TestΔFtailSliceUntrackedUniform verifies that untracked blocks, if present, are present uniformly in the returned slice.
//
// Some changes to untracked blocks might be seen by ΔFtail, because those
// changes occur in the same BTree bucket that covers another change to a
// tracked block.
//
// Here we verify that if some change to such an untracked block is ever present,
// SliceByFileRev returns all changes to that untracked block. In other words
// we verify that no change to an untracked block is missed, if any change to
// that block is ever present in the returned slice.
//
// This test also verifies handling of OnlyExplicitlyTracked query option.
func TestΔFtailSliceUntrackedUniform(t_ *testing.T) {
t := newT(t_)
X := exc.Raiseif
at0 := t.Head().At
δFtail := NewΔFtail(at0, t.DB)
// commit t1. all 0, 1 and 2 are in the same bucket.
t1 := t.Commit("T/B0:a,1:b,2:c")
δF, err := δFtail.Update(t1.ΔZ); X(err)
t.assertΔF(δF, t1.At, nil) // δf empty
t2 := t.Commit("t0:d,1:e,2:c Da:a,b:b,c:c2,d:d,e:e") // 0:-a+d 1:-b+e δc₂
δF, err = δFtail.Update(t2.ΔZ); X(err)
t.assertΔF(δF, t2.At, nil)
t3 := t.Commit("t0:d,1:e,2:c Da:a,b:b,c:c3,d:d3,e:e3") // δc₃ δd₃ δe₃
δF, err = δFtail.Update(t3.ΔZ); X(err)
t.assertΔF(δF, t3.At, nil)
t4 := t.Commit("t0:d,1:e,2:c Da:a,b:b,c:c4,d:d3,e:e4") // δc₄ δe₄
δF, err = δFtail.Update(t4.ΔZ); X(err)
t.assertΔF(δF, t4.At, nil)
// load zfile via root['treegen/file']
txn, ctx := transaction.New(context.Background())
defer func() {
txn.Abort()
}()
zconn, err := t.DB.Open(ctx, &zodb.ConnOptions{At: t.Head().At}); X(err)
zfile, _ := t.XLoadZFile(ctx, zconn)
xtrackBlk := func(blk int64) {
_, path, blkcov, zblk, _, err := zfile.LoadBlk(ctx, blk); X(err)
δFtail.Track(zfile, blk, path, blkcov, zblk)
}
// track 0, but do not track 1 and 2.
// blktab[1] becomes noticed by δBtail because both 0 and 1 are in the same bucket and both are changed @at2.
// blktab[2] remains unnoticed because it is not changed past at1.
xtrackBlk(0)
// assertSliceByFileRev verifies result of SliceByFileRev and SliceByFileRevEx(OnlyExplicitlyTracked=y).
assertSliceByFileRev := func(lo, hi zodb.Tid, vδf_ok, vδfT_ok []*ΔFile) {
t.Helper()
Tonly := QueryOptions{OnlyExplicitlyTracked: true}
vδf, err := δFtail.SliceByFileRev (zfile, lo, hi); X(err)
vδfT, err := δFtail.SliceByFileRevEx(zfile, lo, hi, Tonly); X(err)
if !reflect.DeepEqual(vδf, vδf_ok) {
t.Errorf("slice (@%s,@%s]:\nhave: %v\nwant: %v", t.AtSymb(lo), t.AtSymb(hi), t.vδfstr(vδf), t.vδfstr(vδf_ok))
}
if !reflect.DeepEqual(vδfT, vδfT_ok) {
t.Errorf("sliceT (@%s,@%s]:\nhave: %v\nwant: %v", t.AtSymb(lo), t.AtSymb(hi), t.vδfstr(vδfT), t.vδfstr(vδfT_ok))
}
}
// (at1, at4] -> changes to both 0 and 1, because they both are changed in the same bucket @at2
assertSliceByFileRev(t1.At, t4.At,
/*vδf*/ []*ΔFile{
&ΔFile{Rev: t2.At, Blocks: b(0,1), Size: true},
&ΔFile{Rev: t3.At, Blocks: b(0,1), Size: false},
&ΔFile{Rev: t4.At, Blocks: b( 1), Size: false},
},
/*vδfT*/ []*ΔFile{
&ΔFile{Rev: t2.At, Blocks: b(0 ), Size: true},
&ΔFile{Rev: t3.At, Blocks: b(0 ), Size: false},
// no change @at4
})
// (at2, at4] -> changes to only 0, because there is no change to 2 via blktab
assertSliceByFileRev(t2.At, t4.At,
/*vδf*/ []*ΔFile{
&ΔFile{Rev: t3.At, Blocks: b(0), Size: false},
},
/*vδfT*/ []*ΔFile{
&ΔFile{Rev: t3.At, Blocks: b(0), Size: false},
})
// (at3, at4] -> only 0 could be reported, but d is unchanged in this range -> no changes at all
assertSliceByFileRev(t3.At, t4.At,
/*vδf*/ []*ΔFile(nil),
/*vδfT*/ []*ΔFile(nil))
}
// newT creates new T.
func newT(t *testing.T) *T {
t.Helper()
tt := &T{xbtreetest.NewT(t), zodb.InvalidOid}
// find out zfile's oid
txn, ctx := transaction.New(context.Background())
defer func() {
txn.Abort()
}()
zconn, err := tt.DB.Open(ctx, &zodb.ConnOptions{At: tt.Head().At})
if err != nil {
tt.Fatal(err)
}
zfile, _ := tt.XLoadZFile(ctx, zconn)
tt.foid = zfile.POid()
return tt
}
// XLoadZFile loads zfile from root["treegen/file"]@head.
func (t *T) XLoadZFile(ctx context.Context, zconn *zodb.Connection) (zfile *ZBigFile, blksize int64) {
t.Helper()
X := exc.Raiseif
xzroot, err := zconn.Get(ctx, 0); X(err)
zroot := xzroot.(*zodb.Map)
err = zroot.PActivate(ctx); X(err)
zfile = zroot.Data["treegen/file"].(*ZBigFile)
zroot.PDeactivate()
err = zfile.PActivate(ctx); X(err)
blksize = zfile.blksize
blktabOid := zfile.blktab.POid()
if blktabOid != t.Root() {
t.Fatalf("BUG: zfile.blktab (%s) != treeroot (%s)", blktabOid, t.Root())
}
zfile.PDeactivate()
return zfile, blksize
}
// assertΔF asserts that δF has rev and δf as expected.
func (t *T) assertΔF(δF ΔF, rev zodb.Tid, δfok *ΔFile) {
t.Helper()
// assert δF points to zfile if δfok != ø
if δF.Rev != rev {
t.Errorf("wrong δF.Rev: have %s ; want %s", δF.Rev, rev)
}
δfiles := setOid{}
for δfile := range δF.ByFile {
δfiles.Add(δfile)
}
δfilesOK := setOid{}
if δfok != nil {
δfilesOK.Add(t.foid)
}
if !δfiles.Equal(δfilesOK) {
t.Errorf("wrong δF.ByFile:\nhave keys: %s\nwant keys: %s", δfiles, δfilesOK)
return
}
// verify δf
δf := δF.ByFile[t.foid]
if !reflect.DeepEqual(δf, δfok) {
t.Errorf("δf:\nhave: %v\nwant: %v", δf, δfok)
}
}
// δfstr/vδfstr convert δf/vδf to string taking symbolic at into account.
func (t *T) δfstr(δf *ΔFile) string {
s := fmt.Sprintf("@%s·%s", t.AtSymb(δf.Rev), δf.Blocks)
if δf.Epoch {
s += "E"
}
if δf.Size {
s += "S"
}
return s
}
func (t *T) vδfstr(vδf []*ΔFile) string {
var s []string
for _, δf := range vδf {
s = append(s, t.δfstr(δf))
}
return fmt.Sprintf("%s", s)
}
// dataTabTxt returns string representation of {} dataTab.
func dataTabTxt(dataTab map[string]string) string {
// XXX dup wrt xbtreetest.KVTxt but uses string instead of Key for keys.
if len(dataTab) == 0 {
return "ø"
}
keyv := []string{}
for k := range dataTab { keyv = append(keyv, k) }
sort.Strings(keyv)
sv := []string{}
for _, k := range keyv {
v := dataTab[k]
if strings.ContainsAny(v, " \n\t,:") {
panicf("[%v]=%q: invalid value", k, v)
}
sv = append(sv, fmt.Sprintf("%v:%s", k, v))
}
return strings.Join(sv, ",")
}
// b is shorthand to create setI64(blocks).
func b(blocks ...int64) setI64 {
s := setI64{}
for _, blk := range blocks {
s.Add(blk)
}
return s
}
// Copyright (C) 2021 Nexedi SA and Contributors.
// Kirill Smelkov <kirr@nexedi.com>
//
// This program is free software: you can Use, Study, Modify and Redistribute
// it under the terms of the GNU General Public License version 3, or (at your
// option) any later version, as published by the Free Software Foundation.
//
// You can also Link and Combine this program with other software covered by
// the terms of any of the Free Software licenses or any of the Open Source
// Initiative approved licenses and Convey the resulting work. Corresponding
// source of such a combination shall include the source code for all other
// software used.
//
// This program is distributed WITHOUT ANY WARRANTY; without even the implied
// warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
//
// See COPYING file for full licensing terms.
// See https://www.nexedi.com/licensing for rationale and options.
package zdata_test
import (
_ "lab.nexedi.com/nexedi/wendelin.core/wcfs/internal/xbtree/xbtreetest/init"
)