Flush undo files after last block write (validation)

https://github.com/bitcoin/bitcoin/pull/17994

Host: vasild  -  PR author: kallewoof

The PR branch HEAD was ac94141af at the time of this review club meeting.

In this Review Club we will dive into how blocks are downloaded and stored on disk, how disk access is arranged, and a bug that this arrangement has caused.

Notes

Concepts

  • When a node first connects to the network, it does an Initial Block Download (IBD) to download all the blocks up to the tip, validate them and connect them to its block chain.

  • Block undo data is all the information that is necessary to revert a block if the block needs to be disconnected during a reorg.

  • The blocks database is represented by two instances of FlatFileSeq - one for all blocks/blk*.dat files and another one for all blocks/rev*.dat files. FlatFilePos is used to locate objects within those files (see the first sketch after this list).

  • The meta information about a single block file and its corresponding undo file is represented by CBlockFileInfo.

  • The fflush library call moves all buffered data from the FILE* buffers to the OS (i.e. into kernel buffers). The data may not necessarily have hit the disk yet.

  • The fsync system call moves all modified data from the OS buffers to the disk itself (see the second sketch after this list).

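The following is a hedged sketch, loosely modelled on src/flatfile.h in Bitcoin Core (simplified, not the actual declarations), of how a FlatFileSeq names a numbered family of files and how a FlatFilePos addresses a byte offset inside one of them:

    #include <cstdio>
    #include <string>
    #include <utility>

    // A position within a flat-file sequence: which file, and where in it.
    struct FlatFilePos {
        int nFile{-1};        // file number in the sequence (e.g. 1234)
        unsigned int nPos{0}; // byte offset within that file
    };

    // A numbered family of files sharing a directory and prefix, e.g.
    // blocks/blk00000.dat, blocks/blk00001.dat, ...
    class FlatFileSeq {
        std::string m_dir;    // e.g. "<datadir>/blocks"
        std::string m_prefix; // "blk" for blocks, "rev" for undo data
    public:
        FlatFileSeq(std::string dir, std::string prefix)
            : m_dir(std::move(dir)), m_prefix(std::move(prefix)) {}

        // Map a position to a file name. The blk and rev sequences share
        // their numbering, so a block stored in blk01234.dat has its undo
        // data in rev01234.dat.
        std::string FileName(const FlatFilePos& pos) const
        {
            char name[32];
            std::snprintf(name, sizeof(name), "%s%05d.dat", m_prefix.c_str(), pos.nFile);
            return m_dir + "/" + name;
        }
    };

In this picture, the two database instances from the bullet above are the "blk" and "rev" sequences over the same blocks directory, and CBlockFileInfo holds the per-number metadata (block count, file sizes, height range) for one blk/rev pair.
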
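To make the fflush/fsync distinction concrete, here is a minimal POSIX-only sketch of the flush-then-sync pattern (Bitcoin Core wraps this, with platform-specific variants, in its FileCommit helper):

    #include <cstdio>
    #include <unistd.h> // fsync(), fileno() (POSIX)

    // Push everything written to `f` all the way to stable storage.
    bool CommitFile(std::FILE* f)
    {
        // Step 1: drain the stdio FILE* buffer into the kernel.
        if (std::fflush(f) != 0) return false;
        // Step 2: ask the kernel to write its buffers to the disk itself.
        if (fsync(fileno(f)) != 0) return false;
        return true;
    }

After fflush() alone the data survives a crash of the process but not necessarily an OS crash or power loss; fsync() is what protects against the latter.
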
Block and undo files

  • Blocks are stored in a custom format in the blocks directory. This directory consists of a series of blk*.dat files (currently 128 MiB each) that contain the raw blocks.

  • Each block is written to disk as it arrives from the network. Because blocks are downloaded in parallel from multiple peers during initial block download, a block with greater height can be received (and written to disk) before a block with lower height. We call these out-of-order blocks.

  • When initial block download finishes, things calm down and new blocks arrive every 10 minutes on average (assuming the node is online to receive them). That means we’re much less likely to write blocks out of order.

  • In addition to the blk*.dat files, we also generate and store “undo” information for each block in corresponding rev*.dat files. This can be used to revert all the transactions from the block if we need to disconnect the block from the chain during a reorg. Unlike blocks, this information is always stored in block height order.

    [Edit 2021-02-03: Note: Within a single rev*.dat file, the undo data is ordered – a block’s undo data will never appear after the undo data for a descendant block. However, this is not true if the rev*.dat files are concatenated – a block’s undo data may appear in a later rev*.dat file than the rev*.dat file containing its descendant’s undo data.]

  • We put block and undo data in corresponding blk*.dat and rev*.dat files, but within a pair the two files may order the data differently. For example, the undo data for all blocks in blk1234.dat will be in rev1234.dat, but the block at height 100 may be near the beginning of blk1234.dat while its undo data is near the end of rev1234.dat. (The sketch below walks the record format of a blk*.dat file.)
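
To make the "custom format" concrete: each record in a blk*.dat file is the network's four magic bytes, followed by a 4-byte little-endian length, followed by the serialized block. Below is a minimal sketch (assuming mainnet magic bytes; preallocated but unused space at the end of a file is zero-filled) that walks a blk file and prints the offset and size of each record:

    #include <cstdint>
    #include <cstdio>

    int main(int argc, char** argv)
    {
        if (argc != 2) {
            std::fprintf(stderr, "usage: %s blkNNNNN.dat\n", argv[0]);
            return 1;
        }
        std::FILE* f = std::fopen(argv[1], "rb");
        if (!f) { std::perror("fopen"); return 1; }

        unsigned char magic[4];
        long offset = 0;
        while (std::fread(magic, 1, 4, f) == 4) {
            // Stop at the zero-filled tail left over from preallocation.
            if (!(magic[0] == 0xf9 && magic[1] == 0xbe &&
                  magic[2] == 0xb4 && magic[3] == 0xd9)) break;
            unsigned char len[4];
            if (std::fread(len, 1, 4, f) != 4) break;
            uint32_t size = len[0] | (len[1] << 8) | (len[2] << 16) |
                            ((uint32_t)len[3] << 24);
            std::printf("block record at offset %ld, %u bytes\n", offset, size);
            std::fseek(f, (long)size, SEEK_CUR); // skip the serialized block
            offset = std::ftell(f);
        }
        std::fclose(f);
        return 0;
    }

The records appear in arrival order, not height order; mapping a record back to its block (and height) requires the block index in blocks/index, which stores each block's file number and offset.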

Questions

Overview

  • Can we create an undo file for a given block without having all prior blocks?

  • Do we ever modify existing block data in the blocks directory, or do we just append new blocks and their undos?

Dive

  • How is space allocated in the files when new data is appended? Why?

  • What does it mean to “finalize” a file?

  • What is the bug that the PR is fixing?

  • How would you reproduce the bug or show it exists?

  • How is the bug being fixed in the PR?

Aftermath

  • What can be improved further in this area?

Meeting Log

  13:00 <vasild> #startmeeting
  13:00 <jnewbery> hi
  13:00 <kanzure> hi
  13:00 <robot-visions> hi
  13:00 <emzy> hi
  13:00 <willcl_ark> hi
  13:00 <felixweis> hi
  13:00 <lightlike> hi
  13:00 <fjahr> hi
  13:00 <pinheadmz> hi
  13:00 <vasild> hi
  13:01 <nehan_> hi
  13:01 <vasild> Let's rock!
  13:01 <vasild> Did you get a chance to review the pull request or take a look at it?
  13:01 <jnewbery> y
  13:01 <nehan_> y
  13:01 <fjahr> y
  13:01 <vasild> y
  13:01 <lightlike> n
  13:01 <amiti> hi
  13:01 <amiti> n
  13:01 <willcl_ark> Only a quick look this week :|
  13:02 <felixweis> y
  13:02 <emzy> n
  13:02 <pinheadmz> n :-/ just lurking this week
  13:02 <robot-visions> y, but have some questions before submitting review
  13:02 <jonatack> hi
  13:02 <vasild> robot-visions: yes?
  13:02 <jonatack> y
  13:03 <vasild> robot-visions: feel free to ask here now, or later.
  13:03 <robot-visions> Is this bug fix concerned with data loss, or with wasting space?
  13:03 <robot-visions> (Which is the primary goal of the fix)
  13:03 <vasild> It is about wasting space.
  13:03 <emzy> about 1MB per file.
  13:03 <theStack> hi
  13:03 <robot-visions> Great, thanks! That makes sense
  13:04 <jkczyz> hi
  13:04 <vasild> So, lets scratch the surface a little bit - Can we create an undo file for a given block without having all prior blocks?
  13:05 <jules6> yes i think?
  13:05 <oyster> Don't see why it wouldn't be possible
  13:05 <robot-visions> I think we cannot, because without prior blocks, we would not be able to restore the UTXOs spent by the block (we'd have the `COutPoint`s, but not the `CTxOut`s)
  13:06 <pinheadmz> the undo data is a set of coins SPENT by a block (so that we can un-spend if it gets disconencted)
  13:06 <pinheadmz> you need the utxo set to determine which coins are spent by a block
  13:06 <theStack> i would think the same as robot-visions: no, it's not possible
  13:06 <pinheadmz> so actually a pruned node could do this: you need UTXO set, not prior blocks
  13:07 <robot-visions> pinheadmz: that makes sense to me!
  13:07 <vasild> right, actually, now when I think about it - the question is misleading. We need the utxo for which we need the prior blocks. So the answer would be "no", but however if we somehow have the utxo - then we are good.
  13:08 <pinheadmz> its a good question :-)
  13:08 <robot-visions> Agreed :) (good question), a lot of interesting nuance
  13:08 <jnewbery> vasild: the undo data is written in ConnectBlock(), which we only call for the tip block, so we need to have processed all the blocks up to that point
  13:08 <sipa> well, you need *all* utxos spent by a block, which may include utxos created the block before, so it can't be created before at least having validated the previous block
  13:08 <sipa> and after the block itself is activated, those utxos are gone
  13:09 <sipa> so the only time the undo data can be created for block B is in the narrow timespan betwee block B-1 and B+1
  13:09 <pinheadmz> this is making me wonder what happens if a node is pruning with depth (say) 100, and 101 blocks get disconencted
  13:10 <sipa> pinheadmz: hell breaks lose; a pruned node cannot reorg past its prune point
  13:10 <jnewbery> perhaps it's a trick question. If you're pruning, then you may have discarded the old raw blocks when you're connected blocks at the tip
  13:10 <pinheadmz> i suppose it doesnt make a difference right?
  13:10 <pinheadmz> sipa is that really right?
  13:10 <jnewbery> pinheadmz: it is. You'd need to redownload the entire chain
  13:10 <pinheadmz> shit!
  13:11 <pinheadmz> and its because we dont have the undo data for block tip-101
  13:11 <sipa> that's why you cannot prune less than 288 blocks deep
  13:11 <sipa> undo data is useless without the corresponding block data
  13:11 <jnewbery> that's why we have a healthy margin of error: https://github.com/bitcoin/bitcoin/blob/5dcb0615898216c503e965a01d855a5999a586b5/src/validation.cpp#L3951
  13:11 <sipa> as it doesn't contain the txids
  13:11 <vasild> luckily the minimum prune data one has to keep is 500MB IIRC, so that would be like 500MB worth of blocks a reorg. 500*10 minutes = 3.5 days
  13:11 <pinheadmz> i always thought the 288 rule was just so we could still relay recent blocks to peers
  13:11 <pinheadmz> hadnt considered this
  13:12 <sipa> pinheadmz: the pruned relay came much later; for several major releases pruning disabled all block fetching
  13:12 <pinheadmz> sure makes sense
  13:12 <vasild> Do we ever modify existing data in the blocks database, or do we just append new blocks and their undos?
  13:12 <felixweis> to restore the the utxo we need key=(txid,index) value=(script,amount,blockheight,is_coincase). some info (like original txid) is inferred from the last block (in blk*.dat), the rest is stored in rev*.dat, inputs are in the same canonical order as the inputs in the block spending. (iirc)
  13:13 <pinheadmz> was the decision about the 288 value controversial at the time?
  13:13 <sipa> vasild: the blocks leveldb db?
  13:13 <felixweis> i think if there is every a 288 block reorg, hell breaks lose outside of the software realm
  13:14 <sipa> pruning modifies existing blocks in it
  13:14 <vasild> I mean the blk*.dat and rev*.dat
  13:14 <sipa> no, they're write once & delete when pruning
  13:14 <robot-visions> vasild: I think we never modify data already written to the blk*.dat and rev*.dat files (but is it possible to prune / delete later)?
  13:14 <robot-visions> thanks sipa: for going back in time and answering my question before I wrote it
  13:15 <vasild> right, so we only append to blk*.dat and rev*.dat and never update/modify or delete (except deletion in prune)
  13:15 <sipa> in which case files are deleted as a whole
  13:16 <jnewbery> I found it confusing that ThreadImport is calling a function called SaveBlockToDisk (through AcceptBlock): https://doxygen.bitcoincore.org/validation_8cpp.html#a0042314b4feb61589ccfda9208b15306
  13:17 <oyster> vasild how does "not modifying existing blk* and rev* data square with: "In addition to the blk*.dat files, we also generate and store “undo” information for each block in corresponding rev*.dat files. This can be used to revert all the transactions from the block if we need to disconnect the block from the chain during a reorg. Unlike blocks, this information is always stored in block height order."
  13:17 <jnewbery> (actually I found the whole passing down of `const FlatFilePos* dbp` through that call stack confusing)
  13:17 <oyster> Does it mean that in order to ensure "information is always stored in block height order." that the rev data isn't written until all the blocks are processed up until that point?
  13:17 <vasild> oyster: you mean what happens with blk*.dat and rev*.dat when we disconnect a block?
  13:18 <sipa> oyster: the undo (rev) data does not *exist* until a block is activated
  13:18 <vasild> oyster: yes!
  13:18 <sipa> and it can only be activated when its parent is active
  13:18 <oyster> sipa ok that's what I'm thinking
  13:18 <felixweis> im still stuggling to understand how blocks in blk* are out of order, but rev is in order. the pruning mechanism deletes blk and rev of the same number? what would happen if the tip is artificialy stored in blk000000.dat right after the genesis block?
  13:18 <jnewbery> oyster: correct. See scrollback. A block's undo data can only be written as the block is connected
  13:18 <sipa> felixweis: then blk00000.dat will not be pruned
  13:19 <vasild> felixweis: it helps if you ignore pruning, it is not very relevant for this
  13:19 <sipa> (and perhaps nothing would be pruned, unsure)
  13:19 <robot-visions> Could heights in rev* be out of order if there was a disconnect?
  13:19 <felixweis> thanks, sorry for derailing
  13:19 <sipa> robot-visions: rev* is partially ordered (a block's parent always comes before the child)
  13:20 <robot-visions> Makes sense, thanks!
  13:20 <sipa> in the case of multiple branches it's possible that parent and child block's are not exactly adjacent
  13:20 <oyster> so if it's the case that rev data isn't written until block is activated/connected, how is the case that blk*.dat and rev*.dat files always contain the same block info, What if node starts and only hears about 128mb worth of blocks that are beyond it's tip? wouldn't those be written to blk* but couldn't be written to the corresponding rev number?
  13:20 <vasild> so, the point is - we dump blocks in blk*.dat as they come from the network, which is usually out of order, but we cannot do the same with undo - which we only can generate once all previous blocks are activated
  13:20 <pinheadmz> sipa: there is an index right? block hash => filename and start position ?
  13:21 <sipa> pinheadmz: yes, the block index
  13:21 <oyster> I think my question might be the same as felixweis's
  13:21 <pinheadmz> so file position doesn't really matter
  13:21 <robot-visions> oyster: if a block was written to blk_N.dat, then when we're writing the undo data, we open rev_N.dat and append it there
  13:22 <sipa> blk* files are size limited
  13:22 <robot-visions> rev_N are not size limited though?
  13:22 <vasild> oyster: we come back later, when we have the undo and append it to the proper rev*.dat file - the one which corresponds to the block's blk*.dat file. Notice that this is also always the last rev*.dat file.
  13:22 <sipa> rev* files just contain the undo data for the corresponding blocks in the same numbered blk fe
  13:22 <sipa> indeed, rev* are not size limited
  13:22 <oyster> ok thanks
  13:22 <sipa> vasild: not necessarily in case of reorgs
  13:22 <nehan_> sipa: they implicitly are because blk files are, right?
  13:23 <vasild> sipa: hmm, right!
  13:23 <sipa> nehan_: sure, but there is no explicit "rev files are limited to N MB" check in the code
  13:23 <vasild> How is space allocated in the files when new data is appended? Why?
  13:24 <felixweis> to have the data less fragmented
  13:24 <sipa> you could have in theory a rev file up to 250 MB or so
  13:24 <robot-visions> It's allocated in chunks (1 MB for undo, 16 MB for block), I'm guessing to reduce the number of allocations?
  13:24 <sipa> robot-visions: yeah, to reduce fragmentation
  13:25 <vasild> filesystem fragmentation
  13:25 <robot-visions> basic question: why does allocating data in chunks of 1 MB reduce filesystem fragmentation?
  13:25 <sipa> sorry, rev data for one block up to 250 MB, if it spent 25000 outputs all with a 10000 byte scriptPubKey
  13:25 <felixweis> i see how this is super useful with hdds, what if the underying is an ssd? have there been benchmarks done?
  13:25 <robot-visions> (what would happen if you always just extended by just enough to write the next piece of data)
  13:25 <sipa> robot-visions: it depends on the filesystem
  13:26 <oyster> seems like it depends on the FS block size
  13:26 <sipa> felixweis: certainly filesystems on SSDs wouldn't be impacted as much
  13:26 <vasild> filesystem != HDD vs SSD
  13:27 <sipa> but filesystem fragmentation just increases overhead for the filesystem
  13:27 <vasild> also, this pre-allocation guarantees that we will not get "disk full" error when writing the data later.
  13:27 <sipa> the preallocation is essentially us letting the OS know that this file will grow in the future
  13:28 <vasild> So, we preallocate some space in advance, even if we dont know if we will fully utilize it. What does it mean to "finalize" a file?
  13:30 <felixweis> vasild: crasing during pre-allocatin if disk is full is a nice sideeffect.
  13:30 <robot-visions> I believe "finalize" means truncate unused space that was preallocated (in addition to the usual flush + sync)
  13:30 <willcl_ark> am I right in thinking that blk*.dat files are never re-processed to re-order the blocks within them (after being finalised)?
  13:30 <nehan_> vasild: flush the stream to disk
  13:31 <nehan_> er, to the filesystem. i'm not sure where fsync() is called
  13:31 <vasild> felixweis: yes :) I don't know if this guarding from disk full is very useful. We can still get other IO errors and we have to be prepared for that anyway, even if disk full is not one of them.
  13:31 <jnewbery> willcl_ark: yes, that's right. They remain in the order in which they arrived.
  13:31 <vasild> willcl_ark: yes, we never modify them later
  13:32 <vasild> willcl_ark: notice that if we tried to reorder them later we may have to move blocks across blk files, if e.g. block 100 is in blk5 and block 101 is in blk4
  13:33 <willcl_ark> ah, interesting
  13:33 <sipa> felixweis: at the time is was written i doubt it was tested on SSDs
  13:34 <robot-visions> nehan_: does the `fsync` happen as part of the `fclose`?
  13:34 <vasild> nehan_: that is what the Flush() method does (fflush() + fsync()). Finalize is that + return the claimed space to the FS because we know that we will not need it because we will not append to this file again.
  13:34 <sipa> we fsync explicitly when flushing
  13:34 <jnewbery> sipa: has there ever been discussion about storing compressed blocks in the blk files? Any idea about how much space that could save?
  13:35 <vasild> btw, it is funny that we do fflush() right after fopen()
  13:35 <nehan_> ah, i see -- fsync() is in FileCommit which is always called during a flush. even if finalize is false?
  13:36 <vasild> jnewbery: compressed like in general compression algo, e.g. zlib?
  13:36 <theStack> if one would delete all blk*/rev* files from a (stopped) node and replace them by the copy from another node, would that cause any problems?
  13:36 <pinheadmz> jnewbery: I thought bitcoin data was too random to benefit from compression (all those hashes)
  13:36 <nehan_> robot-visio: fclose calls fflush but not fsync
  13:36 <vasild> nehan_: yes
  13:36 <robot-visions> Thanks nehan_!
  13:37 <vasild> theStack: hah! I guess it will be out of sync with blocks/index/
  13:37 <jnewbery> theStack: yes, because the leveldb blocks database stores where in the files the blocks are
  13:37 <theStack> vasild: jnewbery: would starting with -reindex help in this case?
  13:38 <vasild> I think it should fix the issue, but I am not very confident about that
  13:38 <jnewbery> theStack: I believe so, but I'm also not confident
  13:39 <jnewbery> pinheadmz: yeah you're right. Headers can be compressed by a bit but block data not very much
  13:39 <vasild> to be sure, I would `rm -fr blocks/index/` so at least if it bricks it bricks with some "file not found" error instead of some obscure trying to lookup a block in position X at file Y and finding something else there
  13:40 <fjahr> theStack: I was pretty confident until I saw jnewbery and vasild answers ;) But -reindex builds the blockindex so I don't know why it wouldn't work
  13:40 <theStack> i remember years ago people were offering blk/rev files via bittorrent, that's why i'm asking
  13:40 <pinheadmz> theStack: that was before headers-first sync i believe
  13:40 <theStack> i was always assuming that everynode has the exact same block files, i.e. their hashes would. today i learned that this is obviously not the case :)
  13:41 <sipa> jnewbery: sure, i even worked on a custom compression format with gmaxwell before
  13:41 <pinheadmz> now bitcoind pulls blocks in parallel just like bittorrent would
  13:41 <vasild> I am using zfs with lz4 compression for my blocks directory. "compressratio 1.11x"
  13:41 <sipa> jnewbery: savings are only 20-30% though; if block space os an issue, pruning is probably more important
  13:41 <felixweis> sipa: the notes for that would be so awesomly interesting xD
  13:42 <vasild> Main question: What is the bug that the PR is fixing?
  13:42 <sipa> felixweis: blockstream is using it for satellite block broadcast
  13:42 <felixweis> oh wow
  13:42 <vasild> sipa: it is 1.11x compression ratio in my case (lz4)
  13:43 <jnewbery> sipa: 20-30% saving on the header or the entire block?
  13:43 <pinheadmz> sipa: does it introsepct the bitcoin data? for example i can imagine pruning default values like nlocktime or nsequence, maybe squeeze some bytes out of DER format
  13:43 <vasild> That is counting everything inside ~/.bitcoin, but other stuff is too small
  13:43 <robot-visions> vasild: If we're receiving new black faster than the chain tip is moving, we could run into situations where we (1) finalize a rev* file, (2) write some more undo data to the file later, (3) don't finalize it again
  13:44 <sipa> pinheadmz: yes, exactly
  13:44 <pinheadmz> brilliant
  13:44 <robot-visions> block data*
  13:44 <theStack> jnewbery: the header consists mostly of hashes (64 out of 80 bytes), i guess there is not much possible with compression
  13:44 <sipa> jnewbery: whole block, but that number includes crazy stuff like exploiting pubkey recovery for p2pkh/p2wpkh
  13:45 <sipa> which makes decompression slower than full validation
  13:46 <jnewbery> sorry, I've derailed the conversation a bit. vasild's last question was: What is the bug that the PR is fixing?
  13:46 <vasild> robot-visions: right, except it happens during IBD when we receive blocks out of order and finalize revN whenever we finalize blkN, but later come back and append stuff to revN and forget to finalize it
  13:46 <robot-visions> (y)
  13:46 <felixweis> i got 1.427x with bzip2 -9 on blk00010.dat to blk00019.dat (earlier blk have lower ratio)
  13:48 <felixweis> but takes 7s to decompress
  13:48 <vasild> So, we should not finalize revN whenever finalizing blkN, unless we are sure that we will not come back to append more undo to it.
  13:48 <emzy> so blocks on btrfs fith lz4 compression is a good option?
  13:48 <vasild> How would you reproduce the bug or show it exists?
  13:49 <emzy> s/fith/with/
  13:49 <robot-visions> vasild: Is there harm in keeping the "finalize revN whenever finalizing blkN", but also add additional revN finalization as needed?
  13:49 <robot-visions> (I think the benefit would be making the code simpler, and the downside would be sometimes an extra finalization—I'm not sure how much that would affect fragmentation)
  13:50 <sipa> or we could benchmark how much the pre-allocation helps, and maybe get rid of ot entirely
  13:50 <jonatack> +1 on benchmarking
  13:50 <jnewbery> vasild: pedantically, we can't be *sure*, unless we've tried to connect every block in the blk file
  13:50 <vasild> robot-visions: this is what the fix does - but not unconditionally "finalize revN whenever finalizing blkN". It is possible to detect if we will be going back to append
  13:51 <vasild> I suspect the preallocation may turn out to have no effect on performance
  13:52 <vasild> hmm, we already answered the next question: "How is the bug being fixed in the PR?"
  13:52 <vasild> we natually moved to "how to improve things further" :)
  13:52 <vasild> remove preallocation! :)
  13:52 <robot-visions> :)
  13:52 <robot-visions> Just to make sure I understand correctly: Could you flush more often than needed in the rare edge case where the same file has multiple blocks with `nHeight == vinfoBlockFile[_pos.nFile].nHeightLast`?
  13:52 <jnewbery> (e.g. if there are two blocks of the same height in the blk file, there's always a chance that we might re-org to the other one)
  13:53 <robot-visions> s/flush/finalize
  13:53 <emzy> I think modern filesystems like zfs will cache / preallocate anyway in the background.
  13:53 <vasild> What about even removing rev*.dat files altogether? We only disconnect the block from the tip and at this point we have the utxo, so we can generate the undo on the fly as needed, no?
  13:54 <jnewbery> vasild: we can't do that. We need to recreate the utxos that were used up in the block we're disconnecting.
  13:54 <felixweis> vasild: you'd need a -txindex
  13:54 <nehan_> vasild: I don't see how you have the UTXO.
  13:55 <felixweis> or it will be painfully slow to find all those prevouts
  13:55 <sipa> vasild: after the block is connected, the ut os it spent are gome from the utxo set
  13:55 <sipa> so we need to keep them around somewhere
  13:55 <sipa> that's what the undo files do
  13:56 <sipa> *utxos it spent
  13:56 <vasild> hmm
  13:57 <sipa> with txindex and no pruning they could in theory be recovered from the block data itself, but it would be slow (as they're scattered all over), and not be generically applocable
  13:57 <sipa> *applicable
  13:58 <vasild> it would save 29GB of disk space from rev*.dat
  13:59 <vasild> what about prining then just the rev*.dat files, it is not like we will ever need the undo for some blocks from years ago
  13:59 <emzy> The reorg must be fast. So good to have it handy.
  13:59 <sipa> vasild: why not prune everything?
  14:00 <felixweis> vasild: even more on OSX pre 0.20 https://github.com/bitcoin/bitcoin/issues/17827
  14:00 <sipa> if you care about disk space, you should prune
  14:00 <vasild> well, yes, if we just prune rev* then we still have all the complications of maintaining them
  14:01 <vasild> Time is out. Take away - try to ditch preallocation completely, but not rev* files completely :)
  14:01 <sipa> well we need rev* data
  14:01 <felixweis> Undo data is awesome
  14:02 <jnewbery> that's time!
  14:02 <felixweis> ultraprune ftw! (listen to the podcast and read all there is on bitcoin.stackexchange.com)
  14:02 <felixweis> thanks everyone!
  14:02 <theStack> thanks for hosting
  14:02 <vasild> Thank you!
  14:02 <nehan_> thanks!
  14:02 <sipa> thanks!
  14:02 <jnewbery> thanks everyone!
  14:02 <willcl_ark> thanks!
  14:02 <emzy> Thanks!!
  14:03 <robot-visions> thanks!
  14:03 <jonatack> Thanks vasild and everyone! Great meeting
  14:03 <jnewbery> thanks vasild. Great job!
  14:03 <oyster> thanks vasild
  14:03 <vasild> #endmeeting
  14:04 <jnewbery> I'll post the log later today. If anyone wants to host in the upcoming weeks, please let me know
  14:04 <vasild> jnewbery: thanks for setting this whole thing up :)
  14:06 <emzy> I will try the blocks on bttfs with lz4. I hope it saves 11% and is maybe a little bit faster.
  14:07 <vasild> "so blocks on btrfs fith lz4 compression is a good option?"
  14:07 <vasild> emzy: you have to try
  14:09 <vasild> long time ago I enabled lz4 on everything because the decompression is so fast that "read compressed from disk + uncompress" is faster than "read uncompressed from disk". Of course it depends on the data itself, disk and CPU speed.
  14:10 <emzy> Yes my thinking. And I have more then one VM hat is on the disk space limit.
  14:11 <vasild> ah, yes, I forgot - it also saves disk space!
  14:11 <emzy> 11% will help a little
  14:14 <sipa> hmm, i wonder how well zopfli can compress block data
  14:17 <emzy> zpaq has super good compression ratios http://mattmahoney.net/dc/zpaq.html
  14:24 <sipa> -rw------- 1 pw pw 133925560 Apr 21 20:09 blk02043.dat
  14:24 <sipa> -rw-rw-r-- 1 pw pw 111908911 Apr 22 11:14 blk02043.dat.bz2
  14:24 <sipa> -rw-rw-r-- 1 pw pw 111091123 Apr 22 11:16 blk02043.dat.gzip.gz
  14:24 <sipa> -rw-rw-r-- 1 pw pw 110891285 Apr 22 11:18 blk02043.dat.zopfli.gz
  14:24 <sipa> -rw-rw-r-- 1 pw pw 100159172 Apr 22 11:12 blk02043.dat.xz
  14:24 <sipa> -rw-rw-r-- 1 pw pw 99945288 Apr 22 11:23 blk02043.dat.zpaq
  14:24 <emzy> wow you are fast. And zpaq won :)
  14:25 <sipa> -rw-rw-r-- 1 pw pw 113306837 Apr 22 11:24 blk02043.dat.lz4
  14:25 <sipa> actually, these are unfair numbers
  14:25 <sipa> if integrated into core, they'd be compressing block per block, rather than entire blk files
  14:26 <sipa> and if used at the filesystem level, they're compressiong per fs block
  14:26 <emzy> right.
  14:27 <emzy> but bz2 and lz4 are some how block based.
  15:07 <robot-visions> Hi, I have two follow up questions from today's PR review club:
  15:07 <robot-visions> 1) Is the UTXO set persisted to disk somewhere?
  15:08 <robot-visions> 2) Could bitcoind crash after updating the UTXO set but before writing the undo data for a block?
  15:12 <sipa> 1) yes, the chainstate directory (reconstructing it from the block data would take hours, and be impossible when pruned)
  15:13 <sipa> 2) shouldn't be, as the block data is always flushed before writing the block index, and the block index is always flushed before writing the chainstaye
  15:13 <sipa> (though now i want to go check)
  15:14 <robot-visions> Thanks! For (2) I'm looking at CChainState::ConnectBlock, and in particular where WriteUndoDataForBlock is called