Calculate UTXO set hash using Muhash (utils/log/libs)

https://github.com/bitcoin/bitcoin/pull/19055

Host: fjahr - PR authors: fjahr, sipa

The PR branch HEAD was 4438aed09 at the time of this review club meeting.

Notes

PR history

  • The idea to use Muhash in Bitcoin Core was initially introduced by Pieter Wuille in this mailing list post from 2017.

  • Pieter proposed an implementation in PR #10434, which still comprises most of the code in this week’s PR.

  • Fabian Jahr then picked up the proposal and conducted further research in 2019. A snapshot of that work can be seen in this gist.

  • Based on further insights and feedback, the idea evolved into implementing an index for all coin statistics and not only for the hash of the UTXO set. This was implemented in PR #18000.

  • This week’s PR is the first in a series of PRs to incrementally implement PR #18000.

PR 19055

  • This PR modifies how the hash of the UTXO set is calculated. That hash, as well as other coin statistics, can be accessed through the gettxoutsetinfo RPC. The new code uses the MuHash algorithm, which allows for incremental hashing.

  • The truncated hash SHA-512/256 is used to prepare the data of each UTXO for use in MuHash.
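The construction can be illustrated with a short Python sketch. This is an approximation, not Bitcoin Core's implementation: the real code derives the 256-bit element key with a truncated-SHA512 writer and expands it to 3072 bits with ChaCha20, which is replaced here by a SHA-512-based stream for brevity.

```python
# Illustrative MuHash sketch (NOT Bitcoin Core's implementation).
import hashlib

MODULUS = 2**3072 - 1103717  # the prime used by MuHash3072


def element_to_num(data: bytes) -> int:
    """Map a set element (e.g. a serialized UTXO) to a number mod MODULUS."""
    key = hashlib.sha512(data).digest()[:32]  # truncated SHA-512 -> 256-bit key
    stream = b""
    counter = 0
    while len(stream) < 384:  # 3072 bits = 384 bytes
        # Stand-in for the ChaCha20 expansion used by the real code.
        stream += hashlib.sha512(key + counter.to_bytes(4, "little")).digest()
        counter += 1
    return int.from_bytes(stream[:384], "little") % MODULUS


class MuHash:
    def __init__(self):
        self.acc = 1  # multiplicative identity represents the empty set

    def insert(self, data: bytes) -> None:
        self.acc = self.acc * element_to_num(data) % MODULUS

    def remove(self, data: bytes) -> None:
        # Removal is multiplication by the modular inverse (Python 3.8+).
        self.acc = self.acc * pow(element_to_num(data), -1, MODULUS) % MODULUS

    def digest(self) -> bytes:
        # Finalize the 3072-bit accumulator down to a 256-bit hash.
        return hashlib.sha512(self.acc.to_bytes(384, "little")).digest()[:32]
```

Because modular multiplication is commutative and invertible, the same digest is reached regardless of insertion order, and removing an element exactly undoes its insertion; this is what makes cheap per-block updates possible.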

Questions

  1. What does “incremental hashing” mean? Which of its properties are interesting for our use case?

  2. What were the use cases for RPC gettxoutsetinfo described in the resource links? Can you think of others? Which use cases are the most relevant to you?

  3. What do you think could be the biggest downside to introducing Muhash for the UTXO set hash?

  4. This PR not only adds MuHash but also TruncatedSHA512Writer. Why?

  5. Why is the Muhash class implemented in Bitcoin Core and not in libsecp256k1?

  6. Did you look into the other proposed hashing algorithms in some of the resources? What were their drawbacks? Do you think Muhash is the right choice?

  7. What are your thoughts on the Muhash benchmarks, e.g. (a) the benchmarking code, and (b) your observations if you tested the code locally?

  8. Considering the trade-offs, should the old hashing algorithm be kept around and accessible (using a flag, for example)?

Meeting Log

  13:00 <fjahr> #startmeeting
  13:00 <fjahr> hi
  13:00 <adiabat> hi
  13:00 <raj_149> hi
  13:00 <thomasb06> hi
  13:00 <nehan> hi
  13:00 <troygiorshev> hi
  13:00 <r251d> hi
  13:00 <jnewbery> hi
  13:00 <kanzure> hi
  13:00 <sipa> ~hi
  13:00 <fjahr> Hi everyone, welcome to this weeks pr review club. I am happy to share this one with you, since I have been working on this for a while.
  13:00 <willcl_ark> hi
  13:00 <fjahr> Feel free to ask questions any time, there can be multiple topics discussed at the same time.
  13:00 <fjahr> Who had a chance to review the PR? (y/n)
  13:01 <michaelfolkson> hi
  13:01 <troygiorshev> n
  13:01 <raj_149> y
  13:01 <nehan> y
  13:01 <adiabat> y
  13:01 <sipa> y
  13:01 <willcl_ark> y
  13:01 <raj_149> what does bogosize mean?
  13:02 <sipa> it's a vague metric for size of the utxo set, but it doesn't have any actual meaning
  13:02 <sipa> it's arbitrarily chosen constants
  13:02 <fjahr> "A meaningless metric for UTXO set size" says the help :p
  13:02 <jkczyz> hi
  13:03 <sipa> it's inspired by "bogomips" in the linux /proc/cpuinfo which is a meaningless indicator for processing speed
  13:03 <raj_149> sipa: fjahr oh.. :p
  13:03 <fjahr> but good question, definitely something to consider removing :)
  13:03 <sipa> all it means is larger number -> larger utxo set, somewhat
  13:03 <theStack> hi
  13:03 <sipa> it's useful for comparison
  13:03 <sipa> the number just doesn't have any physical meaning
  13:04 <sipa> there is also a field for the on-disk size, but that's nondeterministic
  13:04 <fjahr> Let's start with the questions I came up with, they are focussed on conceptual understanding, but definitely throw in more technical questions! What does “incremental hashing” mean? Which of its properties are interesting for our use case?
  13:05 <fjahr> sipa: but is it used by anyone? I could not figure that out tbh
  13:05 <michaelfolkson> Updating the hash value rather than calculating a new hash from scratch
  13:06 <raj_149> fjahr: adding new items into the hash input set without redoing the full hash. Easy to add and remove items from the set seems disreable for this purpose.
  13:06 <uproar> incremental hashing: change in work to recompute message digest is a function of how much the input message has changed
  13:06 <uproar> is my attempted definition
  13:07 <jnewbery> a hash function that takes a set of items and returns a digest, with operations to add and remove items from the set
  13:07 <uproar> and that that change is proportional to input message delta
  13:07 <gzhao408> particularly useful here, since the UTXO set is very large and unordered - we frequently add and remove items in no particular order
  13:07 <fjahr> raj_149: not sure if you mean to say that but it does need to be a set in the strict definition :)
  13:08 <sipa> i think a better term is homomorphic set hashing
  13:08 <uproar> unpacking that phrase would help
  13:08 <sipa> incremental just means you can easily add items, but doesn't imply anything about removal or the order in which you can do so
  13:09 <fjahr> true, that is the better phrase that includes the properties I was hinting at in the second question
  13:09 <sipa> probably not worth the semantics discussion :)
  13:09 <willcl_ark> that you can recompute the hash based on the old hash, after adding/removing items
  13:10 <theStack> does a merkle tree also count as incremental hash structure? it's quite easy to add a new leaf item and propagate up to the root hash
  13:10 <fjahr> I think these answer were all good, lets move on: What were the use cases for RPC gettxoutsetinfo described in the resource links? Can you think of others? Which use cases are the most relevant to you?
  13:10 <sipa> theStack: if you treat the entire merkle tree as the "hash", i'd say yes - but it's not a compact one
  13:11 <ecurrencyhodler> Can be used to validate snapshots for AssumeUTXO and also used for BTCPayServer's FastSync.
  13:11 <willcl_ark> I think it might be nice for an SPV wallet, along with checking valid headers (with most work) were provided, to also query multiple nodes' UTXO set hashes
  13:11 <raj_149> sipa: does that then also imply collisons are much common in case of incremental hashing? or its collison is not related to incremeental property atall?
  13:11 <uproar> Use case, hmm maybe: produce new soundness proofs based on the UTXO set?
  13:12 <sipa> uproar: muhash/ecmh/... don't permit compact proofs of inclusion or exclusion, so no - all you can do is check easily whether two sets you already have are identical
  13:12 <nehan> it can also serve as a checksum to check for corruption of the utxoset
  13:12 <sipa> raj_149: i don't know how to answer that
  13:12 <fjahr> How about total_amount? What can that be used for? :)
  13:13 <sipa> raj_149: it turns out that it's much harder to construct a homomorphic set hash and have it still be collision resistant
  13:13 <fjahr> total_amount is one of the values returned from gettxoutsetinfo
  13:13 <uproar> fast auditing on total supply
  13:14 <raj_149> fjahr: isn't that the total circulating supply?
  13:14 <sipa> willcl_ark: i don't think this is useful for SPV wallets (unless they also maintain a full UTXO set, which would be kind of pointless...)
  13:14 <uproar> raj_149 it's the total sum of UTXO maybe not circulating nor circulatable
  13:14 <fjahr> uproar: raj_149: yes! and that was one of the motivations that auditing could actually be really fast using the index :)
  13:15 <raj_149> fjahr: i think its very useful then to get normies get into Bitcoin. Thats the golden number to show. :D
  13:16 <uproar> fjahr fast amount/supply auditing is something I'd like but on the other hand I question to what end does it serve nodes (rather than users of those nodes), is MuHash decreasing of node-operation costs when the command is run?
  13:16 <fjahr> Next question: did you come across downsides from implementing muhash for gettxoutset?
  13:17 <raj_149> uproar: yes right, thanks for pointing that.
  13:17 <willcl_ark> sipa: hmmm yes, I guess you get nothing more from this as an SPV node than querying multiple full nodes for headers
  13:17 <sipa> uproar: muhash itself is slower than the current hash, but the advantage is that it can be incrementally computed and maintained in the background (updating the hash at every block, rather than at RPC time)... making the RPC instantaneous
  13:17 <fjahr> raj_149: yeah, and it's definitely much cooler if it takes a second rather than minutes :)
  13:17 <uproar> the UX benefit is: look no surprising inflation! but from the node operation perspective is: There shouldn't have been unless some pretty fundamental aspect about validation are broken
  13:18 <adiabat> how much slower is it if you're only running once after full IBD?
  13:18 <uproar> sipa that's a valuable datum
  13:18 <sipa> adiabat: my very rough guess is 5x or so
  13:18 <sipa> (of course, if you're I/O bottlenecked it's much less)
  13:18 <adiabat> oh that's not too bad. Once you run it more than a few times you're ahead
  13:18 <uproar> gettxoutsetinfo is painfully slow even on high resource machines
  13:19 <uproar> currently, that is >:]
  13:19 <sipa> with this PR it becomes a lot worse
  13:20 <sipa> fjahr: i guess you have more recent performance numbers
  13:20 <raj_149> sipa: i dont understand, how is it worse?
  13:20 <sipa> raj_149: because MuHash is many times slower than SHA256
  13:20 <fjahr> it definitely exceedes the standard rpc timeout on lower powered machines, which was the main driver to get concerned for me.
  13:20 <sipa> (and the current hash uses SHA256)
  13:21 <willcl_ark> Ok so it's slower, but happens in the background, at each new block?
  13:21 <raj_149> sipa: but the node will do that hashing internally for each block right? so getutxosetinfo will be very fast, right?
  13:21 <sipa> willcl_ark> not with this PR
  13:21 <nehan> raj_149: this change doesn't actually implement a rolling UTXOset hash. It just uses MuHash to calculate it when requested
  13:21 <willcl_ark> ah ok
  13:21 <sipa> willcl_ark: this just swaps out the SHA256 based hash for MuHash
  13:21 <sipa> but it paves the way for a background-updated utxo set hash
  13:21 <uproar> sipa is it that the first gettxoutsetinfo is going to get substantially slower but the incremental calls on gettxoutsetinfo will get faster relative to old gettxoutsetinfo subsequent calls?
  13:21 <willcl_ark> 1 step back...
  13:21 <sipa> uproar: no, all of them
  13:22 <raj_149> nehan: ah ok.. thats true.
  13:22 <adiabat> I guess for future PRs there are several options: increment at each block, or update a cache every time gettxouset is called and increment then
  13:22 <fjahr> I don't have them at hand right now but I think I posted them in an older pr version
  13:22 <sipa> uproar: again, this PR doesn't do any background hashing
  13:22 <willcl_ark> we need the next PR to make it rolling
  13:22 <adiabat> sipa: is later intent to make it happen every block or only update when requested? (or something else?_
  13:23 <sipa> adiabat: ask fjahr :)
  13:23 <jnewbery> it also adds a flag so you can still get the legacy SHA256 hash from gettxoutset if you want
  13:23 <willcl_ark> Would it not be wise to add a commit for the rolling hash as part of this PR? Why would you split this way (with a performance degredation)?
  13:23 <raj_149> basic question: utxo set is updated currently at each new block? or by some other trigger?
  13:23 <michaelfolkson> There's another downside in terms of time taken to brute force the pre-image right? Once the pre-image is known from the original hash an attacker can brute force the pre-image for the new hash in a shorter time than normal?
  13:23 <fjahr> will_clark: yes, that was 18000 which had everything, now i have split it up and this is the first part.
  13:23 <adiabat> fjahr: is the intent.. :)
  13:23 <willcl_ark> fjahr: I see
  13:24 <sipa> michaelfolkson: it is supposed to have 128-bit collision resistance security, just like SHA256
  13:24 <fjahr> adiabat: The plan is to update with every block/reorg and have an index
  13:24 <sipa> adiabat: updating only when requested requires iterating over the whole utxo still, so that wouldn't give any gain
  13:24 <fjahr> so older states can be queried quickly as well
  13:25 <michaelfolkson> When you say "supposed to" sipa what does that mean? Makes it sound like you doubt that claim ;)
  13:26 <fjahr> Keep those questions coming :) But here is one of mine as well: This PR not only adds Muhash but also TruncatedSHA256Writer. Why?
  13:26 <adiabat> I don't see why updating when requested needs to look at the whole utxo set? Couldn't it replay blocks since last call and perform the additions / deletions that way?
  13:26 <fjahr> TruncatedSHA512Writer actually :D
  13:26 <jnewbery> michaelfolkson: that's because it's a hash function. We assume that it has that much security until someone breaks it
  13:27 <adiabat> (though I guess that could be even slower if it's very infrequently called)
  13:27 <raj_149> fjahr: wild guess. to add extra layer of security?
  13:27 <sipa> michaelfolkson: assuming the discrete logarithm in GF(2^3072 - 1103717) takes is 128-bit secure, that ChaCha20 and truncated SHA512 have their intended security, ...
  13:27 <jnewbery> adiabat: we don't have those utxos any more after the block has been connected
  13:27 <raj_149> and may be added collison resistance..
  13:27 <michaelfolkson> Ok gotcha thanks sipa jnewbery
  13:27 <sipa> adiabat: oh sure, it could, but that'd be incompatible with pruning
  13:28 <uproar> maybe I'm missing something but if currently SHA256 hasing in gettxoutsetinfo makes calls take ~5 minutes, and MuHash increases the time, what's the benefit? being able to do it in the background or that the total sum of updates costs less because of the "homomorphic set" properties?
  13:28 <adiabat> ah right that makes sense
  13:28 <sipa> uproar: updating is super fast
  13:28 <nehan> sipa: can your replay blocks to update the rolling utxoset hash without the utxoset at the point you start replaying?
  13:28 <sipa> nehan: sure
  13:28 <jnewbery> right, you could look up each utxo in the previous block files, but that would be painfully slow, and incompatible with pruning
  13:28 <sipa> if you still have the block
  13:29 <sipa> and the undo data
  13:29 <uproar> sipa ok, thanks I think i mistook "Muhash is slower" to mean "the entire update process for post PR will be slower" which I take be not the case
  13:29 <nehan> sipa: are you suggesting you'd have to reconstruct the UTXOset at that point in the past using the undo data?
  13:29 <sipa> uproar: yes it will be much slower, but it paves the way for incrementally computing the hash, which will make the RPC instant
  13:29 <jnewbery> oh sorry I'm wrong - of couse you can use the undo data
  13:29 <sipa> uproar: however this PR doesn't do any of the incrementality
  13:29 <uproar> now I understand the bigger picture, thanks
  13:30 <sipa> nehan: no
  13:30 <fjahr> ray_149: so the truncated hash does not decrease security but i don't think it increases it either. It is an efficient way to turn the raw data of utxos into a 256bit number.
  13:30 <nehan> sipa: k, thanks
  13:31 <raj_149> fjahr: but why trucncated 512? why not sha256 it directly?
  13:31 <sipa> nehan: blocks are "patches" to the UTXO set; if you have the hash state as of block N, you you roll that state forward by applying the blocks after it to it
  13:31 <sipa> you don't need the actual UTXO set anymore at that point
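The "blocks are patches" idea described above can be sketched in a few lines of Python. This is a simplified stand-in, not Bitcoin Core code: `elem` approximates the truncated-SHA512 + ChaCha20 element hash with plain SHA-512, and the spent UTXOs would come from the block's undo data in practice.

```python
# Rolling a MuHash-style accumulator forward block by block (sketch).
import hashlib

MODULUS = 2**3072 - 1103717  # prime modulus discussed in the meeting


def elem(data: bytes) -> int:
    # Stand-in element hash: stretch SHA-512 output to 384 bytes (3072 bits).
    stream = b""
    counter = 0
    while len(stream) < 384:
        stream += hashlib.sha512(data + bytes([counter])).digest()
        counter += 1
    return int.from_bytes(stream[:384], "little") % MODULUS


def apply_block(acc: int, created: list, spent: list) -> int:
    """Treat a block as a patch against the UTXO set: multiply in the
    newly created outputs, divide out the spent ones."""
    for utxo in created:
        acc = acc * elem(utxo) % MODULUS
    for utxo in spent:
        acc = acc * pow(elem(utxo), -1, MODULUS) % MODULUS
    return acc
```

Given the accumulator state at block N, applying each subsequent block's created and spent outputs yields the state at the tip without ever iterating the full UTXO set again.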
  13:31 <fjahr> well, sipa made the choice but from the research papers I saw it is faster
  13:32 <willcl_ark> faster to SH512 and then truncate?
  13:32 <sipa> yeah, it's because most UTXOs are of a size that would require 2 SHA256 compressions, but only one SHA512 compression
  13:32 <sipa> so SHA512 may be faster for such inputs
  13:32 <sipa> it's probably worth benchmarking that again, now that we have AVX2 and SHA-NI optimized SHA256
  13:32 <fjahr> There is a link to a SHA512/256 research paper somewhere in a #18000 comment
  13:33 <fjahr> I think that was the best resource I found if you want to dig deeper
  13:33 <fjahr> https://eprint.iacr.org/2010/548.pdf
  13:34 <fjahr> found it :)
  13:34 <willcl_ark> StackExchange is tellign me the max message size for SHA256 is 2 million terabytes
  13:34 <willcl_ark> thanks fjahr
  13:34 <fjahr> sipa: correct me if that is not the right resource
  13:34 <sipa> willcl_ark: that's correct, but i don't think that's relevant?
  13:34 <sipa> fjahr: i think the right resource is benchmarking yourself :)
  13:34 <fjahr> :)
  13:35 <sipa> UTXOs are limited to 10000 bytes ish
  13:35 <willcl_ark> sipa what is " most UTXOs are of a size that would require 2 SHA256 compressions" in reference to then?
  13:35 <uproar> I think it's indicating the mechanics of how 256 blocks are compressed
  13:35 <sipa> willcl_ark: SHA256 transforms an internal 32-byte state by consuming 64 byte blocks at a time; the runtime is proportional to have many of those 64-byte blocks you need to consume
  13:36 <uproar> max message size and how many blocks you have to compress aren't proportional
  13:36 <fjahr> Ok, I hope it's not too obvious for everyone, but: Why is the Muhash class implemented in Bitcoin Core and not in libsecp256k1?
  13:36 <willcl_ark> hmmm ok. that's definitely a good one for me to read up on :)
  13:36 <sipa> SHA512 transforms an internal 64-byte state by consuming 128 byte blocks at a time; the runtime per consumption is longer than SHA256's, but typically faster per byte
  13:36 <sipa> willcl_ark: as UTXOs are often larger than 64 bytes, but less than 128 bytes, they'd need 2 SHA256 consumptions, or 1 SHA512 consumption
  13:36 <sipa> so the question is which of those two is faster
  13:37 <raj_149> sipa: correct me if i am wrong, but its are not hashing the utxos directly, its hashing the finalised muhash, which is 3072 bit.
  13:37 <willcl_ark> sipa: ah ok. that makes it clear now!
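The block-count argument above can be checked with a quick back-of-the-envelope calculation, assuming standard Merkle-Damgård padding: SHA-256 appends at least 9 extra bytes (a 0x80 byte plus an 8-byte length) and pads to 64-byte blocks, while SHA-512 appends at least 17 extra bytes (0x80 plus a 16-byte length) and pads to 128-byte blocks.

```python
# Compression-function invocations needed to hash a message of msg_len bytes.
def sha256_blocks(msg_len: int) -> int:
    return (msg_len + 9 + 63) // 64  # pad to a multiple of 64 bytes

def sha512_blocks(msg_len: int) -> int:
    return (msg_len + 17 + 127) // 128  # pad to a multiple of 128 bytes

# e.g. a 75-byte serialized UTXO needs 2 SHA-256 compressions but only
# 1 SHA-512 compression, so SHA-512 can win despite its costlier rounds.
```

Whether the single SHA-512 compression actually beats two SHA-256 compressions on a given machine depends on the available SHA-256 optimizations (AVX2, SHA-NI), which is exactly why benchmarking is suggested.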
  13:37 <fjahr> or why would ECMH be in lipsecp and not Muhash/
  13:37 <fjahr> ?
  13:37 <sipa> raj_149: no, the individual UTXOs are hashed, to produce a 256-bit key; that key is than expanded into a 3072-bit number using ChaCha20
  13:38 <sipa> the 3072-bit numbers are then multiplied modulo 2^3072-1103717
  13:38 <sipa> and then the final 3072-bit output is again SHA512 hashed to produce a 256-bit output
  13:38 <sipa> *then
  13:38 <jonatack> hi
  13:39 <fjahr> hi jonatack
  13:39 <jonatack> great PR, but separated from 18000 it's a bit neither here nor there ;)
  13:39 <jonatack> i'll explain:
  13:39 <jnewbery> the final 3072-bit output could be hashed using SHA256, but I suppose you use 512 for consistency?
  13:39 <sipa> jnewbery: yeah
  13:39 <sipa> that one doesn't matter
  13:40 <raj_149> sipa: ok got it. i thought we were talking about the final one here.
  13:40 <jonatack> fjahr: it is good that you added the legacy_hash bool to the RPC, but i would suggest making MuHash opt-in rather than the default until the index is merged
  13:40 <fjahr> well, I hope most know libsecp does eliptic curve crypto and there is not EC used in muhash but ECMH does use it :)
  13:40 <jnewbery> currently SHA512 is only used for encrypting the wallet and seeding the PRNG as far as I can tell
  13:40 <sipa> fjahr: if you're going to cache the hash for every block... that's actually an argument in favor of ECMH, as the minimal "state" to keep for ECMH is 33 bytes only, while for MuHash it's 384 bytes
  13:41 <jonatack> because otherwise gettxoutsetinfo is essentially broken for me on testnet and mainnet until then
  13:41 <uproar> willcl_ark this is a nice (slowish) breakdown of how sha 256 works. see the video of it verbally annotated https://github.com/in3rsha/sha256-animation
  13:41 <fjahr> sipa; I am only writing the finalized hash into the index otherwise there is one muhash that is update with every block
  13:42 <willcl_ark> thanks uproar, I will take a look at that later
  13:42 <sipa> fjahr: ah, gotcha
  13:42 <sipa> that makes sense
  13:42 <jnewbery> jonatack: by broken do you mean 'i have to pass a flag'?
  13:42 <fjahr> jonatack: you mean it's too slow, right?
  13:42 <thomasb06> jonatack: as expected, the archwiki admins are not interested to make a new page on how to compile and test the Bitcoin core. Your page will remain the reference for Linux builds for another while...
  13:43 <jonatack> jnewbery: yes, without the flag it times out and raises... i haven't found the counterflag for that yet :)
  13:43 <jnewbery> thomasb06: that's off-topic. Perhaps you can message jonatack after the meeting
  13:43 <jonatack> fjahr: right
  13:43 <willcl_ark> jonatack: I think I'd agree that defaulting to the faster impl. makes sense, with a flag for the new.
  13:43 <fjahr> yeah, I am still a bit undecided what the right way to go is concerning the old way, do we want it temporary or permanently
  13:44 <jnewbery> jonatack: I agree that the default should be switched to maintain existing behaviour
  13:44 <jonatack> i'm building the index now... ~100k blocks/hour to build so far
  13:44 <jonatack> (with #18000)
  13:45 <jonatack> jnewbery: yes
  13:45 <willcl_ark> I didn't try the PR on my server, but current performance is 60 seconds exactly for gettxoutset so at ~5x slower that will be an issue (that a flag can solve, but still...)
  13:45 <nehan> fjahr: at the very least don't enable it by default until you have the index, since this change just makes gettxoutsetinfo strictly worse
  13:45 <jonatack> nehan: ^
  13:46 <fjahr> nehan: for the people who don't run the index it will always be worse, so I guess it would need to stay the default then :)
  13:46 <uproar> nehan is your comment about optimal migration or something else I didn't understand?
  13:47 <fjahr> noted that default should be the old one
  13:47 <nehan> uproar: i think migration? not sure what you mean. but yeah, whether to turn something on by default and in what order to do so
  13:47 <uproar> e.g. "keep using the old method until the new method has caught up" or something else
  13:47 <nehan> fjahr: hmm that's tricky. and if it's off by default you've got two hashes for the utxoset in the world.
  13:47 <jnewbery> fjahr: I don't agree. Once there's an optional index, then I think it's possible we might want to remove the SHA256 version
  13:48 <sipa> agree
  13:48 <sipa> maybe not immediately, but certainly with a deprecation window that should be ok
  13:48 <jonatack> yes
  13:48 <nehan> jnewbery: sipa: how did you reach that conclusion? cause the rpc is not used very much?
  13:48 <jnewbery> you have two options: run an index and have fast access to the utxo hash, or don't and be a little bit patient
  13:48 <fjahr> hm, ok, I will have to think about it and test more on slower machines
  13:49 <sipa> nehan: i suspect that once the index code exist, everyone who more than very exceptionally uses gettxoutsetinfo will enable the index (as it's pretty much free)
  13:49 <willcl_ark> is the RPC timeout 2 minutes?
  13:49 <jnewbery> if you're someone who wants to query the utxo set hash frequently, then build the index. If you're not, a one-off hit on the rare ocassion you do run it is acceptable
  13:49 <fjahr> 15 is the default i think
  13:49 <willcl_ark> oh, right
  13:50 <nehan> jonatack: ah. i see i just restated what you said earlier with the comment on the default flag
  13:51 <fjahr> Ok, I have another question on hashing algos but I would skip that unless someone has thoughts on it.
  13:51 <thomasb06> (do you have the doc page for RPC under the arm by the way?)
  13:51 <fjahr> For those that tested the code already: What are your thoughts on the Muhash benchmarks, e.g. (a) the benchmarking code, and (b) your observations if you tested the code locally?
  13:51 <jonatack> nehan: i was seconding your comment :)
  13:52 <nehan> fjahr: depends onif anyone is relying on it. how do people usually use this rpc?
  13:52 <sipa> tbh, i mostly use it for statistics like number of UTXOs and circulating supply
  13:52 <uproar> same
  13:53 <sipa> but once there is a fast way to compute a utxo set hash (or access it), i think that aspect of it will become much more useful
  13:53 <nehan> if no one is relying on the old hash format it seems quite reasonable to get rid of it!
  13:54 <sipa> we generally have a policy of not breaking RPC compatibility without deprecation cycle
  13:54 <jnewbery> fjahr: my high-level thought about the PR is that it's good that you've split this from #18000, but there's still a lot of code there to review. It's a mix of python cryptography, C++ crypto, ASM, RPC code,... If you could split off smaller parts to review, it'd be easier to make progress
  13:54 <jonatack> same, and #18000 is very welcome to me because it's that RPC is nigh unuseable at the moment
  13:54 <fjahr> The two use cases I met mostly were: a) "for fun" to check if the node was running and to se the stats and b) checking circulating supply regularly (Bitmex publishes the numbers for example)
  13:54 <jnewbery> (that assumes that there is consensus ACK for the overall concept and approach)
  13:54 <fjahr> jnewbery: yeah, it has grown the last two days, agree
  13:55 <sipa> fjahr: one thing that could be done if you have the index is make the startup consistency check roll back the utxo hash a few blocks, and compare it with the index
  13:55 <sipa> which would prove correctness of the block and undo data on disk
  13:56 <jonatack> jnewbery: fjahr: i agree, it's a long and hard review because it's so wide ... multi-day if you get into the crypto algo implementation and the assembly optimising
  13:56 <fjahr> sipa: interesting, will conisder adding it when adding the index
  13:57 <jonatack> nice
  13:57 <fjahr> ok, three minutes to go, it went way too fast as always. Any last comments?
  13:57 <fjahr> or questions?
  13:58 <fjahr> I am always open for DMs if you have questions on this btw
  13:58 <sipa> fjahr: thanks for taking this up, and helping push it forward :)
  13:58 <willcl_ark> yes thanks fjahr, it's a nice PR IMO, once it's fully-formed
  13:58 <uproar> much thanks fjahr !
  13:59 <jnewbery> If there's a split PR on the python implementation, it'd be great to have a review club on that where we can really get into the weeds on the crypto :)
  13:59 <nehan> thanks fjahr!
  13:59 <raj_149> thanks fjahr for hosting the review sesion too
  13:59 <troygiorshev> thanks fjahr
  13:59 <jonatack> thanks fjahr and sipa!
  13:59 <raj_149> jnewbery: that sounds very fun.
  13:59 <uproar> jnewbery +!
  13:59 <uproar> +1*
  14:00 <fjahr> Thanks everyone for attending, especially sipa :)
  14:00 <jnewbery> thanks fjahr!
  14:00 <fjahr> #endmeeting
  14:00 <thomasb06> thanks fjahr
  14:00 <theStack> thanks for hosting fjahr, that was an interesting read today