Make orphan processing interruptible (p2p)

https://github.com/bitcoin/bitcoin/pull/15644

Host: narula  -  PR author: sipa

The PR branch HEAD was 866c805 at the time of this review club meeting.

Notes

  • PR 15644 was merged over a year ago. It changes the way a node looks through its orphan set to figure out if orphans can now be processed and added to the mempool.

  • A transaction is an orphan if we’re missing one or more of its inputs (we refer to the transactions that create the outputs spent by the orphan as its parent transactions). The orphan may be a valid transaction, but we can’t know until we have its parents. Orphan management requires care precisely because orphans cannot be validated yet.

  • Orphans are stored in a map called mapOrphanTransactions, keyed by their txid. The number of orphan transactions is limited by -maxorphantx, which defaults to 100.
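The bookkeeping described above can be sketched roughly as follows. This is a simplified, illustrative Python model, not the actual C++ in net_processing.cpp: the real mapOrphanTransactionsByPrev stores sets of map iterators rather than txids (a point discussed in the meeting), and eviction/expiry are omitted here.

```python
# Illustrative sketch of Bitcoin Core's orphan bookkeeping (not the real code).
MAX_ORPHAN_TRANSACTIONS = 100  # default -maxorphantx


class OrphanPool:
    def __init__(self):
        # txid -> tx, analogous to mapOrphanTransactions
        self.orphans = {}
        # outpoint (parent_txid, n) -> set of orphan txids spending it,
        # analogous to mapOrphanTransactionsByPrev (which stores iterators)
        self.by_prev = {}

    def add(self, txid, tx):
        """tx is a dict with an 'inputs' list of (parent_txid, n) outpoints."""
        if len(self.orphans) >= MAX_ORPHAN_TRANSACTIONS:
            return False  # the real code instead evicts a random orphan
        self.orphans[txid] = tx
        for outpoint in tx["inputs"]:
            self.by_prev.setdefault(outpoint, set()).add(txid)
        return True

    def children_of(self, parent_txid, n_outputs):
        """Orphans that spend any output of a newly accepted parent; these are
        the candidates to re-evaluate for mempool acceptance."""
        found = set()
        for n in range(n_outputs):
            found |= self.by_prev.get((parent_txid, n), set())
        return found
```

The by_prev index is the key to efficiency: when a new transaction enters the mempool, its outputs can be matched against waiting orphans directly, without walking the whole orphan set.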

Questions

  1. Did you review the PR? Concept ACK, approach ACK, tested ACK, or NACK? You’re always encouraged to put your PR review on GitHub, even after it has been merged.

  2. Why might we want a node to keep track of orphans at all? What does this help with?

  3. Look at the way orphans are added to the orphan maps and how orphans are re-evaluated when we accept new transactions to the mempool. What happens when a node receives a transaction where it has not seen the transaction’s inputs?

  4. Observe that mapOrphanTransactionsByPrev is a map of outpoints to a set of iterators over the orphan transactions map. Why do you think it’s written the way it is, instead of storing, say, orphan txids?

  5. Can you think of a problem with the pre-PR version of the code? What is it, and how does the PR fix it? Hint: look at what we iterate over when processing orphans pre- and post-PR.

  6. How might the adoption of something like SIGHASH_NOINPUT affect orphan processing?

Further Reading

Meeting Log

  17:00 <jnewbery> #startmeeting
  17:00 <jnewbery> hi everyone!
  17:00 <emzy> hi
  17:00 <troygiorshev> hi
  17:00 <willcl_ark> hi
  17:00 <nehan> hi everyone!
  17:00 <swapna> hi eveyone!
  17:00 <michaelfolkson> hi
  17:00 <ariard> hi
  17:00 <felixweis> hiu
  17:00 <lightlike> hi
  17:00 <theStack> hi
  17:00 <factoid> hi
  17:00 <gzhao408> hi!
  17:00 <adiabat> hi
  17:00 <nehan> thanks for joining today. notes and questions here: https://bitcoincore.reviews/15644.html
  17:01 <ajonas> hi
  17:01 <amiti> hi
  17:01 <nehan> reminder, feel free to ask ANY questions. and let us know if this is your first time at review club!
  17:01 <nehan> today we're going to talk about orphan processing. who's had a chance to review the pr? (y/n)
  17:01 <amiti> y
  17:01 <adiabat> y
  17:02 <willcl_ark> y
  17:02 <troygiorshev> y
  17:02 <theStack> n
  17:02 <felixweis> n
  17:02 <jnewbery> y
  17:02 <lightlike> y
  17:02 <emzy> y/n
  17:02 <michaelfolkson> y
  17:02 <swapna> n
  17:02 <gzhao408> y
  17:02 <factoid> y
  17:03 <nehan> ok. so as stated in the notes, an orphan transaction is one where we are missing one or more of its inputs
  17:03 <pinheadmz> hi
  17:03 <nehan> first question: Why might we want a node to keep track of orphans at all? What does this help with?
  17:03 <nehan> one could imagine just dropping transactions if we can't validate them immediately
  17:04 <adiabat> It seems like if you don't, you still have to keep track of the fact that you saw it, and not request it again
  17:04 <willcl_ark> if we already have the orphan then as soon as we have the parent we can validate immediately without needing to receive/request again
  17:04 <adiabat> but if you only keep track of "this is bad/invalid" then when it's no longer an orphan you won't get it until it's in a block
  17:04 <emzy> could be that the order they arive are different. So the missing input will arive later.
  17:04 <michaelfolkson> Though it is removed after 20 mins. Is that right? Seems short time
  17:05 <nehan> good points. adiabat: given that the orphan map is limited in size, we cannot guarantee we will never request something again
  17:05 <adiabat> would a different strategy be to hang up on or ban nodes that give you an orphan tx? Are there problems or DoS attacks with doing that?
  17:06 <amiti> also helps enable CPFP (child pays for parent). if the mempool conditions means the parent isn't accepted, the child would need to be accepted first in order to include both into a block
  17:06 <jnewbery> adiabat: we can only add txs to 'recent rejects' if we know that they're definitely invalid. Otherwise, there are tx censorship attacks
  17:06 <gzhao408> seems reasonable to only keep for a short period of time, I'm imagining the case where they're both relayed and you just happen to receive the child first
  17:06 <nehan> adiabat: good question! what do people think?
  17:06 <ariard> well the sending node may not be the one responsible for the tx being an orphan at reception, parent might have been replaced
  17:06 <nehan> i think that would be a bad idea, especially on startup, when your mempool is empty and you might receive transactions out of order
  17:06 <factoid> ariard +1
  17:07 <jnewbery> nehan: +1. Orphans are most common at startup or when making new connections
  17:07 <sipa> orphan transactions are not a sign of misbehavior; they arise naturally due to variations in network connections
  17:07 <adiabat> it does seem to put a lot of responsibility on the sending node, they'd need to make sure to send all invs in order
  17:07 <nehan> and a sending node just doesn't know what its peer has seen/not seen
  17:07 <ariard> they do send inv in topology-order in SendMessages IIRC
  17:07 <nehan> unless it sent it itself!
  17:07 <sipa> and the sending node cannot know if perpahs they already gave you the parent, but you replaced it!
  17:07 <nehan> or that ^
  17:07 <nehan> ok let's move on to how orphans are handled
  17:08 <amiti> also a node might request a parent from one peer, child from another & the child is delivered first. doesn't seem like a fault of the peer that delivered!
  17:08 <nehan> what happens regarding orphans when we accept a new transaction to the mempool?
  17:08 <jnewbery> adiabat: that is indeed what we do: https://github.com/bitcoin/bitcoin/blob/2c0c3f8e8ca6c64234b1861583f55da2bb385446/src/net_processing.cpp#L4203-L4206
  17:08 <lightlike> also, if we just discarded the orphans, we might request and discard them again multiple times from different peers even before getting a parent, which seems ineffective.
  17:08 <gzhao408> a clarification question: how sure are we that an orphan is a valid orphan as opposed to invalid? what validation has it passed before it's called a TX_MISSING_INPUTS?
  17:09 <sipa> jnewbery, adiabat: but all we can do is _announce_ in topology order; it doesn't guarantee that things are requested/received in the same order when multiple nodes are involved (as amiti points out)
  17:09 <sipa> gzhao408: none
  17:09 <sipa> syntactic validity
  17:09 <ariard> gzhao408: every check in PreChecks before hitting TX_MISSING_INPUTS, like transaction standard size or nLocktime finality
  17:09 <pinheadmz> nehan when a tx is accpted ot mempool we check it against the map of known orphans
  17:09 <pinheadmz> to see if it resolves anything
  17:10 <adiabat> maybe make a new type of INV message that has clumps, like these 4 txids should be requested at once (this is probably a bad idea; just making stuff up :) )
  17:10 <pinheadmz> with this PR, IIUC, we only resolve one orphan at a time
  17:10 <ariard> requesting/reception might be biased by your connection type and sending nodes shouldn't make assumptions on receiving peer topology
  17:10 <sipa> adiabat: there is a very long term plan for that, it's called package relay, but it's nontrivial
  17:11 <nehan> pinheadmz: yes!
  17:11 <sipa> adiabat: and it's indeed probably the only complete solution to orphan processing and a number of other issues
  17:11 <jnewbery> adiabat: we're getting a bit off-topic, but the idea you're referring to is called 'package relay'
  17:11 <factoid> gzhao408 sipa ariard, it's the case, isn't it, that we can be certain a transaction is invalid, we can't be certain it is valid -- right?
  17:11 <michaelfolkson> It depends how "expensive" certain operations are. Rejecting it and then requesting the orphan transaction again has to be compared to storing and retrieving from memory
  17:11 <pinheadmz> factoid somethings maybe like MAXMONEY etc
  17:11 <pinheadmz> but most TX checks require context (utxo set)
  17:11 <adiabat> sipa, jnewbery: oh cool that it isn't inherenly a bad idea but yes sounds complicated & getting of topic, thanks
  17:11 <nehan> what happens if the orphan we are checking is now accepted to the mempool?
  17:12 <nehan> (it has been un-orphaned)
  17:12 <willcl_ark> it waits for a parent to arrive which validates it
  17:12 <ariard> factoid: validity might change due to order of transaction reception or your block tip, but once it's a valid one it should be guaranteed to stay so
  17:12 <nehan> willcl_ark: not exactly. that's what just happened (a parent arrived that enabled us to validate it; it was valid; we put it in the mempool)
  17:13 <pinheadmz> nehan do we check the orphanmap again for more decesndants?
  17:13 <ariard> adiabat: fyi https://bitcoincore.reviews/15644.html
  17:13 <nehan> pinheadmz: yes! this newly unorphaned transaction might be a parent to _other_ orphan transactions!
  17:13 <ariard> pinheadmz: yes we do recursively call ProcessOprhanTx IIRC
  17:13 <amiti> pinheadmz: +1
  17:14 <factoid> uh oh >:)
  17:14 <factoid> the harbinger to resource exhaustion?
  17:14 <nehan> yeah so we see that we need to be careful here.
  17:15 <nehan> what happens when a node receives a transaction and it hasn't seen one or more of the transaction's inputs?
  17:15 <nehan> (basically, how do orphans get processed and saved)
  17:16 <michaelfolkson> Stored in mapOrphanTransactions
  17:16 <jnewbery> nehan: we add them to a global map, but there are also a few other data structures involved
  17:17 <nehan> michaelfolkson: jnewbery: yep
  17:17 <pinheadmz> and the map is limited to size of 100
  17:18 <sipa> and each orphan is limited in size
  17:18 <nehan> https://github.com/bitcoin/bitcoin/blob/master/src/net_processing.cpp#L916
  17:19 <pinheadmz> sipa ah didnt know - so if a tx is too big we wont even store it in the map?
  17:19 <nehan> that is done in AddOrphanTx. it ignores large orphans. the other important datastructure is mapOrphanTransactionsByPrev. how does that work?
  17:19 <pinheadmz> just a lost cause
  17:19 <pinheadmz> 100k is pretty fair i guess
  17:19 <nehan> pinheadmz: we could process it again once we get its parents
  17:20 <michaelfolkson> But removed at random rather than by lowest fee when full and also removed after 20 minutes. Was unsure why
  17:20 <michaelfolkson> 20 minutes by default, obviously can change that if you want
  17:20 <sipa> orphan transactions are primarily just a way to resolve things that are sent out of order
  17:21 <sipa> usually, when they happen, they resolve immediately
  17:21 <pinheadmz> nehan mapOrphanTransactionsByPrev is used to match new incoming TX with orphans it resolves
  17:21 <sipa> (as we request the parent)
  17:21 <nehan> i *think* that saving orphans is not required for correct operation. it's an optimization.
  17:21 <nehan> someone should check me on that
  17:21 <pinheadmz> nehan the reason a tx is orhpaned is bc we dont know the coins (prevs) its spending - so when a tx comes in that creates those prevs, we can find the orphan and REUNITE THE FAMILY
  17:21 <sipa> nehan: well that depends on how you define correct operation; ignoring every tx announcement is arguably equally correct
  17:22 <adiabat> yeah blocksonly works fine, as long as not everyone does that
  17:22 <nehan> sipa: so it would require some other way to hear about invs again
  17:22 <jnewbery> nehan: you're correct. We don't make any guarantees with orphans. They do help you get a better view of the mempool more quickly, and they may help with compact block relay
  17:22 <sipa> nehan: but sure, orphan processing is a best effort, and its primary motivation is making sure compact block reconstruction doesn't need a roundtrip because of a just-missed out of order relay
  17:22 <nehan> anyone want to answer the mapOrphanTransactionsByPrev question? Why is it a map of iterators?
  17:23 <sipa> iterators are smaller than uint256s
  17:23 <sipa> (8 vs 32 bytes)
  17:23 <aj> nehan: (map of sets of iterators)
  17:23 <nehan> sipa: yes. iterators are 8 bytes (on my platform) vs 32
  17:24 <factoid> If we don't know the the market is for transactions bidding into blocks, they other fee estimations will break -- so that's important to make sure orphans are represent in those stats
  17:24 <amiti> nehan: mapOrphanTransactionsByPrev is a map of outpoint -> to iterators, so it can point to the relevant entries on the orphan list. I thought for quicker access
  17:24 <nehan> amiti: yes that is also true! it saves you a lookup into mapOrphanTransactions. but that is limited in size to 100 so imho that's not necessarily worth optimizing
  17:25 <amiti> yeah fair
  17:25 <jnewbery> if it were txids, we'd just be using those txids to index into mapOrphanTransactions, so we just store the iterators
  17:25 <nehan> aj: yes good point
  17:25 <sipa> the point of having the byPrev map is to quickly find potential dependent orphans that can get reprocessed, without needing to walk the entire set
  17:25 <ariard> factoid: I think orphans aren't processed by fee estimator as we can't' know their validity and otherwise that would be a vector to manipulate your feerate
  17:26 <sipa> it being a map of iterators is both a size savings, and also avoids another lookup in the primary map (compared to it storing a uint256)
  17:27 <nehan> there are a couple of other interesting things to point out: if we receive an orphan from a peer, that peer is the best one to ask about the parents. cause it shouldn't be relaying a txn if it can't validate it!
  17:27 <jnewbery> important to note: mapOrphanTransactions is a std::map, so inserting or erasing elements doesn't invalidate iterators into the map (except an iterator to the erased element)
  17:28 <factoid> ariard ah yeah -- I suppose I meant if we have a delayed (maybe it's negligible amount of time) processing of a txn from orphan to valid, that'll delay a node's view of what the block-space market might be
  17:28 <nehan> jnewbery: good point
  17:28 <sipa> ariard: damn, i just realized you've responding to the nickname factoid here, instead of just dropping random factoids
  17:29 <factoid> ok fixed
  17:29 <factoid> oops
  17:29 <factoid> maybe now
  17:29 <nehan> now the fun part. Can you think of a problem with the pre-PR version of the code? What is it and how does the PR fix it?
  17:29 <sipa> factoid: nothing to fix, i was just confused :)
  17:29 <ariard> sipa: ah googling factoid :)
  17:30 <pinheadmz> nehan well i suppose another peer could be realyign the missing parent?
  17:30 <nehan> <sipa is not allowed to answer>
  17:30 <pinheadmz> but becuase we stick with the loop before moving on to the next peer's cue, we could be wasting time
  17:31 <nehan> there's something really bad with the pre-PR code
  17:31 <nehan> it helps to take a look at what the pre-PR code is iterating over vs the post-PR code
  17:31 <pinheadmz> the workQueue
  17:32 <pinheadmz> ooh i see something that adds an element back to the queue
  17:32 <nehan> yes. there is a workQueue, and what type is in the workqueue?
  17:32 <factoid> an attacker can tie up our node processing orphan decendents by stacking a big list of orphaned transaction?
  17:32 <pinheadmz> std::deque<COutPoint>
  17:32 <theStack> so basically a denial of service attack was possible?
  17:33 <nehan> pinheadmz: yes! we are iterating over outpoints. each of these outpoints *might* be an input to an orphaned transaction. in fact, it could be an input to MULTIPLE orphan transactions...
  17:33 <nehan> factoid: sort of!
  17:33 <emzy> I also think it is a anti DOS fix.
  17:33 <factoid> I was thinking the attack would be something like: create transaction{A..ZZZ}, send over transaction{B..ZZZ}, then send transaction{A} force node to iterate through the rest of the set
  17:34 <nehan> and multiple outpoints might be consumed by the same orphan
  17:34 <nehan> does anyone know the limit on # of inputs for a valid transaction?
  17:35 <pinheadmz> heh, 0xffffff ?
  17:35 <emzy> I think there is none.
  17:35 <theStack> in practice it's only limited by the total size i guess?
  17:35 <felixweis> ~ 2700 in pracise
  17:35 <sipa> an input is at least 41 vbytes
  17:35 <nehan> emzy: theStack: yes! i think it's in the thousands
  17:35 <pinheadmz> oops ffffff would be max output count i guess
  17:35 <troygiorshev> felixweis: i think you missed a 0
  17:36 <sipa> (well, 41 non-witness bytes)
  17:36 <felixweis> ok ~ 2400
  17:36 <pinheadmz> ah so even though the map is limited to 100, if all 100 of those txs have max # inputs, it can really hurt
  17:36 <nehan> each orphan could be as large as 100KB
  17:37 <nehan> but it's even worse than that!
  17:37 <nehan> in the pre-PR code, how many times might a single orphan be considered?
  17:37 <aj> pinheadmz: you mean ffffffff (8 f's, not 6) right?
  17:37 <pinheadmz> aj ty.
  17:39 <pinheadmz> nehan as many times as its input count ? per incoming potential parent
  17:39 <factoid> nehan (N^2-n)/2?
  17:39 <amiti> before the pr, the same orphan could be considered again for each output of a txn. so ~3000 times I believe?
  17:39 <nehan> hint: we're iterating over outputs. an orphan might refer to many of the outputs in the workQueue, but maybe when we consider it (for a single output) it still has unknown inputs
  17:39 <nehan> factoid: what's N and n?
  17:39 <factoid> sorry both lowercase
  17:39 <factoid> n = 100 or whatever our limit is
  17:40 <pinheadmz> so an orphan can have as many missing parents as it has inputs
  17:40 <nehan> amiti: right! an orphan might be considered # of output times in the loop! and there could be thousands of outputs...
  17:41 <pinheadmz> so for each output in work q we checked each input of each orphan
  17:41 <pinheadmz> oh no not quite bc there is amap
  17:41 <nehan> pinheadmz: close! we check each orphan that refers to this output
  17:41 <nehan> so an attacker could set up some transactions that are very bad
  17:43 <nehan> she could make 100 100KB orphans with, let's say, k+1 inputs where k is in the thousands (I don't know the exact number but if someone does feel free to chime in). Let's say k=2000
  17:43 <nehan> as a node, i'd receive those orphans and happily put them in my orphan map. note that this doesn't cost the attacker anything other than the bandwidth of sending those out
  17:44 <nehan> all those orphans take the same k inputs plus one input which is different for each orphan
  17:45 <nehan> then, the attacker crafts a transaction that will be accepted to the mempool and triggers the work of looking in the orphan map
  17:45 <thomasb06> https://bitcoin.stackexchange.com/questions/85752/maximum-number-of-inputs-per-transaction
  17:45 <nehan> thomasb06: thanks!
  17:45 <thomasb06> ;p
  17:46 <nehan> that exact number doesn't matter as much for this attack as long as it's greater than the number of max outputs
  17:47 <pinheadmz> nehan why the k+1 scheme?
  17:47 <nehan> let's say the special attacker transaction has k outputs, which are referenced by every orphan in the orphan map
  17:47 <nehan> what happens?
  17:47 <michaelfolkson> "The maximum number of inputs that can fit in a valid transaction is 27022."
  17:47 <pinheadmz> why couldnt all the orphns just have 2000 totally random prevs ?
  17:47 <nehan> pinheadmz: we're trying to craft a resource-intensive attack
  17:47 <felixweis> this feels off by an order of magnitue
  17:48 <nehan> pinheadmz: we want to make the node process all the orphans. if it's a random prev, then the special transaction won't match any orphans in the orphan map
  17:48 <sipa> felixweis: validity vs standardness
  17:48 <pinheadmz> i see
  17:48 <troygiorshev> felixweis: it's a very contrived example. no security or anything
  17:48 <felixweis> oh if its a mined block. not having to adhere to isStandard()
  17:48 <pinheadmz> do we call acceptToMemoryPool on every single input match?
  17:48 <nehan> felixweis: yeah i'm not trying to get my attack transactions mined in a block, just accepted to the mempool! important note, thanks
  17:49 <pinheadmz> so an orphan with 1000 inputs. a single parent comes in. we call ATMP(orphan) which fails bc the other 999 are still missing... ?
  17:50 <nehan> i actually wrote a test to conduct this attack here, it might be easier to look at that: https://github.com/narula/bitcoin/blob/pr15644test/test/functional/p2p_orphan.py#L69
  17:50 <nehan> * wrote with amiti!
  17:50 <michaelfolkson> Oh wow
  17:50 <amiti> :)
  17:50 <emzy> Hacker ;)
  17:51 <gzhao408> but now we also don't consider orphans bigger than MAX_STANDARD_TX_WEIGHT right?
  17:51 <gzhao408> oh or this is still within that restriction
  17:51 <nehan> the key thing to notice is that the pre-PR version of the code did two things: 1) it looped through all the outputs and orphans without stopping and 2) it would potentially process one orphan many times
  17:52 <gzhao408> ok yeah sorry, kinda off topic :sweat_smile:
  17:53 <nehan> gzhao408: not at all! you are right it checks that before adding something to the orphan map: https://github.com/bitcoin/bitcoin/blob/master/src/net_processing.cpp#L930
  17:54 <pinheadmz> when you say loop thgouh all the orphans, you mean as a result of mapOrphanTransactionsByPrev.find() ?
  17:54 <nehan> ok so in the worst case, an attacker could make a node process 100*100KB*orphans*k amount of data. if k=2000, that's 20 GB!!!
  17:55 <nehan> oops 100 orphans * 100 KB max orphan size * k outpoints
  17:55 <pinheadmz> while (!vWorkQueue.empty()) I understand loops through all the new outputs generated by valid incoming (potential) parents right?
  17:56 <nehan> pinheadmz: yes! that lookup might return every orphan in the orphan map: https://github.com/bitcoin/bitcoin/pull/15644/commits/9453018fdc8f02d42832374bcf1d6e3a1df02281#diff-eff7adeaec73a769788bb78858815c91L2388
  17:56 <michaelfolkson> This test will be opened as a separate PR to Core to check that orphan processing is actually interruptible right?
  17:56 <pinheadmz> ohhhhhhh light bulb! thank you
  17:56 <nehan> pinheadmz: answering your previous question about mapOrphanTransactionByPrev.find()
  17:56 <lightlike> how long would that take in the worst case? Is that the runtime of the test you linked?
  17:57 <jnewbery> and the attack is almost free for the attacker. They have to provide 10MB of transactions, but apart from the parent, none of those need to actually be valid, so they don't cost anything
  17:57 <michaelfolkson> Plus we should chat NOINPUT/ANYPREVOUT. aj is here ;)
  17:57 <nehan> lightlike: the test case hung for a long time and crashed, so i'm not sure how long it takes! good question!
  17:58 <pinheadmz> so, sipa -- was this PR a stealthy fix?
  17:58 <nehan> oh right shoot. last question: how is orphan processing affected by SIGHASH_NOINPUT / etc?
  17:58 <nehan> michaelfolkson: thank you!
  17:59 <michaelfolkson> We can expect there to be many more orphans right? And delays in processing "justice transactions" with eltoo will be problematic for Lightning
  17:59 <ariard> in fact that's just a sighash flag so tx parent can be fulfilled by initial sender to favor propagation
  17:59 <nehan> i don't totally know the answer to this question, except that it's helpful right now that you can't create an orphan without first creating its parent. SIGHASH_NOINPUT changes that.
  17:59 <jnewbery> mapOrphanTransactionsByPrev is no longer enough, since a SIGHASH_NOPREVOUT input doesn't refer to a single outpoint
  17:59 <pinheadmz> nehan oh dang hadn't thought of this - do we actually need to check all the scripts? I would think that sighashnoinput just shouldnt be allowed to be an orphan
  17:59 <ariard> michaelfolkson: not only justice txn almost all LN txn are time-sensitive
  17:59 <sipa> nehan: i don't see how that is related
  17:59 <sipa> you still need the parent to construct the tx
  17:59 <sipa> you don't need the parent to construct a signature
  18:00 <sipa> but does that change anything materially?
  18:00 <nehan> sipa: i might be getting this wrong. but the orphan tx doesn't necessarily commit to one parent
  18:00 <jnewbery>
  18:00 <ariard> nehan: I think you're blurring p2p relay and script validation here
  18:00 <nehan> er, one output. it could be satisfied by many outputs!
  18:00 <pinheadmz> oh the sig doesnt cover the txid of the coin being spent
  18:00 <pinheadmz> but it is still there in teh tx message
  18:00 <ariard> your noinput tx can still include an outpoint at tx-relay
  18:00 <nehan> ok wrapping up. but feel free to continue the conversation here, i woudl love to understand the last question better
  18:00 <sipa> nehan: a tx's input still refers to explicit txids being spent from
  18:00 <michaelfolkson> Indeed ariard. Assuming CLTVs aren't really high right? If they are non-justice transactions aren't time pressured
  18:01 <nehan> thanks everyone!
  18:01 <troygiorshev> thanks nehan!
  18:01 <pinheadmz> 👏👏👏
  18:01 <theStack> thanks to nehan and everyone
  18:01 <factoid> thanks nehan
  18:01 <thomasb06> thanks
  18:01 <lightlike> thanks!
  18:01 <emzy> Thanks nehan and everyone else. I learned much!
  18:01 <ariard> michaelfolkson: would say no, with current deployed timelocks it's easier to target CLTV ones than justice CSV ones
  18:02 <felixweis> thanks
  18:02 <jnewbery> ariard sipa: ah you're right. The tx does still refer to the outpoint. It's just the sighash that changes. Sorry nehan - I got myself confused when we talked about this earlier.
  18:02 <jnewbery> thanks for hosting nehan. Great meeting!
  18:02 <michaelfolkson> Interesting... can you explain why ariard?
  18:02 <nehan> jnewbery: np! helpful to understand this better
  18:02 <michaelfolkson> Thanks nehan!
  18:03 <nehan> so nothing would need to change because the orphan still refers to a referrable parent?
  18:03 <sipa> yes
  18:03 <ariard> michaelfolkson: https://arxiv.org/pdf/2006.01418.pdf, look on 4.5 A3, that's lowest timelock and that's a CLTV one
  18:04 <ariard> nehan: yes tx refers to a parent but the sigs in its witness doesn't commit to this parent
  18:04 <nehan> right, ty. so the transaction could be malleated to refer to invalid inputs, making it an orphan?
  18:04 <pinheadmz> sipa is there a stealthy security story around this PR? since the attack is so cheap? The PR description isnt too explicit in this sense
  18:04 <ariard> but matt as some kind of blind package relay proposal which would reintroduce this issue
  18:04 <sipa> pinheadmz: i'd rather not comment at this point
  18:05 <pinheadmz> ok
  18:05 <jnewbery> oh, if anyone is in the mood for more orphan handling code, there's a small PR here: https://github.com/bitcoin/bitcoin/pull/19498 which fixes up some of the loose ends from this PR.
  18:05 <ariard> nehan: depends by whom, an infrastructure attacker yes, but why a honest relay node would do this??
  18:05 <ariard> and it shouldn't be pinned in recentRejects, malleating inputs change txid ofc
  18:08 <emzy> Is this only an optimation? chnaging -maxorphantx to a very low number (3) would not hurt the network?
  18:09 <emzy> The whole caching of orphans
  18:09 <sipa> it'd potentially affect block propagation
  18:10 <emzy> Is it not only relevant for the mempool?
  18:10 <sipa> compact block reconstruction depends on having the block's transactions ahead of time
  18:10 <sipa> in the mempool, or "extra pool" (which is indirectly populated by orphans too)
  18:10 <emzy> Ok. I see.
  18:11 <emzy> can result in slower block propagation
  18:11 <emzy> tnx
  18:12 <sipa> and it's really an all or nothing thing- to get the best propagation speed, a node needs to have _all_ the transactions
  18:12 <sipa> otherwise an additional roundtrip is needed
  18:12 <emzy> but how could be an orphan included in a block?
  18:13 <sipa> because the miner has the parent, and you don't
  18:13 <emzy> ok but then you need the additional roundtrip anyway
  18:14 <sipa> i think you're confused, but i don't know about what :)
  18:14 <aj> nehan: SIGHASH_NOINPUT doesn't matter for orphan processing -- you don't look at validating the sigs until you've got all the parents. what it means is you could take a tx with a parent [txid,n] and change it to a different parent [txid',n'] while keeping the same sig, but that would produce a different transaction with different txid and wtxid, and it may not even be a doublespend at that
  18:14 <aj> point, so it's really just another tx
  18:15 <emzy> sipa: ok. I will try to figure it out myself. Good for lerning :)
  18:15 <sipa> emzy: the orphan pool helps with tx propagation- without out, we'll just miss certain transactions for slightly longer
  18:15 <sipa> missing a transaction is bad if it is included in a block, because that means an additional roundtrip at _block propagation_ time
  18:15 <emzy> oh, now I get it!
  18:15 <sipa> and latency of block propagation is critical; for transactions it isn't
  18:17 <emzy> I got it. Tnx again.
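The worst-case arithmetic nehan walks through in the log can be sketched as a back-of-the-envelope calculation. The figures below come from the discussion (100 orphans at the default limit, 100KB max orphan size, and k ≈ 2,000 outputs on the attacker's parent transaction; k is approximate):

```python
# Back-of-the-envelope worst case for the pre-PR orphan-processing loop.
MAX_ORPHANS = 100          # default -maxorphantx
MAX_ORPHAN_SIZE = 100_000  # bytes; larger orphans are ignored by AddOrphanTx
K_OUTPUTS = 2_000          # outputs on the attacker's parent tx (approximate)

# Pre-PR: the work queue held *outpoints*. Each output of the accepted parent
# was queued, and every orphan spending that outpoint was re-validated, so a
# single orphan could be reconsidered once per matching outpoint.
worst_case_bytes = MAX_ORPHANS * MAX_ORPHAN_SIZE * K_OUTPUTS
print(worst_case_bytes)  # 20_000_000_000 bytes, i.e. the ~20 GB from the log

# Post-PR: the queue holds orphan txids and, per the discussion, only one
# orphan is resolved at a time, so processing is bounded and interruptible.
```

This matches the 100 orphans × 100 KB × k outpoints figure from the meeting; the fix caps the work per message-processing pass rather than letting one parent transaction trigger the whole quadratic blowup at once.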