Make stalling timeout adaptive during IBD (p2p)

https://github.com/bitcoin/bitcoin/pull/25880

Host: glozow  -  PR author: mzumsande

The PR branch HEAD was 48e5385 at the time of this review club meeting.

Notes

  • During Initial Block Download (IBD), after downloading the full headers chain, nodes have a BLOCK_DOWNLOAD_WINDOW=1024-block window within their best chain in which blocks are downloaded in parallel.

    • Blocks still need to be validated sequentially, since their validity depends on all outputs they spend being confirmed in previous blocks. When a node receives a block, it calls ActivateBestChain(), which attempts to connect the block to the current tip and validate it.

    • The node may have up to MAX_BLOCKS_IN_TRANSIT_PER_PEER=16 in-flight block requests to each peer, and it never requests the same block from more than one peer at a time.

    • It only downloads blocks from outbound peers, unless that's not possible.

  • Peers cannot be trusted to always serve correct data in a timely manner. The node tries to detect whether block download is stalling based on the following criteria (see the sketch after these notes):

    • The node is unable to connect a new chain of blocks past the current tip, e.g. if the tip is at height i and blocks [i+2: i+1024] have arrived, but block i+1 hasn’t.

    • The node cannot make any more requests; all of the 1024 blocks have already been received or are currently being requested.

    • Hint: Try not to confuse the block download stalling logic with headers sync timeout logic.

  • Once the node detects that it is being stalled, it starts a stalling_since timer and gives the peer by which it is “bottlenecked” two more seconds to fulfil the outstanding block request(s) before disconnecting it. Then, it connects to a new peer and continues requesting the blocks it needs.

  • However, the node will still consider itself to be stalled because the criteria are still met; the new peer becomes the “bottleneck”. On master, this peer is also given only 2 seconds to fulfil the block request before disconnection. This is a problem: if our internet connection is very slow, we might end up needlessly churning through peers, thinking they are “stalling” us.
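To make these criteria concrete, here is a standalone sketch of the detection logic, heavily simplified from FindNextBlocksToDownload() in net_processing.cpp (the types and function here are illustrative, not the actual implementation): download is stalled when nothing is left to request in the window, yet the first block past the tip is still in flight; the single peer serving that block is the staller.

```cpp
// Standalone sketch of the stalling criterion (illustrative only).
#include <array>
#include <optional>

constexpr int BLOCK_DOWNLOAD_WINDOW = 1024;

enum class Slot { Needed, InFlight, Received };

// Returns the id of the "staller" peer if block download is stalled:
// every block in the window is already received or in flight (nothing
// left to request), but the first block past the tip (index 0) has not
// arrived, so the one peer it was requested from blocks all progress.
std::optional<int> DetectStaller(
    const std::array<Slot, BLOCK_DOWNLOAD_WINDOW>& window,
    const std::array<int, BLOCK_DOWNLOAD_WINDOW>& assigned_peer)
{
    for (const Slot s : window) {
        if (s == Slot::Needed) return std::nullopt; // still work to hand out
    }
    if (window[0] == Slot::InFlight) {
        return assigned_peer[0]; // only this peer holds the window back
    }
    return std::nullopt;
}
```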

Questions

  1. Without looking at the PR, what solutions would you consider? Feel free to be creative.

  2. Did you review the PR? Concept ACK, approach ACK, tested ACK, or NACK?

  3. Under what circumstances does a node consider block download to be stalling?

  4. What problem is this PR trying to address? How might you be able to observe this issue, and how common could it be?

  5. If a node has such a slow internet connection that it cannot download blocks in less than 2 seconds, will it be continuously stalled and churning through peers? Why or why not?

  6. What approach does this PR take? Is it an appropriate solution?

  7. Why use a std::atomic<std::chrono::seconds> for m_block_stalling_timeout?

Meeting Log

  17:00 <stickies-v> #startmeeting
  17:00 <larryruane_> hi!
  17:00 <alecc> hi
  17:00 <amovfx> hello
  17:01 <stickies-v> welcome everyone! Unfortunately, glozow isn't able to host the meeting today so dergoegge and I will be guiding you through the wonderful world of block download stalling
  17:01 <stickies-v> two hosts for the price of one!
  17:01 <amovfx> osom
  17:01 <dergoegge> Hi!
  17:01 <adam2k> 🎉
  17:02 <lightlike> hi
  17:02 <stickies-v> the PR we're looking at is #25880, authored by lightlike (mzumsande), who luckily is here as well! The notes and questions are available on https://bitcoincore.reviews/25880
  17:02 <stickies-v> anyone joining us for the first time today? even if you're just lurking, feel free to say hi!
  17:03 <brunoerg> hi
  17:03 <michaelfolkson> hi
  17:03 <kouloumos> hi
  17:05 <stickies-v> who got the chance to review the PR or read the notes? (y/n) And for those that were able to review it, would you give it a Concept ACK, approach ACK, tested ACK, or NACK?
  17:05 <sipa> hi
  17:05 <amovfx> y
  17:05 <amovfx> concept ack, tested ack
  17:05 <alecc> y
  17:06 <larryruane_> review 0.5, at least concept ACK
  17:06 <alecc> concept ack, didn't get a chance to test
  17:06 <yashraj> y
  17:06 <dergoegge> Concept ACK, did a rough first pass on the code
  17:06 <brunoerg> concept ack
  17:06 <stickies-v> amovfx: awesome that you tested it too - could you share a bit about how you did that?
  17:07 <adam2k> concept ACK
  17:07 <BlueMoon> Hello!!
  17:07 <amovfx> I just got the branch and ran the functional test
  17:07 <amovfx> maybe that doesn't count?
  17:07 <amovfx> would a proper test be me creating my own test?
  17:08 <stickies-v> alright that's a great start! usually for tested ack we mean that we've done testing that goes beyond the functional tests though, because the CI runs the functional tests anyway - so them failing would get picked up pretty quickly
  17:09 <amovfx> ah okay, good to know
  17:09 <stickies-v> what "testing" means depends on the PR, your understanding of it, and how creative you can get in trying to poke holes in it. so there's no one definition. In general, it's also good to describe on your review how you tested something, so people can replicate and/or test other angles
  17:09 <amovfx> great tip, ty
  17:09 <lightlike> what I did to test this at some point was to locally reduce MAX_BLOCKS_IN_TRANSIT_PER_PEER=2 and BLOCK_DOWNLOAD_WINDOW=16 (or similar), and run with -onlynet=tor - that way, I would get into stalling situations very soon.
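(For reference: those two constants are defined in src/net_processing.cpp. A local patch along the lines lightlike describes might look roughly like this; the reduced values are the ones from the chat, and the declarations are paraphrased rather than copied from master.)

```cpp
// src/net_processing.cpp -- locally reduced values, for testing only.
// Shrinking these makes the node enter stalling situations much sooner.
static const int MAX_BLOCKS_IN_TRANSIT_PER_PEER = 2;  // default: 16
static const unsigned int BLOCK_DOWNLOAD_WINDOW = 16; // default: 1024
```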
  17:09 <amovfx> I'm going to have to get a lot better before I can do that
  17:09 <larryruane_> I'm running the PR now, and one fun thing to do is, during IBD, `bitcoin-cli getpeerinfo` and you can see how the outstanding block requests are more or less interleaved among the peers (no 2 peers downloading the same block)
  17:10 <amovfx> what network did you do that on lightlike? reg or test?
  17:10 <larryruane_> to get a simplified view: `bitcoin-cli getpeerinfo | jq '.[]|.inflight'`
  17:11 <dergoegge> Alright lets get started with the first question: Without looking at the PR, what solutions would you consider? (Feel free to be creative)
  17:11 <lightlike> amovfx: I did it on mainnet (to have blocks that are large and take more than 2s to download)
  17:11 <amovfx> ty
  17:11 <amovfx> I think Thompson sampling would be good for something like this
  17:11 <amovfx> prolly way overkill though
  17:11 <amovfx> https://en.wikipedia.org/wiki/Thompson_sampling
  17:12 <amovfx> Helps with multi-armed bandit problems
  17:13 <dergoegge> amovfx: haven't heard about that before, is there a simple explanation of how that would apply to our problem here?
  17:13 <stickies-v> oh dear, I didn't know we had armed bandits on the bitcoin network, let alone multi-armed
  17:13 <larryruane_> there's a great podcast on this topic, featuring one of our esteemed attendees here today! https://podcast.chaincode.com/2020/01/27/pieter-wuille-1.html
  17:14 <larryruane_> (podcast isn't about Thompson sampling, it's about IBD, initial block download)
  17:14 <alecc> some clarification on the PR, why does it keep stalling on newly connected peers after only one stalls? i.e. is it intentional to assume new peers are stalling? i was thinking a possible alternative to adjusting the stall window would be to change this assumption? maybe i'm misinterpreting some part
  17:14 <amovfx> I think it would help with selecting nodes that have good behavior, it randomly samples and starts building a confidence metric on the nodes that are returning what you want, in this case blocks
  17:15 <amovfx> so a node that stalls out would have a drop in confidence, one that doesn't gains increased confidence
  17:15 <amovfx> the higher the confidence, the more you sample from that node
  17:16 <larryruane_> alecc: I think you've hit upon an alternate design idea, maybe when we hit this stall situation, temporarily increase the 1024 window ... then somehow reduce it gradually
  17:16 <larryruane_> amovfx: really interesting, thanks
  17:17 <stickies-v> dergoegge: perhaps one alternative solution would be, instead of disconnecting the peer, to allow one (or more) extra peers to download the same block that's being stalled? that would increase bandwidth usage though, so probably less preferable than what this PR proposes
  17:17 <stickies-v> larryruane_: alecc interesting idea. you'd still have to eventually force something if the peer keeps not sending the block until infinity though, I think?
  17:18 <dergoegge> stickies-v: yea that might work in some cases, but if you are on a slow connection it would probably make things worse
  17:18 <amovfx> yea, this PR helps when the operator happens to have a bad connection, correct?
  17:18 <lightlike> I think if we have a peer that is significantly slower than the rest of the peers, such that it stalls the download, we probably want to disconnect it and exchange it for a better peer. It's not like we want to keep our current peers at all costs.
  17:19 <dergoegge> a really simple alternative to the PR is to just increase the stalling timeout instead of making it adaptive
  17:19 <stickies-v> (the question didn't state to come up with GOOD solutions :-D )
  17:19 <larryruane_> well, I guess I was thinking, when we hit the stall condition, still boot that peer (as we do today), but when we then connect the replacement peer, have the window be larger ... (not sure how to shrink it again)
  17:19 <amovfx> Yea, it seems like this PR would find the fastest peers faster
  17:20 <amovfx> this would converge on a stall time, correct?
  17:20 <stickies-v> alright I'll launch the next question already, but as always - feel free to continue talking about previous/side points. we're async whoo
  17:20 <stickies-v> under what circumstances does a node consider block download to be stalling?
  17:20 <larryruane_> I think that's answered in the notes? (maybe there's more to it)
  17:21 <stickies-v> in your own words, maybe? quick summary? always helpful to distill info!
  17:21 <amovfx> if it hits the timeout?
  17:21 <stickies-v> what's it? what's the timeout?
  17:21 <yashraj> block window does not move?
  17:22 <larryruane_> stickies-v: yes, of course, sorry! :) ... one peer is holding us back from making progress
  17:22 <amovfx> +1 yashraj
  17:22 <larryruane_> so if it wasn't for the ONE peer, we'd be able to jump ahead 1024 blocks
  17:22 <alecc> for the actual code, when it's processing a getdata message in `net_processing.cpp`, in `FindNextBlocksToDownload` it sets the stalling bit, and then sets `m_stalling_since` to current time based on that
  17:23 <alecc> and then checks `m_stalling_since` with the current time - timeout to determine whether to disconnect
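(A standalone sketch of the two steps alecc describes; the m_stalling_since field mirrors the per-peer download state in net_processing.cpp, but the surrounding types and functions here are simplified for illustration.)

```cpp
#include <chrono>

using namespace std::chrono_literals;

// Simplified per-peer download state (the real field lives in the
// per-peer state in net_processing.cpp).
struct PeerDownloadState {
    std::chrono::microseconds m_stalling_since{0us};
};

// Step 1: when block download marks this peer as the staller, start the
// timer, but only if it is not already running.
void MarkStalling(PeerDownloadState& state, std::chrono::microseconds now)
{
    if (state.m_stalling_since == 0us) state.m_stalling_since = now;
}

// Step 2: on later passes, disconnect the peer once the timer has been
// running longer than the stalling timeout (a constant 2s before this PR).
bool ShouldDisconnect(const PeerDownloadState& state,
                      std::chrono::microseconds now,
                      std::chrono::seconds timeout)
{
    return state.m_stalling_since > 0us &&
           now > state.m_stalling_since + timeout;
}
```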
  17:23 <stickies-v> larryruane_: yes, except that there could be more than one stalling peer
  17:24 <larryruane_> I did have a question about the text in the notes tho, which is "The node is unable to connect a new chain of blocks past the current tip, e.g. if the tip is at height i and blocks [i+2: i+1024] have arrived, but block i+1 hasn’t." .... my question is, couldn't there be other missing blocks in [i+2: i+1024] BUT, all those blocks are assigned to the same peer (as block i+1 is assigned to)? ... small difference
  17:24 <stickies-v> or rather, more than one peer that hasn't yet delivered all blocks in time
  17:25 <yashraj> rookie question - what does tip mean exactly? highest validated block?
  17:25 <stickies-v> yashraj: yeah exactly the window doesn't move. To add to that: we've *requested* all blocks in the window, but we've not fully received all blocks in the window yet
  17:25 <amovfx> yashraj: yea
  17:26 <larryruane_> stickies-v: "more than one peer" ... okay I definitely know less than I thought! (not usual!)
  17:26 <lightlike> larryruane_: yes, that is possible (and probably the most usual case) that a couple of blocks from the 1024 window were assigned to the staller.
  17:26 <stickies-v> larryruane_: yeah I think there could be more missing blocks after that, but would they all need to be assigned to the same peer?
  17:27 <larryruane_> stickies-v: well I thought if they weren't all assigned to the same peer, then we wouldn't be considering ourselves stalled?
  17:27 <sipa> lightlike: Agree, there is no reason to keep our peers at all costs, though this is dependent on the fact that this is only relevant during IBD, when we mostly care about bandwidth and less about other properties (like partition resistance, latency, ...). After IBD, we're much more careful about when to disconnect.
  17:28 <amovfx> that's neat, like IBD is a state of the node
  17:28 <stickies-v> larryruane_: interesting, I thought we did. lightlike, what do you think?
  17:29 <yashraj> stickies-v: if i+1 arrived but some of the rest have not, we still consider it stalled?
  17:29 <sipa> And because I see people talking about increasing the 1024 window: I don't think that helps. For most of IBD, 1024 blocks is way too much; that value was set at a time when blocks were tiny. It still makes sense early in the chain, but arguably we should be shrinking it.
  17:29 <larryruane_> amovfx: yes, and just as a side note, a node *always* starts in IBD state, but if it's synced with the block chain (or nearly so) then it's only in IBD momentarily
  17:29 <amovfx> ty
  17:29 <lightlike> larryruane_, stickies-v: we consider ourselves stalled if we have a peer with an open slot to which we want to assign a block, but can't because all of the blocks in the 1024-block window are already either present or assigned to someone else. So yes, I think it's possible to have multiple stallers.
  17:30 <stickies-v> ty for clearing it up!
  17:30 <larryruane_> lightlike: excellent explanation, thanks
  17:30 <dergoegge> Next question: What problem is this PR trying to address? How might you be able to observe this issue, and how common could it be?
  17:31 <larryruane_> very basically, we disconnect peers who are not necessarily that bad! so lots of churn, we stall more than we should
  17:31 <alecc> yashraj: i think it's the opposite - if all but i+1 arrived (i+2 -> i+1024) and we haven't seen i+1 for a certain amount of time we consider ourselves stalled, i guess i'm not super sure exactly what happens if i+1 arrives and then the others are taking a while
  17:31 <amovfx> I think previously we could fill up with really slow peers
  17:31 <amovfx> and we are slowed for a long time
  17:31 <amovfx> oh nm
  17:32 <sipa> lightlike: The staller is the peer we have requested the first block in the window from, so there can only be one at a time.
  17:32 <sipa> (because that's the one that prevents the window from moving forward)
  17:32 <larryruane_> you can observe this issue by looking at `debug.log` for "disconnecting" messages during IBD
  17:32 <alecc> dergoegge: i saw in the pr notes it mentioned using tor was one way to observe, as one example of a situation with slower network speeds
  17:32 <amovfx> larryruane_: so prior to this PR, there would be far more disconnecting peers
  17:33 <dergoegge> alecc: yes! if you run a node on a slow network you will likely run into the issue the PR is trying to address
  17:33 <larryruane_> amovfx: yes that's my understanding
  17:34 <lightlike> sipa: yes, that is true, I meant that it could be a possible situation that multiple peers are actually stalling us (withholding blocks from us), that will get marked as stallers one after another - but only one at the same time.
  17:34 <dergoegge> larryruane_: i think the bigger issue is having a slow connection yourself and then churning through peers, not necessarily individual slow peers
  17:36 <larryruane_> dergoegge: yes because that 2 seconds for the new (replacement) peer to respond might always be exceeded if our own network is slow!
  17:36 <amovfx> so this would be an attack vector too, a bad actor could eclipse a new node and just stall sending blocks
  17:36 <lightlike> I ran into this when I attempted IBD earlier this year on a slow connection with debug=net enabled - the log consisted of minutes of peers being connected and disconnected for stalling.
  17:37 <larryruane_> is there an easy way to artificially give your node a slow connection?
  17:37 <larryruane_> (for testing)
  17:37 <dergoegge> amovfx: would you say that the PR makes that attack vector worse?
  17:38 <amovfx> I don't have the understanding to say whether it does or if it could be exploited. I think it would make it more secure as we have an adaptive timeout, so the attacker could use the max timeout consistently
  17:38 <amovfx> my intuition tells me this is an improvement
  17:39 <alecc> dergoegge: it seems the PR is generally leaning towards assuming some nodes are not stalling, so I'd imagine if you were eclipsed, by extending the allowed stall window it would take longer to maybe escape the eclipse, i'm thinking
  17:39 <stickies-v> larryruane_: perhaps one option is to give all the peers of your node (assuming you control them in your test setup) a low `-maxuploadtarget`?
  17:39 <dergoegge> amovfx: you can do much worse than stalling IBD if you are able to eclipse someone, so i think this is not really a concern
  17:39 <amovfx> cool, good to know
  17:39 <alecc> that makes sense
  17:40 <stickies-v> next question: before this PR, if a node has such a slow internet connection that it cannot download blocks in less than 2 seconds, will it be continuously stalled and churning through peers? Why or why not?
  17:40 <lightlike> alecc: I would say this PR leans towards assuming not __all of our peers__ are stalling. The first staller will be disconnected as fast as before, just subsequent peers will be given more time if that happened.
  17:41 <lightlike> so yes, you are right.
  17:41 <amovfx> stickies: I believe so, because all peers hit the max timeout window of 2s
  17:41 <larryruane_> stickies-v: I would say no, because if our connection to ALL peers is more or less uniformly slow, then we wouldn't get into this situation where we're waiting for only ONE peer
  17:42 <alecc> lightlike: makes sense, good to clarify that
  17:43 <stickies-v> larryruane_: the uniformly slow is a pretty specific assumption though
  17:43 <stickies-v> in a window of 1024 blocks and 16 blocks per peer, I'd say at the end of the window there's likely quite a bit of variation?
  17:43 <larryruane_> stickies-v: sorry, I meant if *our* network is slow, then all of our peers would look slow to us
  17:44 <lightlike> but not equally slow. some peers might be even slower than we are in this situation.
  17:44 <amovfx> larryruane_: that's what I thought this PR would fix, if the operator is on potato internet, they get more time to connect
  17:44 <larryruane_> to be clear, I was trying to say why, even without this PR, we *wouldn't* continuously stall and churn, that we *would* make progress (but i'm not entirely sure)
  17:45 <stickies-v> yeah, but still I think there would be enough variation to generally end up with at least one peer being marked as stalling
  17:45 <yashraj> if we increased the timeout to 4s, downloaded the block, and so reduced it to 2s again, wouldn't we likely get stalled again?
  17:45 <stickies-v> so in every case but the one where we have virtually identical download timings across our peers, would you say we'd keep stalling and churning peers?
  17:46 <yashraj> sorry if I side-tracked a bit there
  17:47 <lightlike> yashraj: That's a great question! I think it's possible, but we would have made some progress along the way, connected some blocks and moved the window forward a bit.
  17:47 <dergoegge> yashraj: the current state of the PR decreases the timeout by a factor of 0.85 not 0.5 (the PR description should be updated)
  17:48 <lightlike> if we have a slow connection, that doesn't mean we are more likely to get into stalling situations - for that to happen, we need some peers that are much slower compared to others.
  17:48 <alecc> yashraj: to elaborate on what lightlike said, the stall timeout also decreases at a slower rate than it increases, so it works against the situation where you just flip back and forth between increasing the window and then decreasing it
  17:48 <stickies-v> if we can't download any block in < 2 seconds, then as soon as we end up with one stalling peer, we would disconnect that peer. afterwards, we would give every new peer 2 seconds to catch up, but that would never happen, so we would be stuck forever I think?
  17:48 <yashraj> dergoegge: oh yeah sorry forgot about that comment
  17:49 <lightlike> but if we have a slow connection, we would have a hard time getting out of a stalling situation.
  17:49 <larryruane_> lightlike: yes I agree, remember that the 2s (or with the PR, can be greater) timeout comes into play ONLY if we've entered this "stalled" condition ... with similarly slow peers (or like if our own network connection is slow), we may never enter that state
  17:49 <larryruane_> lightlike: ah interesting observation
  17:49 <yashraj> great point alecc: and dergoegge: thanks
  17:49 <stickies-v> bonus question: are there any (edge) cases where we wouldn't detect we have a stalling node?
  17:50 <alecc> someone might've answered this already, but when considering whether we're stalling, does it consider the download times of all other peers (compares if one is relatively slower)?
  17:50 <larryruane_> alecc: i don't think so
  17:50 <stickies-v> no
  17:50 <amovfx> I can't see one
  17:51 <dergoegge> alecc: that could have been an alternative to the approach of the PR
  17:51 <lightlike> alecc: this data isn't currently tracked. It would be a (more complicated) alternative approach to gather this data, and have some algorithm based on that.
  17:51 <larryruane_> stickies-v: on the bonus question, you mean with the PR?
  17:52 <yashraj> can a peer serve other blocks within 2s but not the i+1?
  17:52 <stickies-v> larryruane_: the edge case I had in mind applies to both with and without this PR, but either would be interesting to hear!
  17:52 <alecc> dergoegge: lightlike: makes sense
  17:53 <dergoegge> yashraj: yes but the stalling timeout is only reset if a requested block is received
  17:54 <larryruane_> stickies-v: that's a toughie .. i can't think of it
  17:55 <stickies-v> I think if all of our peers are just not sending us any blocks at all (probably malicious, or some serious technical issues), then we would just not make any progress at all, and no single node would be marked as stalling
  17:55 <stickies-v> it's a very unlikely scenario though, just... theoretical edge case
  17:55 <amovfx> if you are eclipsed, that could happen
  17:55 <larryruane_> good one!
  17:56 <alecc> stickies-v: would churning through not eventually find an honest node? or in the case that you're eclipsed, there's not much you could even do right
  17:56 <alecc> or i guess does churning allow you to repeat connections to the same nodes (that would never be marked stalling)?
  17:57 <dergoegge> stickies-v: we have a separate timeout for this case (Hint: have a look at m_downloading_since), after which we disconnect those peers.
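(For context on the m_downloading_since hint: each peer with blocks in flight is also subject to a general download timeout that scales with the number of other peers we are downloading from. The following standalone sketch reconstructs that check from memory of net_processing.cpp, so the constants and formula should be treated as approximate.)

```cpp
#include <chrono>
#include <cstdint>

// Approximate reconstruction: the timeout is measured in 10-minute block
// intervals -- a base of 1 interval plus 0.5 per other peer from which we
// are also validating downloads.
constexpr int64_t BLOCK_DOWNLOAD_TIMEOUT_BASE = 1;
constexpr double BLOCK_DOWNLOAD_TIMEOUT_PER_PEER = 0.5;

bool BlockDownloadTimedOut(std::chrono::seconds now,
                           std::chrono::seconds downloading_since,
                           int other_peers_with_downloads)
{
    const double intervals = BLOCK_DOWNLOAD_TIMEOUT_BASE +
        BLOCK_DOWNLOAD_TIMEOUT_PER_PEER * other_peers_with_downloads;
    const auto timeout =
        std::chrono::seconds{static_cast<int64_t>(600 * intervals)};
    return now > downloading_since + timeout;
}
```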
  17:57 <dergoegge> Last question: What approach does this PR take? Is it an appropriate solution?
  17:57 <yashraj> can we run out of peers by churning? like for outbound-only?
  17:58 <stickies-v> alecc: so we wouldn't disconnect them for stalling, but luckily as dergoegge points out there are other mechanisms that disconnect nodes when they're not sending us anything - and hopefully eventually we'd stumble upon a node that's able to send us something! that second mechanism is slower than the stalling disconnect, though
  17:58 <amovfx> dergoegge: adding an adaptive timer? I think it is an appropriate solution.
  17:58 <dergoegge> yashraj: your node will make new outbound connections continuously, so running out is not really possible i think
  17:58 <amovfx> Sure hope we get to that atomic question
  17:58 <alecc> stickies-v: ahh
  17:59 <amovfx> my concurrency game is weak, I need that explained
  17:59 <lightlike> yashraj: unlikely, if we know about enough peers in our addrman. also, I don't think we ban these peers, so nothing prevents us from re-connecting to a previously disconnected peer.
  17:59 <dergoegge> amovfx: yes, how does it adapt though?
  17:59 <amovfx> when a peer is dropped, the disconnect time increases
  17:59 <amovfx> for the next peer
  17:59 <amovfx> ?
  18:00 <larryruane_> dergoegge: "Last question" -- we still disconnect the peer that's stalling us, as before, but we increase the 2s timeout for the replacement peer .. and increase again if it happens again, etc. (exponentially)
  18:00 <yashraj> thanks dergoegge: and lightlike:
  18:00 <dergoegge> larryruane_ right!
  18:00 <amovfx> and if a peer gives a block, the window shrinks?
  18:00 <alecc> amovfx: from what i could find, using std::atomic allows for some atomic operations that allow different threads to edit memory at the same time without unexpected behavior (compare_exchange_strong is used in the pr for this reason). i think this is easier than using another mutex/locking? there's probably more to it
  18:00 <amovfx> window = timeout window
  18:01 <dergoegge> amovfx: the timeout shrinks (by a factor of 0.85) if a new block is connected
  18:01 <amovfx> rgr
  18:01 <amovfx> alecc: ty
  18:02 <stickies-v> alecc: you're right that we use std::atomic to protect against UB during threading, but std::atomic doesn't allow multiple threads to edit memory at the same time - it just prevents data races from happening by using atomic operations, i.e. makes sure that one thread waits for the other one to finish before proceeding
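(To make the atomics discussion concrete, here is a standalone sketch of the adaptive timeout using std::atomic and compare_exchange_strong. The 2s default, the doubling on disconnect, and the 0.85 shrink factor follow the discussion above; the 64s cap and the surrounding structure are illustrative assumptions, not the PR's exact code.)

```cpp
#include <algorithm>
#include <atomic>
#include <chrono>

using namespace std::chrono_literals;

constexpr auto TIMEOUT_DEFAULT = 2s;
constexpr auto TIMEOUT_MAX = 64s; // cap chosen for illustration

std::atomic<std::chrono::seconds> m_block_stalling_timeout{TIMEOUT_DEFAULT};

// After disconnecting a staller (message handler side): double the timeout,
// capped. compare_exchange_strong only writes if no other thread changed
// the value since we loaded it, so a concurrent update is never silently
// overwritten.
void OnStallerDisconnected()
{
    auto cur = m_block_stalling_timeout.load();
    const auto next = std::min(2 * cur, TIMEOUT_MAX);
    if (cur != next) m_block_stalling_timeout.compare_exchange_strong(cur, next);
}

// When a new block is connected (validation side): shrink the timeout by a
// factor of 0.85, but never below the default.
void OnBlockConnected()
{
    auto cur = m_block_stalling_timeout.load();
    const auto next = std::max(
        std::chrono::duration_cast<std::chrono::seconds>(cur * 0.85),
        TIMEOUT_DEFAULT);
    if (cur != next) m_block_stalling_timeout.compare_exchange_strong(cur, next);
}
```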
  18:03 <dergoegge> #endmeeting