In C++, unsigned integer overflow is well defined and often used by design.
Signed integer overflow, on the other hand, is undefined behavior.
In practice, signed integer overflow will usually manifest by “wrapping
around”. For example, adding the two positive values int{0x7ffffffd} and
int{10} will result in the negative value -2147483641.
This doesn’t mean that signed integers should be avoided. In fact, signed
integers should normally be preferred for arithmetic calculations such as
addition or subtraction. Care should be taken to pick an integer width that
is large enough to fit all possible values at runtime. Commonly, this is
int, int32_t, or int64_t. User-provided values should also be checked
to be in range as early as possible.
Compilers such as GCC and Clang can instrument the binary to detect signed
integer overflow with the flag
-fsanitize=signed-integer-overflow.
In Bitcoin Core it can be set via ./configure --with-sanitizers=signed-integer-overflow.
Were you able to compile Bitcoin Core with the signed-integer-overflow sanitizer?
Were you able to reproduce the bug manually by calling the RPC or by running the fuzz
test?
Why is it impossible to fix this issue by limiting the range of possible
values a user can enter?
<svav> prioritisetransaction Gives a fee value in satoshis to add or subtract to the existing fee value. It’s a value to modify the absolute fee of the TX. It is not paid initially itself, but only considered by the algorithm that selects transactions into a block as it means the transaction will pay a higher or lower absolute fee.
<larryruane> I think it's a way to change a mempool transaction's effective fee (thus feerate) ... hence the name, you can increase its probability of being included in a block (if you're a miner)
<michaelfolkson> I'd guess it wouldn't be particularly helpful for relay. Sure you propagate it but your peers still won't include it in their mempool or propagate it further?
<svav> prioritisetransaction gives the value of the fee adjustment to reprioritise the transaction, right? It is considered by the transaction selection algorithm, because it means an adjusted absolute fee will be taken, if the transaction is actually selected.
<Amirreza> It may seem very basic question, so sorry to ask. Why we should use an RPC to modify the TRX fee? Why not sending a second TRX with higher fee? If we are lucky, the first TRX would accept (which has less fee) and if no, the second would be included in the block.
<afmencken> I'm trying to imagine a use case for this - the first one that comes to mind is if a miner has some out of band incentive to (de)prioritize the transaction.
<larryruane> If someone submits a tx and it's not getting mined, I suppose the tx sender could pay a miner on the side to mine it, and the miner would do that using this RPC?
<Amirreza> Sorry I don't get it. What I interpret by the name, it should sort TRXs by fee rate in descending order. Because miners want TRXs with higher fees. Is this correct?
<michaelfolkson> Amirreza: They generally do want transactions with the highest fee rates yes. But occasionally like in examples described above they'll want to force a transaction into a block even if it has lower fee rate
<MacroFake> Let's go on: Were you able to compile Bitcoin Core with the signed-integer-overflow sanitizer? Were you able to reproduce the bug manually by calling the RPC or by running the fuzz test?
<svav> So is it just a way to make it more or less likely a transaction gets included in a block, after you have submitted the transaction for processing, i.e. its in the mempool already?
<MacroFake> larryruane: There won't be a bitcoind if you pass --enable-fuzz (to link a fuzz engine), however the fuzz tests are also built (by default) along with all other stuff without a fuzz engine
<afmencken> svav: I think a better way to say it is that it is a way to increase the likelihood that the transaction stays in _your_ mempool (but not necessarily into a block).
<svav> Is the problem essentially that you don't know how many times the fee delta will be applied?? If it's applied too many times it can cause an overflow??
<michaelfolkson> When you say "leave it up to the user to use the RPC endpoint responsibly" you essentially mean don't increase the fee by something ludicrous?
<MacroFake> Next q: Enumerate the different approaches that were discussed in the pull request and summarize each in one sentence. See abort-approach, return-early-approach, saturating-approach.
<afmencken> One additional approach that I didn't see mentioned would be to have the RPC return an error code. Is that a non-starter for some obvious reason that I'm not seeing?
<MacroFake> afmencken: Good question. The thing is that it is not possible to determine whether an overflow will (eventually) occur, as the overflow may happen after the RPC finished successfully.
<MacroFake> To recap Q4: Limiting the range is not sufficient. For example, you can prioritise by +1 in a loop on the same txid. This will eventually overflow.
<larryruane> by the way, it's interesting how the prioritisation map can contain txids that don't exist yet in the mempool! maybe they just haven't arrived yet!
<MacroFake> Amirreza: Good question. I think it would be another possible solution. However, you'd need to limit both the allowed range (based on maximum allowed ancestors/descendants), and the number of allowed calls.
<larryruane> Also just to be clear, it's not like we're worried that this could actually happen in practice, right? It's more that we have the fuzzers and sanitizers, and we want them to run cleanly as much as possible, without having to declare exceptions (the ubsan file that this PR removes a line from)
<larryruane> MacroFake: (saturation, I agree!) If it wouldn't take too long to explain, why is signed integer overflow considered undefined behavior? Especially while unsigned is not?
<MacroFake> unsigned integer overflow is used commonly to implement low level cryptographic primitives. I think this wouldn't be possible if the language didn't have unsigned integer overflow