Add Python implementation of Minisketch, part 2 (math and cryptography)

Host: sipa  -  PR author: sipa


This week, we’ll continue our review of the Python implementation of Minisketch.

See the notes from our previous review club on Minisketch.


  1. (previously question 6) To actually find the roots, the Berlekamp Trace Algorithm is used. It uses the trace function tr(x) = x + x^2 + x^4 + ... + x^(fieldsize/2), which maps every element of a field of size 2^n to 0 or 1. In our 8-bit field that means tr(x) = x + x^2 + x^4 + x^8 + x^16 + x^32 + x^64 + x^128. Because the trace is GF(2)-linear, tr(p*x) for any non-zero field element p also maps every element to 0 or 1, and every choice of p maps a different subset of field elements to 0 (and the others to 1). How is this property used to recursively split the polynomial into smaller and smaller ones?
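This splitting property is easy to check by brute force. The sketch below is an illustration, not the PR's code: it uses GF(2^8) with the reducing polynomial x^8 + x^4 + x^3 + x + 1 (0x11B — an assumed modulus; the PR's 8-bit field may use a different one), and verifies that the trace maps exactly half the field to 0 and half to 1, and that different multipliers p partition the field differently:

```python
# GF(2^8) multiplication; the modulus 0x11B is an assumed example,
# not necessarily the one the PR's Python code uses.
def gmul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return r

def trace(v):
    # tr(v) = v + v^2 + v^4 + ... + v^128, computed by repeated squaring
    t, acc = v, v
    for _ in range(7):
        t = gmul(t, t)
        acc ^= t
    return acc

counts = {0: 0, 1: 0}
for v in range(256):
    counts[trace(v)] += 1
assert counts == {0: 128, 1: 128}   # exactly half the field on each side

# different multipliers p split the field into different halves
zeros = lambda p: {v for v in range(256) if trace(gmul(p, v)) == 0}
assert zeros(1) != zeros(2)
```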

Meeting Log

  18:00 <jnewbery> #startmeeting
  18:00 <jnewbery> hi folks!
  18:00 <larryruane_> hi
  18:00 <pinheadmz> hi
  18:00 <felixweis> hi
  18:00 <andozw> yoooo
  18:00 <emzy> hi
  18:00 <kouloumos> hi!
  18:00 <jnewbery> feel free to say hi to let everyone know you're here
  18:00 <jnewbery> anyone here for the first time?
  18:00 <tkc> hi-new here-first time
  18:00 <eoin> hi, my third review
  18:01 <jnewbery> tkc: welcome!
  18:01 <kouloumos> first time of 2021 ;)
  18:01 <jonatack> hi
  18:01 <jnewbery> Today we'll be continuing the discussion about Minisketch from two weeks ago
  18:01 <tkc> thanks-initial steps in learn to program the core-long road ahead. :)
  18:01 <jnewbery> Notes, questions and logs from that meeting are here: We got up to question 5 last time I think.
  18:01 <jnewbery> ok, over to sipa!
  18:02 ℹ ma is now known as Guest8267
  18:02 <sipa> sorry, laptop troubles!
  18:02 <sipa> be right there
  18:02 <eoin> hi, my third review
  18:02 <jnewbery> anyone know any good minisketch puns while sipa sorts out his laptop?
  18:03 <jonatack> since i warned about flakey internet, it will probably work flawlessly this time
  18:03 <sipa> ok!
  18:03 <pinheadmz> I know this many minisketch puns: x^6 + x^5 + x^4 + x^3 + x^2 + x
  18:04 <sipa> i heard there are an infinite number of papers to write about real analysis, but today we don't need to worry about that because we're working in a finite field
  18:04 ⚡ pinheadmz *cough*
  18:04 <sipa> so
  18:04 <glozow> when u trying to solve polynomial it can get kinda berlekamp messy
  18:04 <sipa> everybody awake?
  18:05 <emzy> *g*
  18:05 <sipa> let's go over where we were
  18:05 <felixweis> minisketch helps every bitcoiner to get a GF(2)!
  18:06 <sipa> we're working in a finite field, which is like numbers, but addition, subtraction, multiplication, and division are all defined in a weird way
  18:06 <glozow> i was in the minisketchy part of town and someone made me do a set reconciliation with the bills in my wallet
  18:06 <sipa> a+b and a-b are the same thing
  18:07 <Guest8267> XOR
  18:07 <pinheadmz> let's all just pay attention to sipa. if you miss anything, it's really easy and fast to get whatever you missed from someone else
  18:07 <sipa> and then we define "sketches" of a set {a,b,c} as just the list {a+b+c, a^3+b^3+c^3, a^5+b^5+c^5, ...}
  18:07 <sipa> up to its capacity
  18:07 <glozow> welcome to the frobenius hospital, we only treat odd syndromes because the even ones can be cured by squaring the odd ones
  18:07 <jnewbery> thanks glozow
  18:07 <glozow> ok sorree i'm done now
  18:08 <jonatack> 🤣
  18:08 <sipa> today we'll let Berlekamp be our guide in tracing a euclidean ring through the galois fields, in the hope of finding some hidden roots...
  18:09 <sipa> and in general set reconciliation works by sending such a sketch, letting the other side "add" (or subtract, same thing) from it their own sketch
  18:10 <sipa> which results in a sketch of the symmetric difference of the two parties' sets
  18:11 <sipa> you can see that: if Alice has {a,b} and Carol has {b,c}, the sketches are {a+b,a^3+b^3,...} and {b+c,b^3+c^3,...}, and pairwise adding those gives you {a+c,a^3+c^3,...}, the sketch of {a,c}
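The cancellation can be demonstrated with a toy sketch. The code below is a hypothetical illustration (not the PR's implementation), using GF(2^8) with an assumed modulus of 0x11B, building sketches as lists of odd power sums and XORing them element-wise:

```python
# GF(2^8) multiplication; modulus x^8+x^4+x^3+x+1 (0x11B) is an
# assumed example, not necessarily the PR's field modulus.
def gmul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return r

def gpow(a, n):
    r = 1
    while n:
        if n & 1:
            r = gmul(r, a)
        a = gmul(a, a)
        n >>= 1
    return r

def sketch(elements, capacity):
    # syndromes: XOR (= field addition) of v^1, v^3, v^5, ... over the set
    out = []
    for i in range(capacity):
        acc = 0
        for v in elements:
            acc ^= gpow(v, 2 * i + 1)
        out.append(acc)
    return out

alice = sketch({10, 20}, 4)   # Alice's set {a, b} with a=10, b=20
carol = sketch({20, 30}, 4)   # Carol's set {b, c} with c=30
combined = [x ^ y for x, y in zip(alice, carol)]
# the shared element b=20 cancels out, leaving the sketch of {a, c}
assert combined == sketch({10, 30}, 4)
```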
  18:11 <sipa> makes sense?
  18:12 <jnewbery> 👍
  18:12 <pinheadmz> very cool yes
  18:12 <sipa> now, the hard part is... given this {a+c,a^3+c^3,...} sketch, find {a,c} again
  18:13 <larryruane_> so (a+b) + (b+c) = a+2b+c = a+c ?
  18:13 <pinheadmz> larryruane_ yes bc + and - cancel out like XOR!
  18:13 <sipa> we've already used the Berlekamp-Massey algorithm to turn this sequence into a polynomial that generates the sequence, which in our case would be (1+ax)*(1+cx)
  18:13 <pinheadmz> er I should say +b and +b cancel out
  18:13 <sipa> larryruane_: yes, characteristic 2 field, so a+a = 0
  18:14 <sipa> but of course we don't get the polynomial in that form, we get just the coefficients of it: [1, a+c, ac]
  18:15 <sipa> and if we had more terms, it'd be even more complex
  18:16 <sipa> for 3 terms it'd be [1, a+b+c, ab+bc+ac, abc] etc
  18:16 <pinheadmz> Berlekamp-Massey turns this: {a+c,a^3+c^3,...} into this: (1+ax)*(1+cx) ?
  18:16 <sipa> indeed
  18:16 <sipa> and it does so regardless of how big your sketch was
  18:17 <sipa> that polynomial defines the pattern underlying the infinite sequence of {a+c,a^3+c^3,a^5+c^5,a^7+c^7,...}
  18:18 <sipa> so we work in the "sketch" form because there it's easy to use the cancelling-out property
  18:18 <sipa> and that's what we send
  18:18 <sipa> but we then use BM to discover the pattern behind it, and that pattern (the polynomial) helps us find what the actual input terms are
  18:19 <pinheadmz> (1+ax)*(1+cx) = 1 + cx + ax + ac(x^2) ?
  18:19 <sipa> there is a bit deeper explanation here:
  18:19 <sipa> yes, 1 + (a+c)x + (ac)x^2, so BM finds the coefficients [1,a+c,ac]
  18:19 <sipa> what are our next steps?
  18:19 <pinheadmz> oh right (a+c)x
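The expansion can be checked numerically. This snippet is illustrative only (GF(2^8) with an assumed 0x11B modulus, addition = XOR):

```python
# GF(2^8) multiplication; modulus 0x11B is an assumption for illustration.
def gmul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return r

def pmul(p, q):
    # multiply polynomials given as coefficient lists, lowest degree first
    r = [0] * (len(p) + len(q) - 1)
    for i, x in enumerate(p):
        for j, y in enumerate(q):
            r[i + j] ^= gmul(x, y)
    return r

a, c = 23, 200
product = pmul([1, a], [1, c])             # (1 + a*x) * (1 + c*x)
assert product == [1, a ^ c, gmul(a, c)]   # [1, a+c, a*c]
```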
  18:20 <sipa> assuming we actually want to find a and c
  18:20 <glozow> factor it
  18:20 <sipa> yeah, of course
  18:20 <lightlike> check that it has n roots and can be completely factored
  18:21 <sipa> indeed, that too
  18:21 <sipa> we could use a fully-fledged factoring algorithm like cantor-zassenhaus, but we don't actually care about factoring it
  18:21 <sipa> we want to find its roots
  18:21 <sipa> which is the same as saying: factoring into 1st degree factors
  18:22 <sipa> but if the polynomial somehow can't be factored fully into 1st degree factors, or those factors aren't all unique, we don't really care at all
  18:22 <felixweis> because it can't be fully decoded
  18:22 <felixweis> all or nothing
  18:23 <sipa> right, we know that if that polynomial is the result of BM on a sketch of sufficient size, it will be factorizable into distinct 1st degree factors
  18:23 <sipa> and so we can actually use a significantly simpler algorithm that just finds the roots, and we can even use one that only works if all roots are distinct
  18:24 <sipa> but we do need to check ahead of time that this is the case, as the algorithm we'll use would run into an infinite loop or otherwise behave incorrectly if it isn't
  18:24 <sipa> how did we do that?
  18:24 <sipa> we briefly discussed it last time
  18:24 <lightlike> if we find that the polynomial isn't factorizable, can we deduce that the sketch size was too small? or could that have other causes as well?
  18:24 <sdaftuar> sipa: under what circumstances would BM give you a polynomial with duplicate roots that live in the field?
  18:25 <sipa> lightlike: yeah, it must mean that
  18:25 <glozow> use `poly_frobeniusmod` - compute x^{2^b} mod poly, which should be exactly x if it has unique roots
  18:26 <felixweis> I was able to generate a sketch(8,4) that decodes 5 numbers. whats the origin of this?
  18:26 <sipa> sdaftuar: if the list (sketch) appears to have a polynomial with repeated roots as shortest LFSR that generates it
  18:27 <sipa> i believe that e.g. [1,0,1,0,1,0,1,0,...] has (x^2 + 1) as LFSR
  18:27 <sipa> which has double root 1
  18:27 <sdaftuar> thanks, will ponder
  18:27 <sipa> given that we start from a set where every element can only occur once, that must mean our sketch was just too small
  18:28 <sipa> glozow: indeed, why?
  18:29 <pinheadmz> sipa guess: some element wrapped around the modulus? and repeated some lower element?
  18:29 <glozow> in an order q galois field, x^q - x always has every field element as roots, so if the polynomial does too, then x^q = x mod poly
  18:29 <sipa> right, or in other words: we want to know if poly is a divisor of x^q - x
  18:30 <sipa> and if that's the case, (x^q - x) mod poly = 0
  18:30 <sipa> and if poly has degree > 1, that's the same as x^q mod poly = x
  18:30 <sipa> because (x^q - x) = x*(x+1)*(x+2)*(x+3)*...*(x+(q-1))
  18:30 <sipa> (i.e. it's the product of all possible 1st degree factors, and thus has every element of the field exactly once as root)
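The check described above can be sketched directly: compute x^q mod poly by squaring x b times, and compare against x. This is a hypothetical stand-alone version (GF(2^8), assumed modulus 0x11B), not the PR's `poly_frobeniusmod`:

```python
# GF(2^8) multiplication; the modulus 0x11B is an assumed example.
def gmul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return r

def pmulmod(p, q, m):
    # multiply p*q and reduce modulo a monic polynomial m
    # (coefficient lists, lowest degree first)
    r = [0] * max(len(p) + len(q) - 1, len(m) - 1)
    for i, x in enumerate(p):
        for j, y in enumerate(q):
            r[i + j] ^= gmul(x, y)
    for k in range(len(r) - 1, len(m) - 2, -1):
        lead = r[k]
        if lead:
            # subtract lead * x^(k - deg(m)) * m
            for i, c in enumerate(m):
                r[k - (len(m) - 1) + i] ^= gmul(lead, c)
    return r[:len(m) - 1]

def frobenius_check(m):
    # True iff x^256 mod m == x, i.e. m (monic, degree > 1) splits
    # completely into distinct 1st degree factors
    f = [0, 1]           # the polynomial x
    for _ in range(8):   # x^(2^8) by repeated squaring mod m
        f = pmulmod(f, f, m)
    return f == [0, 1]

a, c = 17, 180
distinct = [gmul(a, c), a ^ c, 1]   # (x+a)(x+c), two distinct roots
repeated = [gmul(a, a), 0, 1]       # (x+a)^2 = x^2 + a^2, a double root
assert frobenius_check(distinct)
assert not frobenius_check(repeated)
```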
  18:31 <sipa> ok, moving on!
  18:31 <glozow> yeet
  18:31 <sipa> actually finding the roots
  18:31 <sipa> again: we're really trying to find 1st degree factors of our polynomial, but we *know* ahead of time that it factors completely, without any duplicates
  18:32 <lightlike> do we do this check only as an optimization to save time, or because the root finding algorithm could run indefinitely?
  18:32 <glozow> both?
  18:32 <sipa> lightlike: done naively, it could run indefinitely
  18:32 <sipa> ok
  18:32 <sipa> what's the trace function?
  18:32 <sipa> (because it's the Berlekamp Trace algorithm, there must be a trace, right?)
  18:34 <sipa> tr(x) = x + x^2 + x^4 + x^8 + ... + x^(q/2)
  18:34 <sipa> is the trace function for our field, w.r.t. GF(2)
  18:34 <sipa> in general, a trace function is a function that maps elements of some bigger structure to a smaller one
  18:34 <glozow> I'm not 100% clear on this. So the Trace function gives us a GF(2)-linear map from the polynomials to GF(2)?
  18:35 <sipa> it maps every GF(2^b) element to a GF(2) element, in our case
  18:35 <sipa> and more precisely, it maps exactly half of them to 0 and the other half to 1
  18:35 <glozow> right, it'd have to
  18:35 <sipa> you can see that this is the case by computing what tr(x) * (1 + tr(x)) is
  18:37 <sipa> this also means that for any field element a (not 0), tr(a*x) also has this property
  18:37 <sipa> it maps half of the elements to 0, and the other half to 1... but which half that is depends on a
  18:38 <sipa> so we start by picking a random a, and trying to separate the roots of poly(x) into the ones for which tr(a*x)=0 and the ones for which tr(a*x)=1
  18:38 <sipa> and then do that recursively with different a values, until everything is split into 1st degree factors
  18:39 <sipa> what do you know about poly(x) mod tr(a*x) ?
  18:39 <glozow> wait sorry, so the Trace function is defined on GF(2^b), how do we apply it to the polynomial itself?
  18:39 <sipa> we'll get there
  18:40 <sipa> tr(a*x) is a polynomial in x
  18:40 <sipa> if you think about it symbolically
  18:40 <sipa> it's a*x + (a^2)*x^2 + (a^4)*x^4 + ...
  18:41 <sipa> right?
  18:41 <glozow> right, mhm
  18:41 <sipa> now: think about what a modulus means:
  18:41 <sipa> if r(x) = poly(x) mod tr(x), that means that r(x) equals poly(x) + somefunction(x)*tr(x)
  18:42 <sipa> in the same way that 19 mod 8 = 3 means that 3 = 19 + (some number)*8
  18:42 <sipa> (where some number here specifically is -2)
  18:42 <sipa> for polynomial modulus it's the same, except it's now some unknown polynomial
  18:43 <sipa> plz ask questions if things aren't clear :)
  18:43 <glozow> is that polynomial = gcd(poly, trace)?
  18:43 <sipa> no no, no gcd yet
  18:43 <sipa> just a simple modulus
  18:43 <sipa> this is the definition of a modulus
  18:44 <glozow> okok ye
  18:44 <sipa> or even the definition of division as you learned it in high school probably: when computing a/b, you find a-q*b=r
  18:44 <sipa> quotient and residue
  18:45 <sipa> the modulus operation is finding the residue and ignoring the quotient
  18:45 <glozow> ye
  18:45 <sipa> so!
  18:45 <sipa> if r(x) = poly(x) mod tr(x), that must mean that r(x) = poly(x) + quotient(x)*tr(x)
  18:46 <sipa> but we know tr(x) is 0 for half of the field
  18:46 <sipa> (now thinking about concrete x values)
  18:46 <sipa> this means that r(x) must have all roots that poly(x) has which coincide with roots of tr(x) (and that's half the field)
  18:47 <sipa> because evaluating r(x) = poly(x) + quotient(x)*tr(x) in such an x just gives r(x) = 0 + quotient(x)*0
  18:47 <glozow> r(x)'s roots = shared roots of poly(x) and tr(x)
  18:47 <glozow> ?
  18:47 <sipa> it must have _at least_ those roots
  18:47 <sipa> it can have others
  18:48 <glozow> mm okay
  18:48 <sipa> but to weed those out, you use a gcd
  18:48 <sipa> the (polynomial) gcd of r(x) and poly(x) gives you something which exactly has the shared roots of poly(x) and tr(x)
  18:49 <sipa> gcd for polynomials = literally a way of computing the polynomial consisting of all shared factors
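A quick illustration of the polynomial gcd picking out shared factors, with coefficients in the same assumed GF(2^8) (modulus 0x11B); `pgcd` and the other helpers here are hypothetical, not the PR's code:

```python
# GF(2^8) helpers; modulus 0x11B is an assumed example.
def gmul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return r

def gpow(a, n):
    r = 1
    while n:
        if n & 1:
            r = gmul(r, a)
        a = gmul(a, a)
        n >>= 1
    return r

def ginv(a):
    return gpow(a, 254)   # a^254 = a^-1 for nonzero a

# polynomials as coefficient lists, lowest degree first
def pmul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, x in enumerate(p):
        for j, y in enumerate(q):
            r[i + j] ^= gmul(x, y)
    return r

def pmod(p, m):
    # remainder of p modulo m
    p = p[:]
    inv = ginv(m[-1])
    while len(p) >= len(m):
        lead = gmul(p[-1], inv)
        for i, cf in enumerate(m):
            p[len(p) - len(m) + i] ^= gmul(lead, cf)
        while p and p[-1] == 0:
            p.pop()
    return p

def pgcd(p, q):
    # Euclid's algorithm, exactly as for integers
    while q:
        p, q = q, pmod(p, q)
    return p

a, b, c = 5, 92, 200
p1 = pmul([a, 1], [b, 1])   # (x+a)(x+b)
p2 = pmul([b, 1], [c, 1])   # (x+b)(x+c)
g = pgcd(p1, p2)
inv = ginv(g[-1])
g = [gmul(x, inv) for x in g]   # gcd is defined up to a scalar; make monic
assert g == [b, 1]              # exactly the shared factor (x+b)
```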
  18:49 <amiti> if tr(x) is 0, how do we know that poly(x) is 0?
  18:49 <sipa> amiti: we don't, we're trying to find that
  18:50 <amiti> oh, I see
  18:50 <sipa> it's just the case that for all values v for which tr(v)=0, we know that poly(x) mod tr(x), evaluated in v, also gives 0
  18:51 <sipa> if v is a root of poly(x)
  18:51 <sipa> and thus poly(x) mod tr(x) must be a polynomial with v as root if it's a root of both tr(x) and poly(x)
  18:51 <eoin> what does tr(x) mean?
  18:51 <glozow> trace function evaluated on x
  18:51 <sipa> 09:34:02 < sipa> tr(x) = x + x^2 + x^4 + x^8 + ... + x^(q/2)
  18:52 <sipa> we don't know anything about the other roots of (poly(x) mod tr(x)), though
  18:52 <sipa> but after doing a gcd with poly(x), you get a polynomial with exactly the shared roots of poly(x) and tr(x)
  18:52 <eoin> gcd?
  18:52 <sipa> greatest common divisor
  18:53 <eoin> is it greatest common denominator?
  18:53 <sipa> no
  18:54 <sipa> we're not working with numbers but with polynomials
  18:54 <sipa> divisor is generic
  18:54 <generic> i am? ;P
  18:54 <sipa> hahaha
  18:54 <jonatack>
  18:55 <sipa> so: we're almost done!
  18:55 <sipa> we have f1(x) = gcd(r(x),poly(x)), which is the polynomial with all shared roots of poly(x) and tr(x)
  18:55 <sipa> and f2(x) = poly(x) / f1(x), which has all the other roots
  18:55 <sipa> f1,f2 are a factorization of poly
  18:56 <sipa> f1 has all the roots for which tr(x)=0, f2 has all the roots for which tr(x)=1
  18:57 <sipa> so then you can repeat the process, but with a different a (i've been writing tr(x) everywhere, but i really meant tr(a*x))
  18:57 <sipa> to split f1 and f2 up further
  18:57 <sipa> until you hit 1st degree polynomials, and if you have a 1st degree polynomial (x+m), its root is kind of obvious
  18:58 <sipa> ok, question! what would happen if there was a duplicate root?
  18:58 <sipa> and you'd naively do this?
  18:59 <amiti> you'd never hit the 1st degree polynomial, but you'd keep repeating the process tryna get there?
  18:59 <glozow> you wouldn't end up with linear factors?
  18:59 <sipa> yeah, say you have x^2+1, which has duplicate root 1
  18:59 <sipa> if tr(a*1)=0, you'd always get f1(x)=x^2+1 and f2(x)=1
  19:00 <sipa> if tr(a*1)=1, you'd always get f1(x)=1 and f2(x)=x^2+1
  19:00 <sipa> so both factors would always land on the same side of the split
  19:00 <sdaftuar> since you use a special lookup for quadratics anyway, this would only be an issue if you had a triple (or more) root -- is that right?
  19:00 <sipa> indeed
  19:00 <glozow> why doesn't the loop halt after trying 2^field_size times? would that do anything?
  19:00 <sipa> glozow: ah!
  19:01 <sipa> well, if you pick the a values randomly, you can't ever stop
  19:01 <sipa> it turns out, you can do much better than picking them randomly
  19:01 <glozow> is this the randv = gf.mul2(randv) line?
  19:01 <sipa> yeah
  19:01 <sipa> if you pick (a, 2a, 4a, 8a, 16a, ..., {q/2}a), you'll always find every root
  19:02 <jonatack> set {2^i*a for i=0..fieldsize-1}
  19:02 <sipa> which just needs b (=log2(q)) steps
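This guarantee can be verified by brute force: the multipliers a, 2a, 4a, ..., 2^(b-1)*a form a separating family, so every pair of distinct field elements lands on different sides of the split for at least one of them. The check below uses the same assumed GF(2^8) modulus (0x11B) as the other snippets:

```python
# GF(2^8) multiplication; modulus 0x11B is an assumed example.
def gmul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return r

def trace(v):
    # tr(v) = v + v^2 + v^4 + ... + v^128
    t, acc = v, v
    for _ in range(7):
        t = gmul(t, t)
        acc ^= t
    return acc

a0 = 113                  # any nonzero starting multiplier
avals = []
v = a0
for _ in range(8):        # a, 2a, 4a, ..., 128a (b = 8 multipliers)
    avals.append(v)
    v = gmul(v, 2)

# two roots u, w stay on the same side at every level only if
# tr(a_i*(u+w)) == 0 for all i; by linearity it suffices to check
# that this never happens for any nonzero difference d = u+w:
for d in range(1, 256):
    assert any(trace(gmul(ai, d)) == 1 for ai in avals)
```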
  19:02 <sipa> and you could indeed just stop after recursing b times
  19:02 <sipa> and if you do that, the frobenius check at the start just becomes an optimization
  19:03 <sipa> those a values are optimally orthogonal in a way
  19:03 <sipa> i discovered that, and was very proud of it, because i couldn't find any implementations of BTA that used this
  19:03 <sipa> until i learned that it's actually mentioned in berlekamp's 1967 paper that introduced the algorithm
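Putting the pieces together, here is a compact from-scratch sketch of the Berlekamp Trace Algorithm as described in the discussion. It is not the PR's code: the GF(2^8) modulus 0x11B is assumed, and it uses the deterministic a, 2a, 4a, ... multiplier sequence rather than random choices. The input must already be known to have distinct roots (the Frobenius check):

```python
def gmul(a, b):
    # GF(2^8) multiplication, assumed modulus x^8+x^4+x^3+x+1 (0x11B)
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return r

def ginv(a):
    # a^254 = a^-1 for nonzero a
    r, e = 1, a
    for bit in range(8):
        if (254 >> bit) & 1:
            r = gmul(r, e)
        e = gmul(e, e)
    return r

# polynomials: coefficient lists, lowest degree first, no trailing zeros
def pstrip(p):
    while p and p[-1] == 0:
        p.pop()
    return p

def pmul(p, q):
    r = [0] * max(len(p) + len(q) - 1, 0)
    for i, x in enumerate(p):
        for j, y in enumerate(q):
            r[i + j] ^= gmul(x, y)
    return pstrip(r)

def pdivmod(p, m):
    p, inv = p[:], ginv(m[-1])
    quot = [0] * max(len(p) - len(m) + 1, 0)
    while len(p) >= len(m):
        lead = gmul(p[-1], inv)
        quot[len(p) - len(m)] = lead
        for i, c in enumerate(m):
            p[len(p) - len(m) + i] ^= gmul(lead, c)
        pstrip(p)
    return quot, p

def pmod(p, m):
    return pdivmod(p, m)[1]

def pgcd(p, q):
    while q:
        p, q = q, pmod(p, q)
    return p

def trace_mod(a, m):
    # tr(a*x) mod m = sum of (a*x)^(2^i) mod m, for i = 0..7
    t = pmod([0, a], m)
    acc = t[:]
    for _ in range(7):
        t = pmod(pmul(t, t), m)
        acc = pstrip([x ^ y for x, y in
                      zip(acc + [0] * len(t), t + [0] * len(acc))])
    return acc

def find_roots(m, a=1):
    # m must have distinct roots (checked beforehand via Frobenius)
    m = pstrip(m[:])
    if len(m) <= 1:
        return []
    inv = ginv(m[-1])
    m = [gmul(c, inv) for c in m]          # make monic
    if len(m) == 2:
        return [m[0]]                      # x + r has root r (char 2)
    g = pgcd(m, trace_mod(a, m))
    if 1 < len(g) < len(m):
        quot, rem = pdivmod(m, g)
        # roots split into tr(a*x)=0 (in g) and tr(a*x)=1 (in quot)
        return find_roots(g, gmul(a, 2)) + find_roots(quot, gmul(a, 2))
    return find_roots(m, gmul(a, 2))       # no split; try the next a

m = pmul(pmul([5, 1], [92, 1]), [200, 1])  # (x+5)(x+92)(x+200)
assert sorted(find_roots(m)) == [5, 92, 200]
```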
  19:04 <amiti> =P
  19:04 ⚡ sdaftuar is still impressed
  19:05 <sipa> another optimization we have: if you've computed tr(a*x) mod poly(x), you can compute x^q mod poly(x) from that too
  19:06 <sipa> if r(x) = tr(a*x) mod poly(x), then r(x)*(r(x)+1) = (a*x)^q i believe (or (a*x)^q + (a*x), something like that)
  19:07 <sipa> and in the C++ code we use that to do the frobenius check simultaneously with the first recursion level of the factoring algorithm
  19:08 <sipa> sdaftuar: and yeah, with the explicit formula for 2nd degree polynomials, duplicate roots specifically are not a problem (we'd detect them when trying to solve the 2nd factor)
  19:09 <sipa> but you could have irreducible 3rd degree polynomials too, for example
  19:09 <sipa> it's not just duplicate roots that form a problem
  19:09 <sdaftuar> right, makes sense
  19:09 ⚡ jonatack clapping 👏
  19:09 <sipa>
  19:10 <sipa> that explains how to do the frobenius check simultaneously with splitting
  19:10 <sipa> randv there is what i've called a here
  19:11 <felixweis> whats randv stand for?
  19:11 <sipa> random value
  19:11 <sipa> :)
  19:11 <felixweis> ok
  19:11 <eoin> what is BCH?
  19:12 <sipa> eoin: Bose–Chaudhuri–Hocquenghem
  19:12 <sipa>
  19:12 <jonatack> probably good to add this link to the log too:
  19:13 <sipa> oh, i guess one last thing: we started from the polynomial (1+ax)(1+cx), and found its roots 1/a and 1/c
  19:13 <sipa> but we want a and c
  19:13 <sipa> a very simple trick helps with that: reverse the coefficients of the polynomial before trying to find the roots
  19:14 <sipa> roots of (a*x^3 + b*x^2 + c*x + d) = inverses of roots of (d*x^3 + c*x^2 + b*x + a)
  19:14 <glozow> woah nice
  19:14 <sipa> there also exists something called a batch inverse, where with 1 inverse + 3*n multiplications you can find the inverses of n elements at once
  19:14 <sipa> but we don't even need it here
  19:15 <sipa> inverses are many times more expensive than multiplications, fwiw
  19:15 <sipa> so... any questions? :)
  19:15 <jonatack> sipa: trick used here? roots = poly_find_roots(list(reversed(poly)), self._gf)
  19:15 <sipa> jonatack: yes
  19:16 <glozow> wait why are the roots of the reversed polynomial the inverses of the roots?
  19:16 <sipa> aha!
  19:16 <sipa> ok
  19:16 <sipa> substitute y = 1/x
  19:17 <sipa> so you start with a*x^3 + b*x^2 + c*x + d
  19:17 <sipa> and get a*y^-3 + b*y^-2 + c*y^-1 + d
  19:17 <sipa> now multiply everything with y^3
  19:17 <glozow> ohhhhhhhh
  19:17 <sipa> and you get a + b*y + c*y^2 + d*y^3
  19:17 <sipa> and doing so doesn't change the roots
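A numeric check of the coefficient-reversal trick (same assumed GF(2^8), modulus 0x11B; `peval` is an ad-hoc Horner evaluation, not the PR's helper):

```python
# GF(2^8) helpers; modulus 0x11B is an assumed example.
def gmul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return r

def gpow(a, n):
    r = 1
    while n:
        if n & 1:
            r = gmul(r, a)
        a = gmul(a, a)
        n >>= 1
    return r

def ginv(a):
    return gpow(a, 254)   # a^254 = a^-1 for nonzero a

def peval(p, x):
    # Horner evaluation, coefficients lowest degree first
    acc = 0
    for coef in reversed(p):
        acc = gmul(acc, x) ^ coef
    return acc

a, c = 9, 77
poly = [gmul(a, c), a ^ c, 1]   # (x+a)(x+c): roots a and c
rev = list(reversed(poly))      # reversed coefficient list
assert peval(poly, a) == 0 and peval(poly, c) == 0
# the reversed polynomial has the inverse roots 1/a and 1/c
assert peval(rev, ginv(a)) == 0 and peval(rev, ginv(c)) == 0
```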
  19:18 <felixweis> nice
  19:19 <glozow> this is true in general for polynomials?
  19:19 <sipa> yes
  19:19 <glozow> why didn't they teach this in school
  19:19 <glozow> jeez
  19:19 <sipa> haha
  19:20 <sipa> what if 0 is one of the roots?
  19:20 <sdaftuar> no constant term?
  19:20 <sipa> indeed
  19:20 <sdaftuar> probably shouldn't do that trick til you factor it out
  19:20 <sipa> it still works actually
  19:20 <sipa> but you get a polynomial of degree one less
  19:21 <sipa> which excludes the infinity root
  19:21 <sipa> thankfully, not a problem for us, because we know all our roots are nonzero
  19:21 <sipa> oh, question: why did we need the property that we encode our set elements as nonzero field elements?
  19:21 <felixweis>
  19:22 <jonatack> so in the C++ code: std::reverse(poly.begin(), poly.end());
  19:22 <sipa> jonatack: indeed
  19:24 <glozow> if they're zero,
  19:24 <sipa> what would the sketches be for sets {a} and {a,0} ?
  19:24 <glozow> they'd be the same :O
  19:24 <felixweis> same
  19:24 <sipa> exactly
  19:25 <felixweis> adding 0 is a NOP
  19:25 <sipa> and it's kind of related to this inversion at the end: in the BM representation, the elements become inverses of the roots
  19:25 <sipa> but 0 can't be the inverse of a root
  19:26 <sipa> it'd be possible to just add 1 bit to every sketch to indicate "contains 0", and xor that along with everything else
  19:26 <sipa> but just avoiding 0 is easier
  19:26 <sdaftuar> sipa: so are you saying it's impossible for BM to spit out a polynomial with 0 as a root? or just that it means the sketch obviously won't decode?
  19:26 <glozow> the sketches would still work, but you wouldn't be able to reconcile an element represented as 0?
  19:27 <sipa> sdaftuar: i believe that's the case, yes
  19:27 <sipa> iirc BM will always produce a polynomial with constant term 1
  19:28 <sipa> not entirely sure, but even if not, it can't have a constant term 0
  19:28 <sipa> because then by definition it's not a minimal LFSR
  19:28 <sdaftuar> ah
  19:29 <jonatack> for the log
  19:29 <glozow> is that everything for today sipa? :)
  19:30 <sipa> seems like a good place to end :)
  19:30 <glozow> #endmeeting


  19:37 <sipa> minisketch started as an attempt to get an implementation of NTL's algorithms needed for erlay that wasn't GPL
  19:37 <sipa> NTL is sort of the library people use for low-level finite field arithmetic stuff
  19:38 <sipa> (and it was used by the authors of the PinSketch paper for their implementation)
  19:38 <sipa> so i learned most of this just from reading the NTL source code...
  19:38 <sipa> you can't find much about BTA online
  19:39 <sipa> the standard algorithm for factoring polynomials over finite fields is but as described there, it doesn't work for GF(2^b) fields