I noticed that the little/big endian hex string functions for Sha256dHash
did not match my intuition. What we should have is that the raw bytes
correspond to a little-endian representation (since we convert to Uint256
by transmuting, and Uint256 has a little-endian representation) while
the reversed raw bytes are big-endian.
This means that the output from `sha256sum` is "little-endian", while the
standard "zeros on the left" output from bitcoind is "big-endian". This
is correct since we think of blockhashes as being "below the target" when
they have lots of zeros on the left, and we also notice that when hashing
Bitcoin objects with sha256sum, the output hashes are always reversed.
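Concretely, the two hex conventions over the raw 32 hash bytes look like
this (a minimal sketch, not the library's actual implementation):

    fn le_hex_string(data: &[u8; 32]) -> String {
        // Raw byte order: what `sha256sum` prints.
        data.iter().map(|b| format!("{:02x}", b)).collect()
    }

    fn be_hex_string(data: &[u8; 32]) -> String {
        // Reversed byte order: the "zeros on the left" form bitcoind prints.
        data.iter().rev().map(|b| format!("{:02x}", b)).collect()
    }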
These two functions `le_hex_string` and `be_hex_string` should really not
be used outside the library; the `Encodable` trait should give access to
a "big endian" representation while `ConsensusEncodable` gives access to a
"little endian" representation. That way we describe the split in terms
of user-facing/consensus code rather than big/little endian code, which
is a better way of thinking about it. After all, a hash is a collection
of bytes, not a number --- it doesn't have an intrinsic endianness.
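To illustrate the intended split (method names and signatures here are
illustrative assumptions, not the actual API):

    struct Sha256dHash([u8; 32]);

    /// User-facing code sees the reversed ("big endian") form.
    trait Encodable {
        fn user_hex(&self) -> String;
    }

    /// Consensus code sees the raw ("little endian") bytes as they
    /// appear on the wire.
    trait ConsensusEncodable {
        fn consensus_bytes(&self) -> Vec<u8>;
    }

    impl Encodable for Sha256dHash {
        fn user_hex(&self) -> String {
            self.0.iter().rev().map(|b| format!("{:02x}", b)).collect()
        }
    }

    impl ConsensusEncodable for Sha256dHash {
        fn consensus_bytes(&self) -> Vec<u8> {
            self.0.to_vec()
        }
    }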
Oh, and by the way, to compute a sha256d hash from sha256sum, you do

    echo -n 'data' | sha256sum | xxd -r -p | sha256sum
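The same computation in Rust, as a sketch using the external sha2 crate
(an assumed dependency, purely for illustration):

    use sha2::{Digest, Sha256};

    /// Double-SHA256 ("sha256d") of arbitrary bytes.
    fn sha256d(data: &[u8]) -> [u8; 32] {
        let once = Sha256::digest(data);
        let twice = Sha256::digest(once.as_slice());
        let mut out = [0u8; 32];
        out.copy_from_slice(twice.as_slice());
        out
    }

    fn main() {
        // Prints the raw ("little endian") order, matching the pipeline above.
        for b in sha256d(b"data").iter() {
            print!("{:02x}", b);
        }
        println!();
    }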
A pretty serious oversight :) It went unnoticed because I was
simultaneously dealing with a serious TCP connection bug in rustc, and I
had thought bitcoind's angry disconnects were a further symptom of that.
This is a massive simplification: it fixes a couple of endianness bugs
(though I don't think all of them), should give a speedup, and gets rid
of the `serialize_iter` crap.
I think this is what I want to do for everything JSON-visible... perhaps
I will not be able to keep the macro for it though, since there are
some clever variations on it (e.g. blocks should have their header's
hash as a field, txes should appear as txids unless verbose output is
requested, etc.)
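A hedged sketch of the kind of shape I mean, written with serde purely
for illustration (field names are assumptions; the actual macro-based
code will differ):

    use serde::Serialize;

    #[derive(Serialize)]
    struct BlockJson {
        /// The header's own hash, exposed as an explicit field.
        hash: String,
        version: u32,
        prev_blockhash: String,
        /// Non-verbose form: transactions appear only as txids.
        tx: Vec<String>,
    }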
We get a speedup (~5%) and memory savings (~10%) on initial sync from
using a HashMap, though it's hard to tell precisely how much we save
because the effect is quite nonlinear.
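For reference, the shape in question, with stand-in types (illustrative
only, not the library's definitions):

    use std::collections::HashMap;

    type Txid = [u8; 32];

    #[derive(Clone)]
    struct TxOut {
        value: u64,
        script_pubkey: Vec<u8>,
    }

    /// UTXO set keyed by txid; each entry holds a tx's outputs, with
    /// spent outputs replaced by `None`.
    struct UtxoSet {
        map: HashMap<Txid, Vec<Option<TxOut>>>,
    }

    impl UtxoSet {
        /// Remove and return an unspent output, if present.
        fn spend(&mut self, txid: &Txid, vout: usize) -> Option<TxOut> {
            self.map.get_mut(txid)?.get_mut(vout)?.take()
        }
    }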
I haven't tested de/serialization. Some work needs to be done there to
split up the UTXO set, since it takes forever to save and load.
We were conflicting with the Rust stdlib trait Hash, which is used by
various data structures that need a general-purpose hash. Also implement
Hash for Sha256dHash so that we can use Bitcoin hashes as keys in such
data structures.
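A minimal sketch of what that impl can look like (illustrative; the real
one may differ):

    use std::collections::HashMap;
    use std::hash::{Hash, Hasher};

    // Illustrative newtype standing in for the library's Sha256dHash.
    #[derive(PartialEq, Eq)]
    struct Sha256dHash([u8; 32]);

    impl Hash for Sha256dHash {
        fn hash<H: Hasher>(&self, state: &mut H) {
            // Feed the raw hash bytes to whatever general-purpose
            // hasher the data structure supplies.
            state.write(&self.0);
        }
    }

    fn main() {
        // With Hash + Eq, bitcoin hashes work directly as HashMap keys.
        let mut heights: HashMap<Sha256dHash, u32> = HashMap::new();
        heights.insert(Sha256dHash([0; 32]), 0);
        assert_eq!(heights.len(), 1);
    }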