Reconnecting an existing socket was simply not working; the Rust socket
did not expose any method for reconnection, so I tried calling
connect() again. As near as I can tell, this was a no-op, which makes
sense: both the sending and receiving threads had their own copy of
the Socket, and it's not clear what the synchronization behaviour
should have been.
Instead, if the connection fails, we now relay this to the main
thread, wait for an acknowledgement, and then let the listening thread
die. The caller can then call `start()` again.
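Roughly, the new flow looks like this (just a sketch; the message and
channel names here are made up, not the real identifiers):

    use std::sync::mpsc::{Receiver, Sender};

    // Hypothetical message and channel names, just to show the shape;
    // the real identifiers in the codebase differ.
    enum NetworkMessage {
        Disconnected,
        // ... everything else elided
    }

    fn listen_thread(to_main: Sender<NetworkMessage>, ack_from_main: Receiver<()>) {
        loop {
            // ... read from the socket ...
            let connection_failed = true; // stand-in for a real read error
            if connection_failed {
                // Relay the failure to the main thread...
                let _ = to_main.send(NetworkMessage::Disconnected);
                // ...wait for the acknowledgement...
                let _ = ack_from_main.recv();
                // ...then just let the thread die. The caller can now
                // call `start()` to spin up a fresh socket.
                return;
            }
        }
    }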
`verify` cannot handle illegally padded signatures because it takes an object
of type `Signature`, which is a fixed-size type. This should have been part
of the previous commit --- an important lesson about running the unit tests
before every push!
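To make the fixed-size point concrete, here is a standalone
illustration (the real `Signature` lives in the secp256k1 bindings):

    // A compact signature is exactly 64 bytes, so there is nowhere for
    // illegal DER padding to live: by the time you hold a `Signature`,
    // any padding has already been stripped by the decoding step, and
    // `verify` has nothing left to reject. Padding problems must be
    // caught earlier, when decoding the wire bytes.
    pub struct Signature([u8; 64]);

    pub fn verify(_msg: &[u8; 32], _sig: &Signature, _pk: &[u8]) -> bool {
        // ... actual ECDSA verification elided ...
        true
    }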
Sorry, this is needed to enable proper txid/vout lookups for the address index.
This means any users of wizards-wallet need to rebuild their utxo sets, and
will also mean an increase in RAM usage.
I was trying to do something clever: making sure that the numeric
bounds were consistent with whatever ordering relation we were
checking, AND that the boolean values were also consistent. This is
wrong in the case of negative numbers, and pointless anyway, since I
recently fixed `set_bool_value`, `set_num_lo` and `set_num_hi` to
update both numeric and boolean information if possible, so they will
always contain the same info.
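For reference, the invariant those setters maintain is roughly this
(the struct shape is a guess; only the setter names are real):

    // In Script, a value is boolean false exactly when it is
    // numerically zero, so the numeric bounds can pin down the
    // boolean value and vice versa.
    pub struct AbstractElem {
        num_lo: i64,
        num_hi: i64,
        bool_val: Option<bool>,
    }

    impl AbstractElem {
        pub fn set_num_lo(&mut self, lo: i64) {
            self.num_lo = lo;
            if self.num_lo > 0 || self.num_hi < 0 {
                // Bounds exclude zero: the value must be truthy.
                self.bool_val = Some(true);
            } else if self.num_lo == 0 && self.num_hi == 0 {
                // Bounds pin the value to zero: it must be falsy.
                self.bool_val = Some(false);
            }
        }
    }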
Now unspendable outs are determined by attempting to create a minimal
satisfying input script. If this can't be done, the output is unspendable.
(Unfortunately this "minimal satisfying script" is not (yet) something
that can be shown to the user, since it is more a bundle of constraints
than actual data pushes.)
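The top-level check looks roughly like this (illustrative names, not
the real API):

    // The "minimal satisfying script" is really a bundle of
    // constraints on the would-be input stack: if the constraints are
    // contradictory, no input can ever satisfy the output script.
    pub struct Constraint;
    pub struct Unsatisfiable;

    pub fn is_unspendable(output_script: &[u8]) -> bool {
        build_minimal_satisfying_input(output_script).is_err()
    }

    fn build_minimal_satisfying_input(
        _script: &[u8],
    ) -> Result<Vec<Constraint>, Unsatisfiable> {
        // ... symbolically execute the script, accumulating constraints
        // on each abstract stack element ...
        unimplemented!()
    }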
Current limitations:
- OP_ADD and friends mean the checker gives the script a free pass.
  There is no fundamental reason for this; I just didn't get to it
  yet.
- Pubkeys are checked for DER encoding but signatures aren't. This
  is because secp256k1 exposes a validation method for pubkeys but
  not one for sigs :). Signatures are loosely length-checked; see
  the sketch after this list.
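The loose length check amounts to something like this (the exact
bounds in the code may differ):

    /// Loose sanity check on a signature: a DER-encoded ECDSA
    /// signature plus the trailing sighash-type byte should be
    /// roughly 9 to 73 bytes.
    fn sig_length_plausible(sig: &[u8]) -> bool {
        sig.len() >= 9 && sig.len() <= 73
    }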
We no longer confirm that chained transactions occur in the correct
order in blocks, which is a minor consensus regression and should be
dealt with in the future.
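For the record, the missing check would look something like this
(hypothetical minimal types; the rule is that a transaction spending a
same-block parent must appear after it):

    use std::collections::HashSet;

    struct Tx {
        txid: [u8; 32],
        input_txids: Vec<[u8; 32]>,
    }

    /// True unless some transaction spends an output of a transaction
    /// that appears *later* in the same block.
    fn chained_txes_in_order(txes: &[Tx]) -> bool {
        let in_block: HashSet<[u8; 32]> = txes.iter().map(|t| t.txid).collect();
        let mut seen: HashSet<[u8; 32]> = HashSet::new();
        for tx in txes {
            for prev in &tx.input_txids {
                // A same-block parent must already have been seen.
                if in_block.contains(prev) && !seen.contains(prev) {
                    return false;
                }
            }
            seen.insert(tx.txid);
        }
        true
    }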
Looks like to implement the crypto opcodes I may need to switch from
rust-crypto to rust-openssl... or implement RIPEMD-160 for rust-crypto.
In either case I will need to generalize the hash.rs stuff to support
other hashes, so I'm committing here as a checkpoint before doing all
that.
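The generalization I have in mind is roughly a digest trait like this
(a sketch only, deliberately shaped like rust-crypto's Digest trait;
not a final design):

    trait DigestEngine {
        /// Output length in bytes (32 for SHA256, 20 for RIPEMD-160).
        fn output_len(&self) -> usize;
        /// Feed more data into the hash state.
        fn input(&mut self, data: &[u8]);
        /// Write the digest into `out`, which must be `output_len()` bytes.
        fn result(&mut self, out: &mut [u8]);
    }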
I noticed that the little/big endian hex string functions for Sha256dHash
did not match my intuition. What we should have is that the raw bytes
correspond to a little-endian representation (since we convert to Uint256
by transmuting, and Uint256s have a little-endian representation) while
the reversed raw bytes are big-endian.
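Concretely, for `le_hex_string` and `be_hex_string` (real names,
sketched bodies):

    // The raw bytes, hexed in order, give the "little-endian" string;
    // reversing the bytes first gives the "big-endian" string that
    // bitcoind displays.
    fn le_hex_string(bytes: &[u8; 32]) -> String {
        bytes.iter().map(|b| format!("{:02x}", b)).collect()
    }

    fn be_hex_string(bytes: &[u8; 32]) -> String {
        bytes.iter().rev().map(|b| format!("{:02x}", b)).collect()
    }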
This means that the output from `sha256sum` is "little-endian", while the
standard "zeros on the left" output from bitcoind is "big-endian". This
is correct, since we think of blockhashes as being "below the target" when
they have lots of zeros on the left, and we also notice that when hashing
Bitcoin objects with sha256sum, the output hashes are always reversed.
These two functions le_hex_string and be_hex_string should really not be
used outside of the library; the Encodable trait should give access to a
"big endian" representation while ConsensusEncodable gives access to a
"little endian" representation. That way we describe the split in terms
of user-facing/consensus code rather than big/little endian code, which
is a better way of thinking about it. After all, a hash is a collection
of bytes, not a number --- it doesn't have an intrinsic endianness.
Oh, and by the way, to compute a sha256d hash using sha256sum, you do

    echo -n 'data' | sha256sum | xxd -r -p | sha256sum

(hash once, convert the hex digest back to raw bytes, then hash again).
This is a massive simplification; it fixes a couple of endianness bugs
(though not all of them, I don't think), should give a speedup, and gets
rid of the `serialize_iter` crap.
I think this is what I want to do for everything json-visible... though
perhaps I won't be able to keep the macro for it, since there are
some clever variations on it (e.g. blocks should have their header's
hash as a field, txes should appear as txids unless verbose output is
requested, etc.)
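The block variation, sketched with serde's custom Serialize (serde is
not what this code uses; it just makes the shape easy to show):

    use serde::ser::{Serialize, SerializeStruct, Serializer};

    // Hypothetical minimal types for illustration.
    struct BlockHeader {
        version: u32,
    }
    struct Block {
        header: BlockHeader,
    }

    impl Block {
        fn header_hash(&self) -> String {
            "...".to_owned() // stand-in for the real hash computation
        }
    }

    impl Serialize for Block {
        fn serialize<S: Serializer>(&self, s: S) -> Result<S::Ok, S::Error> {
            let mut st = s.serialize_struct("Block", 2)?;
            // The "clever variation": emit the header's hash as a
            // field even though the struct doesn't store it.
            st.serialize_field("hash", &self.header_hash())?;
            st.serialize_field("version", &self.header.version)?;
            st.end()
        }
    }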
We get a speedup (~5%) and memory savings (~10%) on initial sync from
using a HashMap, though it's hard to tell precisely how much we save
because it's quite nonlinear.
I haven't tested de/serialization. Some work needs to be done there to
split up the UTXO set, since it takes forever to save/load.
We were conflicting with the Rust stdlib trait Hash, which is used
by various data structures that need a general hash. Also implement
Hash for Sha256dHash so that we can use bitcoin hashes as keys in
such data structures.
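i.e., something along these lines (assuming Sha256dHash wraps a
32-byte array; sketch only):

    use std::collections::HashMap;
    use std::hash::{Hash, Hasher};

    #[derive(Clone, Copy, PartialEq, Eq)]
    struct Sha256dHash([u8; 32]);

    impl Hash for Sha256dHash {
        fn hash<H: Hasher>(&self, state: &mut H) {
            // The bytes are already uniformly distributed, so just
            // feed them straight through to the hasher.
            self.0.hash(state);
        }
    }

    fn main() {
        // Bitcoin hashes now work as keys, e.g. a txid-indexed UTXO set.
        let mut utxos: HashMap<Sha256dHash, Vec<Option<u64>>> = HashMap::new();
        utxos.insert(Sha256dHash([0; 32]), vec![Some(5_000), None]);
        assert!(utxos.contains_key(&Sha256dHash([0; 32])));
    }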