ExtendedPubKey and ExtendedPrivKey implemented `ToString` directly, but
the Rust documentation says to implement `Display` and get the `ToString`
implementation for free.
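A minimal sketch of that pattern, using a stand-in type (the real
extended keys format themselves as base58 strings):
```
use std::fmt;

// Stand-in type; not the library's ExtendedPubKey/ExtendedPrivKey.
struct ExtendedThing(u32);

// Implement Display once...
impl fmt::Display for ExtendedThing {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "xthing({})", self.0)
    }
}

fn main() {
    let k = ExtendedThing(42);
    // ...and the blanket `impl<T: Display> ToString for T` provides
    // to_string() for free.
    assert_eq!(k.to_string(), "xthing(42)");
}
```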
Signed-off-by: Jean Pierre Dudey <jeandudey@hotmail.com>
Addresses #96.
Turns out it was being used for hex encoding/decoding, so I replaced that with the `hex` crate.
I chose to import the `decode` function as:
```
use hex::decode as hex_decode
```
so that it is clear to the reader what is being decoded when it is called. "decode" is such a generic-sounding function name that it would get confusing otherwise.
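At a call site (a made-up example, not code from the library) it then
reads naturally:
```
use hex::decode as hex_decode;

fn main() {
    // Reads as "hex-decode this string", which is the intent.
    let raw_tx = hex_decode("0100000001abcdef").expect("valid hex");
    println!("{} bytes", raw_tx.len());
}
```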
In a project of mine I needed to check the merkle root before
moving some Vec<Transaction>s around, so I needed to be able to
calculate the merkle root on a Vec<Sha256dHash> directly.
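For reference, the computation in question, sketched with the `sha2`
crate over raw 32-byte hashes rather than the library's `Sha256dHash`
type (illustrative only, not the library code):
```
use sha2::{Digest, Sha256};

// Double-SHA256 of the concatenation of two 32-byte hashes.
fn sha256d_pair(a: &[u8; 32], b: &[u8; 32]) -> [u8; 32] {
    let mut buf = [0u8; 64];
    buf[..32].copy_from_slice(a);
    buf[32..].copy_from_slice(b);
    let once = Sha256::digest(&buf);
    let twice = Sha256::digest(&once[..]);
    let mut out = [0u8; 32];
    out.copy_from_slice(&twice[..]);
    out
}

// Merkle root of a non-empty list of (already double-SHA256) hashes:
// pair up, duplicating the last element when a row has odd length.
fn merkle_root(mut row: Vec<[u8; 32]>) -> [u8; 32] {
    assert!(!row.is_empty());
    while row.len() > 1 {
        if row.len() % 2 == 1 {
            let last = *row.last().unwrap();
            row.push(last);
        }
        row = row.chunks(2).map(|p| sha256d_pair(&p[0], &p[1])).collect();
    }
    row[0]
}
```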
This resolves a very unergonomic API by allowing iteration over the
inputs of a Transaction being signed without needing to take a
conflicting reference to the transaction.
The API is still relatively unsafe in that it's very easy to
generate bogus sighashes with it, but this is much better than it
was, and it's not clear how to fix it further.
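A toy illustration of the kind of borrow conflict at issue (not the
library's API, and not the actual fix): computing a sighash wants the
whole transaction while writing the signature back wants a mutable
borrow of one input, which clashes if you hold an iterator over the
inputs; working by index keeps the two borrows from overlapping.
```
// Minimal stand-ins just to show the borrow pattern.
struct TxIn { script_sig: Vec<u8> }
struct Transaction { input: Vec<TxIn> }

// Stand-in for the real sighash computation, which needs the whole tx.
fn sighash(tx: &Transaction, index: usize) -> u64 {
    (tx.input[index].script_sig.len() + index) as u64
}

fn main() {
    let mut tx = Transaction {
        input: vec![TxIn { script_sig: vec![] }, TxIn { script_sig: vec![1, 2] }],
    };

    for i in 0..tx.input.len() {
        let hash = sighash(&tx, i); // immutable borrow of tx ends here
        tx.input[i].script_sig = hash.to_le_bytes().to_vec(); // mutable borrow is now fine
    }
}
```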
This is a rather large breaking API change, but is significantly
more sensible. In the "do not allow the internal representation to
represent an invalid state" category, this ensures that the witness
cannot have a length other than the number of inputs. Further,
it reduces vec propagation, which may help performance in some
cases by reducing allocs. Finally, this just makes more sense (tm).
Witnesses are a per-input field like the scriptSig; placing them
outside of the TxIn is just how they are serialized, not where
they logically belong.
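A sketch of the shape of the change (field names and types are
approximate, not the library's exact definitions):
```
// Before: witnesses lived on the transaction, parallel to the inputs,
// so nothing stopped witness.len() != input.len().
struct TxInBefore { /* prevout, script_sig, sequence */ }
struct TransactionBefore {
    input: Vec<TxInBefore>,
    witness: Vec<Vec<Vec<u8>>>, // one witness stack per input, hopefully
}

// After: each input carries its own witness, so the invariant holds
// by construction.
struct TxInAfter {
    // prevout, script_sig, sequence, plus:
    witness: Vec<Vec<u8>>,
}
struct TransactionAfter {
    input: Vec<TxInAfter>,
}
```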
This is needed for a sane BIP143 implementation. It should be exactly equivalent to
serializing data into a vector and then hashing that vector, for all types.
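The claimed equivalence, sketched with the `sha2` crate (the real code
goes through the library's own hash engine and consensus-serialization
traits):
```
use sha2::{Digest, Sha256};

fn main() {
    let parts: [&[u8]; 3] = [b"version", b"prevouts", b"sequences"];

    // One way: serialize everything into a Vec, then hash the Vec.
    let mut buf = Vec::new();
    for p in &parts {
        buf.extend_from_slice(p);
    }
    let via_vec = Sha256::digest(&buf);

    // Other way: feed the same bytes straight into the hash engine.
    let mut engine = Sha256::new();
    for p in &parts {
        engine.update(p);
    }
    let via_engine = engine.finalize();

    assert_eq!(via_vec, via_engine);
}
```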
Needed for applications where the tweak and the secret key material are on different
devices (and the one with the secret material does not want to know how to compute
the tweak itself).
This code was unmaintained, is unlikely to work on the majority of systems
(since it holds the whole utxoset in RAM, and not in a terribly efficient
manner), and has a dependency on `eventual` which has been broken for a
long time.
The library no longer compiles on nightly because of this, and without any
known usecases for `UtxoSet`, nor good ability to test it, I'm simply
removing the code.
I recommend that anyone who cares about this extract the code from the previous
commit and create a new crate. It should be more featureful anyway, e.g.
support a backing store.
This is just a convenience type for the (txid, vout) pairs that get produced
a lot in Bitcoin code. To the best of my knowledge there is nowhere this can
be used in the actual library (in particular, TxOutRef.index is a usize for
convenience while TxIn.prev_index is a u32 for correct consensus encoding,
so there is no redundancy here).
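Roughly the shape of the type (hedged; the real definition may differ
in derives and documentation):
```
// Placeholder for the library's hash type, just so this compiles standalone.
type Sha256dHash = [u8; 32];

/// Reference to a transaction output: (txid, index within that transaction).
#[derive(Clone, PartialEq, Eq, Hash, Debug)]
pub struct TxOutRef {
    /// Hash of the transaction containing the output
    pub txid: Sha256dHash,
    /// Index of the output within the transaction (usize for indexing
    /// convenience, unlike TxIn.prev_index which stays u32 for consensus
    /// encoding)
    pub index: usize,
}
```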
A fixed buffer of 12 bytes was unsafely copied from the bytes of a
string - if the string was shorter than that, memory from outside would
leak into the packet.
Replace the unsafe copy by a safe loop. Also add a panic if
an attempt is made to use a command string longer than 12 bytes.
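A sketch of the safe version (the surrounding message struct is
omitted; the copy logic is the point):
```
/// Copy an ASCII command name into the fixed 12-byte field used on the
/// wire, zero-padding on the right and refusing strings that don't fit.
fn command_bytes(cmd: &str) -> [u8; 12] {
    let bytes = cmd.as_bytes();
    // The old fixed-length unsafe copy could read past the end of a short
    // string and leak adjacent memory into the packet; here we copy only
    // what exists and leave the rest zeroed.
    assert!(bytes.len() <= 12, "command string longer than 12 bytes: {}", cmd);
    let mut out = [0u8; 12];
    out[..bytes.len()].copy_from_slice(bytes);
    out
}

fn main() {
    assert_eq!(command_bytes("verack"), *b"verack\0\0\0\0\0\0");
}
```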
Rather than having methods taking &mut self, have them consume self
and return another Builder, so that methods can be chained.
Bump major version number.
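The pattern, shown with a toy builder (not the library's actual
`script::Builder` methods):
```
// Toy builder to show the self-consuming, chainable style.
struct Builder(Vec<u8>);

impl Builder {
    fn new() -> Builder { Builder(vec![]) }
    // Take `self` by value and hand it back, so calls chain...
    fn push_byte(mut self, b: u8) -> Builder {
        self.0.push(b);
        self
    }
    fn into_bytes(self) -> Vec<u8> { self.0 }
}

fn main() {
    // ...instead of a sequence of statements against a `&mut` binding.
    let script = Builder::new().push_byte(0x51).push_byte(0x52).into_bytes();
    assert_eq!(script, vec![0x51, 0x52]);
}
```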
This is easy to satisfy given that the template-to-script code takes a
slice of keys. Just do `&keys[..n_keys]` if you have too many keys. (If
you have too few you're SOL no matter what.) This way we can catch
likely configuration errors without putting much of a burden on users
who legitimately have more keys than the template requires.
Also add a method `required_keys()` to Template so that users can check
how many keys they ought to have.
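Hedged usage sketch; only `required_keys()` is from this change, the
template type and key list below are stand-ins:
```
struct Template { slots: usize }

impl Template {
    fn required_keys(&self) -> usize { self.slots }
}

fn main() {
    let template = Template { slots: 2 };
    let keys = vec!["key-a", "key-b", "key-c"]; // pretend these are pubkeys

    // Too many keys is now a caught error, so callers trim explicitly:
    let trimmed = &keys[..template.required_keys()];
    assert_eq!(trimmed.len(), 2);
}
```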
This is easy for downstream to add, not easy for them to remove. Plus scripts
have a pretty recognizable form and are usually obvious from context anyway.
Does not do stuff like validating the form of contracts, since this seems like
more of an application thing. Does not even distinguish a "nonce", just assumes
the contract has whatever uniqueness is needed baked in.
Breaking changes are:
- `opcode::All::from_u8` is now `From<u8>`
- `script::Builder::from_vec` is now `From<Vec<u8>>`
- `script::Script::from_vec` is now `From<Vec<u8>>`
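For example (module paths approximate, hedged against the version in
question):
```
use bitcoin::blockdata::opcodes;
use bitcoin::blockdata::script::{Builder, Script};

fn main() {
    let op = opcodes::All::from(0x51u8);           // was opcode::All::from_u8(0x51)
    let script = Script::from(vec![0x51, 0x52]);   // was Script::from_vec(...)
    let builder = Builder::from(vec![0x51, 0x52]); // was Builder::from_vec(...)
    let _ = (op, script, builder);
}
```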
There is still a lot of work to do modernizing the library, but the code
compiles cleanly with all unit tests passing now. Probably not much can
be done now until wizards-wallet is in better shape and the library is
actually in use.
Work is stalled on some other library work (to give better lifetime
requirements on `eventual::Future` and avoid some unsafety), so
committing here.
There are only three errors left in this round :)
Also all the indenting is done, so there should be no more massive
rewrite commits. Depending on how invasive the lifetime-error fixes
are, I may even be able to do sanely sized commits from here on.
27 files changed, 3944 insertions(+), 3812 deletions(-) :} I've
started doing whitespace changes as well, I want everything to
be 4-space tabs from now on.
BTW after all this is done I'm gonna indent the entire codebase...
so `git blame` is gonna be totally broken anyway, hence my
capricious cadence of commits.
Will take some experimentation to see if this is what I want the API
to be, if the memory usage is acceptable, etc.
This will force a total reindex for wizards-wallet users.
[breaking-change]
Reconnecting an existing socket simply was not working; the Rust socket
did not expose any methods for reconnection, so I simply tried calling
connect() again. As near as I can tell, this was a no-op --- which makes
sense because both the sending and receiving threads had their own copy
of the Socket, and it's not clear what the synchronization behaviour
should have been.
Instead, if the connection fails, we relay this information to the main
thread, wait for an acknowledgement, then simply destroy the listening
thread. The caller can then simply call `start()` again.
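The scheme in miniature, using std channels (a toy, not the actual
networking code):
```
use std::sync::mpsc;
use std::thread;

enum FromSocket {
    Disconnected,
}

fn main() {
    let (to_main, from_socket) = mpsc::channel();
    let (to_socket, ack) = mpsc::channel::<()>();

    // Listening thread: on connection failure, tell the main thread,
    // wait for the acknowledgement, then exit rather than trying to
    // reuse the dead socket.
    let listener = thread::spawn(move || {
        // ...read loop would go here; pretend the connection just died...
        to_main.send(FromSocket::Disconnected).unwrap();
        ack.recv().unwrap(); // wait for the main thread to acknowledge
        // thread ends; the caller spins up a fresh one via start()
    });

    // Main thread: acknowledge the failure, reap the thread, and (in the
    // real code) call start() again to rebuild everything from scratch.
    match from_socket.recv().unwrap() {
        FromSocket::Disconnected => {
            to_socket.send(()).unwrap();
            listener.join().unwrap();
        }
    }
}
```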
Turns out TOML does not support tables named "", so we instead encode
the accounts list as an array rather than a name-keyed hashmap. This
is fine since the account name is in the account structure itself
anyway.
`verify` cannot handle illegally padded signatures because it takes an object
of type `Signature`, which is a fixed-size type. This should have been part
of the previous commit --- an important lesson about running the unit tests
before every push!
Sorry, this is needed to enable proper txid/vout lookups for the address index.
This means any users of wizards-wallet need to rebuild their utxo sets, and
will also mean an increase in RAM usage.
I was trying to do something clever by making sure that the numeric
bounds were consistent with whatever ordering relation we were checking,
AND that the boolean values were also consistent... this is wrong in the
case of negative numbers, and pointless anyway since I recently fixed
`set_bool_value`, `set_num_lo` and `set_num_hi` to update both numeric
and boolean information if possible, so they will always contain the
same info.
Now unspendable outs are determined by attempting to create a minimal
satisfying input script. If this can't be done, the output is unspendable.
(Unfortunately this "minimal satisfying script" is not (yet) something
that can be shown to the user, since it is more a bundle of constraints
than actual data pushes.)
Current limitations:
- OP_ADD and friends mean the checker gives the script a free pass.
There is no fundamental reason for this, I just didn't get to it
yet.
- Pubkeys are checked for DER encoding but signatures aren't. This
is because secp256k1 exposes a method for pubkeys, but not one
for sigs :). Signatures are loosely length checked.
Sorry for so many things in one commit ... it was an iterative
process depending as I worked on BIP32 to get the other stuff
working. (And I was too lazy to separate it out after the fact.)
A breaking change from the array newtyping is that Show for Sha256dHash
now outputs the slice Show. You have to use `{:x}` to get the old hex
output.
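In today's terms (old Rust's `Show` corresponds roughly to `Debug`),
with a toy newtype standing in for Sha256dHash:
```
use std::fmt;

// Toy stand-in for the hash newtype.
#[derive(Debug)]
struct Hash([u8; 4]);

impl fmt::LowerHex for Hash {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        for b in &self.0 {
            write!(f, "{:02x}", b)?;
        }
        Ok(())
    }
}

fn main() {
    let h = Hash([0xde, 0xad, 0xbe, 0xef]);
    println!("{:?}", h); // Hash([222, 173, 190, 239]) -- slice-style default
    println!("{:x}", h); // deadbeef -- use {:x} for the old hex output
}
```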
We no longer confirm that chained transactions occur in the correct order
in blocks, which is a minor consensus regression and should be dealt with
in future.
Looks like to implement the crypto opcodes I may need to switch from
rust-crypto to rust-openssl.. or implement RIPEMD-160 for rust-crypto.
In either case I will need to generalize the hash.rs stuff to support
other hashes, so I'm committing here as a checkpoint before doing all
that.
I noticed that the little/big endian hex string functions for Sha256dHash
did not match my intuition. What we should have is that the raw bytes
correspond to a little-endian representation (since we convert to Uint256
by transmuting, and Uint256's have little-endian representation) while
the reversed raw bytes are big-endian.
This means that the output from `sha256sum` is "little-endian", while the
standard "zeros on the left" output from bitcoind is "big-endian". This
is correct since we think of blockhashes as being "below the target" when
they have lots of zeros on the left, and we also notice that when hashing
Bitcoin objects with sha256sum that the output hashes are always reversed.
These two functions le_hex_string and be_hex_string should really not be
used outside of the library; the Encodable trait should give access to a
"big endian" representation while ConsensusEncodable gives access to a
"little endian" representation. That way we describe the split in terms
of user-facing/consensus code rather than big/little endian code, which
is a better way of thinking about it. After all, a hash is a collection
of bytes, not a number --- it doesn't have an intrinsic endianness.
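Concretely (a sketch over raw bytes, not the library functions
themselves):
```
// The raw bytes, printed in order, give the "little-endian" hex string
// (what sha256sum prints); reversing them gives the "big-endian" string
// with the leading zeros that bitcoind shows for block hashes.
fn le_hex_string(bytes: &[u8]) -> String {
    bytes.iter().map(|b| format!("{:02x}", b)).collect()
}

fn be_hex_string(bytes: &[u8]) -> String {
    bytes.iter().rev().map(|b| format!("{:02x}", b)).collect()
}

fn main() {
    let hash = [0xabu8, 0xcd, 0x00, 0x00]; // pretend: a 32-byte block hash
    assert_eq!(le_hex_string(&hash), "abcd0000");
    assert_eq!(be_hex_string(&hash), "0000cdab");
}
```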
Oh, and by the way, to compute a sha256d hash from sha256sum, you do
```
echo -n 'data' | sha256sum | xxd -r -p | sha256sum
```
Since TOML will not encode C-like enums as strings, we do it
ourselves. This is also worthwhile so that we can get the
lowercase "bitcoin" and "testnet" as encodings for the actual
enum values, which are more verbose and camel case.
A pretty serious oversight :) this was not noticed because I was
simultaneously dealing with a serious tcp connection bug in rustc,
and I had thought bitcoind's angry disconnects were a further
symptom of that.
This is a massive simplification, fixes a couple of endianness bugs (though
not all of them, I don't think), should give a speedup, and gets rid of the
`serialize_iter` crap.