Taking an external dependency just to convert ints to byte arrays
is somewhat of a waste, especially when Rust isn't very aggressive
about doing cross-crate LTO.
Note that the latest LLVM pattern-matches this, and while I haven't
tested it, that should mean there is no loss of optimization.
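For illustration, a dependency-free conversion along these lines (a sketch, not the crate's actual code) is just a shift-and-mask loop, which is exactly the pattern LLVM recognizes and collapses into a plain store:

```
// Minimal sketch of int-to-bytes without an external crate.
fn u64_to_le_bytes(v: u64) -> [u8; 8] {
    let mut out = [0u8; 8];
    for (i, byte) in out.iter_mut().enumerate() {
        // LLVM pattern-matches this loop into a single little-endian store.
        *byte = ((v >> (i * 8)) & 0xff) as u8;
    }
    out
}

fn main() {
    assert_eq!(u64_to_le_bytes(0x0102030405060708), [8, 7, 6, 5, 4, 3, 2, 1]);
}
```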
* add client-side block filters, with code from murmel; use siphash from bitcoin_hashes; pass the Bitcoin Core tests; upgrade to bitcoin_hashes 0.7
* add a filter.filter_id() test; use BlockFilter directly
* fix edge cases when matching an empty query set or using an empty filter
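To make the matching semantics concrete, here is a self-contained toy sketch of the idea; a plain HashSet and an FNV-style hash stand in for BIP158's Golomb-coded set and keyed siphash, so this is illustrative only:

```
use std::collections::HashSet;

struct ToyFilter {
    hashed: HashSet<u64>,
}

impl ToyFilter {
    fn new(elements: &[&[u8]]) -> ToyFilter {
        ToyFilter { hashed: elements.iter().map(|e| toy_hash(e)).collect() }
    }

    // The edge cases from the commit: an empty query matches nothing,
    // and an empty filter matches no query.
    fn match_any(&self, query: &[&[u8]]) -> bool {
        query.iter().any(|e| self.hashed.contains(&toy_hash(e)))
    }
}

// FNV-style stand-in for BIP158's keyed siphash.
fn toy_hash(data: &[u8]) -> u64 {
    data.iter()
        .fold(0xcbf29ce484222325, |h, b| (h ^ u64::from(*b)).wrapping_mul(0x100000001b3))
}

fn main() {
    let filter = ToyFilter::new(&[b"script_a", b"script_b"]);
    assert!(filter.match_any(&[b"script_b"]));
    assert!(!filter.match_any(&[])); // empty query set never matches
}
```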
For p2wsh the scriptCode is the witness script, but for p2wpkh, it's the
equivalent legacy p2pkh output script.
The name scriptCode is the one used in the BIP, so keeping it is less confusing.
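For reference, a minimal sketch of the p2wpkh scriptCode per BIP143; the bytes are the standard p2pkh opcodes wrapped around the 20-byte pubkey hash from the witness program:

```
// BIP143 scriptCode for p2wpkh: the classic p2pkh output script.
fn p2wpkh_script_code(pubkey_hash: &[u8; 20]) -> Vec<u8> {
    let mut script = Vec::with_capacity(25);
    script.push(0x76); // OP_DUP
    script.push(0xa9); // OP_HASH160
    script.push(0x14); // push 20 bytes
    script.extend_from_slice(pubkey_hash);
    script.push(0x88); // OP_EQUALVERIFY
    script.push(0xac); // OP_CHECKSIG
    script
}

fn main() {
    assert_eq!(p2wpkh_script_code(&[0u8; 20]).len(), 25);
}
```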
This is a port of the bitcoin-core CPartialMerkleTree and CMerkleBlock classes.
Here they are called PartialMerkleTree and MerkleBlock.
These are useful for SPV clients that wish to verify that a transaction is
present in a specific block in an authenticated way.
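A self-contained sketch of what such a client checks, reduced to a single merkle branch with a toy hash in place of sha256d; the real PartialMerkleTree encodes a whole pruned tree rather than one branch:

```
// Toy combiner standing in for sha256d of the concatenated pair.
fn hash_pair(left: u64, right: u64) -> u64 {
    (left ^ right.rotate_left(17)).wrapping_mul(0x9e3779b97f4a7c15)
}

/// `branch` lists sibling hashes from leaf to root; `index` is the leaf's
/// position, whose low bit at each level says which side we are on.
fn verify_branch(txid: u64, branch: &[u64], mut index: usize, root: u64) -> bool {
    let mut h = txid;
    for sibling in branch {
        h = if index & 1 == 0 { hash_pair(h, *sibling) } else { hash_pair(*sibling, h) };
        index >>= 1;
    }
    h == root
}

fn main() {
    // Two-leaf tree: root = H(leaf0, leaf1).
    let (leaf0, leaf1) = (11, 22);
    let root = hash_pair(leaf0, leaf1);
    assert!(verify_branch(leaf0, &[leaf1], 0, root));
    assert!(verify_branch(leaf1, &[leaf0], 1, root));
}
```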
- Add merging logic for PartiallySignedTransactions
- Add (en)decoding logic for PartiallySignedTransaction
- Add converting constructor logic from Transaction for
PartiallySignedTransaction
- Add extracting constructor logic from PartiallySignedTransaction for
Transaction
Squashed in fixes from stevenroose <stevenroose@gmail.com>
- Prevent PSBT::extract_tx from panicking
- Make PartiallySignedTransaction fields public
- Implement psbt::Map trait for psbt::Output
- Add (en)decoding logic for psbt::Output
- Implement PSBT (de)serialization trait for relevant psbt::Output types
- Add macro for merging fields for PSBT key-value maps
- Add macro for implementing decoding logic for PSBT key-value maps
- Add convenience macro for implementing both encoding and decoding
logic for PSBT key-value maps
- Add macro for inserting raw PSBT key-value pairs into PSBT key-value
maps
- Add macro for getting raw PSBT key-value pairs from PSBT key-value
maps
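Putting the pieces above together, a hedged sketch of the intended flow; the names follow the bullets above (from_unsigned_tx, merge, extract_tx), but exact paths and signatures may differ between rust-bitcoin releases:

```
use bitcoin::blockdata::transaction::Transaction;
use bitcoin::util::psbt::PartiallySignedTransaction;

fn psbt_flow(unsigned: Transaction,
             from_counterparty: PartiallySignedTransaction)
             -> Result<Transaction, Box<dyn std::error::Error>> {
    // Converting constructor: wrap an unsigned Transaction in a PSBT.
    let mut psbt = PartiallySignedTransaction::from_unsigned_tx(unsigned)?;
    // Merging logic: fold in what another participant contributed.
    psbt.merge(from_counterparty)?;
    // Extracting constructor: recover the final Transaction (no longer panics).
    Ok(psbt.extract_tx())
}
```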
We continue to support only compressed keys when doing key derivation,
but de/serialization of uncompressed keys will now work, and it will
be easier/more consistent to implement PSBT on top of this.
This change also moves the secp256k1::Error wrapper from util::Error to
consensus::encode::Error, since we do not use it anywhere else. We can
add it back to util::Error once we have instances of secp256k1::Error
that are not related to consensus::encode.
- Switch util::address::Payload::Pubkey variant to wrap
  util::key::PublicKey
- Switch util::address::Address::p*k* constructors to use
  util::key::PublicKey
- Fix tests for the aforementioned switch
- Add convenience methods for util::key::PrivateKey to
  util::key::PublicKey conversion
- Switch BIP143 tests to use util::key::PublicKey
- Move network::encodable::* to consensus::encode::*
- Rename Consensus{En,De}codable to {En,De}codable (now under
consensus::encode)
- Move network::serialize::Error to consensus::encode::Error
- Remove Raw{En,De}coder, implement {En,De}coder for T: {Write,Read}
instead
- Move network::serialize::Simple{En,De}coder to
consensus::encode::{En,De}coder
- Rename util::Error::Serialize to util::Error::Encode
- Modify comments to refer to new names
- Modify files to refer to new names
- Expose {En,De}cod{able,er}, {de,}serialize, Params
- Do not return Result for serialize{,_hex} as serializing to a Vec
should never fail
- Add serialize::Error::ParseFailed(&'static str) variant for
serialization errors without context
- Add appropriate variants to replace network::Error::Detail for
serialization error with context
- Remove error method from SimpleDecoders
- Separate serialize::Error and network::Error from util::Error
- Remove unneeded propagate_err and consume_err
- Change fuzzing code to ignore Err type
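A small sketch of the renamed entry points in use, assuming the usual serialize/deserialize free functions in consensus::encode; note serialize returning Vec<u8> directly while deserialize keeps its Result:

```
use bitcoin::blockdata::transaction::Transaction;
use bitcoin::consensus::encode::{self, deserialize, serialize};

fn round_trip(tx: &Transaction) -> Result<Transaction, encode::Error> {
    // No Result here: writing into a Vec cannot fail.
    let bytes: Vec<u8> = serialize(tx);
    // Decoding untrusted bytes can fail, so this keeps its Result.
    deserialize(&bytes)
}
```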
Previously this structure was unused; it's now being used by the `TxIn`
structure to simplify the code a little bit and avoid confusion. Also,
the rust-lightning source code has an `OutPoint` similar to this one,
but with the `vout` index as a `u16` to avoid unsafe conversions.
I've added two new methods to `OutPoint`:
- `null`: Creates a new "null" `OutPoint`.
- `is_null`: Checks if the given `OutPoint` is null.
Signed-off-by: Jean Pierre Dudey <jeandudey@hotmail.com>
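A self-contained sketch of the two methods' semantics; the "null" outpoint is the placeholder a coinbase input carries, an all-zero txid with index u32::MAX:

```
#[derive(PartialEq)]
struct OutPoint {
    txid: [u8; 32],
    vout: u32,
}

impl OutPoint {
    // The coinbase placeholder: zeroed txid, index 0xFFFFFFFF.
    fn null() -> OutPoint {
        OutPoint { txid: [0; 32], vout: u32::MAX }
    }
    fn is_null(&self) -> bool {
        *self == OutPoint::null()
    }
}

fn main() {
    assert!(OutPoint::null().is_null());
    assert!(!OutPoint { txid: [1; 32], vout: 0 }.is_null());
}
```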
The `serde_struct_impl!` macro has been modified to be compatible
with the serde 1.0 crate. We use this macro rather than the `serde_derive`
crate because the latter doesn't support Rust 1.14.0, which is shipped
on Debian stable, and we should remain compatible with it.
Two new features were added:
- "serde": enables serialization/deserialization for common types, it pulls
the serde 1.0 dependency.
- "serde-decimal": enables serialization/deserialization for `UDecimal`/`Decimal`,
this pulls the strason 0.4 depdendency and the serde 1.0 dependency.
Signed-off-by: Jean Pierre Dudey <jeandudey@hotmail.com>
- Add derive_pub to ExtendedPubKey
- Add derive_priv to ExtendedPrivKey
- Removed from_path from ExtendedPrivKey as it is superseded by
derive_priv
- Add checking of derive_pub and derive_priv to test_path
- Add checking of correct error when invoking ckd_pub on a hardened
ChildNumber
- Add test vector 3 from BIP32 specification
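A hedged sketch of the new derivation entry points; the names follow the bullets above, but the exact signatures, secp context types, and error types vary across versions, so treat this as an outline rather than the actual API:

```
use bitcoin::util::bip32::{ChildNumber, ExtendedPrivKey, ExtendedPubKey};

fn derive_account(secp: &secp256k1::Secp256k1<secp256k1::All>,
                  master: &ExtendedPrivKey)
                  -> Result<ExtendedPubKey, bitcoin::util::bip32::Error> {
    // Hardened steps only exist on the private side, which is why
    // invoking ckd_pub on a hardened ChildNumber has to return an error.
    let path = [ChildNumber::from_hardened_idx(44),
                ChildNumber::from_normal_idx(0)];
    let account = master.derive_priv(secp, &path)?;
    Ok(ExtendedPubKey::from_private(secp, &account))
}
```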
There seemed to be some confusion as to whether the internal
representation of a ChildNumber is supposed to be the index (0..2^31-1
for _both_ Normal and Hardened) or the actual number (0..2^31-1 for
Normal and 2^31..2^32-1 for Hardened). This commit fixes that
confusion.
- Make clear that the internal representation is the index rather
than the actual number
- Make the internal representation non-public
- Provide methods for creating valid ChildNumbers
- Change relevant callers and tests to conform to this new ChildNumber
My rationale for using the index rather than the actual number as the
internal representation is that the difference between the two enum
variants already encodes whether a ChildNumber is a normal one or a
hardened one, so the only bit of extra information left to encode is its
index.
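A self-contained sketch of that design (with a toy error type): the variant carries hardened-ness, the payload is always the index, and the on-the-wire u32 is derived on demand rather than stored:

```
#[derive(Copy, Clone, Debug)]
enum ChildNumber {
    Normal { index: u32 },
    Hardened { index: u32 },
}

impl ChildNumber {
    // Constructors enforce that a valid index is always < 2^31.
    fn from_normal_idx(index: u32) -> Result<ChildNumber, &'static str> {
        if index < (1 << 31) { Ok(ChildNumber::Normal { index }) } else { Err("index >= 2^31") }
    }
    fn from_hardened_idx(index: u32) -> Result<ChildNumber, &'static str> {
        if index < (1 << 31) { Ok(ChildNumber::Hardened { index }) } else { Err("index >= 2^31") }
    }
    /// The "actual number" used in serialization: hardened sets the top bit.
    fn to_u32(self) -> u32 {
        match self {
            ChildNumber::Normal { index } => index,
            ChildNumber::Hardened { index } => (1 << 31) | index,
        }
    }
}

fn main() {
    assert_eq!(ChildNumber::from_hardened_idx(1).unwrap().to_u32(), 0x8000_0001);
}
```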
ExtendedPubKey and ExtendedPrivKey implemented `ToString` directly but
Rust documentation says to implement `Display` and get the `ToString`
implementation for free.
Signed-off-by: Jean Pierre Dudey <jeandudey@hotmail.com>
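A minimal illustration of the point on a toy type: implement Display, and std's blanket `impl<T: Display> ToString for T` supplies to_string for free:

```
use std::fmt;

struct Xpub([u8; 4]); // stand-in; the real type is much bigger

impl fmt::Display for Xpub {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        for b in &self.0 {
            write!(f, "{:02x}", b)?;
        }
        Ok(())
    }
}

fn main() {
    // to_string() comes from the blanket impl, not from code here.
    assert_eq!(Xpub([0xde, 0xad, 0xbe, 0xef]).to_string(), "deadbeef");
}
```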
Addresses #96.
Turns out it was being used for hex encoding/decoding, so I replaced that with the `hex` crate.
I chose to import the `decode` method as:
```
use hex::decode as hex_decode;
```
so that it is clear to the reader what is being decoded when it is called. "decode" is such a generic-sounding function name that it would get confusing otherwise.
In a project of mine I needed to check the merkle root before
moving some Vec<Transaction>s around, so I needed to be able to
calculate the merkle root of a Vec<Sha256dHash> directly.
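A self-contained sketch of the fold in question, with a toy hash standing in for sha256d; as in Bitcoin, the last entry is duplicated on odd-length levels:

```
fn hash_pair(left: u64, right: u64) -> u64 {
    (left ^ right.rotate_left(17)).wrapping_mul(0x9e3779b97f4a7c15)
}

fn merkle_root(mut level: Vec<u64>) -> Option<u64> {
    if level.is_empty() {
        return None;
    }
    while level.len() > 1 {
        if level.len() % 2 == 1 {
            let last = *level.last().unwrap();
            level.push(last); // Bitcoin duplicates the odd element out
        }
        level = level.chunks(2).map(|p| hash_pair(p[0], p[1])).collect();
    }
    Some(level[0])
}

fn main() {
    assert_eq!(merkle_root(vec![]), None);
    // Duplicating the last element by hand gives the same root.
    assert_eq!(merkle_root(vec![1, 2, 3]), merkle_root(vec![1, 2, 3, 3]));
}
```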
This resolves a very unergonomic API by allowing iteration over the
inputs of a Transaction being signed without needing to take a
conflicting reference to the transaction.
The API is still relatively unsafe in that it's very easy to
generate bogus sighashes with it, but this is much better than it
was, and it's not clear how to fix it further.
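A toy sketch of the borrow-splitting shape (all types here are stand-ins, not the library's): precompute the shared components once into an owned value, so each per-input call borrows only that input:

```
struct TxIn { value: u64 }
struct Tx { inputs: Vec<TxIn> }

/// Stand-in for the cached common data (hash_prevouts etc. in BIP143).
struct SighashCache { common: u64 }

impl SighashCache {
    fn new(tx: &Tx) -> SighashCache {
        // The borrow of `tx` ends here; the cache is owned.
        SighashCache { common: tx.inputs.iter().map(|i| i.value).sum() }
    }
    // Takes a single input, not the whole Tx, so the caller can iterate
    // tx.inputs freely without a conflicting reference.
    fn sighash(&self, input: &TxIn) -> u64 {
        self.common ^ input.value
    }
}

fn main() {
    let tx = Tx { inputs: vec![TxIn { value: 1 }, TxIn { value: 2 }] };
    let cache = SighashCache::new(&tx);
    let sighashes: Vec<u64> = tx.inputs.iter().map(|i| cache.sighash(i)).collect();
    assert_eq!(sighashes.len(), 2);
}
```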
This is needed for a sane BIP143 implementation. It should be exactly equivalent to
serializing data into a vector then hashing that vector for all types.
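A self-contained sketch of the equivalence being claimed, with a toy engine in place of the real sha256d engine and encoder trait: feeding serialized bytes straight into the engine must match collecting them in a Vec first and hashing that:

```
struct ToyHashEngine { state: u64 }

impl ToyHashEngine {
    fn new() -> ToyHashEngine { ToyHashEngine { state: 0xcbf29ce484222325 } }
    fn input(&mut self, data: &[u8]) {
        for b in data {
            self.state = (self.state ^ u64::from(*b)).wrapping_mul(0x100000001b3);
        }
    }
    fn finalize(self) -> u64 { self.state }
}

fn main() {
    let parts: [&[u8]; 2] = [b"version", b"inputs"];

    // Encoder-over-engine path: bytes go straight into the hash state.
    let mut direct = ToyHashEngine::new();
    for p in &parts { direct.input(p); }

    // Vec path: serialize into a buffer, then hash the buffer.
    let mut buffer = Vec::new();
    for p in &parts { buffer.extend_from_slice(p); }
    let mut via_vec = ToyHashEngine::new();
    via_vec.input(&buffer);

    assert_eq!(direct.finalize(), via_vec.finalize());
}
```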
Needed for applications where the tweak and the secret key material are on different
devices (and the one with the secret material does not want to know how to compute
the tweak itself).
Rather than having methods taking &mut self, have them consume self
and return another Builder, so that methods can be chained.
Bump major version number.
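A minimal sketch of the consuming-builder pattern this describes: each method takes self by value and returns the Builder, so calls chain without a mutable binding:

```
#[derive(Default)]
struct Builder(Vec<u8>);

impl Builder {
    // Consumes self and hands it back, enabling chaining.
    fn push_opcode(mut self, op: u8) -> Builder {
        self.0.push(op);
        self
    }
    fn into_script(self) -> Vec<u8> {
        self.0
    }
}

fn main() {
    let script = Builder::default()
        .push_opcode(0x76) // OP_DUP
        .push_opcode(0xa9) // OP_HASH160
        .into_script();
    assert_eq!(script, vec![0x76, 0xa9]);
}
```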
This is easy to satisfy given that the template-to-script code takes a
slice of keys. Just do &keys[..n_keys] if you have too many keys. (If
you have too few you're SOL no matter what.) This way we can catch
likely configuration errors without putting much of a burden on users
who legitimately have more keys than the template requires.
Also add a method required_keys() to Template so that users can check
how many keys they ought to have.
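A sketch of the slice trick with stand-in types (Template here is illustrative, not the library's actual type): query required_keys(), then pass exactly that prefix:

```
struct Template { slots: usize }

impl Template {
    fn required_keys(&self) -> usize { self.slots }
    fn to_script(&self, keys: &[u32]) -> Result<Vec<u32>, &'static str> {
        if keys.len() != self.slots {
            return Err("wrong number of keys"); // catches configuration errors
        }
        Ok(keys.to_vec())
    }
}

fn main() {
    let template = Template { slots: 2 };
    let keys = [10, 20, 30]; // too many: trim to what the template requires
    let n = template.required_keys();
    assert!(template.to_script(&keys[..n]).is_ok());
}
```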
Does not do stuff like validating the form of contracts, since this seems like
more of an application thing. Does not even distinguish a "nonce", just assumes
the contract has whatever uniqueness is needed baked in.
Breaking changes are:
opcode::All::from_u8 is now From<u8>
script::Builder::from_vec is now From<Vec<u8>>
script::Script::from_vec is now From<Vec<u8>>
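A minimal illustration of the migration pattern on a stand-in type: an ad-hoc from_vec constructor becomes a std From impl, which also provides Into for free:

```
struct Script(Vec<u8>);

impl From<Vec<u8>> for Script {
    fn from(data: Vec<u8>) -> Script {
        Script(data)
    }
}

fn main() {
    // Into comes with From; no bespoke constructor needed.
    let script: Script = vec![0x51].into(); // OP_1
    assert_eq!(script.0, vec![0x51]);
}
```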
There is still a lot of work to do modernizing the library, but the code
compiles cleanly with all unit tests passing now. Probably not much can
be done now until wizards-wallet is in better shape and the library is
actually in use.
Work is stalled on some other library work (to give better lifetime
requirements on `eventual::Future` and avoid some unsafety), so
committing here.
There are only three errors left in this round :)
Also all the indenting is done, so there should be no more massive
rewrite commits. Depending on how invasive the lifetime-error fixes
are, I may even be able to do sanely sized commits from here on.
27 files changed, 3944 insertions(+), 3812 deletions(-) :} I've
started doing whitespace changes as well, I want everything to
be 4-space tabs from now on.
BTW after all this is done I'm gonna indent the entire codebase...
so `git blame` is gonna be totally broken anyway, hence my
capricious cadence of commits.
Now unspendable outs are determined by attempting to create a minimal
satisfying input script. If this can't be done, the output is unspendable.
(Unfortunately this "minimal satisfying script" is not (yet) something
that can be shown to the user, since it is more a bundle of constraints
than actual data pushes.)
Current limitations:
- OP_ADD and friends mean the checker gives the script a free pass.
There is no fundamental reason for this, I just didn't get to it
yet.
- Pubkeys are checked for DER encoding but signatures aren't. This
is because secp256k1 exposes a method for pubkeys, but not one
for sigs :). Signatures are loosely length checked.
Sorry for so many things in one commit ... it was an iterative
process, fixing things as I worked on BIP32 to get the other stuff
working. (And I was too lazy to separate it out after the fact.)
A breaking change from the array newtyping is that Show for Sha256dHash
now outputs the slice's Show formatting. You have to use `{:x}` to get
the old hex output.
It looks like to implement the crypto opcodes I may need to switch from
rust-crypto to rust-openssl... or implement RIPEMD-160 for rust-crypto.
In either case I will need to generalize the hash.rs stuff to support
other hashes, so I'm committing here as a checkpoint before doing all
that.
I noticed that the little/big endian hex string functions for Sha256dHash
did not match my intuition. What we should have is that the raw bytes
correspond to a little-endian representation (since we convert to Uint256
by transmuting, and Uint256's have little-endian representation) while
the reversed raw bytes are big-endian.
This means that the output from `sha256sum` is "little-endian", while the
standard "zeros on the left" output from bitcoind is "big-endian". This
is correct since we think of blockhashes as being "below the target" when
they have lots of zeros on the left, and we also notice that when hashing
Bitcoin objects with sha256sum that the output hashes are always reversed.
These two functions le_hex_string and be_hex_string should really not be
used outside of the library; the Encodable trait should give access to a
"big endian" representation while ConsensusEncodable gives access to a
"little endian" representation. That way we describe the split in terms
of user-facing/consensus code rather than big/little endian code, which
is a better way of thinking about it. After all, a hash is a collection
of bytes, not a number --- it doesn't have an intrinsic endianness.
Oh, and by the way, to compute a sha256d hash from sha256sum, you do
echo -n 'data' | sha256sum | xxd -r -p | sha256sum
A pretty serious oversight :) this was not noticed because I was
simultaneously dealing with a serious tcp connection bug in rustc,
and I had thought bitcoind's angry disconnects were a further
symptom of that.
This is a massive simplification; it fixes a couple of endianness bugs
(though I don't think all of them), should give a speedup, and gets rid
of the `serialize_iter` crap.
I think this is what I want to do for everything json-visible...perhaps
I will not be able to keep the macro for it though, since there are
some clever variations on it (e.g. blocks should have their header's
hash as a field, txes should appear as txids unless verbose output is
requested, etc.)
We get a speedup (~5%) and memory savings (~10%) on initial sync from
using a HashMap, though it's hard to tell precisely how much is saved
because it's quite nonlinear.
I haven't tested de/serialization. Some work needs to be done there to
split up the UTXO set, since it takes forever to save/load.
We were conflicting with the Rust stdlib trait Hash, which is used
by various data structures that need a general hash. Also implement
Hash for Sha256dHash so that we can use bitcoin hashes as keys in
such data structures.
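A self-contained sketch of the disambiguation (shortened toy type): the type keeps its own sha256d meaning while implementing std's Hash by feeding its bytes to the hasher, which is what lets it key a HashMap or HashSet:

```
use std::collections::HashSet;
use std::hash::{Hash, Hasher};

struct Sha256dHash([u8; 32]);

impl Hash for Sha256dHash {
    fn hash<H: Hasher>(&self, state: &mut H) {
        // The bytes are already uniformly distributed; just feed them in.
        state.write(&self.0);
    }
}

impl PartialEq for Sha256dHash {
    fn eq(&self, other: &Sha256dHash) -> bool {
        self.0 == other.0
    }
}
impl Eq for Sha256dHash {}

fn main() {
    let mut seen = HashSet::new();
    assert!(seen.insert(Sha256dHash([0; 32])));
    assert!(!seen.insert(Sha256dHash([0; 32]))); // duplicate key collides
}
```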