The protocol has a bug where a 0u8 is pushed at the end of each
block header on the wire in headers messages. Why this bug came
about is unrelated and shouldn't impact API design.
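To make the quirk concrete, here is a rough decoding sketch (illustrative only, not the crate's actual parser): each 80-byte header in a headers message arrives with a trailing 0u8 that has to be read and thrown away.

```
use std::io::{self, Read};

/// Illustrative headers-message reader: every 80-byte header on the wire is
/// followed by a 0u8 that must be consumed and discarded.
fn read_headers<R: Read>(r: &mut R, count: usize) -> io::Result<Vec<[u8; 80]>> {
    let mut headers = Vec::with_capacity(count);
    for _ in 0..count {
        let mut header = [0u8; 80];
        r.read_exact(&mut header)?;
        // The protocol quirk: a trailing zero byte after every header.
        let mut pad = [0u8; 1];
        r.read_exact(&mut pad)?;
        headers.push(header);
    }
    Ok(headers)
}

fn main() -> io::Result<()> {
    // One zeroed header plus its trailing 0u8.
    let wire = vec![0u8; 81];
    let headers = read_headers(&mut wire.as_slice(), 1)?;
    assert_eq!(headers.len(), 1);
    Ok(())
}
```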
- Move network::encodable::* to consensus::encode::*
- Rename Consensus{En,De}codable to {En,De}codable (now under
consensus::encode)
- Move network::serialize::Error to consensus::encode::Error
- Remove Raw{En,De}coder, implement {En,De}coder for T: {Write,Read}
instead
- Move network::serialize::Simple{En,De}coder to
consensus::encode::{En,De}coder
- Rename util::Error::Serialize to util::Error::Encode
- Modify comments to refer to new names
- Modify files to refer to new names
- Expose {En,De}cod{able,er}, {de,}serialize, Params
- Do not return Result for serialize{,_hex} as serializing to a Vec
  should never fail (sketched below)
- Add serialize::Error::ParseFailed(&'static str) variant for
serialization errors without context
- Add appropriate variants to replace network::Error::Detail for
serialization error with context
- Remove error method from SimpleDecoders
- Separate serialize::Error and network::Error from util::Error
- Remove unneeded propagate_err and consume_err
- Change fuzzing code to ignore Err type
Addresses #96.
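As a rough illustration of the serialize change in the list above (stand-in trait signatures, not the crate's exact API): writing into a Vec<u8> cannot fail, so serialize can return the bytes directly instead of a Result.

```
use std::io::{self, Write};

/// Illustrative stand-in for the renamed consensus::encode::Encodable trait;
/// the crate's real signatures may differ.
pub trait Encodable {
    fn consensus_encode<W: Write>(&self, w: &mut W) -> Result<usize, io::Error>;
}

impl Encodable for u32 {
    fn consensus_encode<W: Write>(&self, w: &mut W) -> Result<usize, io::Error> {
        w.write_all(&self.to_le_bytes())?;
        Ok(4)
    }
}

/// serialize no longer returns a Result: writing into a Vec cannot fail.
pub fn serialize<T: Encodable>(data: &T) -> Vec<u8> {
    let mut buf = Vec::new();
    data.consensus_encode(&mut buf)
        .expect("writing to a Vec never fails");
    buf
}

fn main() {
    assert_eq!(serialize(&1u32), vec![1, 0, 0, 0]);
}
```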
Turns out it was being used for hex encoding/decoding, so I replaced that with the `hex` crate.
I chose to import the `decode` method as:
```
use hex::decode as hex_decode;
```
so that it is clear to the reader what is being decoded when it is called. "decode" is such a generic-sounding function name that it would get confusing otherwise.
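A minimal usage sketch of the aliased import (the `hex` crate's decode returns Result<Vec<u8>, FromHexError>):

```
use hex::decode as hex_decode;

fn main() {
    // The alias makes it obvious at the call site that we're decoding hex.
    let raw = hex_decode("0100000001").expect("valid hex");
    assert_eq!(raw, vec![0x01, 0x00, 0x00, 0x00, 0x01]);
}
```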
A fixed buffer of 12 bytes was unsafely copied from the bytes of a
string - if the string was shorter than that, memory from outside would
leak into the packet.
Replace the unsafe copy with a safe loop. Also add a panic if
an attempt is made to use a command string longer than 12 bytes.
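A sketch of the safe copy, with hypothetical names standing in for the real command-field code:

```
/// Zero-pad a command name into the fixed 12-byte field, panicking on
/// oversized input instead of copying past the end of the string.
fn encode_command(cmd: &str) -> [u8; 12] {
    let bytes = cmd.as_bytes();
    assert!(bytes.len() <= 12, "command string longer than 12 bytes: {}", cmd);
    let mut buf = [0u8; 12];
    // Safe element-wise copy; the remaining bytes stay zero (protocol padding).
    for (dst, src) in buf.iter_mut().zip(bytes) {
        *dst = *src;
    }
    buf
}

fn main() {
    assert_eq!(&encode_command("version")[..7], b"version");
}
```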
Work is stalled on some other library work (to give better lifetime
requirements on `eventual::Future` and avoid some unsafety), so
committing here.
There are only three errors left in this round :)
Also all the indenting is done, so there should be no more massive
rewrite commits. Depending on how invasive the lifetime-error fixes
are, I may even be able to do sanely sized commits from here on.
27 files changed, 3944 insertions(+), 3812 deletions(-) :} I've
started doing whitespace changes as well; I want everything to
be 4-space tabs from now on.
BTW after all this is done I'm gonna indent the entire codebase...
so `git blame` is gonna be totally broken anyway, hence my
capricious cadence of commits.
Reconnecting an existing socket simply was not working; the Rust socket
did not expose any methods for reconnection, so I tried calling
connect() again. As near as I can tell, this was a no-op, which makes
sense because both the sending and receiving threads had their own copy
of the Socket, and it's not clear what the synchronization behaviour
should have been.
Instead, if the connection fails, we relay this information to the main
thread, wait for an acknowledgement, then destroy the listening
thread. The caller can then call `start()` again.
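A rough sketch of the relay-and-acknowledge pattern using std channels; the actual listener code and type names differ:

```
use std::sync::mpsc::{channel, Receiver, Sender};
use std::thread;

enum ListenerEvent {
    ConnectionFailed,
}

fn spawn_listener(events: Sender<ListenerEvent>, ack: Receiver<()>) -> thread::JoinHandle<()> {
    thread::spawn(move || {
        // ... socket read loop elided; on a failed connection:
        let _ = events.send(ListenerEvent::ConnectionFailed);
        // Wait for the main thread to acknowledge, then let the thread die.
        let _ = ack.recv();
    })
}

fn main() {
    let (event_tx, event_rx) = channel();
    let (ack_tx, ack_rx) = channel();
    let listener = spawn_listener(event_tx, ack_rx);
    if let Ok(ListenerEvent::ConnectionFailed) = event_rx.recv() {
        // Acknowledge and let the listening thread shut down; the caller
        // can then call start() again to reconnect.
        let _ = ack_tx.send(());
    }
    let _ = listener.join();
}
```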
Sorry for so many things in one commit ... it was an iterative
process as I worked on BIP32 to get the other stuff working.
(And I was too lazy to separate it out after the fact.)
A breaking change from the array newtyping is that Show for Sha256dHash
now outputs the slice's Show formatting. You have to use `{:x}` to get
the old hex output.
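For illustration (using Debug as the modern equivalent of Show, with a stand-in newtype rather than the crate's real type):

```
use std::fmt;

/// Illustrative newtype standing in for Sha256dHash.
#[derive(Debug)]
struct Sha256dHash([u8; 32]);

impl fmt::LowerHex for Sha256dHash {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        for byte in self.0.iter() {
            write!(f, "{:02x}", byte)?;
        }
        Ok(())
    }
}

fn main() {
    let hash = Sha256dHash([0u8; 32]);
    println!("{:?}", hash); // slice-style output from the derived impl
    println!("{:x}", hash); // old-style hex output via {:x}
}
```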
It looks like, to implement the crypto opcodes, I may need to switch from
rust-crypto to rust-openssl, or implement RIPEMD-160 for rust-crypto.
In either case I will need to generalize the hash.rs stuff to support
other hashes, so I'm committing here as a checkpoint before doing all
that.
Since TOML will not encode C-like enums as strings, we do it
ourselves. This is also worthwhile so that we can get the
lowercase "bitcoin" and "testnet" as encodings for the actual
enum values, which are more verbose and camel case.
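A minimal sketch of the hand-rolled string encoding, with illustrative variant and function names:

```
#[derive(Debug, PartialEq)]
enum Network {
    Bitcoin,
    Testnet,
}

impl Network {
    /// Lowercase names used on the wire / in config, rather than the
    /// verbose CamelCase variant names.
    fn to_str(&self) -> &'static str {
        match *self {
            Network::Bitcoin => "bitcoin",
            Network::Testnet => "testnet",
        }
    }

    fn from_str(s: &str) -> Option<Network> {
        match s {
            "bitcoin" => Some(Network::Bitcoin),
            "testnet" => Some(Network::Testnet),
            _ => None,
        }
    }
}

fn main() {
    assert_eq!(Network::Bitcoin.to_str(), "bitcoin");
    assert_eq!(Network::from_str("testnet"), Some(Network::Testnet));
}
```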
A pretty serious oversight :) this was not noticed because I was
simultaneously dealing with a serious tcp connection bug in rustc,
and I had thought bitcoind's angry disconnects were a further
symptom of that.
This is a massive simplification: it fixes a couple of endianness bugs
(though I don't think all of them), should give a speedup, and gets rid
of the `serialize_iter` crap.
I think this is what I want to do for everything json-visible... perhaps
I will not be able to keep the macro for it though, since there are
some clever variations on it (e.g. blocks should have their header's
hash as a field, txes should appear as txids unless verbose output is
requested, etc.).
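A toy sketch of the txid-unless-verbose rule (illustrative types and output shape only, not the macro-generated code):

```
struct Transaction {
    txid: String,
    // full transaction fields elided
}

fn tx_json(tx: &Transaction, verbose: bool) -> String {
    if verbose {
        // Verbose: emit an object carrying the full fields (abridged here).
        format!("{{\"txid\": \"{}\"}}", tx.txid)
    } else {
        // Non-verbose: emit just the txid as a bare string.
        format!("\"{}\"", tx.txid)
    }
}

fn main() {
    let tx = Transaction { txid: String::from("deadbeef") };
    assert_eq!(tx_json(&tx, false), "\"deadbeef\"");
    assert_eq!(tx_json(&tx, true), "{\"txid\": \"deadbeef\"}");
}
```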
It is now visible that EOF (i.e. peer hung up) is interpreted
as a message decode error. Probably what we want to do is reset
the connection on any error. TODO
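A hypothetical sketch of how the decode path could tell a hang-up (EOF) apart from a real decode error, so the caller can reset the connection; names here are stand-ins:

```
use std::io::{self, Read};

/// Read a length prefix; read_exact reports ErrorKind::UnexpectedEof
/// when the peer hung up, which callers can treat as a disconnect
/// rather than a decode failure.
fn read_message_len<R: Read>(r: &mut R) -> io::Result<u32> {
    let mut len = [0u8; 4];
    r.read_exact(&mut len)?;
    Ok(u32::from_le_bytes(len))
}

fn main() {
    let mut empty: &[u8] = &[];
    match read_message_len(&mut empty) {
        Err(ref e) if e.kind() == io::ErrorKind::UnexpectedEof => {
            println!("peer hung up; reset the connection");
        }
        other => println!("unexpected: {:?}", other),
    }
}
```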
We get a speedup (~5%) and memory savings (~10%) on initial sync from
using a HashMap, though it's hard to tell precisely how much we save
because it's quite nonlinear.
I haven't tested de/serialization. Some work needs to be done there to
split up the UTXO set, since it takes forever to save and load.
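For context, a minimal sketch of the UTXO-set shape implied here, keyed by (txid, output index); the types are illustrative stand-ins:

```
use std::collections::HashMap;

type Txid = [u8; 32];

struct TxOut {
    value: u64,
    script_pubkey: Vec<u8>,
}

fn main() {
    // Unspent outputs keyed by (txid, vout).
    let mut utxo_set: HashMap<(Txid, u32), TxOut> = HashMap::new();
    utxo_set.insert(
        ([0u8; 32], 0),
        TxOut { value: 50_0000_0000, script_pubkey: Vec::new() },
    );
    if let Some(out) = utxo_set.get(&([0u8; 32], 0)) {
        println!("first output holds {} satoshis", out.value);
    }
}
```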
We were conflicting with the Rust stdlib trait Hash, which is used
by various data structures that need a general hash. Also implement
Hash for Sha256dHash so that we can use bitcoin hashes as keys for
such data structures.
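A minimal sketch of what implementing the stdlib Hash trait for the hash newtype looks like, so it can key a HashMap (illustrative type, not the crate's code):

```
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

/// Illustrative stand-in for the crate's Sha256dHash newtype.
#[derive(PartialEq, Eq)]
struct Sha256dHash([u8; 32]);

// Implement the stdlib Hash trait so the type can key HashMaps and
// similar data structures.
impl Hash for Sha256dHash {
    fn hash<H: Hasher>(&self, state: &mut H) {
        self.0.hash(state);
    }
}

fn main() {
    let mut heights: HashMap<Sha256dHash, u32> = HashMap::new();
    heights.insert(Sha256dHash([0u8; 32]), 0);
    assert_eq!(heights.len(), 1);
}
```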