91ff2f628c Introduce SPDX license identifiers (Tobin C. Harding)
Pull request description:
When `rust-bitcoin` was started in 2014, the SPDX license list and short identifiers were not a thing. Now that short identifiers exist and are gaining popularity in other projects, we can consider using them.
- Add links to the SPDX website in the readme
- Shorten the author section to a single line
- Remove all the license information in each file and replace it with an
SPDX ID (see https://spdx.dev/ids/#how)
Of note:
- If the author of a file is explicitly listed, maintain this information
- If the 'author' is listed as the generic 'Rust Bitcoin developers', just remove the attribution; it is implicit. This does lose the date info, but that can be recovered at any time from the git history using
`git log --follow --format=%ad --date default <FILE> | tail -1`
apoelstra, please confirm that I'm not treading on your toes here; in particular, are you ok with the new 'written by' string format?
### Ref
- https://spdx.dev/ids/#how
- https://spdx.org/licenses/CC0-1.0.html
- https://spdx.dev/ids/
ACKs for top commit:
apoelstra:
ACK 91ff2f628c
sanket1729:
ACK 91ff2f628c. I am also in IDGAF camp, but I like more red lines in diff.
Kixunil:
ACK 91ff2f628c
Tree-SHA512: ca8aac00f015c18ec18de83dfeb50dd6f4f840653c7def85daa2436a339021ada5f3c34ad0cdf6b18e3e39c45a6d58a8313742e4001d467785b10eee7fdbc938
Fix #1020 (see more relevant discussion there)
This definitely reduces the amount of generic code the compiler has to generate, by avoiding generating the same functions for `R`, `&mut R`, `&mut &mut R` and so on.
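As a rough sketch (a simplified trait, not the crate's actual `Decodable` signature), the pattern is to bound the reader as `R: Read + ?Sized` and take `&mut R`, so nested decoders can reborrow the same reference instead of stacking `&mut` layers:
```
use std::io::{self, Read};

// Simplified stand-in for the decoding trait; names are illustrative.
trait Decode: Sized {
    fn decode<R: Read + ?Sized>(r: &mut R) -> Result<Self, io::Error>;
}

struct Inner(u8);
struct Outer(Inner);

impl Decode for Inner {
    fn decode<R: Read + ?Sized>(r: &mut R) -> Result<Self, io::Error> {
        let mut buf = [0u8; 1];
        r.read_exact(&mut buf)?;
        Ok(Inner(buf[0]))
    }
}

impl Decode for Outer {
    fn decode<R: Read + ?Sized>(r: &mut R) -> Result<Self, io::Error> {
        // Reborrows `r` with the *same* `R`, instead of instantiating the
        // inner decoder again for `&mut R`, `&mut &mut R`, and so on.
        Ok(Outer(Inner::decode(r)?))
    }
}
```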
old:
```
> ls -al target/release/deps/bitcoin-07a9dabf1f3e0266
-rwxrwxr-x 1 dpc dpc 9947832 Jun 2 22:42 target/release/deps/bitcoin-07a9dabf1f3e0266
> strip target/release/deps/bitcoin-07a9dabf1f3e0266
> ls -al target/release/deps/bitcoin-07a9dabf1f3e0266
-rwxrwxr-x 1 dpc dpc 4463024 Jun 2 22:46 target/release/deps/bitcoin-07a9dabf1f3e0266
```
new:
```
> ls -al target/release/deps/bitcoin-07a9dabf1f3e0266
-rwxrwxr-x 1 dpc dpc 9866800 Jun 2 22:44 target/release/deps/bitcoin-07a9dabf1f3e0266
> strip target/release/deps/bitcoin-07a9dabf1f3e0266
> ls -al target/release/deps/bitcoin-07a9dabf1f3e0266
-rwxrwxr-x 1 dpc dpc 4393392 Jun 2 22:45 target/release/deps/bitcoin-07a9dabf1f3e0266
```
In the unit-test binary itself, it saves ~100KB of data.
I did not expect much in the way of performance gains, but it turns out I was wrong (*):
old:
```
test blockdata::block::benches::bench_block_deserialize ... bench: 1,072,710 ns/iter (+/- 21,871)
test blockdata::block::benches::bench_block_serialize ... bench: 191,223 ns/iter (+/- 5,833)
test blockdata::block::benches::bench_block_serialize_logic ... bench: 37,543 ns/iter (+/- 732)
test blockdata::block::benches::bench_stream_reader ... bench: 1,872,455 ns/iter (+/- 149,519)
test blockdata::transaction::benches::bench_transaction_deserialize ... bench: 136 ns/iter (+/- 3)
test blockdata::transaction::benches::bench_transaction_serialize ... bench: 51 ns/iter (+/- 8)
test blockdata::transaction::benches::bench_transaction_serialize_logic ... bench: 5 ns/iter (+/- 0)
test blockdata::transaction::benches::bench_transaction_size ... bench: 3 ns/iter (+/- 0)
```
new:
```
test blockdata::block::benches::bench_block_deserialize ... bench: 1,028,574 ns/iter (+/- 10,910)
test blockdata::block::benches::bench_block_serialize ... bench: 162,143 ns/iter (+/- 3,363)
test blockdata::block::benches::bench_block_serialize_logic ... bench: 30,725 ns/iter (+/- 695)
test blockdata::block::benches::bench_stream_reader ... bench: 1,437,071 ns/iter (+/- 53,694)
test blockdata::transaction::benches::bench_transaction_deserialize ... bench: 92 ns/iter (+/- 2)
test blockdata::transaction::benches::bench_transaction_serialize ... bench: 17 ns/iter (+/- 0)
test blockdata::transaction::benches::bench_transaction_serialize_logic ... bench: 5 ns/iter (+/- 0)
test blockdata::transaction::benches::bench_transaction_size ... bench: 4 ns/iter (+/- 0)
```
(*) - I'm benchmarking on a noisy laptop. Take this with a grain of salt. But I think
at least it doesn't make anything slower.
While doing all this manual labor that will probably generate conflicts,
I took the liberty of changing the generic type and variable names to
`r`/`R` for readers and `w`/`W` for writers.
As things stand right now, memory exhaustion protection in `Decodable`
is based on checking input-decoded lengths against arbitrary limits,
and on ad-hoc wrapping of collection deserialization in `Take`.
The problem with that is two-fold:
* Potential consensus bugs due to incorrect limits.
* Performance degradation when decoding nested structures,
due to recursive `Take<Take<..>>` readers.
This change introduces a systematic approach to the problem.
The concept of a "size-limited reader" is introduced: the input is capped
at an enforced limit, and deserialization fails if the data does not
finish within it.
Memory exhaustion protection is now achieved by capping allocations
to reasonable values, while still allowing the underlying collections
to grow to accommodate rare but legitimately oversized data (at a tiny
performance cost), and by relying on the input data size limit.
A set of simple rules allows avoiding recursive `Take` wrappers.
Fix #997
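A hedged sketch of the allocation-capping idea (the constant and helper below are made up for illustration, not taken from the crate): the length prefix is not trusted for the up-front allocation, so an attacker-supplied huge length reserves only a bounded amount and then fails once the input runs out. The overall size limit can be applied once at the outermost reader (e.g. with `Read::take`) rather than wrapping every nested collection.
```
use std::io::{self, Read};

// Illustrative cap only; the real code picks its own limits.
const MAX_INITIAL_ALLOC: usize = 1024 * 1024;

// Read `len` bytes without trusting `len` for the initial allocation.
// A bogus huge `len` reserves at most MAX_INITIAL_ALLOC up front and then
// fails on `read_exact` once the (size-limited) input is exhausted.
fn read_bytes_capped<R: Read + ?Sized>(r: &mut R, len: usize) -> io::Result<Vec<u8>> {
    let mut out = Vec::with_capacity(len.min(MAX_INITIAL_ALLOC));
    let mut chunk = [0u8; 4096];
    let mut remaining = len;
    while remaining > 0 {
        let n = remaining.min(chunk.len());
        r.read_exact(&mut chunk[..n])?;
        out.extend_from_slice(&chunk[..n]);
        remaining -= n;
    }
    Ok(out)
}
```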
Use cargo to upgrade from edition 2015 to edition 2018.
`cargo fix --edition`
No manual changes were made. The result of the command above is just to fix
all the use statements (adding `crate::`) and to fix fully qualified paths,
i.e., `::Foo` -> `crate::Foo`.
Do various whitespace refactorings, of note:
- Use space around equals e.g., 'since = "blah"'
- Put return/break/continue on a separate line
Whitespace only, no logic changes.
247a14f4c3 Use test big block for bench_stream_reader instead of making one (Riccardo Casatta)
b92dfbb63f exclude test_data when publishing the crate (Riccardo Casatta)
f5a9681a2a include a big block in test_data, use it for ser/de benchmark (Riccardo Casatta)
09dada55d6 Move bip158 test vectors to test_data (Riccardo Casatta)
06d1a820c3 Remove testnet block hex from tests, use test_data with include_bytes! (Riccardo Casatta)
Pull request description:
In the first two commits I moved some data from source files to the newly introduced `test_data` dir, including it with the `include_[str|bytes]!` macros.
The second-to-last commit introduces a big block in `test_data` which is very handy for ser/de benchmarks (I used it for #672), because with smaller blocks you may not notice performance improvements.
Since I don't want to pollute the package, the last commit excludes the `test_data` dir from the published crate. I think it's fine to do this because dependent packages don't run their dependencies' tests.
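For illustration only (the file name below is hypothetical), embedding test data this way keeps large blobs out of the source files while the bytes are still available at compile time; the published package can then omit the directory via the manifest's `exclude` field.
```
#[cfg(test)]
mod tests {
    // Path and file name are hypothetical; `include_bytes!` resolves the path
    // relative to the file that invokes it and embeds the bytes verbatim.
    static BIG_BLOCK: &[u8] = include_bytes!("../test_data/mainnet_block.raw");

    #[test]
    fn big_block_is_nonempty() {
        assert!(!BIG_BLOCK.is_empty());
    }
}
```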
ACKs for top commit:
apoelstra:
ACK 247a14f4c3
Kixunil:
tACK 247a14f4c3
Tree-SHA512: a2beb635b0a358737d0b57d3e7205b1ddf87652b9a8c889ce63e2867659a8eaf7e43a5b87a453345d56d953745913f40b58596f449e5fbc87340e0dd2aef0727
Idiomatic Rust style is to put the rustdoc _above_ any attributes on
types, functions, etc.
Audit the codebase and move comments/attributes to the correct place.
Add a trailing full stop at times to neaten things up a little extra.
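For example (an illustrative type only), the ordering looks like this after the change:
```
/// A doc comment goes above the attributes, not between them and the item.
#[derive(Debug, Clone, PartialEq)]
pub struct Example(pub u32);

/// The same ordering applies to functions.
#[inline]
pub fn example_value(e: &Example) -> u32 {
    e.0
}
```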
This is the initial step towards using and maybe enforcing clippy.
It does not fix all lints as some are not applicable. They may be
explicitly ignored later.
Docs can always do with a bit of love.
Clean up the module-level (`//!`) rustdocs for all public modules.
I claim uniformity is better than any specific method/style. I tried to fit
in with whatever was either most sane or most prevalent, thereby
attaining uniformity without unnecessary code churn (one exception being
the changes to headings described below).
Notes:
* Headings - use heading as a regular sentence for all modules e.g.,
```
//! Bitcoin network messages.
```
as opposed to
```
//! # Bitcoin Network Messages
```
It was not clear which style to use so I picked a 'random' mature
project and copied their style.
* Added 'This module' in _most_ places as the start of the module
description; however, I was not religious about this one.
* Fixed line lengths where necessary, since most of our code seems to keep
comments to short (80 char) lines anyway.
* Added periods and fixed obvious (and sometimes not so obvious)
grammatical errors.
* Added a trailing `//!` to every block since this was almost universal
already. I don't really like this one, but I'm guessing it is Andrew's
preferred style since it's on the copyright notices as well.
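Put together, a module header in this style looks roughly like the following (the wording is made up; it is not a real header from the crate):
```
//! Bitcoin network messages.
//!
//! This module provides types and functions for sending and receiving
//! messages on the Bitcoin peer-to-peer network. (Example wording only.)
//!
```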
Based on the original work by Justin Moon.
*MSRV unchanged from 1.29.0.*
When `std` is off, `no-std` must be on, and we use the [`alloc`](https://doc.rust-lang.org/alloc/) and core2 crates. The `alloc` crate requires the user to define a global allocator.
* Import from `core` and `alloc` instead of `std`
* `alloc` only used if `no-std` is on
* Create `std` feature
* Create `no-std` feature which adds a core2 dependency to polyfill `std::io` features. This is an experimental feature and should be
used with caution.
* CI runs the tests with `no-std`
* MSRV for `no-std` is 1.51 or so
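As a rough sketch of the import pattern (not the crate's exact cfg layout), each module selects its imports per feature and the rest of the code is written once against those names:
```
// Illustrative sketch only, not the crate's exact cfg layout.
#![cfg_attr(not(feature = "std"), no_std)]

#[cfg(not(feature = "std"))]
extern crate alloc; // no-std builds must supply a global allocator

#[cfg(feature = "std")]
use std::collections::BTreeMap;
#[cfg(not(feature = "std"))]
use alloc::collections::BTreeMap;

// The code below compiles unchanged against whichever import was selected.
pub fn count_bytes(data: &[u8]) -> BTreeMap<u8, usize> {
    let mut counts = BTreeMap::new();
    for &b in data {
        *counts.entry(b).or_insert(0) += 1;
    }
    counts
}
```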
This is used instead of `encode::Error` because encoders should not be
allowed to return errors that don't originate in the writer they are
writing into.
This is part of the method definition and has been relied upon for a
while already.
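In sketch form (a simplified trait, not necessarily the exact signature), the only failure an encoder can report is the writer's own:
```
use std::io::{self, Write};

// Simplified stand-in for the encoding trait; names are illustrative.
trait Encode {
    // Returning `io::Error` (rather than a crate-level error enum) encodes
    // the invariant that encoding itself cannot fail; only the write can.
    fn encode<W: Write>(&self, w: &mut W) -> Result<usize, io::Error>;
}

impl Encode for u32 {
    fn encode<W: Write>(&self, w: &mut W) -> Result<usize, io::Error> {
        w.write_all(&self.to_le_bytes())?;
        Ok(4)
    }
}
```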
Currently whenever an unrecognized network message is received, it is never
flushed from the read buffer, meaning that unless the stream is closed and
recreated it will keep returning the same error every time `read_next()` is
called.
This commit adds the length of the message to `UnrecognizedNetworkCommand`,
so that the `StreamReader` can flush those bytes before returning the error
to the caller.
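A hedged sketch of the recovery step (the helper name is made up): with the payload length attached to the error, the stream reader can discard exactly that many bytes before surfacing the error, so the next `read_next()` call starts at a message boundary.
```
use std::io::{self, Read};

// Illustrative helper: skip the payload of an unrecognized message so the
// reader does not keep re-reporting the same error on every call.
fn skip_unrecognized_payload<R: Read + ?Sized>(r: &mut R, payload_len: u64) -> io::Result<u64> {
    // `take` bounds the read to the unread payload; `sink` discards it.
    io::copy(&mut r.take(payload_len), &mut io::sink())
}
```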