Improve documentation

This commit is contained in:
Christian Reitter 2024-12-21 14:53:31 +01:00
parent f17029f23e
commit 9e00e1d5e3
3 changed files with 8 additions and 3 deletions


@ -1,7 +1,8 @@
# Documentation
Experimental code to invoke `bx mnemonic-new` many times to generate weak mnemonics, and derive some addresses from them. Uses the PostgreSQL database for storing results.
Restarting the target binary over and over is slow, but it was the first strategy we tried before moving to more optimized, custom implementations, and it provided reference data for the original behavior.
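To make the restart-per-mnemonic strategy concrete, here is a minimal, hedged Python sketch. The function name and the plain-list result handling are illustrative assumptions; the actual tooling stores results in PostgreSQL, and the exact `bx` invocation may differ from the pipeline shown here.

```python
# Minimal sketch of the restart-per-mnemonic strategy, assuming `bx`
# (libbitcoin-explorer) is on PATH. The real tooling stores results in
# PostgreSQL; here they are only collected in a list for illustration.
import shutil
import subprocess


def generate_mnemonics(count: int) -> list[str]:
    """Invoke `bx mnemonic-new` once per mnemonic (slow by design)."""
    if shutil.which("bx") is None:
        return []  # bx not installed; nothing to invoke
    mnemonics = []
    for _ in range(count):
        # Assumption: entropy is piped in from `bx seed`; depending on
        # the bx version and configuration, the invocation may differ.
        result = subprocess.run(
            "bx seed | bx mnemonic-new",
            shell=True, capture_output=True, text=True, check=True,
        )
        mnemonics.append(result.stdout.strip())
    return mnemonics
```

Each iteration pays full process start-up cost, which is why this approach is slow compared to a custom in-process implementation.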
## Usage


@ -2,6 +2,8 @@
A simple lib to derive child addresses based on an `xpriv` key as input.
Goes from `xpriv` to address, using standard derivation paths.
This tool focuses on Bitcoin P2PKH addresses.
## Usage
`cargo run -- -x "put-some-xprv-here" -p "m/0"`


@ -4,7 +4,7 @@ Allows the creation of a bloom filter based on a data set.
Also allows checking if an address is in the filter.
The input data set should be a `\n`-delimited `.txt` file that looks something like:
```
firstaddress
secondaddress
@ -15,6 +15,8 @@ The resulting bloom filter can be "checked against" with an address, and will re
It's important to keep in mind that bloom filters are probabilistic data structures and as such result in false positives at a certain rate, which can be adjusted for by increasing the filter size. Adjust this depending on your workload. If you check millions or billions of addresses against a filter and cannot tolerate more than a few false positives, we recommend setting an appropriately small false positive factor.
If you have enough unused RAM to hold the original data set, the overhead and size-accuracy trade-offs of a bloom filter may be unnecessary for you.
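To illustrate the false-positive behavior described above, here is a minimal bloom filter sketch in Python. This is a hedged illustration only: the class, its parameters, and the hashing scheme are assumptions for demonstration and are not the actual `bloom-util.py` implementation.

```python
# Illustrative bloom filter sketch (not the project's bloom-util.py).
# Derives num_hashes bit positions per item from salted blake2b digests.
import hashlib


class BloomFilter:
    def __init__(self, num_bits: int, num_hashes: int):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray((num_bits + 7) // 8)

    def _positions(self, item: str):
        # One salted hash per "hash function"; salt must be <= 16 bytes.
        for i in range(self.num_hashes):
            h = hashlib.blake2b(item.encode(), salt=i.to_bytes(8, "little"))
            yield int.from_bytes(h.digest()[:8], "little") % self.num_bits

    def add(self, item: str) -> None:
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: str) -> bool:
        # All positions set -> "probably present"; any unset -> definitely absent.
        return all(self.bits[p // 8] >> (p % 8) & 1 for p in self._positions(item))
```

Because lookups test the same bit positions that insertion sets, an added item is always reported as present (no false negatives); an absent item can collide with set bits, at a rate that shrinks as `num_bits` grows relative to the number of stored items.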
## Generate bloom filter
`python bloom-util.py create --filter_file filter.pkl --addresses_file addresses.txt`