docs: add security section
parent 304b1f9baa
commit 4c0521473f

@@ -4,6 +4,7 @@
# User Guide

- [Installing Keyfork](./INSTALL.md)
- [Security Considerations](./security.md)
- [Shard Commands](./shard.md)
- [Common Usage](./usage.md)
- [Configuration File](./config-file.md)

@@ -0,0 +1,81 @@
# Security Considerations

Keyfork handles data that is considered sensitive. As such, we make a few
baseline assumptions about the environment in which Keyfork is run. Meeting
these assumptions reduces the number of mitigations needed to run Keyfork.

## Build Process

Keyfork should be built using a secure toolchain, such as Guix or the Distrust
Packages system. Installing the toolchain with something such as `rustup`
means the Rust compiler cannot properly be verified from source. Ideally,
Keyfork should be built by multiple developers and the resulting binaries
compared between them to verify that the build is deterministic.
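
One way to perform that comparison is to hash each independently built binary
and check that the digests match. Below is a minimal sketch, assuming the
third-party `sha2` crate and two binary paths passed on the command line; it
is not part of Keyfork itself.

```rust
// Sketch: compare two independently built binaries by SHA-256 digest.
// Identical digests are strong evidence that the build is deterministic.
use sha2::{Digest, Sha256};
use std::{env, fs};

fn hex(bytes: &[u8]) -> String {
    bytes.iter().map(|b| format!("{:02x}", b)).collect()
}

fn main() -> std::io::Result<()> {
    let args: Vec<String> = env::args().collect();
    let digest_a = Sha256::digest(fs::read(&args[1])?);
    let digest_b = Sha256::digest(fs::read(&args[2])?);
    if digest_a == digest_b {
        println!("builds match: {}", hex(digest_a.as_slice()));
    } else {
        println!(
            "builds differ: {} vs {}",
            hex(digest_a.as_slice()),
            hex(digest_b.as_slice())
        );
    }
    Ok(())
}
```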

## Hardware

Keyfork is expected to run on hardware detached from the Internet and from any
other computers. This helps ensure the Keyfork seed is never exposed to any
online system; exposing the seed may compromise data derived from it. The
hardware is expected to be stored in a safe location, along with the removable
storage containing the operating system and (if using Keyfork Shard) the shard
file, where adversaries are not able to tamper with the hardware, the OS, or
the shard file.

## Software

Keyfork is intended to be one of few programs running on a given system. The
ideal system to run Keyfork under is an OS whose only dependencies are Keyfork
and Keyfork's runtime dependencies. Because of these restrictions, Keyfork does
not necessarily need to include memory-locking or memory-hardening
functionality, although such functionality may be added in future releases.
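
For illustration, the memory-locking mentioned above usually means pinning the
pages that hold a secret so they are never written out to swap. Below is a
minimal sketch of what that could look like on a Unix-like system, using the
`libc` crate; this is not something Keyfork currently does.

```rust
// Sketch: pin the pages backing a secret buffer into RAM with mlock(2) so the
// secret cannot be paged out to swap.
fn lock_in_memory(secret: &[u8]) -> std::io::Result<()> {
    let ret = unsafe { libc::mlock(secret.as_ptr() as *const libc::c_void, secret.len()) };
    if ret != 0 {
        return Err(std::io::Error::last_os_error());
    }
    Ok(())
}
```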

## Keys in Memory

As Keyfork is expected to be the only program running on a given system, it is
not expected to defend against malicious software scanning its memory and
extracting keys. As such, at this time, Keyfork does not zero out
previously-used memory. Additionally, even if such software did exist, because
Keyfork is intended to run on hardware detached from the Internet and from any
other computers, the risk of practical covert channels is reduced. TEMPEST and
side-channel attacks may be mitigated by running Keyfork on hardware located
in a Faraday cage.
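
If zeroing were added in a later release, it could look something like the
sketch below, which uses the third-party `zeroize` crate to overwrite a seed
buffer when it goes out of scope; this is illustrative and not Keyfork's
current behavior.

```rust
use zeroize::Zeroizing;

fn derive_keys(seed: &[u8]) {
    // ... derive keys from the seed (placeholder) ...
    let _ = seed;
}

fn main() {
    // `Zeroizing` overwrites the wrapped buffer with zeroes on drop, so the
    // seed does not linger in freed memory once it is no longer needed.
    let seed = Zeroizing::new(vec![0u8; 64]); // placeholder seed bytes
    derive_keys(&seed);
} // `seed` is zeroed here
```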

## Security of Local Shards

The threat model of Keyfork in a "local shard" configuration assumes that an
adversary can, without leaking the seed:

* Compromise `M-1` shard holders or shards

The threat model of Keyfork in a "local shard" configuration does not include:

* Compromise of the system running Keyfork

Keyfork does not by itself provide a mechanism to ensure that the operating
system or the Keyfork binary has not been tampered with. Users of Keyfork on a
shared system should verify the system has not been tampered with before
starting Keyfork.
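
To make the `M-1` property concrete (assuming `M` is the number of shards
required to recover the secret), the sketch below uses the third-party
`sharks` crate rather than Keyfork's own shard format: with a threshold of 3,
any two shards together reveal nothing about the secret.

```rust
use sharks::{Share, Sharks};

fn main() {
    // Split a placeholder secret into N = 5 shards with threshold M = 3.
    let sharks = Sharks(3);
    let dealer = sharks.dealer(b"example secret");
    let shards: Vec<Share> = dealer.take(5).collect();

    // Any M shards are sufficient to recover the secret...
    let recovered = sharks.recover(&shards[..3]).unwrap();
    assert_eq!(recovered, b"example secret".to_vec());

    // ...but M - 1 shards are not.
    assert!(sharks.recover(&shards[..2]).is_err());
}
```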

## Security of Remote Shards

The threat model of Keyfork in a "remote shard" configuration assumes that an
adversary can, without leaking the seed:

* Compromise `M-1` shard holders, shard holder devices, or shards
* Eavesdrop upon (but not intercept or tamper with) secure communications

The threat model of Keyfork in a "remote shard" configuration does not include:

* Compromise of the system initiating the "remote shard" requests

Keyfork has a "remote shard" mode, in which shards may be transport-encrypted
to an ephemeral key and combined on a system run by a user we will call the
"administrator". In this design, it is expected that a secure communications
channel is established which can be spied upon but not tampered with. The
administrator begins by distributing encoded (not encrypted!) public keys to
the remote shard holders, who decrypt their shards and re-encrypt them to an
AES-256-GCM key derived via ECDH. Because each shard is re-encrypted, it
cannot be read by anyone eavesdropping on the communication. However, it is
possible for the administrator to leak either the Keyfork seed or any number
of shards if they are the only user operating the system that combines the
shards.
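
A rough sketch of the re-encryption step described above is shown below, using
the third-party `x25519-dalek` and `aes-gcm` crates. The crate choices, the
fixed nonce, and the absence of a KDF over the shared secret are assumptions
made for illustration; Keyfork's actual implementation and wire format may
differ.

```rust
use aes_gcm::{aead::{Aead, KeyInit}, Aes256Gcm, Key, Nonce};
use rand_core::OsRng;
use x25519_dalek::{EphemeralSecret, PublicKey};

fn main() {
    // The administrator generates an ephemeral keypair and distributes the
    // encoded (not encrypted) public key to a shard holder.
    let admin_secret = EphemeralSecret::random_from_rng(OsRng);
    let admin_public = PublicKey::from(&admin_secret);

    // The shard holder generates their own ephemeral keypair, performs ECDH
    // against the administrator's public key, and uses the shared secret as
    // an AES-256-GCM key to re-encrypt the decrypted shard. A production
    // protocol would run the shared secret through a KDF first.
    let holder_secret = EphemeralSecret::random_from_rng(OsRng);
    let holder_public = PublicKey::from(&holder_secret);
    let holder_shared = holder_secret.diffie_hellman(&admin_public);

    let shard = b"example shard bytes".to_vec();
    let cipher = Aes256Gcm::new(Key::<Aes256Gcm>::from_slice(holder_shared.as_bytes()));
    let nonce = Nonce::from_slice(b"unique nonce"); // 96 bits; unique per message
    let ciphertext = cipher.encrypt(nonce, shard.as_slice()).expect("encrypt");

    // Only the administrator, holding the matching ephemeral secret, derives
    // the same key and decrypts; an eavesdropper sees only ciphertext.
    let admin_shared = admin_secret.diffie_hellman(&holder_public);
    let cipher = Aes256Gcm::new(Key::<Aes256Gcm>::from_slice(admin_shared.as_bytes()));
    let plaintext = cipher.decrypt(nonce, ciphertext.as_slice()).expect("decrypt");
    assert_eq!(plaintext, shard);
}
```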