replace initial blog post with ByBit report

This commit is contained in:
Anton Livaja 2025-03-20 18:45:48 -07:00
parent e5aad264af
commit 1d043466cc
Signed by: anton
GPG Key ID: 44A86CFF1FDF0E85
2 changed files with 97 additions and 298 deletions


@ -1,298 +0,0 @@
---
layout: post
title: Adventures In Supply Chain Integrity
date: 2024-03-28
cover_image: "/assets/images/whale_shark.jpg"
authors:
- name: Ryan Heywood
bio: Professional bonker / twerker.
twitter: le twitter
- name: Anton Livaja
bio: Professional banana juggler.
twitter: antonlivaja
- name: Lance R. Vick
bio: Dolphin trainer
twitter: no.
---
When a compiler is used to compile some piece of software, how do we verify
that the compiler can be trusted? Is it well known who compiled the compiler
itself? Usually compilers are not built from source, and even when they are,
they are seeded from a binary that itself is opaque and difficult to verify.
How does one check if the supply chain integrity of the compiler itself is
intact, even before we get to building software with it?

Compiler supply chains are obscured and at many points seeded from binaries,
making it nearly impossible to verify their integrity. In 1984, Ken Thompson
wrote "Reflections on Trusting Trust" and illustrated that a compiler can
modify software during the compilation process, compromising the software. Put
simply, this means that reviewing the source code is not enough. We need to be
sure that the compiler itself isn't compromised, as it could be used to modify
the intended behavior of the software.

What about the software that's built using the compiler? Has the source code
been modified during compilation? Has the resulting binary of the software been
tampered with, perhaps in the CI/CD runner which runs an OS with a
vulnerability in one of its sub dependencies? Or perhaps the server host has
been compromised and attackers have gained control of the infrastructure?
These are difficult software supply chain security issues which are often swept
under the rug or completely overlooked due to lack of understanding. To
eliminate this attack surface, we need good answers to these questions, and
more importantly we need tooling and practical methods that can help close
these gaps in the supply chain.

This line of questioning becomes especially concerning in the context of widely
used software, such as images pulled from DockerHub, package managers, and
Linux distributions. Software procured via these channels is pervasive in
almost all software, and as such poses a severe attack vector. If the
maintainer of a widely used DockerHub image has their machine compromised, or
is coerced under duress into inserting malicious code into the binaries they
are responsible for, there is no effective measure in place to detect and catch
this, resulting in millions of downstream consumers being impacted. Imagine
what would happen if the maintainer of a default DockerHub image of a widely
used language was compromised, and the binary they released had a backdoor in
it. The implications would be extremely far reaching, and disastrous.

There are two distinct problems at hand which share a solution:

1. How do we ensure that we can trust the toolchain used to build software?
2. How do we ensure that we can trust software built with that toolchain?

The answer to both questions is the same: we achieve it via verifiability and
determinism. To be clear, we are not trying to solve the problem of the code
itself being compromised at the source. If the source code is compromised,
determinism does not help prevent that. If the code is reviewed and verified as
being secure, then determinism and multiple reproductions of the software
add a set of excellent guarantees.

Deterministically built software is any software which always compiles to the
same bit-for-bit identical binary. This is useful because it makes it trivial
to check the integrity of the binary: if the binary is always the same, we can
use hashing to ensure that nothing about it has changed. Minor differences
introduced during the build process, such as timestamps, typically make
software non-deterministic. By pinning all aspects of the environment the
software is built in, and removing any changing factors such as time and user
or machine IDs, we can force the software to always be bit-for-bit identical.

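
This check can be sketched with a toy build step (the `build` function below is a stand-in for a real pinned build, not part of any tool mentioned here):

```shell
# Stand-in for a pinned, deterministic build: same inputs, same bytes out.
build() { printf 'compiled-from: pinned-source-v1.0\n'; }

# Hash two independent runs of the "build".
h1=$(build | sha256sum | cut -d' ' -f1)
h2=$(build | sha256sum | cut -d' ' -f1)

# A bit-for-bit reproducible build yields the identical digest every time;
# any difference means a changing input leaked into the build environment.
if [ "$h1" = "$h2" ]; then
  echo "reproducible"
else
  echo "non-deterministic build" >&2
fi
```

In a real pipeline, the same comparison is made against a previously published digest rather than a second local run.
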
Now, imagine a scenario where a developer is compiling software
non-deterministically. Each time they build the software, they have no way to
easily verify whether the binary changed in a meaningful way compared to the
previous one without doing low level inspection. With determinism, it's as
simple as hashing one binary, repeating the compilation, hashing the second
result, and comparing it with the original. This is great, but it's still not
enough to ensure that the binary can be trusted, as there may be malware which
always modifies the binary in the same manner. To mitigate this, we can build
the software on multiple different machines, ideally operated by different
maintainers, using different operating systems and even different hardware, as
it's much less likely that multiple diverse stacks and individuals are
compromised by the same malware or attacker. Following this process, we can
eliminate the risk of modification during compilation going undetected. To
establish that the resulting hashes themselves can be trusted, we can use
cryptographic signing, as is customary for many software releases.

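
As a toy illustration of this cross-check, assume three independent builders hash the same source, with one builder compromised by malware that appends a payload (all values are computed locally here; none are real digests):

```shell
# The reviewed source that every builder is supposed to compile.
src='fn main() { println!("hello"); }'

# Builders A and B are honest; builder C's toolchain injects extra bytes.
a=$(printf '%s' "$src" | sha256sum | cut -d' ' -f1)
b=$(printf '%s' "$src" | sha256sum | cut -d' ' -f1)
c=$(printf '%s backdoor' "$src" | sha256sum | cut -d' ' -f1)

# Quorum check: any disagreement blocks the release and triggers review.
if [ "$a" = "$b" ] && [ "$b" = "$c" ]; then
  echo "all builders agree"
else
  echo "digest mismatch detected: blocking release"
fi
```

Signing each reported digest then ties the agreement to identifiable maintainers rather than anonymous machines.
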
To assess the current state of affairs regarding software package managers and
Linux distributions, and how far they have gone to mitigate these risks, we
performed an analysis of popular projects:

Alpine is the most popular Linux distribution (distro) in the container
ecosystem and has made great strides in providing a minimal `musl` based
distribution with reasonable security defaults, suitable for a lot of use
cases. However, in the interest of developer productivity and low friction for
contributors, none of it is cryptographically signed.

Debian (and derivatives like Ubuntu) is one of the most popular options for
servers, is largely reproducible, and signs all packages. Being `glibc` based
with a focus on compatibility and desktop use cases, it results in a huge
number of dependencies for almost any software run on it, enacts partial code
freezes for long periods of time between releases, and often has very stale
packages as various compatibility goals block updates. This overhead introduces
a lot of surface area for malicious code to hide in. Unfortunately, due to its
design, when building software deterministically on this OS, each and every
repo needs to keep costly snapshots of all dependencies to reproduce build
containers, as Debian packages are archived and retired after some time to
servers with low bandwidth. This creates a lot of friction for teams who, as a
result, have to archive often hundreds of .deb files for every project. Debian
also ships very old versions of software such as Rust, which can be quite
problematic for teams who want access to the latest language features. Even
with all this work, Debian does not have truly reproducible Rust (which will be
discussed later in this post), and packages are signed only by single
maintainers whom we have to fully trust not to release a compromised binary.

Fedora (and RedHat based distros) also sign all packages, but otherwise suffer
from similar one-size-fits-all bloat problems as Debian with a different coat
of paint. Additionally, their reliance on centralized builds has been used as
justification for them to not pursue reproducibility at all which makes them a
non-starter for security focused use cases.

Arch has very fast updates as a rolling release distro, and package definitions
are signed and often reproducible, but package sets change from one minute to
the next, which still leaves the challenge of pinning and archiving sets of
dependencies that work well together for software that is built with it and
requires determinism.

Nix is almost entirely reproducible by design and allows for lean and minimal
output artifacts. It is also a big leap forward in having good separation of
concerns between privileged immutable and unprivileged mutable spaces. However,
like Alpine, there is no maintainer-level signing, in order to reduce the
friction for hobbyists who want to contribute.

Guix is reproducible by design as well, borrowing a lot from Nix, and it does
maintainer-level signing like Debian. It comes the closest to the solution we
need, but it only provides single signed package contributions, and a `glibc`
base with a large dependency tree, with a significant footprint of tooling to
review and understand before one can form confidence in it. This is still more
overhead than we want or need for use cases like container builds of software,
lean embedded operating systems, or any sensitive system where we want the
utmost level of supply chain security assurance.

For those whose goal is to build their own software packages deterministically
with high portability, maintainability, and maximally easy supply chain
auditability, none of these solutions hit the mark.

Reflecting on these issues, we concluded that we want the `musl`-based,
container-ideal minimalism of Alpine, the obsessive determinism and full-source
supply chain goals of Guix, and a step beyond the single-signature packages of
Debian, Fedora, and Arch. We also concluded that we want a fully verifiable
bootstrapped toolchain, consisting of a compiler and the accompanying libraries
required for building most modern software.

You may know where this is going. Here is where we made the totally reasonable
and not-at-all-crazy choice to effectively create…
## Yet *Another* Linux Distribution
Let's take a look at some of the features we care about most, to make it
clearer why nothing else hit the mark for us.

A comparison of `stagex` to other distros in some of the areas we care about:

| Distro | Containerized | Signatures | Libc | Bootstrapped | Reproducible | Rust Deps |
|--------|---------------|------------|-------|--------------|--------------|-----------|
| Stagex | Native | 2+ Human | Musl | Yes | Yes | 4 |
| Guix | No | 1 Human | Glibc | Yes | Yes | 4 |
| Nix | No | 1 Bot | Glibc | Partial | Mostly | 4 |
| Debian | Adapted | 1 Human | Glibc | No | Partial | 232 |
| Arch | Adapted | 1 Human | Glibc | No | Partial | 262 |
| Fedora | Adapted | 1 Bot | Glibc | No | No | 166 |
| Alpine | Adapted | None | Musl | No | No | 32 |
We are leaving out hundreds of distros here, but at the risk of starting a holy
war, we felt it was useful to compare a few popular options for contrast to the
goals of the minimal container-first, security-first, deterministic distro we
put together.

We are not the first to go down this particular road. The Talos Linux project
built their own tiny containerized toolchain, from gcc to golang, as the base
to build their own minimal immutable k8s distro.

Getting all the way to bootstrapping rust, however, is a much bigger chunk of
pain as we learned…
## The Oxidation Problem - Bootstrapping Rust
Getting from gcc all the way to golang was mostly pain-free, thanks to Google
documenting this path well and providing all the tooling to do it. One only
needs 3 versions of golang to get all the way back to GCC.

Bootstrapping Rust is a bit of an ordeal. People love Rust for its memory
safety and strictness, however we have noticed supply chain integrity is not
an area where it excels. This is mostly because Rust changes so much from one
release to the next, that a given version of Rust can only ever be built with
its immediate predecessor.

If one follows the chicken-and-egg problem far enough, the realization dawns
that in most distros the chicken comes first. Most include a non-reproducible
"seed" Rust binary, presumably compiled by some member of the Rust team, use
that to build the next version, and then carry on from there. This means even
some of the distros that _say_ their Rust builds are reproducible have a pretty
big asterisk. We won't call anyone out - you know who you are.

Granted, even if you were to build all the way up from the OCaml roots of Rust
(if you can find that code and then get it to build), you would still require a
trusted OCaml compiler. Software supply chains are hard, and we always end up
back at the famous Trusting Trust Problem.

There have been some amazing efforts by the Guix team to bootstrap GCC, and the
entire package chain after it, from a tiny human-auditable blob of x86 assembly
via the GNU Mes project. That is probably in the cards for our stack as well;
for the short term, however, we wanted to at least go as low in the stack as
GCC, like we do with go, which is already a sizable effort. Thankfully, John
Hodge (mutabah), a brilliant (crazy?) member of the open source community,
created "mrustc", which implements a minimal semi-modern Rust 1.54 compiler in
C++, largely from transpiled Rust code. It is missing a lot of critical
features that make it unsuitable for direct use, but it _does_ support enough
features to compile the official Rust 1.55 sources, which can compile Rust
1.56, and so on. This is the path Guix and Nix both went down, and we are
taking their lead here.

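
The resulting stepping-stone chain can be pictured with a small loop (version numbers past 1.56 are illustrative, as the chain continues release by release):

```shell
# mrustc compiles the official Rust 1.55 sources; each rustc then builds
# its immediate successor, one release at a time.
prev="mrustc"
for v in 1.55 1.56 1.57; do
  echo "rustc $v <- built by $prev"
  prev="rustc $v"
done
```
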
Mrustc at the time lacked support for musl libc, which threw a wrench in
things, but after a fair bit of experimentation we were able to patch in musl
support and get it upstreamed.

The result is that we now have the first deterministic `musl` based Rust
compiler, bootstrapped from 256 bytes of assembly, and you can reproduce our
builds right now from any OS that can run Docker 26.
## Determinism and Real World Applications
To demonstrate how determinism can prevent real world attacks in practical
terms, let's consider a major breach which could have been prevented.
SolarWinds experienced a major security breach in which Russian threat actors
were able to compromise their infrastructure and piggyback on their software to
distribute malware to their entire client base. The attackers achieved this by
injecting malicious code into SolarWinds products, such as the Orion Platform,
which were then downloaded by end users. This seems like a very difficult thing
to protect against, but there is a surprisingly simple solution: if SolarWinds
had leveraged deterministic builds of their software, they would have been able
to detect that the binaries they were delivering to their clients had been
tampered with.

There are a few ways they could have gone about this, but without getting too
deep into implementation details, it would have sufficed to have multiple
runners in different isolated environments, or even on different cloud
platforms, reproduce the deterministic build and compare the resulting hashes
in order to verify that the binaries had not been tampered with. If any of the
systems built the software and got a different hash, that would be a clear
signal that further investigation was needed, which would likely have led to
the detection of the intruder. Without this approach, SolarWinds was completely
unaware of their systems being infiltrated for months, and during this period
large quantities of end user data were exfiltrated, along with their tooling.
Considering SolarWinds is a cybersecurity software and services provider, the
tools stolen from them were likely used to further develop and weaponize the
attacker's capabilities.
## Future Work
These initial efforts were predominantly sponsored with financial and
engineering time contributions from Distrust, Mysten Labs, and Turnkey, who all
share the threat models and container-driven workflows Stagex is designed to
support.

While we all have a vested interest in helping maintain it, we felt it
important that this project stand on its own and belong to the community, and
we are immensely appreciative of the volunteers who have very quickly dived in
and started making significant contributions and improvements.

As of writing this, Stagex has 100+ packages covering some of the core software
you may be using regularly, all built with the deterministically built
toolchain, and of course the packages themselves are also built
deterministically. These include `rust`, `go`, `nodejs`, `python3.8`, `curl`,
`bash`, `git`, `tofu` and many more.

We would like to support building with `buildah` and `podman` for build-tooling
diversity. We would also love help from the open source community to see GCC
bootstrapped all the way down to x86 assembly via Mes. This may require using
multiple seed distro containers working in parallel to ensure we don't have a
single provenance source for that layer.

We are also actively working on, and have made some progress towards, the
addition of core packages required to use this distribution as a minimal Linux
OS.

If you have need for high trust in your own build system, please reach out and
we would love to find a way to collaborate.
## References
* [Bootstrapping Rust](https://guix.gnu.org/en/blog/2018/bootstrapping-rust/)
* [Full-source bootstrapping](https://guix.gnu.org/en/blog/2023/the-full-source-bootstrap-building-from-source-all-the-way-down/)
* [Running the "Reflections on Trusting Trust" Compiler](https://research.swtch.com/nih)
* [Reflections on Trusting Trust](https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_ReflectionsonTrustingTrust.pdf)


@ -0,0 +1,97 @@
# ByBit Incident Report
The ByBit incident is an example of a nation state actor using a series of sophisticated attacks to compromise high value targets. When the value at stake justifies spending funds on buying 0-days (in some cases several, combined into elaborate exploit chains), attacking multiple different layers of the tech stack, highly targeted social engineering, compromise of individuals, planting of moles, or even physical attacks, the threat model which needs to be assumed to adequately address risk is extreme.
### Threat Model Assumptions
The assumptions we make about nation state actors at Distrust:
* All screens are visible to the adversary
* All keyboards are logging to the adversary
* Any firmware/boot-loaders not verified on every boot are compromised
* Any host OS with network access is compromised
* Any guest OS used for any purpose other than prod access is compromised
* At least one member of the Production Team is always compromised
* At least one maintainer of third party software used in the system is compromised
* Physical attacks are viable and likely
* Side-channel attacks are viable and likely

The mitigating controls suggested in this report consist of tools which we developed to address exactly this type of threat model; they are at varying levels of maturity. The good news is that the reference designs and concepts are available to you today, but some of the tooling needs more work - so if you care about these issues and want to help us complete the work on the missing pieces, please talk to us.
### The Method
This report highlights the major single points of failure, which rely on a single individual and/or computer, thus creating an opportunity for compromise. Blockchains benefit from the security of the network via strong cryptography and decentralization. More "traditional" parts of the infrastructure historically have not had the ability to distribute trust, but with some clever tactics we can achieve a decentralization of trust which helps us ensure that no single individual or computer can compromise a system.
## Root Cause Analysis and Mitigating Controls
### Developer Workstation Compromise
> Earliest known malicious activity was identified when a developer's Mac OS workstation was compromised, likely through social engineering. ([Sygnia report](https://www.sygnia.co/blog/sygnia-investigation-bybit-hack/))
#### Primary Mitigation
Day-to-day work machines should not be used for production access or for managing production access tokens. This is an operational security shortcoming: any interaction with production systems, whether via an API token or a web interface, should be done via a dedicated computer or highly isolated environment (hardware-based virtualization like QubesOS/Xen preferred), with minimal dependencies, used only for carrying out production tasks. Any interactions outside of production related tasks create opportunities for the system to be compromised - downloading and opening files, downloading and running software (such as Docker, which was the source of malware in this case), visiting websites (yes, the browser sandbox can be broken), etc.
#### Advanced Mitigation
Another way to mitigate this risk is to use a hardened server, such as a secure enclave, which is immutable and can remotely attest to the code it's running. Setting up that server to only deploy code that's signed by x trusted PGP keys (or keys for other signing algorithms) can achieve a state where no single individual has the ability to modify the infrastructure.
- Use [EnclaveOS](https://git.distrust.co/public/enclaveos) - a minimal and immutable operating system for running security critical software with high accountability on secure enclaves. EnclaveOS can also be extended to support multi-party management of secrets such that no person can control them alone. This can be used to set up a secure enclave which acts as the deployment system. EnclaveOS is a reference implementation, but we are happy to help invest energy into making this tool easier to use for everyone.
- Use [Bootproof](https://git.distrust.co/bootproof) alongside EnclaveOS to prove which software booted on a given system by leveraging platform hardware or firmware remote attestation technologies. This tool is designed but not yet in development. Currently EnclaveOS can be used with Nitro VMs on AWS with some work to achieve remote attestation - and several Distrust clients are using this setup in production today. Our team would be happy to invest energy to develop this tooling if anyone is willing to help fund it. It would unlock use of general hardware like TPMs and other remote attestation technologies to allow deploying remote attestation setups to different cloud platforms for more security via diversity.
#### Additional notes
* This isn't the first time an attack like this has happened. Those who have been around for a while will remember the [Axie Infinity Hack](https://www.bleepingcomputer.com/news/security/hackers-stole-620-million-from-axie-infinity-via-fake-job-interviews/), which also happened due to the compromise of a developer who used their day to day machine for managing cryptographic material and accessing production systems.
* The use of tools like MDM on systems for production access is not recommended, as they create a single point of failure. Most MDM solutions mean that a third party has complete access to the fleet of computers it's "protecting", and even a self hosted MDM creates a large single point of failure which is challenging to mitigate to a reasonable degree. Instead, the approach should rely on making the surface area for attack so minimal that introducing anything else introduces more risk than benefit. For illustrative purposes, imagine a hardware-based virtual machine which only has a minimal operating system, the CLI tool for the preferred cloud platform, and a network interface with a firewall configuration permitting only connections to a specific production asset. If this system is only used for accessing that specific asset, the introduction of anything additional, including an MDM or anti-malware/anti-virus software, actually increases the surface area for attack. Of course, this is a stepping stone to improve controls around accessing production systems until better mitigating controls can be put in place, making it impossible for an individual to directly interact with and change production systems.
* Additional resiliency can be achieved by deploying the deployment system across multiple accounts with different ownership, or even different cloud platforms. This is out of scope for this report, which focuses on the mitigating controls where most companies should start their journey to improve their supply chain security.
* It is also worth noting that it appears a Docker container with network connectivity was used to initially compromise the developer's machine. This points to an often overlooked issue: Docker is not a secure containerization technology, as it makes it fairly trivial to move files across the container boundary, by design. This is useful for some use cases, but not for strict isolation - which should instead rely on hardware-based virtualization.
### JavaScript Code Tampering
> Preliminary incident reports by both Sygnia and Verichains were shared by Bybit's CEO, Ben Zhou, in his X post. Both reports highlighted the same attack vector: the modification of JavaScript resources directly on the S3 bucket serving the domain app.safe[.]global. ([Sygnia report](https://www.sygnia.co/blog/sygnia-investigation-bybit-hack/))
#### Primary Mitigation
Ensure that the bucket / server serving the website cannot be modified by a single individual. Set up immutable infrastructure by deploying software using a hardened server - such as an enclave - that only serves software reproduced across multiple systems and signed by a set of trusted parties. The software is then deployed to an immutable server or bucket for secure delivery to clients. The main risk to mitigate here is the "root" access account controlling the infrastructure. However, secure enclaves and remote attestation can effectively reduce this risk ([EnclaveOS](https://git.distrust.co/enclaveos) + [Bootproof](https://git.distrust.co/bootproof)).
#### Advanced Mitigation
* Leverage bit-for-bit reproducibility to ensure that the software being delivered has not been tampered with. In the case of JS code, which is interpreted rather than compiled, the source code can be reviewed and hashed to provide a way to check the integrity of the code. This hashing should be done in trusted, isolated environments, and ideally on multiple machines, to ensure that no single computer has the ability to tamper with the code.
* [This video](https://antonlivaja.com/videos/2024-incyber-stagex-talk.mp4) (4:30-6:30) explains how reproducibility helps protect the integrity of software. For those new to reproduction and determinism, it's advised to watch the whole video.
* This attack vector actually extends to all underlying software used in the build environment, such as the various libraries, as well as the compiler. To maximally mitigate this risk, a bootstrapped compiler should be used, and all software, including the compiler itself, should be built deterministically to close off tampering attack vectors across the whole foundation of software used in build environments. This allows one to reproduce the identical bit-for-bit binary in diverse environments (different OS, different chipset, different cloud platform, different access, etc.), and ensure that the binary is still exactly the same - proving there has been no tampering.
* Use [StageX](https://codeberg.org/stagex/stagex) to reproduce your software and close off compiler and environment risks. StageX is a minimalism and security first repository of reproducible and multi-signed OCI images of common open source software toolchains full-source bootstrapped from Stage 0 all the way up. It's currently actively being used by [Talos Linux](https://github.com/siderolabs/talos/releases/tag/v1.10.0-alpha.2), [Mysten Labs (SUI)](https://github.com/MystenLabs/sui/tree/jnaulty/stagex-update) and [Turnkey](https://whitepaper.turnkey.com/foundations) to name a few of the widely known projects.
* Use [ReprOS](https://codeberg.org/stagex/repros) to help with reproduction. It's a bare-bones immutable OS designed for securely reproducing and signing software. Each build is executed in a one-time use environment, eliminating persistent risks. This project is currently in beta testing.
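
The hashing step described above for interpreted code can be sketched with standard tools; the directory and file below are placeholders standing in for a reviewed JS bundle:

```shell
# Placeholder source tree standing in for the reviewed JavaScript bundle.
mkdir -p src
printf 'console.log("hello");\n' > src/app.js

# Hash every file in a stable order, then hash that listing to get a single
# digest for the whole tree; repeat this on several isolated machines and
# compare the results before the code is allowed to be served.
digest=$(find src -type f | sort | xargs sha256sum | sha256sum | cut -d' ' -f1)
echo "tree digest: $digest"
```
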
#### Additional Notes
All third party code should be manually reviewed. Currently most companies rely on SAST tools. This is not enough, as SAST tools are unable to detect novel exploits. The cost of using open source code, at a minimum, should be to review every line of code manually. If companies are so stringent about having developers review their first party code, why do they choose not to apply the same principles to third party code? It is burdensome, but necessary for high risk targets. If you're unfamiliar, a good example of what's possible with supply chain attacks is the [xz backdoor](https://en.wikipedia.org/wiki/XZ_Utils_backdoor).
- Distrust's answer to this is [SigRev](https://git.distrust.co/public/sigrev), which helps harness the power of nerds to create a repository of signed reports for reviews of open source software. The idea is that companies can come together to fund review of common open source software, to save money, and simultaneously help secure open source software. SigRev has been designed, but is not yet in development and is seeking funding.
### Compromise of WebUI
> Bybit initiated a transaction from the targeted cold wallet using Safe{Wallet}'s web interface. The transaction was manipulated, and the attackers siphoned the funds from the cold wallets. ([Sygnia report](https://www.sygnia.co/blog/sygnia-investigation-bybit-hack/))
#### Primary Mitigation
Initiating transactions from a WebUI leaves a lot of surface area for attack, as browsers are notoriously difficult to protect. This is due to the nature of what a browser is - a window into the open internet. Additionally, the V8 engine which is the backbone of most browsers is an immensely complex and difficult surface area to defend, resulting in frequent 0-day vulnerabilities, as well as supply chain issues.
* Do not sign transactions involving large sums in a browser.
* Use offline trusted environments for signing, to protect key material, and mitigate the risk of a compromised UI displaying incorrect information. In the case of the ByBit hack in particular, preventing the JS tampering would have mitigated this risk, but other supply chain attack vectors which can achieve the same outcome remain (extensions, v8 engine 0-day exploits etc.). By using a minimal set of CLI tools to sign transactions offline, the WebUI compromise would have been avoided.
* Use [AirgapOS](https://git.distrust.co/public/airgap), an immutable, diskless OS used for offline secret management and operations. It is a swiss-army knife which essentially turns a laptop into a hardware wallet; some modifications to the laptop are required, such as removing its radio cards. It ships with [keyfork](https://git.distrust.co/public/keyfork) and [Icepick](https://git.distrust.co/public/icepick), tools for generating and managing entropy which can be derived for different cryptographic algorithms, as well as for cryptographic signing operations. Keyfork and Icepick are both extremely minimal and written in Rust. They currently support Solana, Pyth, Cosmos, Kyve and Seda, as we received funding to implement those, but they can be extended to support other chains - we are currently working on Bitcoin, and would be happy to add support for Ethereum as well; this is not a political decision, we simply had individual sponsors fund support for those blockchains first. These three tools are all being used in production today by multiple clients, and have been audited by several security firms whose reports can be found in the respective repositories.
## Extras
We have noticed that many companies still neglect basic security hygiene practices that apply to everyone and could meaningfully improve the security of systems with relatively little effort.
1. Adopt FIDO2 as MFA wherever possible and avoid using SMS, TOTP, Yubico OTP, email codes and push notifications. If your provider doesn't offer FIDO2, you should ask them why, as it's objectively the best type of MFA currently available.
2. Use smart cards for FIDO2, and for managing PGP keys which can be used for *signing commits* and *ssh* access. We built [tooling and guides](https://qvs.distrust.co/generated-documents/all-levels/pgp-key-provisioning.html) which make it easy to provision PGP keys and load them onto smart cards. Signing commits is helpful as it can protect against modification of code via attacks like commit spoofing, and keeping the SSH key securely inside a smart card is akin to keeping seed phrases safely stored in HSMs.
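
On the git side, enabling commit signing is two settings; sketched here in a throwaway repo with a placeholder key ID (provisioning the key itself onto a smart card is covered by the guides above):

```shell
# Throwaway repo so the settings stay local to this demo.
git init -q signing-demo
cd signing-demo

# Point git at the (placeholder) PGP key and sign every commit by default.
git config user.signingkey 0xDEADBEEF00000000
git config commit.gpgsign true

# Later, signatures can be checked with: git verify-commit <ref>
git config --get commit.gpgsign
```
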
## Summary
The Distrust team has helped build and secure some of the highest risk systems in the world, such as the vaulting systems at BitGo, Unit410, and Turnkey, as well as helping electrical grid operators, industrial control system operators, and others. Through working with companies that are exposed to the most sophisticated known attackers, where all attacks are viable, Distrust developed a methodology to help mitigate this level of threat. We are now using our hard learned lessons to help everyone improve their security posture, by open sourcing all our learnings and creating open source tooling everyone can benefit from.
You can learn more about what we are building on our [website](https://distrust.co).