RandomExampleFinance has been audited. Smart contracts - clean. And yet, six months later, funds are gone. This post is about everything the audit didn't cover.
This can happen for many reasons - not only because a hacker befriended you, and not only because of a lost multisig key. If you want to run a similar exercise against your own company, follow along.
The overview
Let's assume we have a company called RandomExampleFinance (I hope no real company exists under that name). It is a small web3 startup: a CEO, a CTO, two marketing people, five devs reporting to the CTO, and an HR employee. The company works remotely - everyone is a digital nomad, and the infra is shared online. The device policy is purely BYOD, and there is no expensive corporate EDR connected to a 24/7 SOC - sorry, we're on a budget! The company's main product is a DeFi protocol offering some novel way to earn yield - how exactly doesn't really matter.
The assets
In order to understand the threat model, we first have to define what we are protecting. The holiday photos an employee keeps on his laptop? No - the core assets are the private key controlling our funds, and the funds themselves. In second order: anything that could lead to losing them - the application users interact with, trust in your website domain, your social media accounts. This is the core we need to protect.
How could the funds be stolen?
There are a few common attack scenarios:
- A smart contract exploit leading to loss of funds
- A web2 type exploit leading to loss of funds (no user interaction)
- A web2 type exploit leading to loss of funds (user interaction)
- An exploit or attack against the specific person who has access to the funds (keys)
- Dependency confusion / Supply chain attack
- Governance/multisig or malicious employee attack
Each has different causes and a different way to prevent it.
1. A smart contract exploit leading to loss of funds
This is probably the easiest one to understand and the best known in the web3 industry. Simply put, the smart contract may have a logic or coding flaw that allows an attacker to steal the funds.
Remediation: a smart contract audit, plus a bug bounty program to support ongoing testing. A one-time test doesn't guarantee anything - it represents a point in time, while techniques evolve. What if a new vulnerability class is discovered right after the audit? Use the layered approach described below to maximize audit efficiency.
Pro tip: Companies tend to believe that changing just one line of code against the audited commit shouldn't change anything. This often turns out to be false. It is always better to reach out to whoever performed the audit and ask (even at extra cost) to verify the change.
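One cheap way to enforce this in CI is to fingerprint the audited sources and refuse to deploy anything that differs. A minimal sketch, assuming you record a hash per audited file (the file name and contents below are made up for illustration):

```python
import hashlib

# Hypothetical registry: sha256 of every file at the audited commit.
# In practice this would be generated once, right after the audit.
def file_fingerprint(source: str) -> str:
    """Return the sha256 hex digest of a contract source string."""
    return hashlib.sha256(source.encode()).hexdigest()

def verify_against_audit(name: str, source: str, audited: dict) -> bool:
    """True only if the source is byte-for-byte what was audited."""
    return audited.get(name) == file_fingerprint(source)
```

Even a one-character "harmless" change flips the fingerprint, which is exactly the point: the check forces the conversation with the auditors instead of silently shipping the diff.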
2. A web2 type exploit leading to loss of funds (no user interaction)
This one only applies in some setups. "No user interaction" means that software running on your app's servers can be exploited remotely, without any victim having to click anything.
A web2 exploit is the kind of thing the OWASP Web Security Testing Guide (WSTG) describes - you have probably heard of SQL Injection or Remote Code Execution. The testing guide runs to 400+ pages, and many exploit chains are possible.
Even if the dApp is only a frontend, it can still be abused - but that is covered in a later point.
How much of this applies depends on your attack surface.
Remediation: Asset inventory first - know every subdomain, server, and cloud resource you own. Put everything customer-facing behind Cloudflare. Anything not meant for clients shouldn't be internet-accessible at all. Commission a blackbox OSINT sweep before a whitebox pentest - you may be surprised what's findable.
Pro tip: Scope your pentest around your actual assets, not just your most popular app. "Test everything" with no asset inventory just burns budget on the wrong things.
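The inventory itself can be trivially simple and still enforce the rule above. A minimal sketch, with made-up asset names, encoding the two constraints from the remediation: internal assets must not be internet-accessible, and customer-facing ones sit behind a CDN/WAF:

```python
# Hypothetical asset inventory; the names are illustrative.
ASSETS = [
    {"name": "app.example.com",     "customer_facing": True,  "internet_exposed": True,  "behind_waf": True},
    {"name": "staging.example.com", "customer_facing": False, "internet_exposed": True,  "behind_waf": False},
    {"name": "db-internal",         "customer_facing": False, "internet_exposed": False, "behind_waf": False},
]

def policy_violations(assets):
    """Yield names of assets that break the exposure policy."""
    for a in assets:
        if not a["customer_facing"] and a["internet_exposed"]:
            yield a["name"]   # internal asset reachable from the internet
        elif a["customer_facing"] and not a["behind_waf"]:
            yield a["name"]   # public asset without a WAF/CDN in front
```

A forgotten staging host is exactly what a blackbox OSINT sweep tends to surface; a check like this makes the same finding cost nothing.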
3. A web2 type exploit leading to loss of funds (user interaction)
User interaction means that for the attack to succeed, users need to do something - for instance, approve a malicious transaction. However, experience with web3 hacks shows it isn't always difficult to convince users, so the "user interaction" requirement should never be underestimated.
An example of such an attack: a site serving third-party data, e.g. token information, gets hacked and starts distributing malicious scripts. Or a third-party provider is compromised and the scripts on your website are hijacked. What can you do?
Typical technical controls are well-known frontend hardening measures such as Subresource Integrity (SRI), a Content Security Policy (CSP), and proper cookie flags.
All of them secure the user's browser by instructing it what it is allowed to load, so injected content, like rogue scripts, won't be effective. And the truth is that every penetration test flags these - typically as "low" findings. Every free scanner finds them. But the practical impact is huge: these controls either block unwanted scripts or they don't. Despite the "official" CVSS scores, in web3 a script injection leads to massive damage, so these controls should be prioritized.
Remediation: SRI, CSP, proper cookie flags. Don't pull latest dependencies immediately on release. Treat third-party scripts as untrusted by default.
Pro tip: "Your frontend is only as secure as the least secure third-party script it loads." Since these are typically low-severity findings, hardly anyone patches them (because configuring CSP is painful).
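SRI in particular is cheap to adopt: you hash the exact script you reviewed and put the hash in the tag. A minimal sketch of computing the integrity value (sha384 is the commonly recommended digest for SRI):

```python
import base64
import hashlib

def sri_integrity(script_bytes: bytes) -> str:
    """Compute the value for an HTML integrity attribute (sha384)."""
    digest = hashlib.sha384(script_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode()

# The value goes into the script tag, e.g.:
#   <script src="https://cdn.example.com/lib.js"
#           integrity="sha384-..." crossorigin="anonymous"></script>
# If the CDN later serves even one different byte, the browser
# refuses to execute the script.
```

This is exactly the property you want against a hijacked third-party provider: the compromise changes the bytes, the hash no longer matches, and the payload never runs.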
4. An exploit or attack against the specific person who has access to the funds (keys)
Knowing what happened to Drift protocol, one can ask whether it is even possible to defend against this style of attack. Building a six-month relationship has no precedent in any red team engagement, and 99% of people in that situation would open whatever a long-term work colleague sends them. So what is the path from opening a malicious file to losing the key?
Typically, when an automated RAT runs on a workstation, it includes a stealer module: an automated plunderer that knows where to look and what to look for. It takes whatever it finds and disappears. In most cases these are built to evade AV detection, too.
Also keep in mind that phishing doesn't always end in RAT execution. It may be an AWS or Gmail account takeover. It can be credential theft used to tweet from the company account that the new token is $SCAM and whoever buys first gets an airdrop.
Speaking of phishing - make sure every team member has MFA configured, ideally application-based to avoid SIM swaps. Be aware, however, that if an active application session is stolen from the user's browser (via cookie theft), it may completely bypass MFA.
Remediation: For maximum security, use a KMS or hardware-based key management. A .env file is not truly secure, but in most cases it is an acceptable tradeoff - just never on a developer's personal laptop with production keys. Use a dedicated signing machine for key operations only - no browsing, no dev work. Compartmentalize key access by role.
Pro tip: "If your lead dev has the production key on his laptop because he was 'just testing deployment' - that's your biggest vulnerability right now, and no audit will catch it."
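One concrete control here is secret scanning before code leaves a laptop. A minimal pre-commit sketch - the regex catches the common shape of a raw EVM-style private key (64 hex characters, optional 0x prefix); real tools like gitleaks or trufflehog carry far more rules:

```python
import re

# Illustrative single rule: a bare 64-hex-char blob looks like a private key.
PRIVATE_KEY_RE = re.compile(r"\b(?:0x)?[0-9a-fA-F]{64}\b")

def find_suspect_lines(text: str):
    """Return (line_number, line) pairs that look like a raw private key."""
    return [(i, line)
            for i, line in enumerate(text.splitlines(), 1)
            if PRIVATE_KEY_RE.search(line)]
```

This won't stop a stealer that is already running, but it does stop production keys from quietly accumulating in repos and dotfiles - which is what the stealer is looking for.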
5. Dependency confusion / supply chain attack
This is trickier, because it often happens through a third-party compromise rather than a direct attack on your infrastructure. Moreover, audits often state: third-party integrations are out of scope. And that's fine - but has the team actually made a list of its external integrations and threat modeled them? Are you at least aware of the worst that can happen if an external integration misbehaves?
On the other hand, some supply chain attacks strike immediately. If you are unlucky, the dependency you are pulling right now may have been hijacked four seconds ago. Attackers know teams auto-update. These days, information about a compromise travels fast - but not always fast enough.
Remediation: Pin your dependency versions. Use lockfiles. Review diffs on dependency updates before merging. Maintain a list of your third-party integrations and what access each one has - if one gets compromised, you should know the blast radius without having to figure it out under pressure.
Pro tip: "The 'we'll just use latest' policy is a bet that every maintainer of every package in your dependency tree will never get compromised. That's a lot of trust in strangers."
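Pinning is also trivially auditable. A minimal sketch of a CI gate over a requirements-style dependency list (the package names below are illustrative), flagging anything not pinned to an exact version:

```python
def unpinned(requirements: str):
    """Return dependency lines that are not pinned with '=='."""
    bad = []
    for line in requirements.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        if "==" not in line:
            bad.append(line)
    return bad
```

The same idea applies to any ecosystem: a lockfile plus a check that the lockfile is actually authoritative turns "we'll just use latest" from a policy into a reviewable diff.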
6. Governance/multisig or malicious employee attack
This is seen more often than you'd think. Especially in remote, distributed teams, you don't always know your peers well enough - and even if you do, there is probably little you can do if they decide to misbehave. Or a colleague may have been compromised. How would you notice that a colleague is compromised when he writes to you? Have you ever thought about that?
For instance, does your team review what is being signed with the multisig? Or does everyone just sign because "something was pushed"? Do you have a process for this?
Another thing that is often overlooked: offboarding. It is very common for former employees to remain in shared chats, keep shared access, and so on. And yes, 99% of people are honest, hardworking humans who simply ignore it or forget about it. But there is the other 1%.
Remediation: Map critical procedures and assets - multisig operations, adding new users, contract upgrades, funds transfers. The process should assume someone might be dishonest. Yes, this is inconvenient - it is up to you to decide whether a less flexible process is worth the protection against that 1% chance of a hack.
Pro tip: Employees and colleagues may be compromised. Enforce MFA (or at least 2FA) on all their accounts. Maintain a list of all accesses granted at onboarding, and revoke them all at offboarding.
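That access list doesn't need tooling to exist - it needs to exist at all. A minimal sketch of an access registry (system names are made up), where grants are recorded at onboarding so that offboarding is a single operation rather than an archaeology project:

```python
class AccessRegistry:
    """Toy registry: who was granted access to which systems."""

    def __init__(self):
        self.grants = {}  # employee -> set of system names

    def grant(self, employee: str, system: str):
        self.grants.setdefault(employee, set()).add(system)

    def offboard(self, employee: str) -> set:
        """Revoke everything for an employee; return it for the audit log."""
        return self.grants.pop(employee, set())
```

The returned set doubles as a checklist: every entry is an account to actually disable, a chat to actually remove, a key to actually rotate.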
The practical part: if I were a small company's CISO, where would I invest?
The company needs to grow. Like a person who has to take care of basic everyday security but cannot live in a bunker - safe, yes, but losing any upside life could bring. So how do you navigate this wild space? For a small company, the security budget is limited and the attack surface seems infinite. Some of these ideas may help.
Follow the money - start with the smart contracts / web3 code
Simply put: if your main TVL sits in your smart contracts, audit them. But how you do it should be well thought out. The more low-hanging fruit is found before the expensive audit, the more time the auditors spend on actual edge cases instead of documenting four missing onlyOwner modifiers on admin functions.
Pro tip: Ideally, the last audit should have few to no findings. That is the sign the layered approach worked - the earlier rounds caught the easy issues, and the expensive auditors focused on what matters.
Asset inventory and principle of least privilege
For any other infrastructure you have, especially web servers and websites, apply the discipline from point 2: keep a full asset inventory, expose only what must be customer-facing, and grant every person and system the least privilege that still lets them work.
Security culture
The company does not need to run monthly phishing simulations or, even worse, corporate security awareness trainings. But it is good to play the long-term awareness game and talk security. From time to time, analyze root causes. Find one person who can bring the topic up periodically - say weekly or monthly - discuss one recent incident, and brainstorm: what could the company do? Would we have been protected, or would we have been hacked?
In the end, being secure means being more time-consuming to hack than the next similar company at the same risk/reward ratio for the attacker, and not falling for bait (e.g. a dev compromised during a fake interview). If those conditions are met, the risk of a real targeted attack - where someone deliberately chose your small company - is relatively low.
Threat modeling
While professional security services should be performed by an entity that does this for a living (which should guarantee extensive experience), a threat model can actually be built by the team itself - and even shared with the security provider to communicate the main concerns. There are formal methodologies, like STRIDE as described by OWASP, but even a small exercise builds an adversary mindset. Ask yourself: what if this function is abused? If an attacker wanted to steal our money, what would he do? Asking those questions leads to security-oriented thinking, which leads to more secure design.
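A tabletop STRIDE pass can literally be a loop over questions. A minimal sketch - the component names and question wording are illustrative, the point is forcing the team to answer each prompt out loud:

```python
# One question per STRIDE category, phrased for a small-team exercise.
STRIDE = {
    "S": "Spoofing: could someone impersonate this component or its users?",
    "T": "Tampering: could its data be modified in transit or at rest?",
    "R": "Repudiation: could an action here be denied without a trace?",
    "I": "Information disclosure: what leaks if this is breached?",
    "D": "Denial of service: what happens to users and funds if it goes down?",
    "E": "Elevation of privilege: could a user gain admin/owner rights here?",
}

def exercise(components):
    """Yield one (component, question) prompt per STRIDE category."""
    for component in components:
        for question in STRIDE.values():
            yield component, question
```

Run it over "withdraw()", "the multisig", "the deploy pipeline" - thirty minutes of answering these prompts is a real, if small, threat model.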
As a summary, I will leave you with a not-very-optimistic conclusion: no matter what you do, security is NEVER 100% - in the same way a human can die to a falling coconut. On the other hand, assuming normal circumstances and no terrible misfortune or black swan event, building a security-oriented culture and using security services consciously, as described here, increases your chances of an incident-free business.
But if there's one thing to take from this - "no audit covers your ops. No tool covers your people. The weakest link in every incident post-mortem isn't the code - it's a process someone skipped because it was inconvenient that day."