Keywords are a key part of, well, any programming language, & Clarity, the language behind secured-by-Bitcoin Stacks, is no different. What is different is the cost of misusing them: in other languages, avoiding or lightly using keywords is not a deal-breaker (at least not in the early days); in the smart contract world, though, leaving even a character of doubt when using keywords creates surface area for subtle bugs or, more likely & infinitely worse, expensive exploit vectors.
Two of the most common keywords in Clarity, both used to refer to a principal (a wallet or contract address), are contract-caller & tx-sender. Along with block-height, contract-caller & tx-sender are a critical part of a Clarity dev's arsenal from day 1. While they're easy to understand at the surface level, understanding the minute implementation differences in detail is non-negotiable: both keywords are inherently secure, but both can be abused in different ways.
Today, we’re reviewing the slightly more popular tx-sender. Specifically, we’ll see how its defining property, principal persistence as the originator, can be exploited through phishing & hijacking of that persistence:
Bypassing Admin Asserts! Through Tx-Sender Phishing
The majority of contracts are written somewhat like this: definitions of constants, vars, maps, etc., then the body of public functions, & finally a mini-body of public admin functions. Admin functions are those that allow for dangerous activities such as changing a mint price, pausing a critical public function, etc. 99% of the time these admin functions gate access with some sort of admin check using asserts! in the following way:
(asserts! (is-eq tx-sender [admin-address]) (err NOT-AUTHORIZED))
We’ll now explore how, if devs are careless & users are inattentive, the persistence of tx-sender can be phished to bypass admin controls.
Let’s first review exactly what the two keywords do in Clarity, as defined by the Clarity Book: contract-caller refers to the principal that called the function, while tx-sender refers to the principal that sent the transaction.
The difference between “called the function” & “sent the transaction” seems, on a surface level, small enough to ignore. However, it’s this deceptive similarity that leaves people over-using tx-sender without really understanding the underlying implications. Let’s go ahead & re-write these definitions in our own words:
When I say “transaction chain” I mean this: a public function call (aka an originated transaction) may or may not trigger a sequence (or chain) of multiple internal functions or external contract calls, each of which we can also think of as a transaction. For example, consider minting an NFT paid for in a fungible token such as $MIA. The client/end-user originates a single mint transaction; however, within that function there is also a contract-call? to the $MIA token smart contract that transfers (pays in) that token, like so:
(unwrap! (contract-call? 'SP466FNC0P7JWTNM2R9T199QRZN1MYEDTAR0KP27.miamicoin-token transfer mint-price-mia contract-caller contract-owner (some 0x00)) (err u102))
(nft-mint? example-nft (var-get last-id) tx-sender)
When the keyword tx-sender is used the context never changes: it always refers to the transaction originator. However, when the keyword contract-caller is used the context may change: it refers to the most recent transaction sender which may or may not be the originating principal.
In other words, regardless of how many hops a transaction makes, tx-sender will always refer to tx-sender — the originator sending the transaction. Now, let’s explore how this persistence can be hijacked for an exploit.
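To make the distinction concrete, here’s a minimal sketch (contract & function names are hypothetical) of how the two keywords diverge once a call hops through an intermediate contract:

```clarity
;; whoami.clar (hypothetical)
(define-read-only (who-am-i)
  (ok {sender: tx-sender, caller: contract-caller}))

;; relay.clar (hypothetical)
;; if a user's wallet calls relay, whoami.clar sees:
;;   tx-sender       = the user's wallet (the originator persists)
;;   contract-caller = the relay contract (the most recent caller)
(define-public (relay)
  (contract-call? .whoami who-am-i))
```

Call who-am-i directly & the two keywords agree; call it through relay & they split apart. That split is the entire story of this exploit.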
Let’s imagine we have our client/end-user Bob, who is an admin in some new NFT project (we’ll call it NF3); as such, his wallet is checked by asserts! in privileged admin functions such as update-admin-address, update-mint-price, freeze-metadata, set-uri, etc. Below is the admin function update-admin-address (very similar to some I’ve seen on mainnet):
;; contract NF3.clar
;; ...normal NFT functionality...
(define-data-var current-admin principal tx-sender)

(define-public (update-admin-address (new-admin principal))
  (begin
    ;; checks that the principal calling is the current admin (Bob) via tx-sender
    (asserts! (is-eq tx-sender (var-get current-admin)) (err u101))
    ;; updates admin to the new-admin param
    (ok (var-set current-admin new-admin))))
We see that this is a super simple function that explicitly does two things: it asserts that tx-sender is the current admin, then updates the admin to the new-admin parameter.
If Bob calls update-admin-address directly (say, passing Alice as new-admin) through a command-line interface or through a trusted front-end, then all is good: only Bob can pass the asserts! check.
However, what if Bob were calling this from an unknown front-end & he speed-reads through the post-conditions? A hacker could leverage the persistence of tx-sender to circumvent the asserts! check in NF3.clar. Below is an example phishing contract (exploit.clar) with a single function:
;; contract exploit.clar
;; calling update-admin-address of NF3.clar from this contract,
;; yet the transaction is still originated by Bob
;; (the function name is arbitrary bait; note there is NO as-contract
;; wrapper — as-contract would overwrite tx-sender with this contract's
;; own principal & defeat the phish)
(define-public (claim-free-mint)
  (ok (try! (contract-call? .NF3 update-admin-address [hacker-address-here]))))
The [hacker-address-here] principal above is the new-admin parameter for update-admin-address; as you can see, it can be set to any principal desired. Let’s call our exploiter Elliot. If Bob, or whoever the current-admin principal is, is at any time phished into calling the contract above, the following happens:
The asserts! statement in NF3 checks against tx-sender, which is always the originating transaction sender. In this case, since Bob kicks off the transaction chain by being phished, the tx-sender check passes & the new admin is set to Elliot, our exploiter! This scenario is particularly dangerous since our exploiter is now the admin moving forward, but the logic here works for any admin function asserting with tx-sender: if the originator ever gets phished, the persistence of tx-sender can & will be abused.
For the non-technical readers curiously perusing here, there are two main ways to reduce the likelihood of this occurring. First, absolutely avoid interacting with any unfamiliar front-end, particularly one that has post-conditions turned off / is run in allow-mode. Second, make sure to always scroll down & read through every single post-condition when signing a transaction; do not be lazy & simply stop reading at the cut-off. It’s easy to obfuscate malicious post-conditions with sheer quantity, so read over every single one.
For the developers, the prevention here is simple: mentally ingrain the differences between contract-caller & tx-sender. Test the scenario presented above yourself as you’re writing out your next contract. Think through exactly how the context of tx-sender will persist; assume someone will write a malicious contract that hijacks that persistence & hedge against it.
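As one sketch of that hedge (variable & error values mirror the NF3 example above), an admin assert that pins contract-caller in addition to tx-sender stops the phishing contract cold: mid-chain, contract-caller would be the exploit contract rather than the admin’s wallet, so the check fails even if Bob originated the transaction.

```clarity
;; hardened admin check: the admin wallet must be BOTH the transaction
;; originator AND the direct caller, so the assert fails whenever any
;; intermediate contract sits between the admin & this function
(asserts! (and (is-eq tx-sender (var-get current-admin))
               (is-eq contract-caller (var-get current-admin)))
          (err u101))
```

The trade-off: admins can no longer route these calls through a wrapper contract (e.g. a multisig), so document that restriction if you adopt this pattern.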