How to audit a Rust Stylus project, quadratic voting in Stylus
Stylus Saturdays, Part 5, Jan 25th, 2025
To recap: Stylus Saturdays is a regular newsletter that focuses on developments in the Stylus ecosystem. Be sure to give us a follow if you enjoy our content!
Hello again!
In this extra long post, we’re going to be discussing how to audit your Stylus project (namely, my preference for avoiding big-name firms, and the rationale behind that), and how to build a quadratic voting protocol contract.
In the process of building a quadratic voting contract, we will be discussing the issues with a textbook product development process, and developing some macros to simplify storage accesses and checked adding/subbing.
How to audit a Stylus project?
Stylus is a relatively new technology, though when it boils down to it, the feel is much the same as developing in any EVM-targeting language, owing to the shared execution domain (the EVM). It might seem challenging to find someone to audit your codebase, but have no fear; it’s not as complicated as it seems! Having gone through this a few times (most recently for Longtail), I feel somewhat equipped to share my opinions and advice on the subject.
Full disclaimer: take any opinion on this subject with a grain of salt. This is all based on my opinion and experiences so far, and I am absolutely not an expert. It’s important to make up your own mind!
Approach
In my experience, a good approach is to work with an individual auditor (or a very small operation), then a code competition platform, then an ongoing security bounty program.
Individual auditors
Individual auditors are a lot cheaper owing to a lack of an organisation management fee for your audit, and you can select them based on their success in a past audit that’s similar to yours. Some will be super hungry if they’re unaffiliated, and will work hard to make things work for you. They can be brought in at virtually any time of your development life cycle, and will serve as useful pre-screening for a competition audit and a sounding board for some ideas.
Audit/code competition platform
A code competition platform is a space where anyone can participate in auditing your code and receive rewards for finding vulnerabilities, making it the closest thing to a pure meritocracy for security. In a single audit, hundreds of people could look at your code from different backgrounds. This is preferable due to its sheer mass compared to an audit from an agency, where you’ll get one to two auditors assigned to your project for a period of time! In terms of raw coverage, this is the best methodology for identifying issues. The system of awarding more to higher severity issues means more attention on actual issues that have demonstrable impact, as opposed to focusing on simple mistakes that a solo team might identify to tick boxes.
Ongoing bounty programs
A bounty program is something you can run in perpetuity, and it will shield you against unforeseen risks that earlier reviews should have caught but missed.
We’ve had an audit competition run by Code4rena for Longtail, and have worked with an individual auditor named Dadekuma for 9lives so far (whom we selected after he won top prizes in our Longtail competition); both yielded good results for us. In the past, we’ve used Immunefi, which has worked for us, though we don’t currently have a bounty set up for Longtail/9lives.
A good way to find an individual auditor is to look for someone with a strong track record in audits similar to yours. Stylus may make this difficult at first, but over time, more programs will presumably run competitions and leave a paper trail there.
Linus’s Law, as formulated by Eric S. Raymond, famously states that “given enough eyeballs, all bugs are shallow”. Your goal as someone receiving an audit is to get the right amount of specialised eyes on your code, eyes that can actually identify issues, and in my opinion, with a traditional audit company, there isn’t enough overlap between the technology domain (Stylus, EVM, Rust), your problem domain (e.g., an AMM), and a price that makes sense for most teams.
The Auditing Uncanny Valley
Most security firms already have their people taking part in security competitions on the down-low, so you can actually benefit from them anyway with an open competition. In my opinion, the one thing you absolutely do not want to do is collaborate with a mid-tier company that’s great at business development.
Types of security teams
To understand this, we need to think about the different types of security teams in this space (from my view). There are four distinct kinds of centralised security team: Boutiques, Tier 1, Tier 2, and Tier 3 agencies. A Tier 1 agency is one that works on huge ticket audits (e.g., Uniswap, Offchain Labs, and so on), and has been around for longer than 5 years. These teams are at the tip of the spear of open source development.
A Tier 2 agency is one that will be rizzing you up, and is somewhat well-known. These include teams that will be cold DM’ing you constantly looking for work. These teams have existed for longer than 5 years, but are not seen working with huge ticket projects (except in competitions, which they sometimes send their team to participate in). A Tier 3 agency is like a Tier 2, except a lot smaller, and often hasn’t existed for longer than 5 years. Tier 2 and Tier 3 agencies exist because they’re great at business development. If that weren’t the case, you would see them working on bigger projects, or morphing into a boutique.
Tier 2 and Tier 3 (I want to stress this doesn’t include Boutiques) auditing teams are incentivised to take on as much work as possible, endlessly, and will cold-outreach you to no end via every social channel you have. Competition platforms are cheaper by comparison, and your prize pool size presumably scales roughly linearly with the number of eyeballs looking at your code.
A Boutique is a very small and specialised operation. They’re often somewhat well-known due to a prolific founder or business developer. They often come from a research background, and will have an open source profile. These teams are suitable to work with in a collaborative development setting, since you should think of them as teams that have chosen to remain small with a high level of skill, instead of taking the meat grinder “get as many projects through” Tier 2/Tier 3 approach. We’ve worked with Verilog Solutions in the past and I recommend working with them if you need a team to do advisory work on your development, though we haven’t worked with them on Stylus.
Issues with these teams
The problem with Tier 2 and Tier 3 agencies is that there is a perverse incentive for them to always be approaching projects with the intention of taking on audits, even if they don’t have a track record in your problem and tech stack space, because their branding isn’t so sacred to them. They’re essentially out to make as much money as they can at all costs. Tier 2 and Tier 3 agencies might intermediate a large pool of individual auditor contractors, for whom they’ll negotiate discounts on your behalf (and collaborate closely with), but in these cases, in my opinion, you’d be better off working directly with private auditors using the method above (or, better yet, going straight to a competition format).
Reminder: these teams probably have their people participating in audit competitions anyway. So if you go the competition route, you’ll have exposure to their people regardless.
Things to keep in mind for an audit competition
If you take the route of doing an audit competition, be mindful of making your codebase easy to work with. It’s important that an outsider can get up to speed relatively quickly, and that you have decent enough coverage to not waste anyone’s time with simple submissions. It’s super important that you have clean and easily accessible explanations of how your maths works, what your constraints are, and so on. The issue with an audit competition is that auditors won’t let you know in an ongoing and collaborative fashion if your documentation sucks (in my experience).
It’s also important that you take your time to organise comarketing/awareness with the audit competition team. That way, the competition’s participants can read through your docs and generally familiarise themselves with the product scope of what you’re up to.
Best of luck out there!
Building a quadratic voting protocol for product development
Why?
Recently, we began work on releasing a fun little experiment: a webapp we call Shahmeer’s Game. Shahmeer’s Game is a public implementation of an approach we use internally as a part of our webapp conversion efforts for 9lives.
We have people use quadratic voting with tokens to vote for ideas to implement, with their suggestions being A/B tested and screened to measure success. Quadratic voting is a way of organising voting that more closely reflects the intensity of opinions. This struck me as a good read to learn more: https://blog.colony.io/what-is-quadratic-voting-a-guide-to-dao-decision-making/.
If an included idea hits the goal set for its inclusion, the bettors are rewarded with tokens, meaning team members who predicted correctly are rewarded for their success. This reward might be made financial by LPing the token against a hard asset. This resembles a prediction market of sorts, where correct predictions mean rewards.
This work was partly inspired by Google’s internal prediction market. One of our roadmap items is to release a prediction market DAO for the 9lives community, which we believe is a great way to engage our community, and to come up with higher quality ideas.
Traditional product development flow and issues
A traditional product development flow might be:
Product owner picks a goal they want to develop for.
Team member ideates a concept and shares it with the product owner.
Product owner decides to include the idea based on their gut feeling, or some research they have.
Development team sprints to have it done within the period.
The problem with a traditional product development flow is that no one knows your product like your whole team as a hivemind, together with your community. This decision-making process is not inclusive of different perspectives (the product owner makes the ultimate call), doesn’t punish the product owner for misestimating the dev time needed to deliver a feature (assuming, in this example, the dev won’t kick the task back), and doesn’t admit outside perspectives into the traditional funnel.
Prediction market development flow
An internal prediction market flow might be:
Product owner picks a quantitative goal they want to optimise for (e.g., a greater percentage of users minting via a stablecoin product).
Team member ideates concept. Shares it using a bulletin board.
Using a virtual currency, stakeholders see the idea suggestion, and might decide to spend some of their accumulated virtual credits betting that it would have a measurable impact on the quantitative goal from before.
The deadline for idea inclusion arrives. The top 5 items by votes are chosen to be implemented.
Creators of the ideas that were chosen are rewarded a flat token amount.
The ideas are implemented first to last in the sprint window. If a change is not made during the window, it’s treated as though the idea didn’t achieve its goal.
A period of reflection and validation passes. The idea’s quality is determined with A/B testing, and ideas that measurably moved the goal are rewarded some tokens.
This system, unlike a traditional product development flow, punishes poor decision-making in product delivery time estimation while incentivising the communication of fresh ideas with the flat token incentive. It rewards savvy decision-makers who come up with unique ideas that move the needle on the goal your team is out to achieve.
Over time it optimises itself, and it provides a unique vehicle for an open source team to invite and involve the community in planning decisions. Any poor decisions that did not accomplish their goal see the bettors of that outcome have their potential winnings allocated to the rest of the tokenholders who did not bet on that idea.
For example, Erik suggests, on the idea bulletin board, adding an extra APY display on a lending protocol’s landing page. Ivan, being intimately familiar with the internals of the system, disagrees, arguing that frontend technical debt prohibits a change like this within the two week development cycle.
Ivan’s financially incentivised to make informed arguments against Erik’s proposal, since he wants the flat reward for having his own idea incorporated into development. Erik responds to his criticism by collaborating with a developer who believes in the idea’s potential to flesh out the technical heavy lifting that’s needed.
Ivan is unable to justify that it’s infeasible. Everyone makes an informed decision! Erik’s idea is chosen by several members of the team, who believe in the potential of the idea, and it’s implemented in the development cycle!
Development
Quadratic voting converts the square root of your committed voting power into points that influence the outcome, with people voting on a ballot of different ideas.
Traditionally, from what I’ve seen, people present quadratic voting the other way around, as the power of 2: casting n votes costs n² credits, the “cost of voting”.
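To make that concrete, in the square-root form we use below, committing 100 tokens to a single concept buys 10 quadratic votes, while 400 tokens buys only 20, so quadrupling your spend merely doubles your influence on that concept. That diminishing return is what makes the scheme reflect intensity of preference rather than raw token weight.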
Our reference implementation will support these user stories:
Idea suggestion
A user goes to suggest an idea. They do so via a platform that distributes the content of their idea, the preimage of the concept hash that goes onchain. This could be done using IPFS, or, in our context, a centralised webapp.
Voting
A user votes on a registered concept using the voting power they held (per the ERC20Votes token) at the start of the current epoch. The STG they commit is converted into quadratic voting power for that concept, and votes can be taken back at any point before the operator picks the epoch’s winners.
Winners chosen
The operator supplies winning concepts to notify the contract that they were chosen for inclusion in the next round of product features. In doing so, the users that ideated the concepts have their claimable balances of STG tokens increased. Each winning concept is marked as having a balance of STG that will be sent to the correct voters if it comes true.
Operator confirms which winners achieved goal
The operator chooses, from the winners it selected earlier, the concepts that accomplished the goal. In doing so, users who bet on those outcomes become able to claim their share of the pool of STG tokens for the dilution event.
Correct predictors draw down amount
Predictors from the previous stage draw down the concepts they voted on. In doing so, they receive their share of the token dilution for each claimable concept.
Implementation
The below is available at the repo https://github.com/fluidity-money/shahmeersgame.com. There will be some things I exclude for brevity reasons.
In the below, we implement some code that uses ERC20Votes from OpenZeppelin for non-custodial voting. Rewards for correct performance are in the form of a constantly inflationary token, which is seeded to ShahmeersGame by the development team. After a successful month of game playing, the team will seed liquidity against USDC on Longtail, Superposition’s native AMM, to reward correct bettors.
First, we implement some storage. We need storage to store the amount of quadratic voting tokens invested by a user, the amount of “STG Token” (the token that represents voting power) invested by a user, and we need a way to have this done on a per concept basis. We need some storage as well for tracking who submitted ideas.
I prefer an approach based on this, without using the sol! macro:
// storage.rs
#[entrypoint]
#[storage]
pub struct ShahmeersGame {
pub enabled: StorageBool,
pub version: StorageU256,
pub token_addr: StorageAddress,
pub operator_addr: StorageAddress,
// The amount of STG to release each epoch to the chosen winners,
// diluting the users who bet incorrectly on their proposals.
pub dilution_stg_correct_concepts: StorageU256,
// The amount of STG to release each epoch to the submitters
// of ideas that are included.
pub dilution_stg_submitters: StorageU256,
// Submitters of ideas, for sending the token compensation for having their ideas
// marked for inclusion.
pub submitters: StorageMap<Concept, StorageAddress>,
// Submitters who are entitled to a fixed amount for having their idea
// be included for submission.
pub submitters_claimable: StorageMap<Address, StorageU256>,
// Was this concept correct? We do this to remember in perpetuity so we don't have a
// mysterious repeat of this concept's submission.
pub concept_is_correct: StorageMap<Concept, StorageBool>,
pub epochs: StorageVec<GameEpoch>,
}
In this model, we have multiple epochs, each one being a round in which the development team asks for ideas from the prediction market pool. These epochs include info on the weighting of different proposals at that point in time:
// storage.rs
#[storage]
pub struct GameEpoch {
pub time: StorageU64,
// Quadratic amounts invested in a concept.
pub concept_quad_amts: StorageMap<Concept, StorageU256>,
// STG amounts invested in a concept.
pub concept_stg_amts: StorageMap<Concept, StorageU256>,
// Token amounts invested by a user.
pub user_stg_amts: StorageMap<Address, StorageU256>,
// Quadratic amounts invested by a user into a specific concept.
pub user_concept_quad_amts: StorageMap<Address, StorageMap<Concept, StorageU256>>,
// Quadratic amounts invested in every concept by users.
pub global_quad_amts: StorageU256,
// Has the operator picked winners for this epoch?
pub winners_picked: StorageBool,
// Future STG claimable by users who invested in this winning concept.
pub concept_stg_claimable: StorageMap<Concept, StorageU256>,
// Whether the future STG claimable by users was collected.
pub user_concept_claimed: StorageMap<Address, StorageMap<Concept, StorageBool>>,
}
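One note on the Concept type used as a map key above: it isn’t defined in this post, but judging by the bytes32 in the events and the 32-byte fixed bytes strategy in the tests later, it’s presumably an alias along these lines:
// storage.rs (assumed; the repo may define this differently)
use stylus_sdk::alloy_primitives::FixedBytes;
/// A concept is identified onchain by a 32-byte hash of its content,
/// with the preimage distributed offchain (IPFS, or a centralised webapp).
pub type Concept = FixedBytes<32>;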
Once we have the storage set up, we need some helper functions and types for errors:
// error.rs
#[repr(C)]
#[derive(Debug)]
pub enum Error {
...
}
impl From<Error> for u8 {
fn from(v: Error) -> Self {
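// With #[repr(C)] and a fieldless enum, the discriminant is laid out first, so on
// a little-endian target (wasm32 is little-endian) the first byte is the variant's
// index, assuming fewer than 256 variants.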
unsafe { *<*const _>::from(&v).cast::<u8>() }
}
}
pub type R<A> = Result<A, Error>;
impl From<Error> for Vec<u8> {
fn from(v: Error) -> Self {
vec![0x77, 0x70, v.into()]
}
}
#[macro_export]
macro_rules! assert_or {
($cond:expr, $err:expr) => {
if !($cond) {
Err($err)?;
}
};
}
Our custom enum for error handling (variants not included for brevity), custom type for Results (R), and custom macro (assert_or!) all exist to simplify development and to make the code more Solidity-like. We need some code to encode our custom enum as a return type in the event of a revert. We do so by converting it to a Vec<u8>, prefixed with a preamble to indicate it came from us. What would be preferable is an encoding that’s well understood by Solidity, like Error(string), but I find the impact on codesize is too much for me.
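To put numbers on that: a revert from this contract is just three bytes (0x77, 0x70, then the variant’s index), whereas a Solidity-style Error(string) revert is at least 100 bytes even for a short message (the 4-byte 0x08c379a0 selector, then an ABI-encoded offset, length, and padded string), plus whatever string-formatting code gets pulled into the Wasm to build it.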
Following the custom errors, we begin to implement the entrypoint code. First, we define a faux constructor (Stylus doesn’t have native constructors, so we need to set it using a proxy pattern later):
// lib.rs
#[public]
impl ShahmeersGame {
pub fn ctor(
&mut self,
token: Address,
operator: Address,
dilution_stg_concepts: U256,
dilution_stg_submitters: U256,
) -> R<()> {
assert_or!(self.version.is_zero(), Error::AlreadyCreated);
assert_or!(
!dilution_stg_concepts.is_zero() && !dilution_stg_submitters.is_zero(),
Error::ZeroSTGAmt
);
self.enabled.set(true);
self.version.set(U256::from(1));
self.token_addr.set(token);
self.operator_addr.set(operator);
self.dilution_stg_correct_concepts
.set(dilution_stg_concepts);
self.dilution_stg_submitters.set(dilution_stg_submitters);
let mut e = self.epochs.grow();
e.time.set(U64::from(block::timestamp()));
Ok(())
}
...
Our constructor includes some safety rails to prevent a user from providing zero values in ways that would result in weird behaviour for the rest of the contract. The code in this article is all abridged slightly; you can read the full implementation on Github.
The implementation dilutes the existing supply of STG token to its bettors each epoch, so that the token has a constantly inflationary property, reducing smart contract custody risk by taking a more “passive” ongoing strategy of token management. It measures voting power using ERC20Votes from OpenZeppelin, a contract which tracks voting power at different points in time, stored as a growing array of checkpoints that’s searched with a binary search over timestamps. We can create such a token simply using OpenZeppelin’s Wizard.
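One gotcha worth remembering with ERC20Votes: an account has zero voting power until it delegates (even if only to itself), so voters need to call delegate on the token before the add_votes function below will see a non-zero getPastVotes result.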
We introduce some code to register a concept, entitling a user to a dividend if it’s later chosen for inclusion. This encourages ideation:
// lib.rs
#[public]
impl ShahmeersGame {
/// Register a new concept, claiming any fees for whoever
/// uses it to this address.
pub fn register(&mut self, concept: Concept, beneficiary: Address) -> R<()> {
assert_or!(self.enabled.get(), Error::NotEnabled);
assert_or!(
self.submitters.get(concept).is_zero(),
Error::ConceptRegistered
);
assert_or!(!beneficiary.is_zero(), Error::BadBeneficiary);
self.submitters.setter(concept).set(beneficiary);
evm::log(events::Registered {
concept,
beneficiary,
});
Ok(())
}
Events are emitted with evm::log. These are Solidity-style events declared in IEvents.sol, imported from events.rs:
// IEvents.sol
interface IEvents {
event Registered(bytes32 indexed concept, address indexed beneficiary);
event WinnerChosen(
bytes32 indexed concept,
uint256 indexed stgToGain
);
event EpochBumped(uint256 indexed prevEpoch);
event ConceptCameTrue(bytes32 indexed concept);
event STGClaimed(
bytes32 indexed concept,
address indexed winner,
uint256 indexed amt
);
}
// events.rs
use stylus_sdk::alloy_sol_types::sol;
sol!("src/IEvents.sol");
pub use IEvents::*;
This lets us use Foundry Forge to compute selectors and to import these events for ABI inclusion later.
We introduce some helpful macros to simplify storage setting:
// storage.rs
#[macro_export]
macro_rules! define_storage_op {
($name:ident, $f:expr) => {
#[macro_export]
macro_rules! $name {
($field:expr, $val:expr) => {{
let a = $field.get();
$field.set($f(a, $val)?)
}};
($field:expr, $h:expr, $val:expr) => {{
let a = $field.get($h);
$field.setter($h).set($f(a, $val)?)
}};
($field:expr, $h:expr, $k:expr, $val:expr) => {{
let a = $field.get($h).get($k);
$field.setter($h).setter($k).set($f(a, $val)?)
}};
}
};
}
define_storage_op!(storage_add, checked_add);
define_storage_op!(storage_sub, checked_sub);
#[macro_export]
macro_rules! storage_adds {
( $( $args:tt );* $(;)? ) => {$(storage_add! $args;)*};
}
#[macro_export]
macro_rules! storage_subs {
( $( $args:tt );* $(;)? ) => {$(storage_sub! $args;)*};
}
These macros will make it easier to add to, and subtract from, existing storage fields. Instead of writing:
{
let a = e.concept_quad_amts.get(concept);
e.concept_quad_amts
.setter(concept)
.set(a.checked_add(quad_amt).ok_or(Error::CheckedAdd)?)
}
which is the equivalent of e.concept_quad_amts[concept] += quad_amt in Solidity (a language with native integer overflow checks), we can write:
storage_add!(e.concept_quad_amts, concept, quad_amt);
This is one of the great advantages of Rust: native, hygienic, performant macros.
In this function, we compose the macros together with storage_adds!:
// lib.rs
#[public]
impl ShahmeersGame {
/// Vote, using the delegated voting power of the OZ token at the point in
/// time of the epoch creation. Returns the STG amount that was taken.
pub fn add_votes(&mut self, concept: Concept, stg_amt: U256) -> R<U256> {
assert_or!(self.enabled.get(), Error::NotEnabled);
assert_or!(
!self.submitters.get(concept).is_zero(),
Error::ConceptNotRegistered
);
assert_or!(
!self.concept_is_correct.get(concept),
Error::ConceptDoneAlready
);
let mut e = self.epochs.setter(self.epochs.len() - 1).unwrap();
// We can't allow this to happen if the operator has already picked winners.
assert_or!(!e.winners_picked.get(), Error::WinnersPicked);
let stg_already_spent = e.user_stg_amts.get(msg_sender());
let stg_amt = {
// Get the past votes at the point in time of the epoch's creation.
let votes = stg_call::get_past_votes(
self.token_addr.get(),
msg_sender(),
U256::from(e.time.get()),
)?;
assert_or!(!votes.is_zero(), Error::ZeroVotes);
// Get the past amount spent by the user in this epoch.
min(stg_amt, votes - stg_already_spent)
};
assert_or!(!stg_amt.is_zero(), Error::NotEnoughToken);
let quad_amt = maths::stg_to_quad(stg_amt);
storage_add!(e.concept_quad_amts, concept, quad_amt);
storage_adds! {
(e.concept_stg_amts, concept, stg_amt);
(e.user_stg_amts, msg_sender(), stg_amt);
(e.user_concept_quad_amts, msg_sender(), concept, quad_amt);
(e.global_quad_amts, quad_amt);
}
Ok(stg_amt)
}
We essentially check that certain fields are set, then increase the balances/counts that are later used to determine how much STG to send, in its diluted form, to the bettor.
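The maths::stg_to_quad helper isn’t reproduced in this post; going by the stg_amt.root(2) assertion in the end-to-end test further down, it’s presumably just the integer square root, something like this sketch (the repo’s implementation may differ):
// maths.rs (a minimal sketch, assuming an integer square root)
use stylus_sdk::alloy_primitives::U256;
/// Convert an STG amount into quadratic voting power by taking the
/// integer square root, per the quadratic voting scheme described earlier.
pub fn stg_to_quad(stg_amt: U256) -> U256 {
    stg_amt.root(2)
}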
Taking votes if someone regrets doing so is largely the same:
// lib.rs
#[public]
impl ShahmeersGame {
/// Take some STG from a concept. We can't allow this to take place if the
/// winners were already called, though it would be waste without harm!
pub fn take_votes(&mut self, concept: Concept, stg_amt: U256) -> R<()> {
assert_or!(self.enabled.get(), Error::NotEnabled);
let mut e = self.epochs.setter(self.epochs.len() - 1).unwrap();
assert_or!(!e.winners_picked.get(), Error::WinnersPicked);
assert_or!(
e.user_stg_amts.get(msg_sender()) >= stg_amt,
Error::NotEnoughToken
);
// Go to take some quadratic voting power from the concept given,
// reverting if the user didn't supply enough for the STG they're taking.
let quad_amt = maths::stg_to_quad(stg_amt);
assert_or!(
e.user_concept_quad_amts.getter(msg_sender()).get(concept) >= quad_amt,
Error::ConceptNoUserQuad
);
// Now that we've confirmed they actually had that much, we can take from
// the global allocations. Of course, with checked storage, it's not needed to
// actually check above like we did, but, thanks to Stylus' caching, we can
// do this check for a better user experience without too great a detriment to
// the code's gas profile.
storage_subs! {
(e.concept_quad_amts, concept, quad_amt);
(e.concept_stg_amts, concept, stg_amt);
(e.user_stg_amts, msg_sender(), stg_amt);
(e.user_concept_quad_amts, msg_sender(), concept, quad_amt);
(e.global_quad_amts, quad_amt);
};
Ok(())
}
It’s a very similar function!
Now, we can implement a function to choose winners from the pool that is available. To prevent denial of service and to reduce gas costs, we require the caller to supply the candidate concepts as calldata (a golfed list), and the supplied set is accepted as the winners only if the smallest quadratic amount among them is greater than the quadratic amount remaining outside the set:
// lib.rs
#[public]
impl ShahmeersGame {
/// Choose winners by going through the amount of quadratic votes in each
/// concept, until the minimum amount in a outcome is greater than the
/// amount tracked as being allocated in the entire epoch. Bump the epoch
/// for the entire game, and return winning proposals amount of STG that
/// could be received if they come true.
pub fn choose_winners(
&mut self,
concept_count: u64,
concepts: Vec<Concept>,
) -> R<Vec<(Concept, U256)>> {
assert_or!(self.enabled.get(), Error::NotEnabled);
assert_or!(self.operator_addr.get() == msg_sender(), Error::NotOperator);
let mut e = self.epochs.setter(self.epochs.len() - 1).unwrap();
// We can't allow someone to make a mistake by somehow calling this twice.
assert_or!(!e.winners_picked.get(), Error::WinnersPicked);
// It might be better to prescreen the concepts to see if they
// match, but this is fine.
assert_or!(!concepts.is_empty(), Error::ConceptsEmpty);
let mut concepts = concepts
.into_iter()
.map(|c| (c, e.concept_quad_amts.get(c)))
.collect::<Vec<(Concept, U256)>>();
// Sort this inplace, replacing what's there.
concepts.sort_unstable_by(|(_, x), (_, y)| y.cmp(x));
// The problem with this dedup is that it will be wasteful if
// someone makes a mistake with their calldata here. But it
// exists to prevent abuse, not to be efficient.
concepts.dedup();
// Now that we've sorted the concepts supplied, we can sum the
// amounts, then compare the difference between the tracked
// amount as being allocated globally, and the minimum here.
{
let concepts_allocated: U256 = concepts.iter().map(|(_, x)| x).sum();
let (_, min_concept) = concepts.last().unwrap();
assert_or!(!min_concept.is_zero(), Error::ConceptZeroSupplied);
assert_or!(
*min_concept > e.global_quad_amts.get() - concepts_allocated,
Error::BadConcepts
);
}
// Now that we know the actual winning concepts, we need to pick the ones
// that are under the number of concepts we want to declare as winners.
// We need to apportion the size of the STG token we want to release each
// epoch. Here, we take as much as we can (up to the amount asked).
let concepts = &concepts
.into_iter()
.take(concept_count as usize)
.collect::<Vec<_>>();
// We take the summed amount to know how to dilute the share of the STG
// token to distribute.
let winning_concept_sum: U256 = concepts.iter().map(|(_, x)| x).sum();
e.winners_picked.set(true);
concepts
.iter()
.map(|(c, quad_amt)| {
let stg_pct_for_winner = (quad_amt * SCALING_AMT) / winning_concept_sum;
let stg_amt_for_winner =
(self.dilution_stg_correct_concepts.get() * stg_pct_for_winner) / SCALING_AMT;
// This amount will be claimable by the predictors of this outcome if it comes true.
e.concept_stg_claimable.setter(*c).set(stg_amt_for_winner);
evm::log(events::WinnerChosen {
concept: *c,
stgToGain: stg_amt_for_winner,
});
// Now that we've had a winner chosen, we need to send the
// submitters a fixed dividend for their idea.
stg_call::transfer(
self.token_addr.get(),
self.submitters.get(*c),
self.dilution_stg_submitters.get(),
)?;
Ok((*c, stg_amt_for_winner))
})
.collect::<R<Vec<_>>>()
}
Our work in determining the percentages of STG token to dilute to the bettors is based on the distribution of votes across these concepts. In this code, we do lots of collection/sorting/deduplication to prevent abuse. This is a function that would be painful to implement in Solidity, so it’s nice that we benefit from Rust’s native collection types here. Finally, we transfer the submitters of the concepts their incentive amount for ideation.
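For intuition on the pro-rata split (SCALING_AMT isn’t shown in this post; the snippet below assumes a 1e18 fixed-point scale, which may differ from the repo’s actual constant), suppose two concepts win with 30 and 10 quadratic votes:
// A standalone illustration of the dilution maths in choose_winners.
// SCALING_AMT here is an assumed value for the example only.
use stylus_sdk::alloy_primitives::U256;
const SCALING_AMT: U256 = U256::from_limbs([1_000_000_000_000_000_000, 0, 0, 0]);
fn main() {
    let dilution = U256::from(1_000u64); // dilution_stg_correct_concepts for the epoch
    let quad_a = U256::from(30u64);      // quadratic votes on winning concept A
    let quad_b = U256::from(10u64);      // quadratic votes on winning concept B
    let winning_concept_sum = quad_a + quad_b;
    // Concept A holds 30/40 of the winning votes, i.e. 75% of the scale.
    let pct_a = (quad_a * SCALING_AMT) / winning_concept_sum;
    // 75% of the 1,000 STG dilution becomes claimable by A's correct predictors.
    let stg_a = (dilution * pct_a) / SCALING_AMT;
    assert_eq!(stg_a, U256::from(750u64));
    // Concept B's voters split the remaining 25%.
    let stg_b = (dilution * ((quad_b * SCALING_AMT) / winning_concept_sum)) / SCALING_AMT;
    assert_eq!(stg_b, U256::from(250u64));
}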
Next, the operator picks the winners that accomplished their goal:
// lib.rs
#[public]
impl ShahmeersGame {
/// Declare that these winners were correct, and claimable.
pub fn pick_winners_that_accomplished(&mut self, epoch: u64, concepts: Vec<Concept>) -> R<()> {
assert_or!(self.enabled.get(), Error::NotEnabled);
assert_or!(self.operator_addr.get() == msg_sender(), Error::NotOperator);
let e = self.epochs.setter(epoch).ok_or(Error::BadEpoch)?;
for c in concepts {
assert_or!(
!e.concept_stg_claimable.get(c).is_zero(),
Error::BadConcepts
);
self.concept_is_correct.setter(c).set(true);
evm::log(events::ConceptCameTrue { concept: c });
}
Ok(())
}
It’s essentially marking the concepts as claimable/correct.
Then comes the last major function: the one for a bettor to claim their winnings:
// lib.rs
#[public]
impl ShahmeersGame {
/// Draw down winning amount on behalf of a user, sending it to their
/// address. It should be fine to allow someone to call this without
/// checks, since it will send based on delegations.
pub fn draw_down_winner(&mut self, epoch: u64, concept: Concept, winner: Address) -> R<U256> {
assert_or!(self.enabled.get(), Error::NotEnabled);
assert_or!(
self.concept_is_correct.get(concept),
Error::NotCorrectConcept
);
let mut e = self.epochs.setter(epoch).ok_or(Error::BadEpoch)?;
assert_or!(
!e.user_concept_claimed.getter(winner).get(concept),
Error::AlreadyClaimed
);
// Get the user's share of the pool of quadratic tokens invested in this outcome.
let pct_of_quad = (e.user_concept_quad_amts.getter(winner).get(concept) * SCALING_AMT)
/ e.concept_quad_amts.get(concept);
let stg_amt = (pct_of_quad * e.concept_stg_claimable.get(concept)) / SCALING_AMT;
stg_call::transfer(self.token_addr.get(), winner, stg_amt)?;
e.user_concept_claimed
.setter(winner)
.setter(concept)
.set(true);
evm::log(events::STGClaimed {
concept,
winner,
amt: stg_amt,
});
Ok(stg_amt)
}
This essentially sends the diluted STG amount to the bettor. It’s okay for anyone to call this on behalf of someone else, since the claim can only happen once per concept per winner, and the funds go to the winner regardless. It also makes it simple to drive with a multicall.
Finally:
// lib.rs
#[public]
impl ShahmeersGame {
/// The operator bumps the epoch for whatever reason. Maybe voting has
/// been concluded, and we're ready for the next batch.
pub fn bump_epoch(&mut self) -> R<()> {
assert_or!(self.enabled.get(), Error::NotEnabled);
assert_or!(self.operator_addr.get() == msg_sender(), Error::NotOperator);
let prev_epoch = self.epochs.len() - 1;
self.epochs.grow().time.set(U64::from(block::timestamp()));
evm::log(events::EpochBumped {
prevEpoch: U256::from(prev_epoch),
});
Ok(())
}
And the cycle continues.
Testing
Following the implementation, we first write a simple, happy-path user story. A strategy I benefit from with Stylus is to first write a user story-centric test, then to run a mutation testing pass on the code.
Mutation testing is the practice of randomly varying control flow in the code, then testing how permissive the tests are about those changes. This way, you can let the testing suite dictate which tests you should write next to check your assumptions.
Before writing any tests, we add proptest to the project, along with some helper functions and macros for randomly generating values using property based testing. Property based testing lets us randomly vary the data in our tests, so we can check our assumptions about what the tests are actually exercising:
// storage.rs
/// Simple strategy that generates values up to a million.
#[cfg(not(target_arch = "wasm32"))]
fn strat_tiny_u256() -> impl Strategy<Value = U256> {
(0..1_000_000).prop_map(|x| U256::from(x))
}
#[cfg(not(target_arch = "wasm32"))]
fn strat_fixed_bytes_sizeable<const N: usize>() -> impl Strategy<Value = FixedBytes<N>> {
// Create a slice of fixed bytes, with a preference for the lower side, a
// la how I recall seeing Parity's Ethereum client do it. This has a 33%
// chance of filling out a third of the lower bits, which, in our
// interpretation, is decoded as big endian in the next function, so
// the right side, a 33% chance of two thirds, and a 33% chance of
// everything is potentially filled out.
(0..3).prop_perturb(move |s, mut rng| {
let mut x: [u8; N] = [0u8; N];
let q = N / 3;
if s == 2 {
for i in q * 2..N {
x[N - i - 1] = rng.gen();
}
}
if s >= 1 {
for i in q..q * 2 {
x[N - i - 1] = rng.gen();
}
}
for i in 0..q {
x[N - i - 1] = rng.gen();
}
FixedBytes::<N>::from(x)
})
}
#[cfg(not(target_arch = "wasm32"))]
pub fn strat_large_u256() -> impl Strategy<Value = U256> {
strat_fixed_bytes_sizeable::<32>().prop_map(|x| U256::from_be_bytes(x.into()))
}
#[cfg(not(target_arch = "wasm32"))]
pub fn strat_fixed_bytes<const N: usize>() -> impl Strategy<Value = FixedBytes<N>> {
strat_fixed_bytes_sizeable::<N>()
}
These helpers will let us randomly create U256 values by creating random bytes.
We also define one for randomly creating addresses:
// storage.rs
#[cfg(not(target_arch = "wasm32"))]
pub fn strat_address() -> impl Strategy<Value = Address> {
proptest::arbitrary::any::<[u8; 20]>().prop_map(Address::new)
}
Then finally, we compose it together to randomly construct storage:
// storage.rs
#[cfg(not(target_arch = "wasm32"))]
pub fn strat_shahmeers_game() -> impl Strategy<Value = ShahmeersGame> {
// We set the storage offset so we may run this in a parallel test.
(
strat_large_u256().no_shrink(), // Storage offset
any::<bool>(),
strat_large_u256(),
strat_address(),
strat_address(),
strat_tiny_u256(),
strat_tiny_u256(),
)
.prop_map(
|(
i,
enabled,
version,
token_addr,
operator_addr,
dilution_stg_correct_concepts,
dilution_stg_submitters,
)| {
storage_set_fields!(ShahmeersGame, i, {
enabled,
version,
token_addr,
operator_addr,
dilution_stg_correct_concepts,
dilution_stg_submitters,
})
},
)
}
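One thing glossed over here is the storage_set_fields! macro, which isn’t reproduced in this post. Roughly, it has to root the storage struct at the random slot offset and set each named field from the like-named binding, something along these lines (a sketch; the repo’s macro may differ):
// storage.rs (sketch only)
#[macro_export]
macro_rules! storage_set_fields {
    ($ty:ident, $slot:expr, { $( $field:ident ),* $(,)? }) => {{
        // Root the struct at a random slot so parallel proptest cases don't
        // trample each other's simulated storage.
        let mut c = unsafe { <$ty as stylus_sdk::storage::StorageType>::new($slot, 0) };
        $( c.$field.set($field); )*
        c
    }};
}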
In strat_shahmeers_game above, we’re too lazy to create random values for the submitters and the concepts; we’ll be setting those up later. We need one final piece of the puzzle: this code will include guards to let us manipulate the voting power balances:
// host_stg_call.rs
thread_local! {
static PAST_VOTES: RefCell<HashMap<Address, U256>> = RefCell::new(HashMap::new());
}
fn give_votes(spender: Address, amt: U256) {
PAST_VOTES.with(|v| {
let mut v = v.borrow_mut();
v.insert(spender, amt);
})
}
fn reset_all_votes() {
PAST_VOTES.with(|v| {
let mut v = v.borrow_mut();
v.clear();
})
}
pub fn use_points_f<T>(
spenders: HashMap<Address, U256>,
f: impl FnOnce() -> R<T>,
) -> Result<T, Error> {
for (r, amt) in spenders.clone() {
give_votes(r, amt)
}
let x = f();
reset_all_votes();
let v = x?;
Ok(v)
}
#[macro_export]
macro_rules! use_votes {
(
{ $( $key:expr => $value:expr ),* $(,)? },
$func:expr
) => {
$crate::host_stg_call::use_points_f(
map_macro::hash_map! { $( $key => $value ),* },
|| { $func }
).unwrap()
};
}
pub fn get_past_votes(_addr: Address, spender: Address, _ts: U256) -> R<U256> {
Ok(PAST_VOTES.with(|v| *v.borrow().get(&spender).unwrap_or(&U256::ZERO)))
}
pub fn transfer(_addr: Address, _recipient: Address, _amt: U256) -> R<()> {
Ok(())
}
The combination of property based testing (test your code’s assumptions about values being set, handling randomness/large numbers) with mutation testing (randomly vary control flow, see how your tests handle the changes) is a good way to aggressively test your assumptions, and thus, your code.
We write our first test:
// e2e.rs
proptest! {
#[test]
fn test_best_user_story(
mut c in strat_shahmeers_game(),
votes in strat_large_u256(),
concept in strat_fixed_bytes::<32>(),
) {
c.enabled.set(false);
c.version.set(U256::from(0));
c.ctor(
c.token_addr.get(),
c.operator_addr.get(),
c.dilution_stg_correct_concepts.get(),
c.dilution_stg_submitters.get()
).unwrap();
c.register(concept, msg_sender()).unwrap();
let stg_amt = use_votes!{
{msg_sender() => votes},
c.add_votes(concept, U256::from(votes))
};
// Let's see what happens if we take votes from our user, then add it back.
c.take_votes(concept, votes).unwrap();
// Give back our votes.
use_votes!{
{msg_sender() => votes},
c.add_votes(concept, U256::from(votes))
};
let epoch = c.epochs.get(c.epochs.len() -1).unwrap();
assert_eq!(stg_amt.root(2), epoch.concept_quad_amts.get(concept));
c.operator_addr.set(msg_sender());
let (winning_concept, stg_to_gain) = c.choose_winners(3, vec![concept]).unwrap()[0];
assert_eq!(concept, winning_concept);
// There's only one outcome, so we should be able to collect the entire bag of STG.
assert_eq!(c.dilution_stg_correct_concepts.get(), stg_to_gain);
// Now the sender can claim the amount.
c.pick_winners_that_accomplished(0, vec![concept]).unwrap();
// Now the caller can claim the money!
assert_eq!(
stg_to_gain,
c.draw_down_winner(0, concept, msg_sender()).unwrap()
);
// See if we can repeat the cycle.
c.bump_epoch().unwrap();
}
}
I couldn’t resist the urge to take the balance, then add it back.
We test our assumptions about testing using cargo mutants:
Found 66 mutants to test
ok Unmutated baseline in 17.0s build + 0.6s test
INFO Auto-set test timeout to 20s
MISSED src/lib.rs:270:44: replace - with + in ShahmeersGame::bump_epoch in 0.6s build + 0.6s test
MISSED src/lib.rs:96:32: replace - with + in ShahmeersGame::add_votes in 0.9s build + 0.8s test
MISSED src/lib.rs:270:44: replace - with / in ShahmeersGame::bump_epoch in 0.8s build + 0.8s test
66 mutants tested in 1m: 3 missed, 24 caught, 39 unviable
done
It seems, according to our mutation testing, that we need to test that epochs are incremented properly, and that votes are capped at the balance the user actually has. We got pretty lucky that on our first go only three mutants were missed!
So we extend our testing to include checks for the epoch bumping taking place correctly, by adjusting our heavy end to end test, and having the bump epoch function return the next id:
// lib.rs
pub fn bump_epoch(&mut self) -> R<u64> {
...
self.epochs.grow().time.set(U64::from(block::timestamp()));
evm::log(events::EpochBumped {
prevEpoch: U256::from(prev_epoch),
});
Ok(prev_epoch as u64 + 1)
}
// e2e.rs
// See if we can repeat the cycle.
assert_eq!(1, c.bump_epoch().unwrap());
Following this, we adjust our end to end test again to split our votes in two:
#[test]
fn test_best_user_story(
mut c in strat_shahmeers_game(),
votes_1 in strat_tiny_u256(),
votes_2 in strat_tiny_u256(),
concept in strat_fixed_bytes::<32>(),
) {
...
let votes = votes_1 + votes_2;
let stg_amt_1 = use_votes!{
{msg_sender() => votes},
c.add_votes(concept, votes_1)
};
let stg_amt_2 = use_votes!{
{msg_sender() => votes},
c.add_votes(concept, votes_2)
};
// Let's see what happens if we take votes from our user, then add it back.
c.take_votes(concept, votes_1).unwrap();
c.take_votes(concept, votes_2).unwrap();
// Give back our votes.
use_votes!{
{msg_sender() => votes},
c.add_votes(concept, votes_1)
};
use_votes!{
{msg_sender() => votes},
c.add_votes(concept, votes_2)
};
...
Splitting up our balances, and taking less than the full balance available in votes, exercises the minimum balance check.
Running this again, we see that we’ve now resolved all mutants!
Found 71 mutants to test
ok Unmutated baseline in 16.0s build + 0.8s test
INFO Auto-set test timeout to 20s
71 mutants tested in 57s: 29 caught, 42 unviable
done
Amazing! We could assume, based on this, that our test coverage covers most of the assumptions we have made (though not, for example, the absence of checks).
If you made it this far, good on you! Thanks for sticking out a longer post than usual. If these ideas came in handy for you, be sure to tweet at me at @baygeeth. Your encouragement motivates me to publish more.
Stylus resources
In no particular order:
If your project has any updates you’d like to share, or you would like to request that we monitor your project’s developments to include you in the weekly newsletter, please reach out to @baygeeth on X!
Stylus Saturdays is powered by Superposition, the first defi-native layer 3 that pays you to use it.