Conditional Funding Markets

Or how a futarchy can allocate treasury.

Retro PGF is one of the most successfully implemented mechanisms for rationally allocating treasury to grow ecosystem value.

However, it comes with challenges, as deciding which project should receive funding typically relies on a vote, thus depending on individual jurors’ preferences rather than on eliciting actual information about which projects contributed most to ecosystem growth.

To overcome this, we need measurable objectives (metrics) that can be assessed for each project after the fact. Examples include the number of smart contract calls, fees generated (for an L2), or order flow (for a DEX).

Optimism has already taken steps in this direction with Retro Funding Round 4, where Citizen House badge holders vote on the weighting of metrics, which are then used to evaluate projects objectively.

In this post, we define a mechanism that extends retro-funding with measurable objectives and prediction markets, enabling proactive project funding.

Motivation for Prediction Markets

Let’s consider a retro-funding round that relies solely on metrics, similar to Optimism Retro-funding Round 4.

Such a mechanism can only efficiently fund projects with sufficient runway to reach a retro-funding round.

Additionally, relying on a deterministic retro-funding rule only incentivizes projects to increase metrics; it doesn’t guarantee that funds are allocated where they are most effective.

Instead, we propose a funding mechanism for future efforts and introduce prediction markets that forecast how each project will impact future metrics.

These forecasts will allow:

  • Unlocking project funding upfront, enabling a larger number of projects to be funded, including those that lack the necessary runway.
  • Avoiding funding projects where the funding provided does not directly produce a change in metrics, e.g., projects funded via other means, or projects farming rewards for work already completed.
  • Continuously eliciting information through forecasts, thereby learning about the efficacy of the allocation mechanism. For example, if forecasters assign the same value to both conditional estimates (funding vs. not funding), this strongly hints that funding is not a predictor of project success on this metric, and the DAO should update the set of metrics it evaluates projects on.

Model

A DAO organizes a round of funding with an overall budget b and a set P of projects that apply for funding.

We require a curation mechanism with crypto-economic guarantees to ensure that the set P doesn’t contain spam.

Each project p comes with a single investment ask i_p > 0 that it expects to receive if selected. The mechanism can easily be extended to multiple, variable-sized asks per project, but we keep a single ask here for simplicity.

The DAO defines a set of measurable objectives M, or “metrics,” such as:

  • “Active verified users in 6 months” for an app.
  • “Attributed order flow in 1 year” for a DEX front-end.
  • “Gas fees generated in 2 years” for contracts running on an L2.

We denote by m(p) the value of metric m for project p, aggregated from today until the specified date.

Each metric has an associated positive weight w_m. The value the DAO assigns to each project is the weighted sum of metrics \sum_{m \in M} w_m m(p); a minimal scoring sketch follows the list below. Each w_m accounts for both:

  • The significance of the metric as a proxy for projects’ success, expressing the DAO’s preferences over which metric is more or less important.
  • Normalizing the metric depending on its type (e.g., normalize a TVL per month metric by 2x the highest protocol’s TVL per month, etc.).
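
For concreteness, a minimal sketch of this scoring in Python; the metric names, values, and weights are purely illustrative assumptions:

```python
def weighted_score(metric_values: dict[str, float], weights: dict[str, float]) -> float:
    """s(p) = sum_m w_m * m(p): the value the DAO assigns to project p,
    as a weighted sum of (already normalized) metric measurements."""
    return sum(w * metric_values[m] for m, w in weights.items())

# Hypothetical example: two metrics with DAO-chosen weights.
score = weighted_score(
    {"active_users_6m": 0.4, "gas_fees_2y": 0.7},  # normalized measurements m(p)
    {"active_users_6m": 0.6, "gas_fees_2y": 0.4},  # weights w_m
)
```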

We assume a crypto-economic oracle (such as UMA or reality.eth) gathers metrics on-chain and makes them available to smart contracts.

The mechanism will output a set of actual investments \hat{i}_p with \hat{i}_p \in \{0, i_p\}.

Objective

The DAO wants to allocate funding in a way that:

  • Maximizes its overall ROI on metrics, defined by \frac{\sum_{p \in P} \sum_{m \in M} w_m m(p)}{\sum_{p \in P} \hat{i}_p}, or equivalently \frac{\sum_{p \in P} s(p)}{\sum_{p \in P} \hat{i}_p} with s(p) the weighted sum of metrics.
  • Respects the budget \sum_{p \in P} \hat{i}_p \leq b.

Since funding is allocated proactively, we expect this mechanism to elicit accurate forecasts via prediction markets for the weighted sum of metrics of each candidate project.

Conditional Funding Market (CFM) Mechanism

The mechanism is essentially a Decision Market:

Decision markets both predict and decide the future. They allow experts to predict the effects of each of a set of possible actions, and after reviewing these predictions a decision maker selects an action to perform.

(Chen et al., 2011)

The mechanism operates as follows:

  1. It runs prediction markets to forecast the expected value of conditional outcome tokens, which reflect the metrics-based results of funding or not funding a project.
  2. It then applies a decision rule based on these forecasts.

Prediction Markets for Project Metrics

For each project, we create a prediction market with a single outcome: the weighted sum of metrics s(p) = \sum_{m \in M} w_m m(p).

A caveat of this design is not having per-metric forecasts, which would enable more granular feedback: if a metric appears to be hard to predict by market participants, the DAO will have difficulty learning from it and adjusting. Another approach enabled by the Logarithmic Market Scoring Rule (LMSR) (Hanson, 2002) would be to create prediction markets with multiple base events (one per metric). But for simplicity’s sake, we will not develop it in this post.

For each project, create a contract that takes (e.g.) sDAI deposits and, for 1 sDAI deposited, returns a pair of outcome tokens (\textsf{Short}, \textsf{Long}). Additionally, we assume there is substantial certainty that the weighted sum will be in the value range [v^{\text{min}}, v^{\text{max}}].

Note the usage of sDAI, which is interest-bearing. This is key to mitigating traders’ opportunity costs. Any yield-bearing stablecoin could replace it.

Short and Long tokens follow a typical scalar token design (a redemption sketch follows the list below). At resolution time:

  • If s(p) \leq v^{\text{min}}, only \textsf{Short} tokens redeem, for 1 sDAI each.
  • If s(p) \geq v^{\text{max}}, only \textsf{Long} tokens redeem, for 1 sDAI each.
  • If v^{\text{min}} < s(p) < v^{\text{max}}:
    • \textsf{Long} tokens redeem for \frac{s(p) - v^{\text{min}}}{v^{\text{max}} - v^{\text{min}}} sDAI.
    • \textsf{Short} tokens redeem for \frac{v^{\text{max}} - s(p)}{v^{\text{max}} - v^{\text{min}}} sDAI.
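
As a concrete illustration, a minimal sketch of this redemption rule; clamping the resolved score into the value range covers the boundary cases in the bullets above:

```python
def redemption_values(s_p: float, v_min: float, v_max: float) -> tuple[float, float]:
    """Return (short_value, long_value) in sDAI per token for a scalar market
    bounded to [v_min, v_max], given the resolved weighted metric score s_p."""
    # Clamp the resolved score into the market's value range.
    s = min(max(s_p, v_min), v_max)
    long_value = (s - v_min) / (v_max - v_min)
    short_value = 1.0 - long_value  # a Short/Long pair always redeems for 1 sDAI total
    return short_value, long_value
```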

A market scoring rule can guarantee truthful reports and incentive compatibility. The Logarithmic Market Scoring Rule (LMSR) is the most widely studied and is a strong choice.

Current prices thus represent the market’s prediction of s(p).
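
For reference, a minimal two-outcome LMSR sketch (Hanson, 2002); the liquidity parameter b and the Short/Long framing are the only assumptions beyond the standard formulas:

```python
import math

def lmsr_cost(q_short: float, q_long: float, b: float) -> float:
    """LMSR cost function C(q) = b * ln(sum_i exp(q_i / b)) for two outcomes."""
    return b * math.log(math.exp(q_short / b) + math.exp(q_long / b))

def lmsr_price_long(q_short: float, q_long: float, b: float) -> float:
    """Instantaneous Long price, i.e. the market's estimate of the normalized s(p)."""
    e_s, e_l = math.exp(q_short / b), math.exp(q_long / b)
    return e_l / (e_s + e_l)

def trade_cost(q_short: float, q_long: float, d_long: float, b: float) -> float:
    """Cost (in sDAI) to buy d_long Long tokens at current inventory (q_short, q_long)."""
    return lmsr_cost(q_short, q_long + d_long, b) - lmsr_cost(q_short, q_long, b)
```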

Conditional Tokens

We also need to make the forecast dependent on whether funding occurs for a given project. For this, we rely on conditional tokens (as introduced by Gnosis):

  • A pair (\textsf{Short}^{\text{yes}}, \textsf{Long}^{\text{yes}}) which redeems for 1 sDAI if funding happens.
  • A pair (\textsf{Short}^{\text{no}}, \textsf{Long}^{\text{no}}) which redeems for 1 sDAI if no funding is provided.

Two corresponding Yes and No prediction markets are created (per project).
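
A sketch of how conditional redemption could compose with the scalar payoff above, under the assumption (not pinned down in the post) that a pair on the unrealized branch redeems for nothing:

```python
def conditional_redemption(branch: str, funded: bool,
                           s_p: float, v_min: float, v_max: float) -> tuple[float, float]:
    """(short_value, long_value) in sDAI for tokens on branch "yes" or "no".

    Assumption: a pair on the realized branch redeems for 1 sDAI total, split
    by the scalar rule; a pair on the unrealized branch redeems for nothing."""
    if (branch == "yes") != funded:
        return 0.0, 0.0  # unrealized branch: tokens are void
    s = min(max(s_p, v_min), v_max)  # same clamping as the scalar sketch above
    long_value = (s - v_min) / (v_max - v_min)
    return 1.0 - long_value, long_value
```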

Decision Rule

At any point, both markets’ prices will represent the aggregate forecast about the weighted sum of metrics in their respective Yes or No worlds.

The decision rule can be applied once markets have converged on a forecast. Convergence depends heavily on the release of new information (especially by projects themselves) that can influence bets.

Assuming projects release all relevant information at the start of the funding round, we expect markets to converge quickly. Also, the longer markets run, the more bettors must account for future information (see Hanson, 2013).

Hence, the decision rule is applied around one week after creating the markets. To prevent manipulation, the precise time the decision rule is applied can be randomized over several days.

The most straightforward decision rule is the max decision rule: if the Yes market forecasts a higher score than the No market, i.e., \text{price}(\textsf{Long}^{\text{yes}}) > \text{price}(\textsf{Long}^{\text{no}}), fund the project; otherwise, don’t fund it.
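
A minimal sketch combining this rule with the randomized evaluation time described above; the exact delay and jitter values are assumptions:

```python
import random

def decision_timestamp(market_open_ts: int, base_delay_days: int = 7,
                       jitter_days: int = 3) -> int:
    """Evaluation time: roughly one week after market creation, randomized
    over a few days to make targeted last-minute manipulation harder."""
    day = 86_400
    return market_open_ts + base_delay_days * day + random.randint(0, jitter_days * day)

def max_decision_rule(price_long_yes: float, price_long_no: float) -> bool:
    """Fund iff the Yes market's forecast of s(p) exceeds the No market's."""
    return price_long_yes > price_long_no
```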

It has been shown that this rule isn’t incentive-compatible with truthful reporting and creates manipulation opportunities for traders (Othman & Sandholm, 2010). Namely, this could result in Yes odds being greater than No odds, thus funding the project, when truthful reporting would have recommended the contrary.

However, we expect to experimentally observe the effects of manipulation and adjust accordingly. Multiple possible mitigations have already been researched:

  • Picking the last trader from a reputable set, as suggested by Othman and Sandholm (2010).
  • Using a mixed-strategy rule to pick the decision (Chen et al., 2011). This would leave room for sub-optimal decisions but might be an acceptable trade-off in aggregate.

In any case, a practical mitigation is to run enough small rounds to limit the potential downside of any such sub-optimal decision.

Curation Mechanism

We require a mechanism to elicit curated projects to prevent spam and instantiate the CFM mechanism only for relevant projects. Specifically, we would like a mechanism that favors projects with a higher chance of achieving funding.

For this, we use a repeated auction, where projects bid for their inclusion in a slot, as this has some interesting properties for bootstrapping prediction markets. Other approaches include stake-based curation or curation from a reputable jury.

A slot’s duration, e.g. a week, can be modulated by the DAO. Whenever a slot starts, a new auction is launched. Projects compete by posting bids together with some metadata.

This auction can be a first-price auction, a second-price auction with commit-reveal, or a Dutch auction.

The auction winner earns the right to submit her project to the CFM mechanism (described above) for the given slot. The auction revenue is used to bootstrap prediction market liquidity.

The auction winner must then define the initial price of the Yes market by defining the ratio by which 1 sDAI is split into \textsf{Long}^{\text{yes}} and \textsf{Short}^{\text{yes}}. The project owner is incentivized to input the most accurate ratio, which will limit her impermanent loss as a liquidity provider. Prediction markets will benefit from project owners revealing private information through these starting prices.
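
A sketch of one way to derive that split, assuming a constant-product pool of the two outcome tokens in which the implied sDAI price of \textsf{Long} is R_{\textsf{Short}} / (R_{\textsf{Short}} + R_{\textsf{Long}}); the owner’s private estimate \hat{s} of the project’s score is of course an assumption:

```python
def initial_pool_deposit(s_hat: float, v_min: float, v_max: float) -> tuple[float, float]:
    """Per 1 sDAI pair minted, the (short, long) amounts to seed a constant-product
    pool so that the implied Long price equals the owner's expected normalized score
    p = (s_hat - v_min) / (v_max - v_min); the remainder of each pair is kept."""
    p = (min(max(s_hat, v_min), v_max) - v_min) / (v_max - v_min)
    # CPMM: price_long = r_short / (r_short + r_long)  =>  set r_short : r_long = p : (1 - p)
    return p, 1.0 - p
```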

A limitation of this mechanism is that only projects with enough financial backing might compete in the auction and get funded. A solution is to enable all projects to produce project tokens, enabling funding by selling tokens and then rewarding investors with a cut of the retroactive funding if it happens (see Optimism’s “Retroactive Public Goods Funding” post on the Optimism PBC Blog).

Liquidity Subsidies

Sufficient liquidity must be available to ensure the proper functioning of prediction markets, especially until the decision rule is applied. However, LPs face impermanent loss whenever the Short/Long price moves away from the price at which the AMM pool was launched.

A key element of this mechanism is the expectation that the DAO will subsidize this liquidity by rewarding liquidity providers (akin to liquidity mining). These rewards must be sufficient so that, when added to AMM fees, they compensate for impermanent loss.
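
Stated as a simple break-even condition (a sketch; both expectations would in practice have to be estimated per market):

```python
def required_lp_subsidy(expected_il: float, expected_amm_fees: float) -> float:
    """Minimum DAO subsidy so that providing liquidity is not negative-EV:
    subsidy + fees >= impermanent loss."""
    return max(0.0, expected_il - expected_amm_fees)
```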

This subsidy is justified as a payment for the information the DAO gains from operating the prediction markets.

Additionally, a key means for the DAO to limit the total cost of this subsidy is to define an initial market price (e.g., based on past project data) and incentivize liquidity provision at that price. The curation mechanism described above already achieves this.

Funding Algorithm

For each project selected by the CFM mechanism, the funding algorithm then:

  • Computes each project’s forecast ROI (market-forecast score per unit of funding asked).
  • Distributes the budget b in a manner that maximizes aggregate ROI.

As long as all project funding requirements are relatively small compared to the budget, we expect a simple greedy algorithm to work: distribute funding to selected projects with the highest ROI first.
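
A minimal greedy sketch, assuming the market forecasts \hat{s}(p) have already been read off the Yes markets; the field names are illustrative:

```python
def greedy_allocate(projects: list[dict], budget: float) -> dict[str, float]:
    """Greedy funding: fund highest forecast-ROI projects first, within budget.

    Each project is {"name": str, "ask": float, "s_hat": float}, where s_hat is
    the market's forecast of the weighted metric score conditional on funding."""
    allocations: dict[str, float] = {}
    for p in sorted(projects, key=lambda p: p["s_hat"] / p["ask"], reverse=True):
        if p["ask"] <= budget:          # all-or-nothing asks: i_hat in {0, i_p}
            allocations[p["name"]] = p["ask"]
            budget -= p["ask"]
    return allocations
```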

Additionally, since projects are funded proactively, we assume the DAO wants to retain some control over project funding to prevent misuse of funds. This can be implemented through gradual delivery of funding, along with a backstop mechanism that can halt financing at any time if a project is observed to be non-compliant with predefined guidelines. However, this backstop must cancel the winning prediction market, as its forecasts become irrelevant. Hence, it must be used sparingly; otherwise it threatens forecasters’ long-term participation in the mechanism.

I was thinking that a potential risk of allowing the auction winner to set the DM start price is that it could allow them to intentionally cause high IL, resulting in low liquidity, hence reducing the cost for them to manipulate their market’s price, unless a liquidity floor is enforced by some other means.

The causal link between low liquidity and low manipulation cost is that low liquidity means other speculators have a reduced incentive to notice and correct mispricings.

Agreed!

Two factors are at play against this:

  1. As long as there is enough competition in the curation auction, the auction winner will have to commit a sizeable amount of liquidity. This means they will themselves suffer the IL if the attack doesn’t succeed.
  2. Even with relatively low liquidity, rational bettors will still show up and (imprecisely, as liquidity is low) start adjusting the price. As the price gets closer to bettors’ beliefs, liquidity providers will grow more confident in showing up and depositing, nullifying the issue after some time.

On point 1, the slot duration can be increased whenever bids are few to ensure enough competition. Conversely, whenever the DAO commits more funds to distribute, more projects should show up, increasing bidding competition and thus permitting the slot duration to be reduced.

On point 2, additionally, setting LP fees high enough can make it rational for LPs to start depositing even if they suffer some IL, as long as there is enough volume. In the manipulation scenario, the manipulator acts like a consistent noise trader, inducing counteracting trades and, thus, volume.

This makes sense. The higher the ratio of required committed liquidity to proposal funding value, the less manipulation risk/incentive exists, as the cost of failing is higher. However, the higher this ratio is, the more retro-PGF-esque this is (in the sense that more initial capital is required), and hence the less benefit is derived from using decision markets. So it is important that the mechanism is secure even if this ratio is quite far from 1, in order to maximally benefit from decision markets.

Perhaps a simple solution is to apply some bounds to the initial price so that it can’t be so extreme as to e.g. cause 99% IL, as a sanity check, while still mostly leaving the initial price up to the proposal creator.

On point 2, additionally, setting LP fees high enough can make it rational for LPs to start depositing even if they suffer some IL, as long as there is enough volume. In the manipulation scenario, the manipulator acts like a consistent noise trader, inducing counteracting trades and, thus, volume.

I agree with this in principle. I imagine, though, that if the ratio between initial liquidity and proposal funding ask value is sufficiently low, this could still lead to issues. But this is likely only an issue if the IL is absurdly high, due to no sanity check (referred to above) being in place.

My current mental model for this is that the speed and efficiency with which incorrect/manipulated prices are corrected is a function of the liquidity (among other things ofc). Hence, if the initial liquidity is sufficiently low, the manipulation effort may largely go unnoticed (within the relevant time frame) by rational informed traders, due to the low incentive they have to correct it.

I agree that the participation of informed traders will attract LPs to provide additional liquidity; however, I do not think this fundamentally alters the dynamic, given that in order for the informed traders to initially show up, they need an incentive.

You’re mostly right about the retro-pgf-esque part, but there are still some differences:

  1. this mechanism improves the metrics-ROI for the DAO (which is not the case for retro-PGF, at least not in a myopic way)
  2. this mechanism requires liquidity provision, which has a different structure from regular VC funding for retro-PGF projects: the risk/reward profile, lock-up duration, and amounts are very different.

To elaborate a bit on point 2, this mechanism basically requires project owners to find liquidity, and the more predictable the project is, the easier it will be. As a project is more predictable, IL can be assumed to be lower, and thus, initial liquidity provisioning will appear as a more valuable, short-to-medium-term investment.

How can we figure out whether the IL will be large or not? We could say: the initial price can’t be set outside some n standard deviations around the historical metrics measurements, together with making the market bounds wide enough. This could work for projects that have such past measurements.
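
A sketch of that sanity check, assuming at least two historical measurements are available; n, the history format, and the price bounds are assumptions:

```python
from statistics import mean, stdev

def clamp_initial_price(p_proposed: float, history: list[float],
                        v_min: float, v_max: float, n: float = 2.0) -> float:
    """Clamp the proposed initial Long price into a band of n standard deviations
    around the price implied by historical metric measurements (>= 2 samples)."""
    implied = [(min(max(h, v_min), v_max) - v_min) / (v_max - v_min) for h in history]
    mu, sigma = mean(implied), stdev(implied)
    lo, hi = max(0.0, mu - n * sigma), min(1.0, mu + n * sigma)
    return min(max(p_proposed, lo), hi)
```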

This deserves further modeling. My current mental model is that there will always be a slight value to trade even with very low liquidity, so some random walks will happen away from the price the manipulator is trying to fix. Whenever the price gets closer to bettors’ beliefs, some liquidity should jump into the market, even if only a tiny bit. Repeat this, and a form of mirror effect to why puddles evaporate happens.

Makes sense, good points. thx

I suppose though liquidity provision will generally be unprofitable, so as to make speculation +EV? Hence it kind of has to be a bad investment, in order to work?

Perhaps a simpler solution, which doesn’t require knowledge of the historical metrics, is to just enforce that the results of the market are only accepted if, during some observation window, the average $liquidity was > X% of the initial required $liquidity. This has the effect of leaving exactly how to achieve this up to the market creator, hence avoiding having to micromanage them.
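
A sketch of that acceptance check; the sampling of $liquidity over the window and the X threshold are assumptions:

```python
def accept_market_result(liquidity_samples: list[float],
                         required_initial_liquidity: float,
                         x_pct: float = 50.0) -> bool:
    """Accept the market's decision only if average $liquidity over the
    observation window stayed above X% of the required initial $liquidity."""
    avg = sum(liquidity_samples) / len(liquidity_samples)
    return avg > (x_pct / 100.0) * required_initial_liquidity
```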

This is interesting. Regarding the amount of liquidity, a fixed value could be a start but seems a bit dangerous in the longer term (a manipulator might have an incentive to add liquidity to ensure the decision is taken, and this depends on the extractable value).
Also, from the point of view of bettors, the larger the liquidity, the more incentive there is to participate. And we prefer that all information is revealed before the measurement is performed, to avoid a race during the measurement window. This suggests that participation should be incentivized through higher liquidity before the measurement window, and that the measurement window itself should have less liquidity.

Yes, but it might be profitable from the PoV of liquidity providers (the first of whom is the curator/auction winner/market creator) to participate as long as:

  • LP fees are high enough, which can only be justified if there are enough noise traders;
  • subsidies are high enough.

Basically, subsidies are there to compensate for LP negative EV.

Ah yeah I completely forgot about the liquidity subsidy section of your article.

Yeah, I think it makes sense for liquidity requirements to be a function of the magnitude of the ask amount, so that manipulation costs increase as manipulation incentives increase.

Yeah I think it is worthwhile for the liquidity observation window to start before the price/outcome observation window, so the market has time to equilibrate and factor in all information before measurement begins. However I also think that it is worthwhile to continue to monitor liquidity during the price observation window, so that liquidity doesn’t dry up to such a degree that manipulation becomes relatively easy while the price is being measured.

So interesting, thanks for this! It seems like a good way to make funding/grant allocation more efficient, and this is the Holy Grail in many industries.

DAOs seem to be the perfect targets to start with, as some of them have large treasuries and need to allocate them for grants to the ecosystem. I am no expert in prediction markets, but just want to share some thoughts and questions I have:

Comparative Analysis between Grant Allocation Methods:

It would be great to compare the efficiency of Futarchy vs. RPGF vs. Milestone-based grants. My understanding is that this can’t be done with the same projects… I guess a DAO could run two different programs in parallel and see which one yields a higher ROI, for example. Even this could vary a lot based on the projects; the goal would be to prove empirically and practically that one methodology is superior to the others.

Subsidies from the DAO:

My understanding is that subsidizing LPs will be a cost for the DAO. Can it be measured upfront with a max amount, so the DAO can know what to expect?

Is there a risk of having too few bettors for a PM? I guess if this is the case, it could make sense for the DAO to not only subsidize LP but also to incentivize bettors by dedicating a specific budget for the best bettors with a final Leader Board.

Curation Method:

I really like the auction mechanism by slot you described here. However, I feel like there is a risk of “plutocratic behaviors” that some actors with huge budgets are willing to take some costs to win the auction, in order just to state they have a “partnership” with a specific DAO. Do you see this as a risk? Can it be mitigated?

Also, imo this system could make sense not only for DAOs but also for regular companies, incubators, and VCs at some point. Eager to see what’s next!