Challenge Setting Assessment Issues
Existing issues in how challenge settings are assessed.
The current assessment process for challenge settings is covered below, along with the issues in the current assessment criteria.
Assessment Guide Source: Current Fund 8 guidance.

Alignment

Criteria
This challenge is critical to achieve Cardano's mission.
Guidance - IOG guidance for Fund 9
Issues
  • "Cardano's mission" - This wording suggests that Cardano's mission is the only thing important to the challenge setting process when it is also not made clear what that mission exactly is. 'Banking the unbanked' or 'financial operating system' are some common communicated mission statements made within the community however this would not be a fair reflection of the global purpose of Cardano and ambitions of the community. The wording here is not effective.
  • Guidance controlled by central actor - IOG are the central actor that determine what is a priority with the high level goals provided. These allow little room for community purpose driven challenges and do not necessarily reflect the priorities of the community. No open feedback process seems to be present for this guidance.

Feasibility

Criteria
The Catalyst community has the capacity to address this challenge. Bonus if there's an established challenge team.
Issues
  • 'Catalyst community' - This should be the entire Cardano community, not just the Catalyst community.
  • 'community has the capacity to address this challenge' - This statement is very high level and does not make it easy to assess most challenges on a one to five star basis. Obvious examples of settings that could score poorly on capacity are 'Cure cancer' or 'Cure world hunger'. However, it is hard to state with any guarantee what the community does or does not have the capacity to deliver. Claiming the community has no capacity to 'Cure cancer' is hard to actually prove and could in fact be incorrect - how can anyone fully know that is the case? This criterion leads to a rough ranking based on the assessor's subjective personal opinion of what the community has capacity for. This is inherently difficult to know, and the criterion would therefore be better left out of the core assessment criteria.
  • 'Bonus if there's an established challenge team.' - Challenge teams are not set at the time of the proposal. People who join as co-proposers are not required to be on the challenge team, so having an 'established challenge team' is not an effective bonus factor, and the co-proposers added do not necessarily reflect who ends up on the challenge team. The word bonus is also risky: challenge settings without co-proposers could be ranked down even when they are backed by more data, insights and reasoning for that funding categorisation, and they could still attract many challenge team members after being selected. The community encourages people to join a challenge team after a challenge is selected, which further highlights why this should not be an important factor in assessing challenge settings. Assessments of challenges should therefore not use the current challenge team to determine the quality, risks or potential outcome of a challenge setting.

Verifiability

Criteria
Success criteria and suggested metrics are set correctly to measure progress in addressing the challenge.
Issues
  • Favours specific challenges over broad challenges - By requesting suggested success metrics, the challenge setting process favours more specific challenges over broader categorisation. For instance, effective success metrics for a DeFi focussed challenge could be 'Increase total TVL' or 'Increase transaction volume' in the DeFi ecosystem. Broader categorisation has a wider scope for the proposals it includes, meaning it needs to focus on higher level objectives rather than more specific metrics that are easier to verify. For instance, 'Increase the number of impactful products or integrations available' could be an objective for a Products & Integrations challenge. This is more an objective than an effective metric, as it is hard to verify what counts as impactful without being more specific. Proposals in that challenge could be DeFi protocols, and they should be encouraged to adopt the more specific metrics such as increasing TVL or transaction volume, since these are the useful, verifiable metrics specific to each proposal. Adding criteria that require metrics for assessing challenge settings therefore favours specific challenges over broader, objective based categorisation. This is problematic due to the issues behind specific categorisation, such as the high justification governance effort and the higher budget weighting complexity discussed in the comparison between specific and broad categorisation. Promoting only challenges with specific metrics also reduces the chances of other types of challenge settings being selected, such as a miscellaneous challenge, which might not have specific metrics but can still provide high impact as a challenge.
  • Proposal metrics & challenge metrics - Proposal metrics should be accurate and useful for gauging whether a proposal has a high impact. Challenge metrics have the issue of either limiting the types of proposal that can be included or applying metrics that are not useful for the proposals that get submitted. Every proposal is different, and what determines success for one proposal is not necessarily the same as for another. It is therefore difficult to give a challenge a small set of metrics that covers all the possibilities unless the challenge is more specific, which then leads to all the issues outlined in this analysis around specific challenges. Higher level objectives can instead be verified by aggregating the impact of individual proposals based on their own metrics; the funding categorisation does not need to specify these metrics, which could be set in a separate part of the funding process by the community. As an example, a DeFi community / alliance / guild could help recommend metrics for the community to strive towards. Those metrics could then be used to assess the impact of DeFi proposals being voted on, or of already funded proposals whose overall impact the community wants to assess. A small illustrative sketch of this aggregation idea follows this list.
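To make the aggregation idea concrete, here is a minimal sketch of rolling per-proposal metrics up into a challenge-level view. It is purely hypothetical: the proposal names, metric names and figures are invented for illustration and do not represent any actual Catalyst tool or process.

```python
# Hypothetical sketch: verify a higher level objective by aggregating the
# metrics each proposal defines for itself, instead of fixing one metric
# set at the challenge level. All names and numbers below are invented.
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class Proposal:
    name: str
    # Each proposal reports the metrics that are meaningful for it.
    metrics: dict[str, float] = field(default_factory=dict)


def aggregate_impact(proposals: list[Proposal]) -> dict[str, float]:
    """Sum each reported metric across proposals for a challenge-level view."""
    totals: dict[str, float] = {}
    for proposal in proposals:
        for metric, value in proposal.metrics.items():
            totals[metric] = totals.get(metric, 0.0) + value
    return totals


if __name__ == "__main__":
    defi_proposals = [
        Proposal("DEX aggregator", {"tvl_increase_usd": 1_200_000, "tx_volume_increase": 50_000}),
        Proposal("Lending protocol", {"tvl_increase_usd": 800_000}),
        Proposal("Wallet integration", {"new_integrations": 3}),
    ]
    # A community group (e.g. a DeFi guild) could decide which of these
    # aggregated figures count as evidence towards the broader objective.
    print(aggregate_impact(defi_proposals))
```

The point of the sketch is that the challenge itself only needs the broad objective; the verifiable detail lives with each proposal and can be aggregated afterwards by whichever community group takes on that role.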