Methodology & Data

Funding categorisations go through multiple steps to be created, assessed, voted on and used in the Catalyst process. The number of steps differs between challenge settings and funding categories, which take different approaches to funding categorisation.

This methodology explains the meaning of each data point used in the analysis, including a breakdown of the different steps through which categorisations are created and selected under each approach. A time value is attached to each stage for both funding categories and challenge settings so that a dollar value can be derived. Using a dollar value makes it simple to calculate a total overall cost and to compare the efficiency of the challenge setting and funding category approaches.
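
As an illustration of the model, the calculation for each stage is the time per categorisation multiplied by the number of categorisations, the number of participants performing the work and the relevant hourly rate (the rates are defined in the Hourly rates section below). The sketch below uses hypothetical stage hours and categorisation counts rather than figures from the analysis data.

```python
# Minimal sketch of the cost model described above. The stage hours,
# categorisation counts and participant numbers are hypothetical
# placeholders, not figures from the analysis data.

HOURLY_RATES = {"voter": 10, "challenge_team": 30, "proposer": 30}

def stage_cost(hours_per_categorisation, categorisations, hourly_rate, participants=1):
    """Dollar cost of one stage: time per categorisation x number of
    categorisations x participants doing the work x hourly rate."""
    return hours_per_categorisation * categorisations * participants * hourly_rate

# Example: 4 hours of creation effort for 30 challenge settings at $30/hour.
print(stage_cost(4, 30, HOURLY_RATES["challenge_team"]))  # 3600
```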

The following has been applied to various steps below regarding challenge settings:

  • Estimations for new and repeated challenge settings - Both new and repeated challenge settings can be submitted in each funding round. A different time cost for categorisation is defined for each in the relevant steps. For simplicity, a rough 50/50 split between new and repeated categorisations is assumed for each funding round.

Data sources

Cost comparison analysis data

Original data sources

Hourly rates

The time spent by different roles in the process needs to be accounted for as a cost in the same unit of account. For this analysis the United States dollar will be used.

Voters - $10 per hour

Anyone with ADA can be a voter in the Catalyst funding process. Average hourly wages differ greatly between countries. The cost of voting is included in this analysis to highlight the cost of participation: the more time it takes someone to vote on funding categorisations, the less likely they are to do so due to the cost of their time. For simplicity, a $10 hourly rate will be used. This rate will be attached to the time voters spend participating in the voting process.

Challenge team members - $30 per hour

Challenge team members help to research and create categorisations and then lead them if they are selected. Other interested community members can also join in the later stages. Ideally anyone fulfilling this role has the analytical ability to understand the problems and goals of the community, to make sensible categorisations that state which proposal types to include, and to attach a sensible budget weighting to each categorisation. Due to the importance of this work it is valued at a mid to senior salary range from the analysis role salary research in the contributor analysis. This roughly equates to $70,000 annually, which is around $33.65 per hour. For simplicity a $30 hourly rate will be used. This rate will be applied when attaching a cost to challenge team work.

Community advisors & Veteran community advisors - Use the CA & vCA budget allocation from the fund for calculations. Use $30 per hour for any work outside that scope.

Community advisors and veteran community advisors help to assess the proposals that are submitted, which currently also includes funding categorisations in the form of challenge settings. The role requires a certain level of analytical ability to understand the wider ecosystem and the categorisation being proposed. Assessors need to determine whether a categorisation aligns with community goals, whether it is a suitable categorisation and whether its budget is suitable to achieve its intent. Each funding round a percentage of the funds is allocated to the CA and vCA processes, at 4% and 1% respectively. These budgets are used to determine the cost per categorisation assessment. For any work outside the scope of these budgets a $30 per hour rate is applied.

Proposers - $30 per hour

Annual salaries for a number of roles relevant to working in the Catalyst and Cardano ecosystem were researched in the contributor analysis. Most of these roles would be performed by proposal teams. The salary estimates range from $50,000 to $120,000 depending on the role and seniority. The higher end of this range can be considered conservative for developers, as those working in the blockchain industry can often find higher paying salaries due to the complexity and skill level required of them. With this in mind a conservative average annual salary of $70,000 is applied to accommodate a wide range of roles and skill levels among proposers. This salary equates roughly to a $33.65 hourly rate. For simplicity a $30 hourly rate will be used. This rate will be applied when attaching a cost to any proposer time required in the funding process.
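
The conversion from annual salary to hourly rate implies roughly 2,080 working hours per year (40 hours over 52 weeks); this assumption is made explicit in the sketch below.

```python
# Annual salary to hourly rate, assuming 2,080 working hours per year
# (40 hours x 52 weeks); this assumption is implied by the $33.65 figure.
annual_salary = 70_000
hourly_rate = annual_salary / (40 * 52)
print(round(hourly_rate, 2))  # 33.65, rounded down to a $30 rate for simplicity
```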

Funding round data

There are a number of data inputs for each funding round that are used for this analysis. These inputs are covered below with what they mean and why they’re included.

Fund total

Total funding allocated to a funding round.

CA budget

Total funding allocated to the community advisor assessment process. This is currently 4% of the fund total.

vCA budget

Total funding allocated to the veteran community advisor assessment review process. This is currently 1% of the fund total.

All proposals total

Total number of proposals across an entire funding round. Includes normal proposals requesting funding and also proposals suggesting funding categorisations like challenge settings.

Funding categorisations total

Total number of funding categorisations being suggested for a given funding round. For challenge settings these come in the form of proposals by the community. For funding categories these are already defined.

Categorisation proposals percentage

Percentage of all proposals that are focussed on funding categorisation. This is only relevant to the challenge setting approach.

Number of proposers (estimate)

Number of proposers in a given funding round. A simple estimate of one proposer per proposal is used. This number is reduced where the same proposer makes multiple proposals, and increased where a single proposal has multiple co-proposers. For simplicity a value of one proposer per proposal will be used for this analysis.

Number of voters (estimate)

Number of people who voted in each funding round. This value is derived from the unique wallets in the voting results data by finding the proposal with the highest number of unique wallet votes. Some voters may have multiple wallets, which would reduce this number, while other voters will not have voted on the proposal with the highest unique wallet count. Because of the latter the estimate is likely a conservative one and the true figure could be higher.
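
A minimal sketch of how this estimate can be derived, assuming the voting results data exposes a unique wallet vote count per proposal; the proposal names and counts below are hypothetical.

```python
# Hypothetical unique wallet vote counts per proposal in a funding round.
unique_wallet_votes = {
    "proposal_a": 41_000,
    "proposal_b": 44_500,
    "proposal_c": 39_800,
}

# The proposal with the highest unique wallet vote count is used as a
# conservative estimate of the number of voters in the round.
estimated_voters = max(unique_wallet_votes.values())
print(estimated_voters)  # 44500
```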

Categorisation CA budget (estimate)

The total estimated budget that will be used by the CA assessment process to cover the funding categorisations submitted. The value is calculated by applying the categorisation proposals percentage to the total CA budget. This gives an estimated amount that would be used for assessing the challenge proposals.

Categorisation vCA budget (estimate)

The total estimated budget that will be used by the vCA assessment process to cover the funding categorisations submitted. The value is calculated by applying the categorisation proposals percentage to the total vCA budget. This gives an estimated amount that would be used for reviewing the categorisation assessments.
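
Both budget estimates follow the same calculation, sketched below; the fund total and proposal counts are hypothetical, while the 4% and 1% allocations are taken from the fund data above.

```python
# Hypothetical fund total and proposal counts; only the 4% / 1%
# allocations are taken from the fund data.
fund_total = 16_000_000
ca_budget = fund_total * 0.04   # CA budget: 4% of the fund total
vca_budget = fund_total * 0.01  # vCA budget: 1% of the fund total

categorisation_proposals = 90
all_proposals = 1_200
categorisation_percentage = categorisation_proposals / all_proposals

categorisation_ca_budget = ca_budget * categorisation_percentage
categorisation_vca_budget = vca_budget * categorisation_percentage
print(round(categorisation_ca_budget), round(categorisation_vca_budget))  # 48000 12000
```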

Total selected categorisations

The number of funding categorisations that are actually selected and used in a given funding round.

Funding categorisation stages

The following are the different stages to consider for how funding categorisations are created, selected and used in each funding round.

Categorisation creation

Teams form to create new or reuse existing categorisations for use in a given funding round.

Challenge settings (4 hours per categorisation)

Challenge settings allow for any form of categorisation to be submitted. In practice some challenges get repeated from previous rounds and some new categorisations get added:

  • Repeated categorisations (3 hours each categorisation) - Teams repeating a categorisation do not need to spend much time defining or updating it. However they will need to spend time updating objectives or success metrics and deciding on the budget weighting to use. To do this effectively, a conservative estimate of around three hours of total effort could be needed to review previous categorisation amounts and current ecosystem requirements.

  • New categorisations (5 hours each categorisation) - New challenge based categorisations must be defined, including what the challenge is about, what proposals should be included, any objectives or success metrics, and the budget weighting. All of these aspects should ideally take into account other previous or currently submitted categorisations. A conservative estimate to do this task to a sufficient quality is at least 5 hours of effort.

  • Average categorisation creation (4 hours each categorisation) - Averaging the effort of the two categorisation types equally.

Funding categories (No effort required)

Not applicable - Funding categories are already defined so no effort here is required.

Collaboration & moderation

Collaboration and moderation concerns the effort needed to ensure that funding categorisations follow guidelines and to reduce any bad outcomes.

Challenge settings

Challenge settings are a competitive form of categorisation. Categorisations compete with one another in the voting process to be selected. Because of this there is a need for collaboration between challenge teams to remove duplication and cross over where necessary. In addition, or as an alternative approach, moderation from Catalyst contributors can fulfil the role of ensuring that sensible guidelines for categorisations are being followed. With this in mind the following estimations are applied:

  • Repeated categorisations (30 minutes each categorisation) - Teams repeating a categorisation may or may not need to collaborate with other challenge teams, and categorisations that have been used before may not need much moderation. Due to this a conservative average of 30 minutes of effort is applied.

  • New categorisations (60 minutes each categorisation) - Introducing new categorisations often means trying to change or improve existing categorisations or to introduce a new goal for the ecosystem. Challenge team members will benefit from collaborating with people in the focus area relevant to the categorisation they are creating. Both collaboration and moderation of these new categorisations could be required. A conservative average of 60 minutes of effort is attached.

  • Average collaboration & moderation (45 minutes each categorisation) - Averaging the effort of the two categorisation types equally.

Funding categories

Not applicable - Funding categories are an inclusive form of categorisation covering all forms of ideas and innovation. They do not require teams to form each funding round to collaborate on creating categorisations, nor do they need to be moderated prior to assessment, as they are already predefined.

Voting

Voters choose the categorisations in a competitive funding categorisation approach. With funding categories, voters instead vote on the budget weightings for each of the categories.

Voter knowledge (Not used in analysis - Cost of participation for reference)

Voter knowledge matters in any situation where there is a vote on selecting funding categorisations or on the budget weightings to apply. The more specific and granular the categorisation, the more understanding and knowledge the voter needs to make well informed decisions.

  • Ecosystem priorities understanding (1 hour) - To make a well informed decision, voters would ideally need to have spent time in the ecosystem in the months leading up to the vote to understand the community's main priorities and problems, along with any progress made in previous funding rounds. With this basic insight into the ecosystem, a voter is better positioned to select the most promising funding categorisations. This cost of participation is included for reference only. It applies in some form to any funding categorisation approach where the voter is required to vote on how things are categorised. Under funding categories this cost could eventually be removed, as the categorisation approach is standardised and the budget weighting could be automated based on goals & objectives data along with categorisation usage data.

Challenge settings

As challenge settings are a changing and competitive form of categorisation, voters will need to spend time reviewing each of the categorisations submitted to be well informed on which categorisations to vote for.

  • Categorisation selection (1 minute each categorisation) - Each challenge setting has potential objectives, proposal types that can be included and a budget weighting that should be reviewed and then compared with the other challenges. For some challenges that are repeated from previous rounds the voter would not need to read the categorisation in the same depth. A very conservative estimate of 1 minute is applied to cover the average amount of time required to understand a categorisation and compare it with others. Some categorisations have historically been short and quick to read whereas others have gone into more depth. The complexity with challenge settings is that voters must decide which categorisations they want to include and exclude using their vote. This factor increases the importance of the decision and the time required to consider the options.

Funding categories

Funding categories are a recurring form of categorisation, meaning that voters would learn the categorisations once and then take this understanding forward into the next funding round. Voters would therefore not need to review categorisations in much depth each funding round.

  • Budget weighting vote (1 minute each categorisation) - Some voters would require very little to no time to vote if they already know the categorisations from a previous voting round. Other voters may need to read through the categorisations. To account for both voters who can vote very quickly and those who are new and learning the categorisations, an average of 1 minute for each categorisation will be allocated - similar to challenge setting categorisations. A key difference to consider is how funding categories remove the complexity of needing to choose between inclusion and exclusion of certain categorisations. Instead the only decision concerns the budget weighting to apply to the categories. This reduction in complexity can ultimately save time in decision making.
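
A sketch of how the voting stage time converts into a dollar cost for each approach, using the $10 voter rate and 1 minute per categorisation; the voter count and challenge setting count are hypothetical, while the seven funding categories reflect the current category count.

```python
# Voter time cost per approach: 1 minute per categorisation at $10/hour.
# The voter count and challenge setting count are hypothetical.
VOTER_HOURLY_RATE = 10
MINUTES_PER_CATEGORISATION = 1

def voting_cost(voters, categorisations):
    hours = voters * categorisations * MINUTES_PER_CATEGORISATION / 60
    return hours * VOTER_HOURLY_RATE

print(voting_cost(voters=45_000, categorisations=30))  # challenge settings: 225000.0
print(voting_cost(voters=45_000, categorisations=7))   # funding categories: 52500.0
```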

Categorisation usage

Once funding categorisation selection and budget weightings are finalised, the next consideration is how long it takes for proposers to understand and submit proposals to those categorisations.

Challenge settings

Challenge settings can introduce new categorisations each funding round. Proposers need to review the selected categorisations to find which ones are relevant to their proposals.

  • Categorisation review (1 minute each categorisation) - Proposers would not necessarily need to review every categorisation as they would mostly focus on a subset of the areas in which they could propose. However, challenge settings can often have cross over with other categorisations, which leads to proposers needing to consider multiple categorisations before submitting their proposals. Interpretation must also be taken into account, as the potential for cross over and the changing categorisations add complexity in determining what can and cannot be submitted. Due to this, a conservative estimate is that proposers spend roughly 2 minutes each on about 50% of the categorisations available, which equates to an average of 1 minute per categorisation. This estimate is conservative given the complexity for proposers of comparing options for where to submit proposals and of interpreting what can be submitted to each categorisation.

Funding categories

Funding categories are recurring and inclusive, meaning that proposers only need to learn the categorisations once and will only occasionally see updates to those categorisations.

  • Categorisation review (1 minute each categorisation) - Proposers would not necessarily need to review every categorisation as they would mostly focus on a subset of the categories. Some proposers would need to spend no time reviewing the categorisations if they had used them in a prior round, while new proposers would need to review each of the categorisations. There are seven funding categories in total at the moment. To account for both newcomers and proposers who already know the categories, a 1 minute average review time will be applied. Funding categories separate proposal focus areas explicitly, which helps increase clarity and makes it easier to determine where to submit different proposals.
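
A similar sketch for the categorisation usage stage using the $30 proposer rate; the proposer count and challenge setting count are hypothetical, and the challenge setting average reflects roughly 2 minutes spent on about half of the categorisations.

```python
# Proposer review time at the categorisation usage stage, at $30/hour.
# The proposer count and challenge setting count are hypothetical.
PROPOSER_HOURLY_RATE = 30

# Challenge settings: ~2 minutes on ~50% of categorisations = 1 minute average.
avg_minutes_challenge_setting = 2 * 0.5

def review_cost(proposers, categorisations, minutes_per_categorisation):
    hours = proposers * categorisations * minutes_per_categorisation / 60
    return hours * PROPOSER_HOURLY_RATE

print(review_cost(1_000, 30, avg_minutes_challenge_setting))  # challenge settings: 15000.0
print(review_cost(1_000, 7, 1))                               # funding categories: 3500.0
```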

Categorisation analysis

Analysis of the effectiveness of funding categorisation in the Catalyst process is currently carried out by the challenge teams after three months of execution. Analysis in this area is important as it helps determine lessons that can be learnt from previous categorisations and how they can improve future efforts. Contributors in the community responsible for improving the Catalyst process would need to spend time reviewing and analysing how any categorisation approach has performed after each funding round. Analysis could cover where proposal types were submitted, what was funded and how access to funding affected the voting outcomes.

Challenge settings

  • Categorisation analysis (15 hours each categorisation) - Challenge settings require more time for analysis due to being a changing and competing form of categorisation. More effort is needed to extract useful information on the effectiveness of categorisation against previous rounds due to the changing selection and structure of the categorisations. Challenge teams are currently compensated for creating a final report on this task and are allocated 15 hours.

Funding categories

  • Categorisation analysis (15 hours each categorisation) - Funding categories are inclusive and recurring, meaning less analysis is needed on the funding access each focus area had, as the categories are more explicit and clear. Instead the analysis effort can be better directed at the proposals which were and weren't funded and the impact of the budget weightings on voting results. This data would then be fed back into the process for the next round to make better informed budget weighting decisions.
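
For both approaches the analysis stage cost is 15 hours per selected categorisation at the $30 challenge team rate; the categorisation counts in the sketch below are hypothetical, with seven reflecting the current funding categories.

```python
# Analysis stage cost: 15 hours per selected categorisation at $30/hour.
# Categorisation counts are hypothetical.
ANALYSIS_HOURS = 15
HOURLY_RATE = 30

print(ANALYSIS_HOURS * 30 * HOURLY_RATE)  # challenge settings, 30 selected: 13500
print(ANALYSIS_HOURS * 7 * HOURLY_RATE)   # funding categories, 7 categories: 3150
```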
