Scaling high-quality human judgement

Deep Funding uses dependency graphs and a market of AI or human allocators, steered by a spot-checking jury, to pay the many open source contributors directly or indirectly upstream of a project or outcome that a funder cares about.

Take on the challenge

How does Deep Funding work?

Deep Funding requires three components:

A data layer mapping the dependencies (vertices) of a project and the connections (edges) between them.

A distilled human judgement mechanism (human or AI "agent allocators") that assigns weights to the edges to indicate their relative importance.

A jury to "spot-check" the graph, expressing expert preferences on a randomly selected set of edges. The mechanism then distributes funding to repos based on the weights provided by the agent allocators that are most compatible with the spot-check results.
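To make the three components concrete, here is a minimal sketch of the data layer and how edge weights could turn into payouts. All project names, weights, and the `keep` split are hypothetical illustrations, not part of the actual mechanism:

```python
# Sketch of Deep Funding's data layer (hypothetical names and weights).
# Vertices are projects; an edge from A to B with weight w answers
# "what percent of the credit for A belongs to B?".
# Assumes the graph is a DAG (no dependency cycles).

def propagate(graph, root, funding, keep=0.5):
    """Distribute `funding` from `root` through weighted dependency edges.

    Each node with dependencies keeps a `keep` share of what it receives
    and passes the rest upstream in proportion to its outgoing edge
    weights (which are assumed to sum to 1 per node).
    """
    payouts = {}

    def visit(node, amount):
        deps = graph.get(node, {})
        # Leaf nodes keep everything; others keep only their share.
        kept = amount if not deps else amount * keep
        payouts[node] = payouts.get(node, 0.0) + kept
        passed = amount - kept
        for dep, weight in deps.items():
            visit(dep, passed * weight)

    visit(root, funding)
    return payouts

# Hypothetical graph: 60% of Ethereum's credit goes to geth, 40% to solidity.
graph = {
    "ethereum": {"geth": 0.6, "solidity": 0.4},
    "geth": {"go-libp2p": 1.0},
}
payouts = propagate(graph, "ethereum", 100_000)
# ethereum keeps 50,000; geth 15,000; go-libp2p 15,000; solidity 20,000
```

The real graph has tens of thousands of vertices and the splitting rule is set by the mechanism, but the shape of the computation is the same: weights on edges, funding flowing downstream from the thing the funder cares about.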

About

Current public goods funding mechanisms require funders to evaluate each project, making the process increasingly burdensome as the number of applicants grows. Funding mechanisms end up depending either on a large group of people, which incentivizes projects to publicly campaign to get themselves known, or on a small group, which incentivizes privately currying favor and limits the mechanism's ability to scale beyond a small number of projects.


In Deep Funding, most of the work is done by a public market of allocators that propose weights for the edges of the graph, each weight answering the question "what percent of the credit for A belongs to B?". There could eventually be millions of weights to allocate, so we encourage using AI to do this.


Funders then "spot-check" the graph at a small number of randomly selected locations. The goal is to determine which models most closely align with their preferences.

How it works in Practice

A bunch of models give suggestions on donations. You are then shown a random selection of "quiz questions": in each, you see two projects A and B and answer "which deserves more?". At the end, each model gets a score for how well it complies with your preferences. You can then fund based on a weighted average of the models' results, giving higher weights to the models that match your preferences more closely.
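The quiz-and-blend step above can be sketched in a few lines. The scoring rule (fraction of pairwise answers a model agrees with) and the score-weighted average are illustrative simplifications; model names, allocations, and juror answers are all hypothetical:

```python
# Sketch of the spot-check scoring and blending step (all data hypothetical).
# Each model proposes an allocation over projects; the juror answers
# pairwise "which deserves more?" questions.

def score(allocation, answers):
    """Fraction of pairwise answers this allocation agrees with.

    `answers` is a list of (a, b) pairs meaning "a deserves more than b".
    """
    agree = sum(1 for a, b in answers if allocation[a] > allocation[b])
    return agree / len(answers)

def blend(models, answers):
    """Score-weighted average of the models' allocations.

    Assumes at least one model scores above zero.
    """
    scores = {name: score(alloc, answers) for name, alloc in models.items()}
    total = sum(scores.values())
    projects = next(iter(models.values())).keys()
    return {
        p: sum(scores[m] * models[m][p] for m in models) / total
        for p in projects
    }

models = {
    "model_x": {"geth": 0.7, "solidity": 0.3},
    "model_y": {"geth": 0.4, "solidity": 0.6},
}
answers = [("geth", "solidity")]  # juror says geth deserves more
final = blend(models, answers)
# model_x agrees with the juror, model_y does not, so the blend
# follows model_x's allocation here.
```

In practice the scoring rule would be more robust (many jurors, many questions, softer weighting), but the principle is the same: a handful of local pairwise judgements steers a large space of candidate allocations.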


One way to view this is that public submissions give you a large, eventually multi-million-dimensional sea of possible answers, and your job is to provide a small number of inputs to "steer" toward the region with the best answers.

Because each spot-check is "local", you never have to answer difficult grand questions like "what is the dollar value of project X to humanity?". Instead, you answer much more concrete questions like "compare the direct impact of A and B on C".


By creating an open market where anyone can submit models that compete on how well they satisfy human preference spot-checks, we avoid concentrating power in whoever designs the model. This approach also ensures that funders' input meaningfully influences allocations while dramatically reducing their cognitive overhead.


Collaborators

Storyteller: VoiceDeck

Weaver: Gitcoin Allo Protocol

Wizard: Open Source Observer

Sprinkler: Drips

Referee: Pairwise

Gardener: Vitalik Buterin

The Challenge

Develop an agent allocator function to assign weights to 40,000 identified Ethereum dependencies

Total prize pool: $250k

$170k: To repos, based on the weighting of their edges by the winning model

$40k: To the models that conform best with spot-check results from jury members manually giving weights

$40k: Prizes for open source model submissions, based on how interesting they are to jury members

Timeline

December 12th: Data on 40,000 Ethereum dependencies for building your model

Jan 20th: Sample spot-check data from jury members to train your model

Jan 20th: Deadline for "early bird" prizes for open source model submissions. At least half of the open source model submission prize pool will be reserved for early bird submissions.

Feb 20th: Submit your model

Feb 27th: Results

Join the Telegram group for more

Built on Framer by Pollen Labs.

Deep Funding

Solving the problem of scaling high-quality human judgement
