Deep Funding uses dependency graphs and a market of AI or human allocators, steered by a spot-checking jury, to pay a large number of open source contributors directly or indirectly upstream of a project or outcome that a funder cares about.
Take on the challenge
Deep Funding requires three components:
A data layer mapping the various dependencies (vertices) of a project and the connections between them (edges).
A distilled human judgment mechanism (human or AI ‘agent allocators’) that assigns weights to the edges to indicate their relative importance.
A jury to “spot-check” the graph, expressing expert preferences on a randomly selected set of edges. The mechanism then distributes funding to repos based on the weights provided by the agent allocators most compatible with the spot-checking results (a rough sketch of these pieces follows below).
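For concreteness, here is a minimal sketch of what the data layer and one allocator's submission could look like. The project names, edge structure, and normalization rule below are illustrative assumptions, not the actual Deep Funding schema.

```python
from collections import defaultdict

# Hypothetical dependency graph: each directed edge (project, dependency)
# carries the question "what percent of the credit for `project` belongs
# to `dependency`?"
edges = [
    ("project-x", "lib-a"),
    ("project-x", "lib-b"),
    ("lib-a", "lib-c"),
]

# One allocator's proposed answer: a weight per edge. Assumption: the
# outgoing weights of a node sum to at most 1, with the remainder being
# the credit the project keeps for its own original work.
allocator_weights = {
    ("project-x", "lib-a"): 0.4,
    ("project-x", "lib-b"): 0.3,
    ("lib-a", "lib-c"): 0.5,
}

def outgoing_totals(weights):
    """Sum the proposed weights of all edges leaving each node."""
    totals = defaultdict(float)
    for (parent, _dep), w in weights.items():
        totals[parent] += w
    return dict(totals)

# Sanity check: no project gives away more than 100% of its credit.
assert all(total <= 1.0 for total in outgoing_totals(allocator_weights).values())
```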
About
Current public goods funding mechanisms require funders to evaluate each project, making the process increasingly burdensome as the number of applicants grows. Funding mechanisms end up depending either on a large group of people, creating incentives for projects to publicly campaign to get themselves known, or on a small group of people, creating incentives to privately curry favor and limiting the mechanism’s effectiveness at scaling beyond a small number of projects.
In Deep Funding, most of the work is done by a public market of allocators that propose weights for the edges in the graph; each weight answers the question “what percent of the credit for A belongs to B?”. Potentially, there could be millions of weights to allocate, so we encourage using AI to do this.
Funders have to “spot-check” the graph at a small number of randomly selected locations. The goal is to determine which models most closely align with their preferences.
How it works in Practice
A bunch of models give suggestions on donations. Then you are shown a random selection of “quiz questions”: you see two projects A and B and have to answer, “which deserves more?”. At the end, you get a score for how well each model matches your preferences. You can then fund based on a weighted average of the models’ results, giving higher weights to the models that match your preferences more closely.
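As a rough illustration of that scoring and blending step (the data shapes and the simple agreement-rate scoring rule here are assumptions, not the exact rule Deep Funding uses), one could score each model by how often it ranks a spot-checked pair the same way you do, then average the models’ weights using those scores:

```python
# Hypothetical spot checks: for a parent project, the juror says which of
# two dependencies deserves more of the parent's credit.
spot_checks = [
    {"parent": "project-x", "a": "lib-a", "b": "lib-b", "juror_prefers": "lib-a"},
]

def score_model(model_weights, spot_checks):
    """Fraction of spot checks where the model ranks the pair like the juror."""
    agreements = 0
    for check in spot_checks:
        w_a = model_weights[(check["parent"], check["a"])]
        w_b = model_weights[(check["parent"], check["b"])]
        model_prefers = check["a"] if w_a >= w_b else check["b"]
        agreements += model_prefers == check["juror_prefers"]
    return agreements / len(spot_checks)

def blend_models(models, spot_checks):
    """Weighted average of the models' edge weights, weighted by spot-check score."""
    scores = {name: score_model(weights, spot_checks) for name, weights in models.items()}
    total_score = sum(scores.values())
    blended = {}
    for name, weights in models.items():
        for edge, w in weights.items():
            blended[edge] = blended.get(edge, 0.0) + w * scores[name] / total_score
    return blended

# Example: the model that agrees with the juror gets more influence on the blend.
models = {
    "model-1": {("project-x", "lib-a"): 0.6, ("project-x", "lib-b"): 0.2},
    "model-2": {("project-x", "lib-a"): 0.1, ("project-x", "lib-b"): 0.7},
}
print(blend_models(models, spot_checks))
```

The actual scoring rule could just as well be a smoother measure than a simple agreement rate, but the overall shape is the same: spot checks decide how much to trust each model, and the blended weights drive the payouts.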
One way to view this is that public submissions give you a large, eventually multi-million-dimensional, sea of possible answers, and your job is to provide a small number of inputs to “steer” into the region with the best answers.
Because each spot-check is “local”, you never have to answer difficult grand questions like “what is the dollar value of project X to humanity?” - instead, you answer much more concrete questions like “compare the direct impact of A and B on C”.
By creating an open market where anyone can submit models that compete on how well they satisfy human preference spot checks, we avoid concentrating power in whoever comes up with the model. This approach also ensures that funders’ input meaningfully influences allocations while dramatically reducing their cognitive overhead.
Storyteller: VoiceDeck
Weaver: Gitcoin Allo Protocol
Wizard: Open Source Observer
Sprinkler: Drips
Referee: Pairwise
Gardener: Vitalik Buterin