Design of Bug Bounty Schemes


This project started in Fall 2022 and is ongoing.


Prof. Dr. Hans Gersbach (ETH)
Dr. Fikri Pitsuwan (ETH)

Industry Partner:

Swiss Post


Software often contains security vulnerabilities and can be attacked by adversaries, with potentially significant negative social or economic consequences. To protect themselves, organizations traditionally invest significant resources in building and maintaining dedicated security teams. In recent years, however, systems have grown in complexity, and internal teams can no longer adequately address potential vulnerabilities: it may be that as many as 50% of all bugs are never found internally.

Given this, organizations have increasingly relied on bug bounty programs, in which external individuals probe the system and report any vulnerabilities (bugs) in exchange for monetary rewards (bounties). Beyond tech companies and blockchain projects, recent successes of these programs have led authorities to adopt bug bounties as a central element of government cybersecurity. For example, the Federal Council of Switzerland states in a recent press release that “standardised security tests are no longer sufficient to uncover hidden loopholes. Therefore, in the future, it is intended that ethical hackers will search through the Federal Administration’s productive IT systems and applications for vulnerabilities as part of so-called bug bounty programmes.”[1]

Despite its growing importance, however, the design of bug bounty schemes for software and blockchains has not been the focus of economic research. Our project aims to offer insights into key dimensions of bug bounty design using tools from game theory and mechanism design.

Specifically, we build foundational models to study bug bounty schemes, focusing on the following design variables:

  • How large should the crowd of agents invited to find bugs be?
  • Should paid experts be added to the crowd of invited bug finders?
  • Should artificial bugs be added to the software to increase participation in bug finding?
  • How should prizes for real and artificial bugs be designed?
  • How should the existence of artificial bugs be communicated?
  • How should prizes for successful bug finding be determined?
  • How do entry checks and barriers regarding the reputation and past achievements of security researchers affect the probability of finding bugs?
  • Alternatively, would the opposite approach (admitting only greenhorns) be beneficial in a bug bounty scheme?
  • How can bug bounty programs be designed to attract able researchers in the war for security talent?
  • How can rewards (monetary payments, reputation, and career concerns) be optimally mixed to achieve the best balance between software security and the costs of bug bounty schemes?

To answer these questions, we develop foundational models of crowd-sourced security and derive insights for bug bounty schemes in particular environments. A group of individuals of arbitrary size is invited to search for a bug; whether a bug exists is uncertain. Individuals differ in ability, which we capture by differing costs of achieving a given probability of finding the bug if one exists. Costs are private information. We study equilibria of the resulting contest and characterize the optimal design of bug bounty schemes. In particular, the designer can vary the size of the invited group, add a paid expert, insert a known bug with some probability, and pay multiple prizes.

We obtain the following results. First, we characterize the equilibria, establishing that any equilibrium strategy must be a threshold strategy: only agents with search costs below some (potentially individual) threshold participate in the bug bounty scheme. Second, we provide sufficient conditions for the equilibrium to be unique and symmetric. Third, we show that even inviting an unlimited crowd does not guarantee that the bug, if it exists, is found, unless there are agents with zero search costs or, equivalently, with intrinsic gains from participating in the scheme. It may even happen that enlarging the pool of potential participants lowers the probability of finding the bug. Fourth, adding a paid expert can increase or decrease the efficiency of the scheme. Fifth, we demonstrate that in a model with multiple prizes, a single prize (winner-takes-all) achieves the highest probability of finding the bug. Sixth, we identify circumstances in which asymmetric equilibria occur. Lastly, we illustrate how our baseline model can be extended to allow for multiple bugs, multiple experts, and heterogeneity of agents with respect to cost distributions, search times, and skills.
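For illustration, a symmetric threshold equilibrium can be computed numerically in a stylized contest of this kind. The sketch below is a toy parametrization, not the paper's model: it assumes the bug exists for sure, search costs are drawn uniformly from [0, 1], each searcher detects the bug independently with probability sigma, and a single prize R goes to one finder chosen uniformly among all finders. All parameter values are illustrative.

```python
def equilibrium_threshold(n, R=0.5, sigma=0.5, tol=1e-10):
    """Symmetric participation threshold c* in a toy winner-takes-all contest.

    Illustrative assumptions (not the paper's exact model):
    - the bug exists for sure; each of n agents has a private search
      cost drawn from U[0, 1], so the participation probability is c*;
    - a searching agent finds the bug independently with prob. sigma;
    - one prize R is awarded, split by a uniform draw among finders.

    An agent searches iff her cost is below c*, where c* solves the
    marginal agent's indifference condition
        c = R * (1 - (1 - sigma*c)**n) / (n * c),
    which uses E[1/(1+Bin(m, p))] = (1 - (1-p)**(m+1)) / ((m+1)*p)
    for the expected prize share of a searcher who finds the bug.
    """
    def excess(c):
        # expected prize of the marginal searcher minus her cost
        return R * (1.0 - (1.0 - sigma * c) ** n) / (n * c) - c

    lo, hi = 1e-12, 1.0  # excess > 0 at lo, < 0 at hi: bisect
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if excess(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def prob_bug_found(n, **kw):
    """Equilibrium probability that at least one agent finds the bug."""
    c_star = equilibrium_threshold(n, **kw)
    sigma = kw.get("sigma", 0.5)
    # each agent searches with prob F(c*) = c*, then finds w.p. sigma
    return 1.0 - (1.0 - sigma * c_star) ** n

for n in (1, 2, 5, 20, 100):
    print(n, round(equilibrium_threshold(n), 4), round(prob_bug_found(n), 4))
```

In this parametrization the threshold falls as the crowd grows (for n = 1 it is exactly R*sigma = 0.25), illustrating the dilution of individual incentives that underlies the equilibrium analysis; the crowd-size and zero-cost results above depend on the cost distribution and do not follow from this toy example.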

In the next step of the research, we examine how inserting known (artificial) bugs can be useful to the organization, for at least three reasons: to motivate contestants, to screen out false submissions, and to lower the financial commitment. We conjecture that, once the additional reward payments are taken into account, it may be optimal to insert several known bugs, some of them perhaps only with a certain probability.
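The motivational channel can be seen in a back-of-envelope calculation. The sketch below is a hypothetical simplification that ignores competition among agents: a real bug exists with probability p, the designer plants an artificial bug with probability q, a searcher detects each existing bug independently with probability sigma, and the bounties R_real and R_art are paid per bug found. All names and numbers are illustrative assumptions, not quantities from the paper.

```python
def expected_reward(p, q, sigma, R_real, R_art):
    """Expected bounty income of a single searcher (competition ignored).

    Toy setup: a real bug exists with prob. p; an artificial bug is
    planted with prob. q; each existing bug is found independently
    with prob. sigma and pays its own bounty.
    """
    return sigma * (p * R_real + q * R_art)

# An agent with search cost c participates iff c <= expected_reward(...),
# so raising q (or R_art) raises the participation threshold, while the
# designer's expected extra payout is only sigma * q * R_art.
base     = expected_reward(p=0.4, q=0.0, sigma=0.5, R_real=1.0, R_art=0.2)
with_art = expected_reward(p=0.4, q=0.5, sigma=0.5, R_real=1.0, R_art=0.2)
print(base, with_art)  # the second value is strictly larger
```

Even a cheap artificial bounty thus shifts the marginal agent's incentive, which is why randomizing over several planted bugs may dominate planting none; the screening and commitment channels require the richer model.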


Publications:

H. Gersbach, A. Mamageishvili and F. Pitsuwan
Decentralized Attack Search and the Design of Bug Bounty Schemes
In Proceedings of the 16th International Symposium on Algorithmic Game Theory (SAGT), 2023 [pdf]

Working Papers and Preprints:

H. Gersbach, A. Mamageishvili and F. Pitsuwan, “Decentralized Attack Search and the Design of Bug Bounty Schemes,” Preprint, 2023 [pdf]

[1], accessed November 22, 2022.