In recent years, tools from game theory and no-regret learning have led to a powerful and practical framework for designing machine learning training algorithms that enforce various notions of fairness. This framework (necessarily) results in empirical tradeoffs (Pareto frontiers) between fairness and accuracy that can be chosen by stakeholders. After surveying these developments, I will describe an alternative approach, inspired by the concept of “bias bounties,” that sidesteps such tradeoffs. In this approach, a simple algorithm can incorporate proposed improvements to a trained model in a way that monotonically improves not only subgroup fairness but also overall accuracy. Time permitting, I will describe an ongoing communal experiment in such a bias bounty.
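To make the monotone-improvement idea concrete, here is a minimal sketch (not the speaker's exact algorithm) of a bias-bounty style update rule: a proposed improvement is a pair consisting of a group and a candidate model, and it is accepted only if the candidate is more accurate than the current model on that group. Because predictions change only inside the group, accepting a proposal can never decrease overall accuracy. All function names and the acceptance criterion below are illustrative assumptions.

```python
# Sketch of a bias-bounty update rule (illustrative, not the talk's exact
# algorithm). A proposal is a pair (group, candidate): `group` is a predicate
# identifying a subgroup, `candidate` is an alternative predictor for it.

def accuracy(model, xs, ys):
    """Fraction of examples the model labels correctly."""
    return sum(model(x) == y for x, y in zip(xs, ys)) / len(xs)

def try_accept(current, group, candidate, xs, ys):
    """Return (possibly updated model, accepted?) for a proposed improvement.

    Accept only if `candidate` beats `current` on the proposed subgroup.
    Since the patched model changes predictions only inside the group,
    overall accuracy can only go up -- updates are monotone.
    """
    in_group = [(x, y) for x, y in zip(xs, ys) if group(x)]
    if not in_group:
        return current, False
    gx, gy = zip(*in_group)
    if accuracy(candidate, gx, gy) > accuracy(current, gx, gy):
        # Patch: defer to the candidate inside the group, else keep current.
        def patched(x, current=current, group=group, candidate=candidate):
            return candidate(x) if group(x) else current(x)
        return patched, True
    return current, False
```

Iterating this rule over a stream of community-submitted proposals yields the monotone improvement in both subgroup and overall accuracy described above, under the assumption that proposals are evaluated on held-out data.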