Selection guides stochastically evolving populations towards fit genotypes and phenotypes that would be highly unlikely under neutrality. A perspective dating back to Kimura (1961) is that this constitutes an accumulation of information in the genome, and that such information is costly. But defining this information and quantifying its cost have been challenging. We propose a framework to do this, which leads to intuitive and very general results. The central quantity is the Kullback-Leibler divergence between the distributions of genotype frequencies with and without selection. First, we show that this population-level information sets an upper bound on the information at the level of the genotype and the phenotype, limiting how precisely they can be specified by selection. Then we ask how much information can be accumulated and maintained at a given cost, measured as variation in fitness. We find links with control theory, and we show that information is cheapest when it is distributed among many loci.
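For concreteness, the population-level information mentioned above can be written schematically as a Kullback-Leibler divergence; the notation below ($I_{\text{pop}}$, $P_{\text{sel}}$, $P_{\text{neutral}}$, and the configuration variable $x$) is illustrative and not taken from the abstract:

$$
I_{\text{pop}} \;=\; D_{\mathrm{KL}}\!\left(P_{\text{sel}} \,\Big\|\, P_{\text{neutral}}\right)
\;=\; \sum_{x} P_{\text{sel}}(x)\,\log \frac{P_{\text{sel}}(x)}{P_{\text{neutral}}(x)},
$$

where $x$ ranges over genotype-frequency configurations of the population, $P_{\text{sel}}$ is their distribution under selection, and $P_{\text{neutral}}$ is the corresponding distribution under neutral evolution.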
MathBio Seminar
Tuesday, February 15, 2022 - 4:00pm
Michal Hledik
IST Austria