Math 480 Course Notes -- May 23, 1996

Crash Course on Probability, Part II

Two interesting random variables came up in response to homework problems from the preceding lecture. The first could be called a geometric random variable (with parameter r). It is defined on the sample space {0,1,2,3,...} of non-negative integers and, given a fixed value of r between 0 and 1, has probability function given by

p(k) = (1-r)r^k,    k = 0, 1, 2, 3, ...

(Actually, in class we only considered the case r = 1/2, and the p(k) from class is the p(k-1) here, but the general case isn't any harder.) The geometric series formula

\sum_{k=0}^{\infty} r^k = \frac{1}{1-r}    (valid for 0 < r < 1)
shows that p(k) is a probability function. We can compute the expectation of a random variable X having a geometric distribution with parameter r as follows (the trick is reminiscent of what we used in the last lecture on the Poisson distribution):

E(X) = \sum_{k=0}^{\infty} k(1-r)r^k = (1-r)r \sum_{k=1}^{\infty} k r^{k-1} = (1-r)r \frac{d}{dr}\left(\frac{1}{1-r}\right) = \frac{(1-r)r}{(1-r)^2} = \frac{r}{1-r}.
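For the case r = 1/2 considered in class, this gives

E(X) = \frac{1/2}{1 - 1/2} = 1,

and since the p(k) from class is the p(k-1) here, the class version of the variable (defined on {1,2,3,...}) has expectation 1 + 1 = 2.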
Another distribution given in response to the homework assignment was again infinite and discrete, defined on the positive integers {1,2,3,...} by the function p(k) = 6/(pi^2 k^2). This is a probability function based on the famous formula due to Euler:

\sum_{k=1}^{\infty} \frac{1}{k^2} = \frac{\pi^2}{6}
But an interesting thing about this distribution concerns its expectation: because the harmonic series diverges to infinity, we have that

E(X) = \sum_{k=1}^{\infty} k \cdot \frac{6}{\pi^2 k^2} = \frac{6}{\pi^2} \sum_{k=1}^{\infty} \frac{1}{k} = \infty.
So the expectation of this random variable is infinite. (Can you interpret this in terms of an experiment whose outcome is a random variable with this distribution?)
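One way to get a feeling for this is to simulate the experiment and watch the running average of many independent samples: because the expectation is infinite, the average never settles down, but keeps drifting upward as occasional very large values of k turn up. Here is a short Python sketch (the sampler simply inverts the cumulative distribution term by term, and truncates the extreme tail so the loop always finishes):

    import math
    import random

    def sample_k():
        """Draw one k with probability p(k) = 6/(pi^2 k^2), k = 1, 2, 3, ...,
        by accumulating the cumulative distribution until it passes a uniform draw."""
        u = random.random()
        cdf = 0.0
        for k in range(1, 10**7):
            cdf += 6.0 / (math.pi ** 2 * k ** 2)
            if u <= cdf:
                return k
        return 10**7  # extremely rare: truncate the far tail

    def running_average(n):
        """Average of n independent samples; with an infinite expectation this
        tends to grow with n instead of converging."""
        return sum(sample_k() for _ in range(n)) / n

    for n in (10, 100, 1000, 10000):
        print(n, running_average(n))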


Moments, variance, etc.

The preceding example leads more or less naturally to a discussion of the "moments" of a random variable. Before discussing these, we note that if X is a random variable (discrete or continuous), it is possible to define related random variables by taking various functions of X, say X^2 or sin(X) or whatever.


Functions of a random variable

If Y = f(X) for some function f, then the probability of Y being in some set A is defined to be the probability of X being in the set f^{-1}(A), the set of all x with f(x) in A.
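For instance, if f(x) = x^2 and A is the interval [a,b] with 0 <= a <= b, then

f^{-1}([a,b]) = [-\sqrt{b}, -\sqrt{a}] \cup [\sqrt{a}, \sqrt{b}],

so the probability that Y = X^2 lies in [a,b] is the probability that X lies in one of these two intervals.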

As an example, consider an exponentially distributed random variable X with parameter lambda = 1. Let Y = X^2. Since X can only be positive, the probability that Y is in the interval [a,b] is the same as the probability that X is in the interval [sqrt(a), sqrt(b)].

We can calculate the probability density function p(y) of Y by recalling that the probability that Y is in the interval [0,y] is the integral of p from 0 to y. In other words, p(y) is the derivative of the function h(y), where h(y) = the probability that Y is in the interval [0,y]. But h(y) is the same as the probability that X is in the interval [0, sqrt(y)]. We calculate:

h(y) = \int_0^{\sqrt{y}} e^{-x}\,dx = 1 - e^{-\sqrt{y}},    so    p(y) = \frac{d}{dy} h(y) = \frac{e^{-\sqrt{y}}}{2\sqrt{y}}    for y > 0.
There are two ways to calculate the expectation of Y. The first is obvious: we can integrate y p(y). The other is to make the change of variables y = x^2 in this integral, which will yield (check this!) that the expectation of Y = X^2 is

E(Y) = \int_0^{\infty} y \, \frac{e^{-\sqrt{y}}}{2\sqrt{y}}\,dy = \int_0^{\infty} x^2 e^{-x}\,dx = 2.
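To check the last integral, integrate by parts twice:

\int_0^{\infty} x^2 e^{-x}\,dx = \Big[-x^2 e^{-x}\Big]_0^{\infty} + 2\int_0^{\infty} x e^{-x}\,dx = 2\left(\Big[-x e^{-x}\Big]_0^{\infty} + \int_0^{\infty} e^{-x}\,dx\right) = 2.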
More generally, if f(x) is any function, then the expectation of the function f(X) of the random variable X is

E(f(X)) = \int_{-\infty}^{\infty} f(x)\,p(x)\,dx    (or  E(f(X)) = \sum_{k} f(k)\,p(k)  in the discrete case),
where p(x) is the probability density function of X if X is continuous, or the probability function of X if X is discrete.


Now we can talk about the moments of a random variable. The rth moment of X is defined to be the expected value of X^r. In particular, the first moment of X is its expectation. If s > r, then having an sth moment is a more restrictive condition than having an rth one (this is a convergence issue as x approaches infinity, since x^s > x^r for large values of x).
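For instance, the exponential random variable with parameter lambda = 1 from the example above has moments of all orders: for any positive integer r, integrating by parts r times gives

E(X^r) = \int_0^{\infty} x^r e^{-x}\,dx = r!,

so the first moment is 1 and the second moment is 2 (which matches the expectation of Y = X^2 computed above).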

A more useful set of moments is called the set of central moments. These are defined to be the rth moments of the variable X - E(X). In particular, the second moment of X - E(X) is called the variance of X (it is a crude measure of the extent to which the distribution of X is spread out from its expectation). It is a useful exercise to work out that

Var(X) = E\big((X - E(X))^2\big) = E(X^2) - (E(X))^2.
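The computation uses only the fact that expectation is linear and that E(X) is a constant:

E\big((X - E(X))^2\big) = E\big(X^2 - 2X\,E(X) + (E(X))^2\big) = E(X^2) - 2E(X)\,E(X) + (E(X))^2 = E(X^2) - (E(X))^2.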
As an example, we compute the variance of the uniform and exponential distributions:
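Taking the uniform distribution on [0,1] and the exponential distribution with parameter lambda as representative cases (other parameter choices work the same way):

For the uniform distribution on [0,1]:

E(X) = \int_0^1 x\,dx = \frac{1}{2},    E(X^2) = \int_0^1 x^2\,dx = \frac{1}{3},    so    Var(X) = \frac{1}{3} - \frac{1}{4} = \frac{1}{12}.

For the exponential distribution with parameter lambda:

E(X) = \frac{1}{\lambda},    E(X^2) = \int_0^{\infty} x^2\,\lambda e^{-\lambda x}\,dx = \frac{2}{\lambda^2},    so    Var(X) = \frac{2}{\lambda^2} - \frac{1}{\lambda^2} = \frac{1}{\lambda^2}.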