This project is due for presentation on Tuesday, April 14. I’ll be traveling, but Matthew Stone will attend your presentation and also give a related guest lecture.
The goal of this project is to use Monte Carlo integration to compute a conditional expectation of interest to you. Mathematically speaking, a conditional expectation is defined by three things:
- a probability distribution, which you can express as a program that generates random samples;
- a condition, which you can express as a program that filters the samples;
- a random variable whose expected value interests you, which you can express as a program that maps each sample to a number.
For example, if you want to know the average number of words in English sentences that begin with the word “colorless”, then you can define a conditional expectation by writing (all three parts are sketched after this list)
- a program to generate random English sentences;
- a program to check if an English sentence begins with the word “colorless”;
- a program to count the number of words in an English sentence.
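Here is a minimal sketch of those three programs in Python, assuming a toy unigram sentence generator as a stand-in for a real model of English; the vocabulary and length distribution below are made up purely for illustration.

```python
import random

# A toy stand-in for a real model of English: a sentence is a random-length
# sequence of words drawn uniformly from a tiny vocabulary. Both the
# vocabulary and the length distribution here are hypothetical.
VOCABULARY = ["colorless", "green", "ideas", "sleep", "furiously", "the", "cat"]

def generate_sentence():
    """Program 1: the distribution -- generate a random 'English' sentence."""
    length = random.randint(1, 10)
    return [random.choice(VOCABULARY) for _ in range(length)]

def begins_with_colorless(sentence):
    """Program 2: the condition -- keep sentences that begin with 'colorless'."""
    return sentence[:1] == ["colorless"]

def word_count(sentence):
    """Program 3: the random variable -- map each sentence to a number."""
    return len(sentence)
```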
It is simple to put these parts together into a slow but obviously correct program that computes the conditional expectation you want. Do that first (a sketch follows below), then try to make the program go faster by generating samples with more even weights, without sacrificing correctness (check that the faster version still gives the same answer!).
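For instance, with the toy pieces above, the slow-but-correct version just generates, filters, and averages. The faster version below exploits a special property of the toy generator: fixing the first word to “colorless” draws from the conditional distribution exactly, so every sample carries the same weight. This is a sketch under that assumption, not a recipe; in general, forced samples carry unequal weights that you must track.

```python
def naive_conditional_expectation(n_samples):
    """Slow but obviously correct: generate, filter, average."""
    total, hits = 0.0, 0
    for _ in range(n_samples):
        sentence = generate_sentence()
        if begins_with_colorless(sentence):
            total += word_count(sentence)
            hits += 1
    return total / hits  # most samples are wasted; fails if hits == 0

def smarter_conditional_expectation(n_samples):
    """Faster: no sample is wasted. Under the toy unigram model, fixing the
    first word samples the conditional distribution exactly, so all weights
    are equal and a plain average remains correct."""
    total = 0.0
    for _ in range(n_samples):
        length = random.randint(1, 10)
        rest = [random.choice(VOCABULARY) for _ in range(length - 1)]
        total += word_count(["colorless"] + rest)
    return total / n_samples
```

Both estimate the same quantity (here, an average length of 5.5, since sentence length is independent of the first word under the toy model), but the second reaches a given accuracy with far fewer samples because none are discarded.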
Often you are interested not in a particular conditional expectation but in a whole family of them. For example, you might want to tabulate, for each word, the average length of English sentences that begin with that word. Or you might just want to know, for each word, the percentage of English sentences that begin with that word. When simulating light, you might want to know the percentage of photons that enter the eye from each range of directions (that is, the percentage of photons that arrive at each sector of the hemisphere that approximates the eye). You can compute many conditional expectations at once by keeping multiple running totals, one set per member of the family.
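A sketch of that bookkeeping, again using the hypothetical toy generator from above: one pass over the samples, with one pair of running totals per first word, estimates the whole family at once.

```python
from collections import defaultdict

def tabulate_by_first_word(n_samples):
    """Many conditional expectations in one pass: per-word running totals."""
    counts = defaultdict(int)          # how many sentences begin with each word
    length_totals = defaultdict(int)   # total length of those sentences
    for _ in range(n_samples):
        sentence = generate_sentence()
        first = sentence[0]
        counts[first] += 1
        length_totals[first] += word_count(sentence)
    return {word: (counts[word] / n_samples,            # fraction beginning with word
                   length_totals[word] / counts[word])  # their average length
            for word in counts}
```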
For ideas about how probabilities and Bayesian reasoning apply to cognition, read the class mailing list, and check out some of the following papers. Several of them are from the Trends in Cognitive Sciences special issue on probabilistic models of cognition (10(7), 2006).
- “Motion illusions as optimal percepts” (Weiss, Simoncelli, and Adelson, 2002), introduced by the commentary “Illusions, perception and Bayes” (Geisler and Kersten, 2002)
- “Vision as Bayesian inference: Analysis by synthesis?” (Yuille and Kersten, 2006)
- “Probabilistic models of language processing and acquisition” (Chater and Manning, 2006)
- “Theory-based Bayesian models of inductive learning and reasoning” (Tenenbaum, Griffiths, and Kemp, 2006)
No matter what domain you are interested in, please be sure to start small!
Compare the convergence speed of the simpler and more sophisticated methods: do you get a more stable average or a less noisy image when you make your sampling algorithm more clever?
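One hypothetical way to run that comparison with the toy estimators sketched earlier: rerun each at increasing sample counts and watch how quickly the estimates settle.

```python
# Compare convergence: both columns should approach the same limit,
# but the smarter estimator should wander less at each sample count.
for n in [100, 1_000, 10_000, 100_000]:
    print(n, naive_conditional_expectation(n), smarter_conditional_expectation(n))
```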
This project description is surely incomplete. Please ask questions and express concerns about it.