cs504: Computational Modeling
2009-01-18 23:57

16:198:504, Spring 2008
Classes: Mon 5–8pm, RuCCS Playroom (Psych A139)

Your instructors are:

Please keep in touch with the class by sending email to “comp dash model at rams dot rutgers dot edu”. If you are not on this mailing list, please ask Chung-chieh Shan to add you. You must be on the list in order to post to it. All postings are archived.

Synopsis

Computational models are executable explanations of complex phenomena. In this course, we will build and debug computational models that include both theories of perception that scientists argue about and theories of the world that agents entertain in order to figure out what to do. These explanations bring to life the tradeoff between truth and beauty.

The first half of this course concerns interactive simulations (games) and generalizations about their behavior (types). The second half of this course concerns the declarative vs procedural specification of hypotheses (search) and their probabilities (weights).

Requirements

The requirements of this course are:

The central place of skills development in the class means that typical students will not be well served by attending as if they were auditors.

Audience

This course is for students both inside and outside computer science who want to collaborate on interdisciplinary research. It requires the ability to talk and think computationally about programs, data, processes, abstraction, and proofs, such as is taught and exercised in the course “Computational Thinking” (16:198:503). While the course will not count towards graduate requirements in computer science (other than the graduate school’s general requirement that PhD students take 48 credits of coursework), students who complete it will be ready for Rutgers’s advanced graduate courses in artificial intelligence.

Evolving outline

1/28

Explanation ≈ simulation + debugging

Homework: the same-fringe problem
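
For concreteness, here is one way to state the problem as code, assuming Haskell (the outline does not fix a language, so take this as just one possible phrasing): two trees have the same fringe when their leaves, read left to right, form the same sequence.

```haskell
-- A sketch of the same-fringe problem (Haskell assumed; the homework
-- may use another language or representation).
data Tree a = Leaf a | Branch (Tree a) (Tree a)

-- Naive solution: flatten each tree into its list of leaves and compare.
fringe :: Tree a -> [a]
fringe (Leaf x)     = [x]
fringe (Branch l r) = fringe l ++ fringe r

-- In a strict language this builds both lists in full even when the very
-- first leaves already differ; the classic challenge is to compare the
-- fringes incrementally instead.
sameFringe :: Eq a => Tree a -> Tree a -> Bool
sameFringe t u = fringe t == fringe u
```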

2/4

Modular, interactive streams of discrete events

Reading and writing: modules

Project 1 begins.

The same-fringe homework is due on Thursday 2/7; if you have not already done so, please email your work to me (noting your collaborators, if any).

2/11

Probability 1

Reading: basic probability.

Problem set 1 on probability, due on Thursday 2/21. Please email or hand in your work (scanned is just fine).

2/18

Probability 2

Reading: more basic probability.

Problem set 2 on probability, due on Monday 3/3. Please email or hand in your work (scanned is just fine).

Reading and writing: perception as inference

2/25

Present project 1.

Why does the sun appear with uniform color and brightness to the eye?

3/3

Monte Carlo

Project 2 begins: Monte Carlo integration.
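
As a warm-up, here is a hedged sketch of the basic idea in Haskell, using System.Random (the project’s actual language and setup may well differ): approximate an integral by averaging the integrand at uniformly drawn sample points.

```haskell
-- Monte Carlo integration in miniature (Haskell and System.Random are
-- assumptions made for illustration, not requirements of the project).
import System.Random (mkStdGen, randomRs)

-- Estimate the integral of f over [a,b] as (b - a) times the average of
-- f at n points drawn uniformly from [a,b].
monteCarlo :: (Double -> Double) -> Double -> Double -> Int -> Double
monteCarlo f a b n = (b - a) * sum (map f xs) / fromIntegral n
  where xs = take n (randomRs (a, b) (mkStdGen 2008))  -- fixed seed, for reproducibility

-- Example: the integral of x^2 over [0,1] is 1/3; the estimate wobbles
-- around that value and tightens as n grows.
main :: IO ()
main = print (monteCarlo (\x -> x * x) 0 1 100000)
```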

Please read at least one of the following chapters.

3/10

Causal structure

3/24

Class canceled.

3/31

Reducing naïveté in sampling

4/7

Learning. Not only can objective truth be thought of as a probability (the likelihood), but so can subjective beauty such as Occam’s razor, modeled by a prior distribution (aka innate universal): Bayesian probability treats ignorance as uncertainty, learning as inference, and belief as a distribution over hypotheses, where rational action maximizes expected utility. Bayes’s rule lets us update our prior belief with new experience to give a posterior belief, which pits truth against beauty, fit against generalization, variance against bias. Examples: Kalman filtering; betting on an unknown coin; learning the words of an artificial language.
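
To make the coin example concrete, here is a small sketch in Haskell, assuming a discrete grid of hypotheses about the coin’s bias (the in-class treatment may use a different representation): the prior is a distribution over biases, and each observed flip reweights it by its likelihood and renormalizes.

```haskell
-- Belief as a distribution over hypotheses, for an unknown coin.
-- (The discrete grid of candidate biases is an assumption made here
-- purely for illustration.)
type Dist = [(Double, Double)]   -- (hypothesized bias, probability)

-- A uniform prior over eleven candidate biases 0.0, 0.1, ..., 1.0.
prior :: Dist
prior = [(h, 1 / 11) | h <- [0, 0.1 .. 1]]

-- Bayes's rule: weight each hypothesis by the likelihood of the new
-- observation (True = heads), then renormalize to get the posterior.
update :: Dist -> Bool -> Dist
update d heads = [(h, p * like h / z) | (h, p) <- d]
  where like h = if heads then h else 1 - h
        z      = sum [p * like h | (h, p) <- d]

-- The posterior after three heads and one tail favors biases near 0.75,
-- but keeps some probability on the other hypotheses.
posterior :: Dist
posterior = foldl update prior [True, True, True, False]
```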

4/14

Guest lecture by Kobi Gal: decision making in graphical models and multiagent systems.

4/18 (specially (re)scheduled class)

Present project 2.

4/21

Constraints. Besides merging samples, another way to reduce computational complexity in the face of a combinatorial explosion of possibilities is to propagate constraints, again following dependency structure. Two examples of this strategy are how humans solve Sudoku puzzles and how computers interpret line drawings. The latter task is already illustrative in the simple case of trihedral polyhedra. In the worst case, we still need to resort to backtracking search. When the search space is large, approximation (such as pruning) and heuristics become essential.
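
Here is a hedged sketch of the propagate-then-backtrack idea in Haskell, using a toy all-different constraint over list-valued domains (the project and the line-drawing reading use richer constraints and labels):

```haskell
-- Propagation plus backtracking, in miniature.  Variables are positions
-- in a list; each domain lists the values a variable may still take; the
-- only constraint here is that all variables take distinct values.
import Data.List (delete, nub)

type Domain = [Int]

-- Propagation: once a variable is pinned to a single value, delete that
-- value from every other domain; repeat until nothing changes.
propagate :: [Domain] -> [Domain]
propagate ds
  | ds' == ds = ds
  | otherwise = propagate ds'
  where pinned      = [v | [v] <- ds]
        ds'         = map prune ds
        prune [v]   = [v]
        prune d     = foldr delete d pinned

-- Backtracking: when propagation leaves a choice, split on the first
-- undecided variable and search each branch.
solve :: [Domain] -> [[Int]]
solve ds0
  | any null ds   = []                           -- dead end
  | all single ds = [vals | vals == nub vals]    -- consistent solution
  | otherwise     = concat [solve (set i v) | v <- ds !! i]
  where ds      = propagate ds0
        single  = (== 1) . length
        vals    = map head ds
        i       = length (takeWhile single ds)
        set j v = take j ds ++ [[v]] ++ drop (j + 1) ds

-- Example: solve (replicate 3 [1,2,3]) enumerates the six permutations.
```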

Project 3 begins: Constraint satisfaction.

Reading: line-drawing interpretation as constraint satisfaction.

Reading and writing: write an abstract of the second project (due on 4/24). Please email your abstract to the class.

Reading and writing: In an often-cited review article “Approaches to biological information processing” (Science 190(4217):875–876, 1975), David Marr writes:

The important question is, What processes need implementing? … Again, the primary unresolved issue is what functions you want implemented, and why.

Do you agree with this sentiment? To elaborate more concretely, pick a question regarded as important in your field that either confronts Marr’s issue or neglects it. In an email message to the course mailing list (due 5/1), describe the question and discuss whether Marr’s critique applies to it.

4/28

Disabbreviation demo: the dependency structure of a hidden Markov model in action.

Local search. Sometimes we can compute the best model just using calculus. Other times we can only guess a model and then improve it step by step. This iterative process is called gradient descent because it is like trying to hike to the bottom of some terrain: one usually follows a steep downhill, but bumpy terrain calls for jumping, inertia, and earthquakes. Examples: Gaussians and their mixtures; a hidden Markov model of visual input over time.
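
A minimal sketch of the hiking metaphor in Haskell, assuming a one-dimensional objective, a numerically estimated slope, and a fixed step size (the class examples, mixtures of Gaussians and hidden Markov models, call for more elaborate updates):

```haskell
-- Gradient descent in one dimension: repeatedly step downhill along a
-- numerically estimated slope.  (The objective, step size, and stopping
-- rule here are illustrative assumptions, not part of the course.)
descend :: (Double -> Double) -> Double -> Double -> Int -> Double
descend _ _    x 0 = x
descend f rate x n = descend f rate (x - rate * slope) (n - 1)
  where slope = (f (x + h) - f (x - h)) / (2 * h)   -- central difference
        h     = 1e-6

-- Example: minimizing (x - 3)^2 from a guess of 0 walks toward x = 3.
main :: IO ()
main = print (descend (\x -> (x - 3) ^ 2) 0.1 0 100)
```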

Tuesday 5/6 3pm

Present project 3.