When discussing epistemology,
and particularly when disparaging
Karl Popper as a half-wit
(which still makes him waaaay ahead of
the usual crowd of witless zeroes who are paraded as philosophers),
I often like to tell of
this compelling solution to the
problem of induction.
Popper believed such a solution was impossible
(and redefined "induction" so as to prove his point with Hume's argument),
even though all humans (nay, mammals)
had been showing him wrong for millennia,
and though a formal solution had already been hinted at.
Well, it so happens that Ray Solomonoff has written a nice historical account of his discovery (kudos to CiteSeer). Lots of reflective epistemological insight there — and a lot of other great heroes who played a role in this story, including (surprise!?) a key contribution by the founders of LISP.
Of course, Solomonoff Induction is not meant to formalize exactly the way we either think or ought to think. Most importantly, it doesn't take into account the cost of thought, which limits the depth and precision of our probability computations. But that's not the point. The point is that we can formalize the way that induction is being done, and capture in an understandable way the essence of understanding — and its limits, as Chaitin would no doubt remark.
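For concreteness, here is the standard formalization (my notation, not Solomonoff's original): fix a universal prefix machine $U$, and define the universal prior of a finite bit string $x$ as the total weight of all programs whose output begins with $x$,

$$M(x) \;=\; \sum_{p \,:\, U(p) = x\ast} 2^{-|p|},$$

where $|p|$ is the length of program $p$ in bits. Prediction is then just conditioning: the probability that the next bit is $1$ after seeing $x_1 \dots x_n$ is

$$M(1 \mid x_1 \dots x_n) \;=\; \frac{M(x_1 \dots x_n 1)}{M(x_1 \dots x_n)}.$$

Short programs get exponentially more prior weight than long ones, which is exactly the formal counterpart of preferring simpler explanations. The catch, as noted above, is that $M$ is not computable — hence the cost-aware variants below.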
PS: of course, there exist or can be invented infinitely many variants of Solomonoff Induction that take into account constraints on computation such as costs, reversibility, etc. These variants can serve a variety of purposes, from modelling human learning processes to building machines that learn and otherwise mine data. But the details of these variants are secondary and come naturally; the great discovery was Solomonoff Induction itself.
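To make the flavor of such a computable variant concrete, here is a toy sketch (my illustration, not Solomonoff's construction): instead of a universal machine, we use a tiny hand-picked hypothesis space, weight each hypothesis by 2^-(assumed program length), discard hypotheses inconsistent with the data, and predict the next bit by posterior-weighted voting. The hypotheses and their code lengths are arbitrary assumptions chosen for the example.

```python
from fractions import Fraction

# Toy hypothesis space: each entry maps a name to a generator of the first n
# bits of an infinite sequence, plus an assumed "program length" in bits.
# Both the hypotheses and the lengths are illustrative, not canonical.
def constant_zeros(n): return [0] * n
def constant_ones(n):  return [1] * n
def alternating(n):    return [i % 2 for i in range(n)]

HYPOTHESES = {
    "zeros": (constant_zeros, 3),
    "ones":  (constant_ones, 3),
    "alt":   (alternating, 5),
}

def predict_next(observed):
    """Posterior-weighted probability (exact, as a Fraction) that the
    next bit is 1, given the observed prefix."""
    observed = list(observed)
    # Keep only hypotheses whose output starts with the observed data,
    # each weighted by its prior 2^-|p|.
    weights = {}
    for name, (gen, length) in HYPOTHESES.items():
        if gen(len(observed)) == observed:
            weights[name] = Fraction(1, 2 ** length)
    total = sum(weights.values())
    if total == 0:
        return Fraction(1, 2)  # no surviving hypothesis: fall back to uniform
    # Sum the weight of surviving hypotheses that predict a 1 next.
    p_one = Fraction(0)
    for name, weight in weights.items():
        gen, _ = HYPOTHESES[name]
        if gen(len(observed) + 1)[-1] == 1:
            p_one += weight
    return p_one / total

# After seeing 0,1,0 only the alternating hypothesis survives,
# so the predicted probability of a 1 next is exactly 1.
print(predict_next([0, 1, 0]))
```

The point of the sketch is only the shape of the computation: a prior biased toward short programs, Bayesian elimination of falsified hypotheses, and prediction by mixture. Replacing the three toy generators with an enumeration of all programs (run with time bounds, à la Levin search) is what the cost-aware variants amount to.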