Tuesday, October 25, 2011

BAYES RULE





REVISED: Wednesday, February 4, 2015



You will learn Bayes Rule.

I.  BAYES RULE

Named after the 18th-century British mathematician Rev. Thomas Bayes, a Presbyterian minister.

P( A | B ) = ( P( B | A ) * P(A) ) / P( B )

P( A | B ) is the posterior, probability of event A given event B occurred.

P( B | A ) is the likelihood, probability of event B given event A occurred.

P( A ) is the prior, the probability of event A before observing B.

P( B ) is the marginal likelihood.

Links a conditional probability to its inverse.

posterior = (likelihood*prior)/(marginal likelihood)
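
As a minimal sketch (the function and variable names are my own, not part of the original rule), this maps directly to Python:

    def bayes_posterior(likelihood, prior, marginal):
        # P(A|B) = P(B|A) * P(A) / P(B)
        return likelihood * prior / marginal

    # Illustrative numbers: P(B|A) = 0.9, P(A) = 0.01, P(B) = 0.107
    print(bayes_posterior(0.9, 0.01, 0.107))  # -> about 0.084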

Bayesian reasoning is often counterintuitive, as the cancer-test example below shows.

Bayes' theorem is particularly useful for inferring causes from their effects.

Initial Beliefs + Recent Objective Data = A New and Improved Belief.

Bayesian reasoning brings clarity when information is scarce and outcomes uncertain.

By expressing all information in terms of probability distributions, Bayes can produce reliable estimates from scant and uncertain evidence.

Probability is the fraction of times an event is expected to occur if the experiment is repeated for a large number of trials.
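
A quick simulation makes this frequency interpretation concrete (a hypothetical sketch using a fair die):

    import random

    # Estimate the probability of rolling a six by brute repetition.
    trials = 100_000
    hits = sum(1 for _ in range(trials) if random.randint(1, 6) == 6)
    print(hits / trials)  # approaches 1/6 (about 0.167) as trials grows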

Joint probability is the likelihood of two events occurring together, at the same point in time.

The probability of two events, A and B, both occurring is expressed as:

P(A,B)

Joint probability can also be expressed as:


P(A ∩ B)

It is read as the probability of the intersection of A and B.
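
In general the joint probability follows the product rule, P(A, B) = P(A | B) * P(B); for independent events it reduces to P(A) * P(B). A small sketch with placeholder numbers:

    # Product rule: P(A, B) = P(A | B) * P(B)
    p_b = 0.5           # P(B), placeholder value
    p_a_given_b = 0.3   # P(A | B), placeholder value
    print(p_a_given_b * p_b)  # P(A, B) = 0.15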


A.  Complementary Event of Bayes Rule

"A" is not observable.

"B" is our test, it is observable.

We know the prior probability for "A" which is P(A).

We know the conditional probability for "B" given "A" which is P(B|A).

What we care about is diagnostic reasoning, which is the inverse of causal reasoning:

P( A | B )

P( A | ¬B )

It takes three parameters to describe the entire Bayes network:

One parameter for P( A ), from which we can derive P( ¬A ).

Two parameters, P( B | A ) and P( B | ¬A ), from which we can derive P( ¬B | A ) and P( ¬B | ¬A ).

Therefore it takes a total of three parameters to describe the Bayes network.
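
A sketch of how the remaining entries follow from the three parameters (the numeric values are placeholders, not from the original):

    p_a = 0.01             # P(A), placeholder
    p_b_given_a = 0.9      # P(B | A), placeholder
    p_b_given_not_a = 0.2  # P(B | ¬A), placeholder

    p_not_a = 1 - p_a                          # P(¬A)
    p_not_b_given_a = 1 - p_b_given_a          # P(¬B | A)
    p_not_b_given_not_a = 1 - p_b_given_not_a  # P(¬B | ¬A)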

B.  Bayes Rule

P( A | B ) = ( P( B | A ) * P(A) ) / P( B )

The complementary event is ¬A.

P( ¬A | B ) = ( P( B | ¬A ) * P( ¬A ) ) / P( B )

The normalizer is P(B).

We know:

P( A | B ) + P( ¬A | B ) = 1

We can compute Bayes Rule without knowing the normalizer: form the unnormalized terms first, then normalize them so they sum to one.

P'( A | B ) = P( B | A ) * P( A )

P'( ¬A | B ) = P( B | ¬A ) * P( ¬A )

P( A | B ) = ζ * P'( A | B )

P( ¬A | B ) = ζ * P'( ¬A | B )

ζ = 1 / ( P'( A | B ) + P'( ¬A | B ) )
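
As a minimal sketch of this two-step computation (the function name normalized_bayes is my own):

    def normalized_bayes(p_a, p_b_given_a, p_b_given_not_a):
        # Unnormalized terms: P'(A|B) and P'(¬A|B)
        p_prime = p_b_given_a * p_a
        p_prime_not = p_b_given_not_a * (1 - p_a)
        # The normalizer ζ makes the two posteriors sum to one.
        zeta = 1 / (p_prime + p_prime_not)
        return zeta * p_prime, zeta * p_prime_not

    # Placeholder numbers:
    print(normalized_bayes(0.01, 0.9, 0.2))  # -> (about 0.0435, about 0.9565)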

C.  Examples

Two-test cancer example.

It takes three parameters to describe the entire Bayes network:

P( c ) = 0.01 is the probability of a patient having cancer.

P( + | c ) = 0.9 is the probability of receiving a positive test given the patient has cancer.

P( - | ¬c ) = 0.8 is the probability of receiving a negative test given the patient is cancer free.

========================================

From these three parameters we can derive:

P( ¬c ) = 0.99 is the probability of a patient not having cancer.

P( - | c ) = 0.1 is the probability of receiving a negative test given the patient has cancer.

P( + | ¬c ) = 0.2 is the probability of receiving a positive test given the patient is cancer free.
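
Plugging the numbers in gives the posterior after one positive test and, assuming the two tests are conditionally independent given the cancer state (the usual assumption for this example), after two positive tests (a sketch):

    p_c = 0.01         # P(c)
    p_pos_c = 0.9      # P(+ | c)
    p_pos_not_c = 0.2  # P(+ | ¬c)

    # One positive test:
    p1 = p_pos_c * p_c            # P'(c | +)  = 0.009
    q1 = p_pos_not_c * (1 - p_c)  # P'(¬c | +) = 0.198
    print(p1 / (p1 + q1))         # P(c | +) is about 0.0435

    # Two positive tests, conditionally independent given c:
    p2 = p_pos_c ** 2 * p_c            # 0.0081
    q2 = p_pos_not_c ** 2 * (1 - p_c)  # 0.0396
    print(p2 / (p2 + q2))              # P(c | +, +) is about 0.1698

Even after two positive tests the posterior is only about 17%, because the prior P( c ) = 0.01 is so small. This is the counterintuitive result mentioned above.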


You have learned Bayes Rule.

Elcric Otto Circle












