In the Bayesian approach, the central quantity is the posterior probability: the updated probability of a hypothesis once the observed data have been taken into account. It is obtained by combining the prior probability, which summarises what was believed before the data were seen, with the likelihood of the data under each hypothesis. Once the posterior has been calculated, it can be used to predict the probability of future outcomes.
This approach is used extensively in situations involving uncertainty, from forecasting the outcomes of sporting events to estimating the probability of a patient contracting a disease, given their medical history.
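The medical example above can be sketched directly with Bayes' theorem. This is a minimal illustration, not real clinical data: the prevalence, sensitivity, and specificity figures below are made up for the example.

```python
# Hypothetical illustration values: 1% prevalence, 95% sensitivity, 90% specificity.
def posterior_disease(prior, sensitivity, specificity):
    """P(disease | positive test) via Bayes' theorem."""
    p_pos_given_disease = sensitivity          # likelihood of a positive test if ill
    p_pos_given_healthy = 1 - specificity      # false-positive rate if healthy
    # Total probability of a positive test (the evidence term):
    evidence = prior * p_pos_given_disease + (1 - prior) * p_pos_given_healthy
    return prior * p_pos_given_disease / evidence

print(round(posterior_disease(0.01, 0.95, 0.90), 3))  # ≈ 0.088
```

Note how the posterior (about 9%) is far below the test's 95% sensitivity: with a rare disease, most positive results come from the large healthy population, which is exactly the kind of reasoning the prior captures.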
Bayesian analysis also requires prior probabilities to be specified over the full range of possibilities, including outcomes that have not yet appeared in the available data. This is what allows the prediction of events with known or suspected causes, as well as events whose cause is unknown. Specifying the prior may itself involve estimation, since the prior distribution must cover the whole range of possible results.
The posterior probability of the presence or absence of an event is a conditional probability: it is derived by computing, for each of the alternative hypotheses, the probability of the observed data under that hypothesis, and then normalising so the posterior probabilities sum to one. The posterior can also be computed over a range of possible parameter values, yielding a full posterior distribution rather than a single number, including regions where the evidence for a hypothesis is weak or absent.
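The "compute the likelihood under each alternative, then normalise" step can be shown with a grid approximation over a range of parameter values. This is a minimal sketch with made-up data (7 heads in 10 coin flips) and a uniform prior over the coin's bias.

```python
from math import comb

# Grid of candidate values for the coin's bias p, from 0.00 to 1.00.
grid = [i / 100 for i in range(101)]

def likelihood(p, heads=7, flips=10):
    # Binomial likelihood of the observed flips given bias p.
    return comb(flips, heads) * p**heads * (1 - p)**(flips - heads)

# Unnormalised posterior: prior (uniform) times likelihood at each grid point.
unnorm = [(1 / len(grid)) * likelihood(p) for p in grid]
total = sum(unnorm)
posterior = [u / total for u in unnorm]  # normalise so the values sum to one

mode = grid[posterior.index(max(posterior))]
print(mode)  # posterior mode; 0.7 under the uniform prior
```

With a uniform prior the posterior mode coincides with the maximum-likelihood estimate; a more informative prior would pull the mode toward the prior's own peak.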
Bayesian methods rest on the fact that when an event is observed there remains some uncertainty about the underlying state of the world, and that this uncertainty can be quantified by the posterior probability: a conditional probability that is conditioned on the data actually observed.
Bayesian techniques also allow posterior probabilities to be updated sequentially over intervals of time, rather than computed once over a whole period: the posterior after one batch of data serves as the prior for the next. Summary statistics of the posterior, such as its mean, variance, and standard deviation, are then the essential measures of the uncertainty that remains in the distribution.
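Sequential updating can be sketched with the standard Beta-Binomial conjugate pair: a Beta prior on a success probability is updated in place as each batch of observations arrives. The batch counts below are made-up illustration values.

```python
# Start from a uniform Beta(1, 1) prior on the success probability.
a, b = 1.0, 1.0

# Batches of (successes, failures) arriving over successive time intervals
# (hypothetical data for illustration).
batches = [(3, 2), (6, 4), (8, 2)]

for successes, failures in batches:
    # Conjugate update: the posterior after each batch is again a Beta,
    # and serves as the prior for the next batch.
    a += successes
    b += failures

# Posterior summaries: mean, variance, and standard deviation of Beta(a, b).
mean = a / (a + b)
variance = a * b / ((a + b) ** 2 * (a + b + 1))
std_dev = variance ** 0.5
print(round(mean, 3), round(variance, 4))  # 0.667 0.0079
```

Because the Beta family is conjugate to binomial data, updating batch by batch gives exactly the same posterior as processing all the data at once.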
Bayes' theorem also applies to continuous distributions, in which probability is spread over an infinite number of values. Common examples are distributions over real-valued quantities, such as the normal and the log-normal. The posterior can be computed in a number of ways, analytically or numerically, depending on the circumstances of the problem.
Two distributions commonly used in the context of Bayesian analysis are the binomial and the normal. They are quite different in character: the binomial is a discrete distribution over counts, while the normal is a continuous, bell-shaped distribution over real values. It is only in special cases, such as a fair coin, that all outcomes are equally likely; in general the outcome frequencies differ, and for a large number of trials the binomial is well approximated by a normal distribution.
The binomial distribution gives the probability of obtaining a given number of successes in a fixed number of independent trials, each with the same probability of success. It does not incorporate the effects of other variables, such as dependence between trials or variation in the success probability from trial to trial; the distribution rests purely on the assumption that the trials are independent and the sample unbiased.
A related concept is the null distribution: the distribution a statistic would follow under the assumption that there is no real effect, so that any variation is due to chance alone. Comparing observed data against this baseline shows whether that assumption is plausible. If the data fall far outside the null distribution, the assumption is broken, and the same machinery can then serve as a basis for Bayesian analysis, with the probabilities of the alternative explanations estimated from the data obtained.
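A null distribution can be built by simulation. The sketch below, with made-up settings, generates the sampling distribution of the proportion of heads in 50 flips of a fair coin (the "no effect" assumption) and reads off the central 95% range; an observed proportion outside that range would be evidence against the fair-coin assumption.

```python
import random

random.seed(0)  # fixed seed so the simulation is reproducible

def sample_proportion(n=50):
    """Proportion of heads in n flips of a fair coin."""
    heads = sum(1 for _ in range(n) if random.random() < 0.5)
    return heads / n

# Simulate the null distribution of the proportion under the fair-coin assumption.
null_dist = sorted(sample_proportion() for _ in range(2000))

# Central 95% of the null samples (2.5th and 97.5th percentiles):
lo, hi = null_dist[50], null_dist[1949]
print(lo, hi)
```

Observed proportions below `lo` or above `hi` would be surprising under the null assumption; in a Bayesian treatment the same simulated samples can be reweighted by their likelihood to estimate the effect instead.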