A Bayesian confidence propagation neural network (BCPNN) is an artificial neural network inspired by Bayes' theorem: node activations represent the probability ("confidence") that input features or categories are present, synaptic weights are based on estimated correlations, and the spread of activation corresponds to computing posterior probabilities. It was originally proposed by Anders Lansner and Örjan Ekeberg at KTH.
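The correspondence between weights and estimated correlations can be made concrete. A minimal sketch, assuming the standard naive-Bayes-derived form in which the bias of unit j is the log prior log P(x_j) and the weight from unit i to unit j is the log-odds ratio log(P(x_i, x_j) / (P(x_i) P(x_j))); the function names and the smoothing constant `eps` are illustrative, not part of any particular BCPNN implementation:

```python
import numpy as np

def bcpnn_weights(patterns, eps=1e-6):
    """Estimate BCPNN-style weights from binary training patterns.

    patterns: (n_samples, n_units) array of 0/1 activations.
    Returns (w, bias) where bias[j] = log P(x_j) and
    w[i, j] = log( P(x_i, x_j) / (P(x_i) P(x_j)) ).
    eps is a small smoothing constant to avoid log(0).
    """
    patterns = np.asarray(patterns, dtype=float)
    n = patterns.shape[0]
    p = (patterns.sum(axis=0) + eps) / (n + eps)     # marginal probabilities P(x_i)
    pij = (patterns.T @ patterns + eps) / (n + eps)  # joint probabilities P(x_i, x_j)
    bias = np.log(p)                                 # log-prior term
    w = np.log(pij / np.outer(p, p))                 # log-correlation weights
    return w, bias

def bcpnn_support(x, w, bias):
    """Log-posterior 'support' for each unit given input activations x."""
    return bias + x @ w
```

With this estimator, units that co-occur more often than chance get positive weights, units that never co-occur get strongly negative weights, and the support values behave as (unnormalized) log posteriors.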
The basic network is a feedforward neural network with continuous activation. It can be extended to include spiking units and hypercolumns, which represent mutually exclusive or interval-coded features. The network has been used for classification tasks and data mining, for example for the discovery of adverse drug reactions. The units can also be connected as a recurrent neural network; this loses the strict interpretation of activations as probabilities but yields a possible abstract model of biological neural networks and memory.
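Because the units within a hypercolumn code mutually exclusive feature values, their activations are typically normalized so that each hypercolumn's outputs sum to 1 and can be read as a posterior distribution over that feature's values. A hedged sketch of this step, applying a numerically stable softmax within each hypercolumn (the function name and the hypercolumn layout are illustrative assumptions):

```python
import numpy as np

def hypercolumn_softmax(support, hypercolumns):
    """Normalize log-posterior support values within each hypercolumn.

    support: (n_units,) array of log-posterior 'support' values.
    hypercolumns: list of index lists, one per hypercolumn; the units in
    each list code mutually exclusive values of one feature.
    Returns activations that sum to 1 within every hypercolumn.
    """
    out = np.zeros_like(support, dtype=float)
    for idx in hypercolumns:
        # Subtract the max before exponentiating for numerical stability.
        e = np.exp(support[idx] - np.max(support[idx]))
        out[idx] = e / e.sum()
    return out
```

The normalized activation of each unit can then be interpreted as the posterior probability of its feature value given the input, which is what makes the "confidence" reading of activations in the article possible.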