Filters Prerequisite Math

Random Variable

Each time you roll a die the outcome is between 1 and 6. If we rolled a fair die a million times we’d expect to get a one about 1/6 of the time. Thus we say the probability of the outcome 1 is 1/6. Likewise, if I asked you the chance of 1 being the result of the next roll, you’d reply 1/6.
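As a sanity check, this can be simulated; a minimal sketch using only Python’s standard library (the seed and the roll count are arbitrary choices):

```python
# Simulate a million fair die rolls and estimate the probability of rolling a 1.
import random

random.seed(42)  # arbitrary seed so the run is repeatable

rolls = [random.randint(1, 6) for _ in range(1_000_000)]
p_one = rolls.count(1) / len(rolls)

print(p_one)  # close to 1/6 ≈ 0.1667
```

With a million rolls the observed frequency settles very close to the true probability, which is the law of large numbers at work.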

Probability distribution & Random Variable

An example is a fair coin. Its sample space is {H, T}. The coin is fair, so the probability of heads (H) is 50% and the probability of tails (T) is 50%. We write this as

P(X = H) = 0.5

P(X = T) = 0.5

To be a probability distribution, the probability of each value xi must be non-negative, P(X = xi) ≥ 0, since no probability can be less than zero. Secondly, the probabilities for all values must sum to one.
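These two conditions are easy to check in code; a minimal sketch (the function name is my own):

```python
# Validate the two conditions for a probability distribution:
# each probability is non-negative, and all probabilities sum to one.
import math

def is_valid_distribution(probs):
    values = probs.values()
    return all(p >= 0 for p in values) and math.isclose(sum(values), 1.0)

coin = {"H": 0.5, "T": 0.5}
print(is_valid_distribution(coin))        # True
print(is_valid_distribution({"H": 0.7}))  # False: the probabilities sum to 0.7
```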

Unimodal and Multimodal

A distribution with a single peak is unimodal; a distribution with two or more peaks is multimodal.

Histogram

Histograms graphically depict the distribution of a set of numbers.
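As a small illustration, a text-based histogram can be built with just the standard library (the readings and the integer binning are made up for this sketch):

```python
# Bin a handful of readings by their integer part and draw a text histogram.
from collections import Counter

readings = [1.1, 1.9, 2.0, 2.2, 2.3, 2.3, 2.4, 3.0, 3.1, 4.2]
bins = Counter(int(x) for x in readings)

for value in sorted(bins):
    print(f"{value}: {'#' * bins[value]}")
# 1: ##
# 2: #####
# 3: ##
# 4: #
```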

Bayes Theorem

Bayesian statistics takes past information (the prior) into account. Suppose we observe that it rains 4 times every 100 days. From this alone we could state that the chance of rain tomorrow is 4/100 = 1/25. This is not how weather prediction is done: if we know it is raining today and the storm front is stalled, it is likely to rain tomorrow as well. Weather prediction is Bayesian.

Bayes theorem tells us how to compute the probability of an event given previous information.

To review, the prior is the probability of something happening before we include the probability of the measurement (the likelihood) and the posterior is the probability we compute after incorporating the information from the measurement.

Bayes theorem is

P(A | B) = P(B | A) P(A) / P(B)

where P(A) is the prior, P(B | A) is the likelihood, and P(A | B) is the posterior; the denominator P(B) normalizes the result.
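As a numeric illustration of Bayes theorem, P(A | B) = P(B | A) P(A) / P(B), here is a rain-forecast calculation. All numbers are assumed for the sketch: a 4% prior chance of rain, a forecast that says “rain” 80% of the time when it does rain, and 10% of the time when it does not:

```python
# Posterior probability of rain given a "rain" forecast, via Bayes theorem.
def posterior(prior, likelihood, false_alarm):
    # P(B) by total probability: the forecast can say "rain" either way
    evidence = likelihood * prior + false_alarm * (1 - prior)
    return likelihood * prior / evidence

p_rain = posterior(prior=0.04, likelihood=0.80, false_alarm=0.10)
print(round(p_rain, 2))  # 0.25
```

A “rain” forecast lifts the probability from the 4% prior to a 25% posterior.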

Mean — The mean of a set of numbers is the sum of the values divided by the number of values.

Mode — The mode of a set of numbers is the number that occurs most often.

Median — The median of a set of numbers is the middle point of the set so that half the values are below the median and half are above the median.

Variance

The variance is the expected value of the squared difference between X and its mean: Var(X) = E[(X - μ)²]. It measures how much the values of the sample space X spread around the mean.
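Python’s statistics module computes all four of these statistics directly; a quick sketch with made-up data:

```python
# Mean, mode, median, and (population) variance of a small data set.
import statistics

data = [2, 3, 3, 5, 7, 10]

print(statistics.mean(data))       # 5   (30 / 6)
print(statistics.mode(data))       # 3   (occurs most often)
print(statistics.median(data))     # 4.0 (midpoint between 3 and 5)
print(statistics.pvariance(data))  # 46/6, the mean squared deviation from 5
```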

Gaussian Distribution

A Gaussian is a continuous probability distribution that is completely described with two parameters, the mean and the variance.

Recall that a Gaussian distribution is continuous. Think of an infinitely long straight line: what is the probability that a point you pick at random lies exactly at 2? Clearly 0%, as there are infinitely many points to choose from. The same is true for normal distributions; the probability of a temperature reading being exactly 2°C is 0%, because there are infinitely many values the reading can take.
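The density itself is easy to evaluate from the two parameters; a minimal sketch of the standard formula (note it returns a probability density, not a probability):

```python
# Gaussian (normal) probability density with a given mean and variance.
import math

def gaussian_pdf(x, mean, variance):
    return math.exp(-((x - mean) ** 2) / (2 * variance)) / math.sqrt(2 * math.pi * variance)

# Density at the mean for variance 1: the peak of the standard bell curve.
print(round(gaussian_pdf(2.0, mean=2.0, variance=1.0), 4))  # 0.3989
```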

The Kalman filter uses Gaussians instead of arbitrary distributions.

Covariance

Covariance indicates how two variables are related. A positive covariance means the variables are positively related, while a negative covariance means the variables are inversely related. The formula is

cov(X, Y) = E[(X - μ_X)(Y - μ_Y)]
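The covariance can be computed straight from its definition, cov(X, Y) = E[(X - μ_X)(Y - μ_Y)]; a minimal sketch with made-up data:

```python
# Covariance of two equal-length samples, computed from the definition.
def covariance(xs, ys):
    mean_x = sum(xs) / len(xs)
    mean_y = sum(ys) / len(ys)
    return sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / len(xs)

xs = [1, 2, 3, 4]
print(covariance(xs, [2, 4, 6, 8]))  # 2.5: ys rise with xs, positive covariance
print(covariance(xs, [8, 6, 4, 2]))  # -2.5: ys fall as xs rise, negative covariance
```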

Jacobian matrix — The matrix of all first-order partial derivatives of a vector-valued function. It is used to linearize a nonlinear function about a point, i.e. to approximate it locally by a linear one.
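One common way to obtain a Jacobian in code is numerically, by finite differences; a rough sketch (the example function and step size are illustrative):

```python
# Numerical Jacobian of a vector-valued function via forward differences.
def jacobian(f, x, eps=1e-6):
    fx = f(x)
    J = []
    for i in range(len(fx)):           # one row per output component
        row = []
        for j in range(len(x)):        # one column per input component
            x_step = list(x)
            x_step[j] += eps
            row.append((f(x_step)[i] - fx[i]) / eps)
        J.append(row)
    return J

f = lambda v: [v[0] * v[1], v[0] ** 2]   # nonlinear example f(x, y) = (xy, x²)
print(jacobian(f, [2.0, 3.0]))           # approximately [[3, 2], [4, 0]]
```

The extended Kalman filter computes this matrix analytically, but a finite-difference version like this is handy for checking the algebra.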

For more on probability, please see the course notes of Vivek Yadav below.
