Chapter 5: Summary added #50
base: main
Conversation
> We use the Julia library Turing.jl to instantiate a model where we set the prior probability and the distribution of the outcomes of our experiment. Then we use the Markov Chain Monte Carlo algorithm for sampling and saw how our posterior distribution updates with the input of new outcomes.
> Finally, we experiment on how changes in our prior distributions affect the results we obtain"
change: experiment on
for: experiment with
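For context, a minimal sketch of the kind of Turing.jl model this summary describes might look like the following; the function name, the uniform prior, and the choice of sampler are illustrative assumptions rather than the chapter's actual code:

```julia
using Turing

@model function coinflip(y)
    # Prior belief about the coin's bias p: complete uncertainty on [0, 1]
    p ~ Uniform(0, 1)
    # Likelihood: each observed flip is a Bernoulli trial with probability p
    for i in eachindex(y)
        y[i] ~ Bernoulli(p)
    end
end

outcomes = [1, 0, 1, 1, 0, 1]                      # 1 = heads, 0 = tails
chain = sample(coinflip(outcomes), NUTS(), 1_000)  # MCMC sampling of the posterior
```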
> In this chapter, we gave an introduction to probabilistic programming languages exploring the classic coin flipping example in a Bayesian way.
> First, we saw that in this kind of Bernoulli trial scenario, where the experiment has two possible outcomes 0 or 1, it is a good idea to set our likelihood to have a binomial distribution.
set our likelihood to a binomial distribution.
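For reference, the binomial likelihood suggested here has the standard form, with $n$ flips, $k$ heads, and bias $p$ (notation assumed, not taken from the chapter):

$$P(k \mid n, p) = \binom{n}{k}\, p^{k} (1 - p)^{n - k}$$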
> We also learned what sampling is and saw why we use it to make an update on our beliefs.
to update our beliefs
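The belief update this sentence refers to is Bayes' theorem in its standard form (notation assumed):

$$P(p \mid \text{data}) = \frac{P(\text{data} \mid p)\, P(p)}{P(\text{data})} \propto P(\text{data} \mid p)\, P(p),$$

which is exactly what sampling approximates when the posterior has no closed form.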
> Then we used the Julia library Turing.jl to create a probabilistic model setting our prior probability to be a uniform distribution and the likelihood to have a binomial one.
setting our prior probability to a uniform distribution
and the likelihood to a binomial one
> Finally, we created a new model with the prior probability set to a normal distribution centered on *p* equals 0.5 which gave us more accurate results.
centered on p = 0.5
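Since the chapter's code is not visible in this diff, here is one hedged guess at what the revised prior could look like in Turing.jl; the standard deviation of 0.1 and the use of `truncated` to keep $p$ inside $[0, 1]$ are assumptions:

```julia
using Turing

@model function coinflip_informed(y)
    # Assumed revision: normal prior centered on p = 0.5, truncated so that
    # p remains a valid probability
    p ~ truncated(Normal(0.5, 0.1), 0, 1)
    for i in eachindex(y)
        y[i] ~ Bernoulli(p)
    end
end
```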
md" ### Bayesian Bandits " | ||
md" ### Summary | ||
> In this chapter, we gave an introduction to probabilistic programming languages exploring the classic coin flipping example in a Bayesian way.
In this chapter, we gave an introduction to probabilistic programming languages and explored the classic coin flipping example in a Bayesian way.
> Then we used the Julia library Turing.jl to create a probabilistic model setting our prior probability to a uniform distribution and the likelihood to a binomial one.
probabilistic model, setting
> So we sampled our model with the Markov chain Monte Carlo algorithm and saw how the posterior probability was updated every time we input a new coin flip result.
We sampled our model
(delete So)
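The update loop the quoted sentence describes could be mimicked like this, reusing the `coinflip` model sketched earlier in the thread; the Metropolis-Hastings sampler `MH()` and the sample counts are illustrative choices:

```julia
flips = [1, 0, 1, 1, 1, 0, 1, 0, 1, 1]   # hypothetical stream of coin flips
for n in eachindex(flips)
    # Re-fit the posterior each time a new flip result arrives
    chain = sample(coinflip(flips[1:n]), MH(), 2_000; progress=false)
    println("after $n flips: posterior mean of p ≈ ", round(mean(chain[:p]), digits=3))
end
```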
…edback-to-do-messages Chapter 5: Added feedback and to do messages
- Fix: "In the previous chapter we introduced some of the basic mathematical tools we are going to make use of throughout the book. We talked about histograms, probability, probability distributions and the Bayesian way of thinking."
- Fix: "We will start this chapter by discussing"
- Remove unnecessary italics and capitalization, fix typo in "probabilistic", remove apostrophe in PPLs, add fragment: "another useful tool, that is, probabilistic programming, and more specifically, how to apply it using probabilistic programming languages or PPLs."
- There are a few occurrences of "PPL's"; change them all to "PPLs".
- Fix: "These are systems, usually embedded inside a programming language,"
- Fix: "We will be focusing on some examples"
- Replace: "We are going now to tackle a well known example, just to settle some ideas: flipping a coin. But this time, from a Bayesian perspective." with: "Let's revisit the old example of flipping a coin, but from a Bayesian perspective, as a way to lay down some ideas."
- Change sentence: "To answer these questions we are going to build a simple model, with the help of Julia libraries that add PPL capabilities."
- Fix sentences: "Do we know anything else? Let's skip that question for the moment and suppose we don't know anything else about $p$. This complete uncertainty also constitutes information we can incorporate into our model. How so? Because we can assign equal probability to each value of $p$ while assigning 0 probability to the remaining values. This just means we don't know anything and that every outcome is equally likely. Translating this into a probability distribution, it means that we are going to use a uniform (NO CAPS, NO ITALICS) prior distribution for $p$, and the function domain will be all numbers between 0 and 1."
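The behavior described in that last fix is easy to check with Distributions.jl (package choice assumed here):

```julia
using Distributions

prior = Uniform(0, 1)  # equal density for every value of p in [0, 1]
pdf(prior, 0.3)        # 1.0: any p inside [0, 1] is equally likely
pdf(prior, 1.7)        # 0.0: values outside [0, 1] get zero probability
```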
@martinacantaro I made the corrections you mention