Assignment 4¶
Assignment 4 has two purposes:
- To give you more experience with linear regression
- To introduce simulation as a means of studying the behavior of statistical techniques
This assignment is due October 24, 2020 at the end of the day. Upload your .ipynb and .pdf files to Canvas.
Notation
In this writeup, I will use capital letters (e.g. \(X\)) for random variables, and lowercase letters (e.g. \(x\)) for individual samples (or draws) from those random variables.
You will find the probability notes useful for understanding the derivations in this assignment, and the various tutorial notebooks will be helpful for writing the code.
Reading
Read the assignment very carefully. There are quite a few moving parts, and you need to carefully follow the instructions in order to complete it successfully.
For many of the tasks I ask several questions. You need to answer these questions in your assignment writeup.
Repeating
Several of the tasks have you repeat a simulation so that you can see the distribution (mean, variance, perhaps histogram) of a computational result across many repetitions of an experiment. The general structure of this code looks like this:
# initialize data structures for storing repetition outcomes
for i in range(ITER_COUNT):
    # draw the synthetic data
    # fit the model / compute the results
    # extract parameters / statistics from the model & store in the data structures
You can use either Python lists or NumPy arrays for storing your results from repetitions.
Revision Log¶
- Oct. 12, 2021
Clarified the last task of the warmup section to document that it needs 100-, 1000-, and 10000-draw simulations.
Simulation¶
One common way to understand the behavior of statistical techniques is to use simulation (often called Monte Carlo simulation). In a simulation, we use a pseudorandom number generator to make up data that follows particular patterns (or no pattern at all). We call this data synthetic data.
We then apply a statistical technique, such as a correlation coefficient or a linear regression, to the synthetic data, and see how closely its results match the parameters we put into our simulation. If the analysis reliably estimates the simulation’s parameters, we say it recovers the parameters. We can do this many times to estimate that reliability: we can run the simulation 1000 times, for example, and examine the distribution of the error of its parameter estimates to see if it is unbiased, and how broad the errors are.
This technique is commonly used in statistics research (that is, research about statistics itself, rather than research that uses statistics to study other topics) in order to examine the behavior of statistical methods. By simulating samples of different sizes from a population with known parameters, we can compare the results of analyzing those samples with the actual values the statistical method is supposed to estimate. Further, by mapping its behavior over a range of scenarios, we can gain insight into what a statistical technique is likely doing with the particular data we have in front of us.
This is distinct from bootstrapping. In bootstrapping, we are resampling our sample to try to estimate the sampling distribution of a statistic with respect to the population our sample was drawn from; we have actual data, but do not know the actual population parameters. In simulation, we know the population parameters, and do not have any actual data because we make it all up with the random number generator.
Generating Random Numbers¶
NumPy’s Generator class is the starting point for generating random numbers. It has methods for generating numbers from a range of distributions.
For more sophisticated needs, the distribution objects in scipy.stats also support random draws.
Random number generators have a seed that is the starting point for picking numbers. Two identical generators with the same seed will produce the same sequence of values.
We can create a generator with np.random.default_rng:
import numpy as np

rng = np.random.default_rng(20201014)
This will return a numpy.random.Generator seeded with the value we provided.
In my class examples, I have been using the current date as my seed. If you do not specify a seed, NumPy will pick a fresh one every time you start the program; for reproducibility, you should pick a seed for any particular analysis. It’s also useful to re-run the analysis with a different seed and double-check that none of the conclusions change.
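For example, two generators created with the same (arbitrary) seed produce identical draws:
r1 = np.random.default_rng(42)
r2 = np.random.default_rng(42)
# both generators yield the same sequence of values
r1.standard_normal(3), r2.standard_normal(3)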
We can then use the random number generator to generate random numbers from various distributions. It’s important to note that random does not mean uniform: the uniform distribution is just one kind of random distribution.
For example, we can draw 100 samples from the standard normal distribution (\(\mu = 0\), \(\sigma = 1\)) using standard_normal():
xs = rng.standard_normal(100)
SeedBank
If you wish to use SeedBank to manage your RNG seeds, you can do:
import seedbank

seedbank.initialize(20201014)
rng = seedbank.numpy_rng()
Warmup: Correlation (10%)¶
If two variables are independent, their correlation should be zero, right? We can simulate this by drawing two arrays of 100 standard normal variables each, and computing their correlation coefficient:
import pandas as pd

xs = pd.Series(rng.standard_normal(100))
ys = pd.Series(rng.standard_normal(100))
xs.corr(ys)
Code Meaning
This code takes 100 draws (or samples) from the standard normal, twice (once for xs and again for ys).
Mathematically, we write this using \(\sim\) as the operator “drawn from”:

\[X \sim \mathrm{Normal}(0, 1) \qquad Y \sim \mathrm{Normal}(0, 1)\]
✅ Run 1000 iterations of this simulation to compute 1000 correlation coefficients. What is the mean and variance of these simulated coefficients? Plot their distribution. Are the results what you expect for computing correlations of uncorrelated variables?
Tip
What you need to do for this is to run the code example above — that computes the correlation between two 100-item samples — one thousand times.
This will draw a total of 200,000 numbers (100 each for x and y, in each simulation iteration).
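A minimal sketch of that loop, assuming the rng created earlier (the variable names are illustrative):
cors = []
for i in range(1000):
    xs = pd.Series(rng.standard_normal(100))
    ys = pd.Series(rng.standard_normal(100))
    cors.append(xs.corr(ys))
# collect the coefficients for summary statistics and plotting
cors = pd.Series(cors)
cors.mean(), cors.var()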
✅ Repeat the previous simulation, but using 1000 draws per iteration instead of 100. How does this change the mean and variance of the resulting coefficients?
Tip
Now you need to modify the code to draw 1000 normals for x and 1000 normals for y in each iteration. Remember that the example above draws 100 normals for each variable.
Remember that the covariance of two variables is defined as:

\[\Cov(X, Y) = \E[(X - \E[X])(Y - \E[Y])] = \E[XY] - \E[X] \E[Y]\]
And the correlation is:

\[\mathrm{Corr}(X, Y) = \frac{\Cov(X, Y)}{\sigma_X \sigma_Y}\]
If we want to generate correlated variables, we can do so by combining two random variables to form a third:

\[Z = X + Y\]
We can draw them with:
xs = pd.Series(rng.standard_normal(100))
ys = pd.Series(rng.standard_normal(100))
zs = xs + ys
With these variables, we have:

\[\E[X] = 0 \qquad \E[Y] = 0\]
\[\E[Z] = \E[X + Y] = \E[X] + \E[Y] = 0\]
This last identity is from a property called linearity of expectation. We can now determine the covariance between \(X\) and \(Z\). As a preliminary, since \(X\) and \(Y\) are independent, their covariance \(\Cov(X, Y) = 0\). Further, their independence implies that \(\E[X Y] = \E[X]\E[Y]\), which from the equations above is \(0\).
With that:

\[\Cov(X, Z) = \E[XZ] - \E[X] \E[Z] = \E[X(X + Y)] = \E[X^2] + \E[XY] = \Var(X) + 0 = 1\]

(Here \(\E[X^2] = \Var(X)\) because \(\E[X] = 0\).)
The correlation coefficient depends on \(\Cov(X,Z) = 1\), \(\Var(X) = 1\), and \(\Var(Z)\). We can derive \(\Var(Z)\) as follows:

\[\Var(Z) = \Var(X + Y) = \Var(X) + \Var(Y) + 2 \Cov(X, Y) = 1 + 1 + 0 = 2\]
Therefore we have \(\sigma_X = 1\) (from its distribution) and \(\sigma_Z = \sqrt{2}\), and so the correlation is:

\[\mathrm{Corr}(X, Z) = \frac{\Cov(X, Z)}{\sigma_X \sigma_Z} = \frac{1}{\sqrt{2}} \approx 0.707\]
Covariance
You can compute the covariance with the pandas.Series.cov() method.
It’s instructive to also plot that!
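The derivation can also be checked empirically; a quick sketch, assuming the xs and zs drawn above:
xs.cov(zs)   # should be close to 1
xs.corr(zs)  # should be close to 1/sqrt(2) ≈ 0.707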
Linear Regression (35%)¶
If we want to simulate a single-variable linear regression:

\[y = \alpha + \beta x + \epsilon\]
there are four things we need to control:
- the distribution of \(x\)
- the intercept \(\alpha\)
- the slope \(\beta\)
- the variance of the errors \(\sigma_\epsilon^2\)
Remember that the linear regression model assumes errors are i.i.d. normal, and the OLS model will result in a mean error of 0; thus we have \(\epsilon \sim \mathrm{Normal}(0, \sigma_\epsilon)\). Sampling data for this model involves the following steps:
- Sample \(x\)
- Sample \(\epsilon\)
- Compute \(y = \alpha + \beta x + \epsilon\)
Let’s start with a very simple example: \(x\) is drawn from a standard normal, \(\alpha=0\), \(\beta=1\), and \(\sigma_\epsilon^2 = 1\).
xs = rng.standard_normal(1000)
errs = rng.standard_normal(1000)
ys = 0 + 1 * xs + errs
data = pd.DataFrame({
'X': xs,
'Y': ys
})
✅ Fit a linear model to this data, predicting \(Y\) with \(X\). What are the intercept and slope? What is \(R^2\)? Are these values what you expect? Plot residuals vs. fitted and a Q-Q plot of residuals to check the model assumptions - do they hold?
✅ Repeat the simulation 1000 times, fitting a linear model each time. Show the mean, variance, and a distribution plot of the intercept, slope, and \(R^2\) from these simulations.
Extracting Parameters
The RegressionResults class returned by OLS.fit() contains the model parameters. The .params field has the coefficients (including the intercept), and .rsquared has the \(R^2\) value:
fit.params['X']
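Putting the pieces together, here is a minimal sketch of the repeated-fit loop; it assumes the statsmodels formula interface (one way to fit OLS) and the rng from earlier, and the variable names are illustrative:
import pandas as pd
import statsmodels.formula.api as smf

results = []
for i in range(1000):
    # draw synthetic data for alpha=0, beta=1, sigma_eps=1
    xs = rng.standard_normal(1000)
    errs = rng.standard_normal(1000)
    data = pd.DataFrame({'X': xs, 'Y': 0 + 1 * xs + errs})
    # fit the model and store its parameters
    fit = smf.ols('Y ~ X', data=data).fit()
    results.append((fit.params['Intercept'], fit.params['X'], fit.rsquared))
results = pd.DataFrame(results, columns=['intercept', 'slope', 'R2'])
results.describe()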
✅ Fit a model to data with \(\alpha=1\) and \(\beta=4\). Are the resulting model parameters what you expect? How did \(R^2\) change, and why? Do the linear model assumptions still hold? What are the distributions of the slope, intercept, and \(R^2\) if you do this 1000 times?
Nonlinear Data (15%)¶
✅ Generate 1000 data points with the following distributions and formula:
✅ Fit a linear model predicting \(y\) with \(x\). How well does the model fit? Do the assumptions seem to hold?
✅ Draw a scatter plot of \(x\) and \(y\).
Drawing Normals
You can draw from \(\mathrm{Normal}(0, 5)\) either by using the normal() method of Generator, or by drawing an array of standard normals and multiplying it by 5.
This is because the normal distribution is in the scale-location family of distributions.
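For example, both of these draw 100 values from \(\mathrm{Normal}(0, 5)\):
xs = rng.normal(0, 5, 100)          # draw directly
xs = rng.standard_normal(100) * 5   # or scale standard normals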
Exponentiation
The NumPy function numpy.exp() computes \(e^x\).
✅ Repeat with \(y = -2 + 3 x^3 + \epsilon\)
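A sketch of generating that data; since the distributions are not restated here, this assumes \(x \sim \mathrm{Normal}(0, 5)\) (as in the note above) and standard normal errors:
xs = rng.standard_normal(1000) * 5   # assumed x distribution
errs = rng.standard_normal(1000)     # assumed error distribution
ys = -2 + 3 * xs ** 3 + errs
data = pd.DataFrame({'X': xs, 'Y': ys})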
Non-Normal Covariates (15%)¶
✅ Generate 1000 data points with the model:
Plot the distributions of \(X\) and \(Y\)
Fit a linear model predicting \(y\) with \(x\)
How well does this model fit? How much of the variance does it explain? Do the assumptions seem to hold? Does the linear regression seem appropriate to the data?
Gamma Distributions
You can draw 1000 samples from the \(\mathrm{Gamma}(2, 1)\) distribution with numpy.random.Generator.gamma():
rng.gamma(2, 1, 1000)
Multiple Regression (10%)¶
Now we’re going to look at regression with two or more independent variables.
We will use the following data generating process:
Scale-Location Distribution
To draw from \(\mathrm{Normal}(\mu, \sigma)\), you can draw xs from a standard normal and compute xs * σ + μ.
✅ Fit a linear model y ~ x1 + x2 on 1000 data points drawn from this model. What are the intercept and coefficients from the model? Are they what you expect?
Check the model assumptions — do they hold?
Multivariate Normals
You can draw both \(x_1\) and \(x_2\) simultaneously with:
xs = rng.multivariate_normal([10, -2], [[2, 0], [0, 5]], 1000)
# turn into a data frame
xdf = pd.DataFrame(xs, columns=['X1', 'X2'])
The multivariate normal distribution is parameterized by a list (or array) of means and a symmetric, positive semi-definite covariance matrix, defined as follows:

\[\Sigma_{ij} = \Cov(X_i, X_j)\]
That is, the diagonal entries of the matrix are the variances of the individual variables, and the off-diagonal cells are the covariances between pairs of variables. The example code sets up the following matrix:

\[\Sigma = \begin{bmatrix} 2 & 0 \\ 0 & 5 \end{bmatrix}\]
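You can check the draws empirically; a quick sketch, assuming the xs array from the example above:
np.cov(xs, rowvar=False)  # should be close to [[2, 0], [0, 5]]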
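And a sketch of generating \(y\) and fitting the model; the coefficients below are placeholders, so substitute the values from the data generating process:
import statsmodels.formula.api as smf

errs = rng.standard_normal(1000)
# hypothetical alpha and betas, for illustration only
xdf['Y'] = 1 + 2 * xdf['X1'] - 0.5 * xdf['X2'] + errs
fit = smf.ols('Y ~ X1 + X2', data=xdf).fit()
fit.params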
Reflection (5%)¶
Write a couple of paragraphs about what you learned from this assignment.
Expected Time¶
Here are some estimates of how long each part might take you.
- Warmup: 1 hour
- Linear Regression: 2.5 hours
- Nonlinear Data: 1 hour
- Non-normal Covariates: 1 hour
- Multiple Regression: 1 hour
- Correlated Predictors: 2 hours
- Cleanup and Reflection: 1 hour