Discussion 7: Choosing Sample Sizes, Hypothesis Testing, and Permutation Testing


These problems are taken from past quizzes and exams. Work on them on paper, since the quizzes and exams you take in this course will also be on paper.

We encourage you to complete these problems during discussion section. Solutions will be made available after all discussion sections have concluded. You don’t need to submit your answers anywhere.

Note: We do not plan to cover all of these problems during the discussion section; the problems we don’t cover can be used for extra practice.


Problem 1

You need to estimate the proportion of American adults who want to be vaccinated against Covid-19. You plan to survey a random sample of American adults, and use the proportion of adults in your sample who want to be vaccinated as your estimate for the true proportion in the population. Your estimate must be within 0.04 of the true proportion, 95% of the time. Using the fact that the standard deviation of any dataset of 0’s and 1’s is no more than 0.5, calculate the minimum number of people you would need to survey. Input your answer below, as an integer.

Answer: 625

Note: Before reviewing these solutions, it’s highly recommended to revisit the lecture on “Choosing Sample Sizes,” since this problem follows the main example from that lecture almost exactly.

While this solution is long, keep in mind from the start that our goal is to solve for the smallest sample size necessary to create a confidence interval that achieves certain criteria.

The Central Limit Theorem tells us that the distribution of the sample mean is roughly normal, regardless of the distribution of the population from which the samples are drawn. At first, it may not be clear how the Central Limit Theorem is relevant, but remember that proportions are means too – for instance, the proportion of adults who want to be vaccinated is equal to the mean of a collection of 1s and 0s, where we have a 1 for each adult that wants to be vaccinated and a 0 for each adult who doesn’t want to be vaccinated. What this means (😉) is that the Central Limit Theorem applies to the distribution of the sample proportion, so we can use it here too.

Not only do we know that the distribution of sample proportions is roughly normal, but we know its mean and standard deviation, too:

\begin{align*} \text{Mean of Distribution of Possible Sample Means} &= \text{Population Mean} = \text{Population Proportion} \\ \text{SD of Distribution of Possible Sample Means} &= \frac{\text{Population SD}}{\sqrt{\text{Sample Size}}} \end{align*}

Using this information, we can create a 95% confidence interval for the population proportion, using the fact that in a normal distribution, roughly 95% of values are within 2 standard deviations of the mean:

\left[ \text{Population Proportion} - 2 \cdot \frac{\text{Population SD}}{\sqrt{\text{Sample Size}}}, \: \text{Population Proportion} + 2 \cdot \frac{\text{Population SD}}{\sqrt{\text{Sample Size}}} \right]

However, this interval depends on the population proportion (mean) and SD, which we don’t know. (If we did know these parameters, there would be no need to collect a sample!) Instead, we’ll use the sample proportion and SD as rough estimates:

\left[ \text{Sample Proportion} - 2 \cdot \frac{\text{Sample SD}}{\sqrt{\text{Sample Size}}}, \: \text{Sample Proportion} + 2 \cdot \frac{\text{Sample SD}}{\sqrt{\text{Sample Size}}} \right]

Note that the width of this interval – that is, its right endpoint minus its left endpoint – is: \text{width} = 4 \cdot \frac{\text{Sample SD}}{\sqrt{\text{Sample Size}}}

In the problem, we’re told that we want our interval to be accurate to within 0.04, which is equivalent to wanting the width of our interval to be less than or equal to 0.08 (since the interval extends the same amount above and below the sample proportion). As such, we need to pick the smallest sample size necessary such that:

\text{width} = 4 \cdot \frac{\text{Sample SD}}{\sqrt{\text{Sample Size}}} \leq 0.08

We can rearrange the inequality above to solve for the sample size:

\begin{align*} 4 \cdot \frac{\text{Sample SD}}{\sqrt{\text{Sample Size}}} &\leq 0.08 \\ \frac{\text{Sample SD}}{\sqrt{\text{Sample Size}}} &\leq 0.02 \\ \frac{1}{\sqrt{\text{Sample Size}}} &\leq \frac{0.02}{\text{Sample SD}} \\ \frac{\text{Sample SD}}{0.02} &\leq \sqrt{\text{Sample Size}} \\ \left( \frac{\text{Sample SD}}{0.02} \right)^2 &\leq \text{Sample Size} \end{align*}

All we now need to do is pick the smallest sample size that satisfies the above inequality. But there’s an issue – we don’t know what our sample SD is, because we haven’t collected our sample! Notice that in the inequality above, as the sample SD increases, so does the minimum necessary sample size. In order to ensure we don’t collect too small of a sample (which would result in the width of our confidence interval being larger than desired), we can use an upper bound for the SD of our sample. In the problem, we’re told that the largest possible SD of a sample of 0s and 1s is 0.5 – this means that if we replace our sample SD with 0.5, we will find a sample size such that the width of our confidence interval is guaranteed to be less than or equal to 0.08. This sample size may be larger than necessary, but that’s better than it being smaller than necessary.

By substituting 0.5 for the sample SD in the last inequality above, we get

\begin{align*} \left( \frac{\text{Sample SD}}{0.02} \right)^2 &\leq \text{Sample Size} \\ \left( \frac{0.5}{0.02} \right)^2 &\leq \text{Sample Size} \\ 25^2 &\leq \text{Sample Size} \implies \text{Sample Size} \geq 625 \end{align*}

We need to pick the smallest possible sample size that is greater than or equal to 625; that’s just 625.
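As a quick numerical sanity check (not part of the original problem), here is a short NumPy sketch of the same calculation:

import numpy as np

# Worst-case SD for data of 0s and 1s, and the desired interval width.
worst_case_sd = 0.5
desired_width = 0.08

# width = 4 * SD / sqrt(n)  rearranges to  n >= (4 * SD / width) ** 2.
min_sample_size = int(np.ceil((4 * worst_case_sd / desired_width) ** 2))
print(min_sample_size)                               # 625
print(4 * worst_case_sd / np.sqrt(min_sample_size))  # 0.08, within the target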


Difficulty: ⭐️⭐️⭐️⭐️

The average score on this problem was 40%.


Problem 2

It’s your first time playing a new game called Brunch Menu. The deck contains 96 cards, and each player will be dealt a hand of 9 cards. The goal of the game is to avoid having certain cards, called Rotten Egg cards, which come with a penalty at the end of the game. But you’re not sure how many of the 96 cards in the game are Rotten Egg cards. So you decide to use the Central Limit Theorem to estimate the proportion of Rotten Egg cards in the deck based on the 9 random cards you are dealt in your hand.


Problem 2.1

You are dealt 3 Rotten Egg cards in your hand of 9 cards. You then construct a CLT-based 95% confidence interval for the proportion of Rotten Egg cards in the deck based on this sample. Approximately, how wide is your confidence interval?

Choose the closest answer, and use the following facts:

  • The SD of a collection of 0s and 1s is \sqrt{(\text{Prop. of 0s}) \cdot (\text{Prop. of 1s})}.
  • \sqrt{18} \approx \frac{17}{4}

Answer: \frac{17}{27}

A Central Limit Theorem-based 95% confidence interval for a population proportion is given by the following:

\left[ \text{Sample Proportion} - 2 \cdot \frac{\text{Sample SD}}{\sqrt{\text{Sample Size}}}, \text{Sample Proportion} + 2 \cdot \frac{\text{Sample SD}}{\sqrt{\text{Sample Size}}} \right]

Note that this interval uses the fact that (about) 95% of values in a normal distribution are within 2 standard deviations of the mean. It’s key to divide by \sqrt{\text{Sample Size}} when computing the standard deviation because the distribution that is roughly normal is the distribution of the sample mean (and hence, sample proportion), not the distribution of the sample itself.

The width of the above interval – that is, the right endpoint minus the left endpoint – is

\text{width} = 4 \cdot \frac{\text{Sample SD}}{\sqrt{\text{Sample Size}}}

From the provided hint, we have that

\text{Sample SD} = \sqrt{(\text{Prop. of 0s}) \cdot (\text{Prop. of 1s})} = \sqrt{\frac{3}{9} \cdot \frac{6}{9}} = \frac{\sqrt{18}}{9}

Then, since we know that the sample size is 9 and that \sqrt{18} is about \frac{17}{4}, we have

\text{width} = 4 \cdot \frac{\text{Sample SD}}{\sqrt{\text{Sample Size}}} = 4 \cdot \frac{\frac{\sqrt{18}}{9}}{\sqrt{9}} = 4 \cdot \frac{\sqrt{18}}{9 \cdot 3} = 4 \cdot \frac{\frac{17}{4}}{27} = \frac{17}{27}
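As a sanity check (a NumPy sketch, not part of the original problem), we can verify this width numerically:

import numpy as np

sample = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0])  # 3 Rotten Egg cards out of 9
sample_sd = np.std(sample)                      # equals sqrt(18)/9 ≈ 0.471
width = 4 * sample_sd / np.sqrt(len(sample))
print(width)    # ≈ 0.6285
print(17 / 27)  # ≈ 0.6296, the approximation used above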


Difficulty: ⭐️⭐️⭐️

The average score on this problem was 51%.


Problem 2.2

Which of the following are limitations of trying to use the Central Limit Theorem for this particular application? Select all that apply.

Answer: Options 1 and 2

Option 1: We use the Central Limit Theorem (CLT) for large random samples, and a sample of 9 is considered very small. This makes it difficult to use the CLT for this problem.

Option 2: Recall that the CLT applies when our sample is drawn with replacement. When we are dealt nine cards, we never put cards back into the deck, which means that we are sampling without replacement.

Option 3: This is wrong because the CLT states that the distribution of the sample mean of a large sample is approximately normal even if the data itself is not normally distributed. So the fact that our data is not normally distributed wouldn’t matter if we had a large enough sample.

Option 4: This is wrong because the CLT does apply to the distribution of the sample proportion. Recall that proportions can be treated like means.


Difficulty: ⭐️⭐️

The average score on this problem was 77%.



Problem 3

You want to estimate the proportion of DSC majors who have a Netflix subscription. To do so, you will survey a random sample of DSC majors and ask them whether they have a Netflix subscription. You will then create a 95% confidence interval for the proportion of “yes" answers in the population, based on the responses in your sample. You decide that your confidence interval should have a width of at most 0.10.


Problem 3.1

In order for your confidence interval to have a width of at most 0.10, the standard deviation of the distribution of the sample proportion must be at most T. What is T? Give your answer as an exact decimal.

Answer: 0.025

A CLT-based 95% confidence interval extends 2 standard deviations (of the distribution of the sample proportion) on either side of its center, so its width is 4 standard deviations. For the width to be at most 0.10, we need 4 \cdot T \leq 0.10, which means T \leq 0.025.


Difficulty: ⭐️⭐️⭐️⭐️

The average score on this problem was 46%.


Problem 3.2

Using the fact that the standard deviation of any dataset of 0s and 1s is no more than 0.5, calculate the minimum number of people you would need to survey so that the width of your confidence interval is at most 0.10. Give your answer as an integer.

Answer: 400

From the previous part, the standard deviation of the distribution of the sample proportion, \frac{\text{Sample SD}}{\sqrt{\text{Sample Size}}}, must be at most 0.025. Using 0.5 as an upper bound for the sample SD, we need \frac{0.5}{\sqrt{\text{Sample Size}}} \leq 0.025, i.e. \sqrt{\text{Sample Size}} \geq 20, so the minimum sample size is 20^2 = 400.
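This problem and Problem 1 follow the same pattern, which we can capture in a short helper function (a sketch; minimum_sample_size is our own hypothetical name, not something from the course materials):

import numpy as np

def minimum_sample_size(max_width, sd_bound=0.5):
    # width = 4 * sd / sqrt(n)  rearranges to  n >= (4 * sd / width) ** 2.
    return int(np.ceil((4 * sd_bound / max_width) ** 2))

print(minimum_sample_size(0.10))  # 400 (this problem)
print(minimum_sample_size(0.08))  # 625 (Problem 1)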


Difficulty: ⭐️⭐️

The average score on this problem was 81%.



Problem 4

Arya was curious how many UCSD students used Hulu over Thanksgiving break. He surveys 250 students and finds that 130 of them did use Hulu over break and 120 did not.

Using this data, Arya decides to test the following hypotheses:


Problem 4.1

Which of the following could be used as a test statistic for the hypothesis test?

Answer: The proportion of students who did use Hulu minus the proportion of students who did not use Hulu.


Difficulty: ⭐️⭐️

The average score on this problem was 81%.


Problem 4.2

For the test statistic that you chose in part (a), what is the observed value of the statistic? Give your answer either as an exact decimal or a simplified fraction.

Answer: 0.04

The observed value of the statistic is \frac{130}{250} - \frac{120}{250} = 0.52 - 0.48 = 0.04.


Difficulty: ⭐️

The average score on this problem was 90%.


Problem 4.3

If the p-value of the hypothesis test is 0.053, what can we conclude, at the standard 0.05 significance level?

Answer: We fail to reject the null hypothesis.

Since the p-value of 0.053 is greater than the 0.05 significance level, we fail to reject the null hypothesis.


Difficulty: ⭐️⭐️

The average score on this problem was 87%.



Problem 5

At the San Diego Model Railroad Museum, there are different admission prices for children, adults, and seniors. Over a period of time, as tickets are sold, employees keep track of how many of each type of ticket are sold. These ticket counts (in the order child, adult, senior) are stored as follows.

admissions_data = np.array([550, 1550, 400])


Problem 5.1

Complete the code below so that it creates an array admissions_proportions with the proportions of tickets sold to each group (in the order child, adult, senior).

def as_proportion(data):
    return __(a)__

admissions_proportions = as_proportion(admissions_data)

What goes in blank (a)?

Answer: data/data.sum()

To calculate the proportion for each group, we divide each value in the array (tickets sold to each group) by the sum of all values (total tickets sold). Remember, NumPy arrays are processed elementwise, so dividing the array by a single number divides every element by that number.
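For instance, with the ticket counts from this problem (a quick check):

import numpy as np

admissions_data = np.array([550, 1550, 400])

def as_proportion(data):
    return data / data.sum()

admissions_proportions = as_proportion(admissions_data)
print(admissions_proportions)  # [0.22 0.62 0.16]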


Difficulty: ⭐️

The average score on this problem was 95%.


Problem 5.2

The museum employees have a model in mind for the proportions in which they sell tickets to children, adults, and seniors. This model is stored as follows.

model = np.array([0.25, 0.6, 0.15])

We want to conduct a hypothesis test to determine whether the admissions data we have is consistent with this model. Which of the following is the null hypothesis for this test?

Answer: Child, adult, and senior tickets are purchased in proportions 0.25, 0.6, and 0.15. (Option 2)

Recall, the null hypothesis is the hypothesis that there is no significant difference between the specified populations, with any observed difference being due to sampling or experimental error. So, we assume the distribution of ticket sales is the same as the model.


Difficulty: ⭐️⭐️

The average score on this problem was 88%.


Problem 5.3

Which of the following test statistics could we use to test our hypotheses? Select all that could work.

Answer: sum of squared differences in proportions, mean of squared differences in proportions (Options 2 and 4)

We need to use squared differences so that large positive and large negative differences cannot cancel each other out when we compute the sum or mean; without squaring, the statistic could be small even when the actual deviation from the model is large. So, we eliminate Options 1 and 3.


Difficulty: ⭐️⭐️

The average score on this problem was 77%.


Problem 5.4

Below, we’ll perform the hypothesis test with a different test statistic, the mean of the absolute differences in proportions.

Recall that the ticket counts we observed for children, adults, and seniors are stored in the array admissions_data = np.array([550, 1550, 400]), and that our model is model = np.array([0.25, 0.6, 0.15]).

For our hypothesis test to determine whether the admissions data is consistent with our model, what is the observed value of the test statistic? Give your answer as a number between 0 and 1, rounded to three decimal places. (Suppose that the value you calculated is assigned to the variable observed_stat, which you will use in later questions.)

Answer: 0.02

We first calculate the proportion for each value in admissions_data:

\frac{550}{550+1550+400} = 0.22 \qquad \frac{1550}{550+1550+400} = 0.62 \qquad \frac{400}{550+1550+400} = 0.16

This gives us the distribution of admissions_data.

Then, we calculate the observed value of the test statistic (the mean of the absolute differences in proportions):

\frac{|0.22-0.25|+|0.62-0.6|+|0.16-0.15|}{\text{number of groups}} = \frac{0.03+0.02+0.01}{3} = 0.02
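The same calculation in code (a sketch, reusing the arrays defined earlier in this problem):

import numpy as np

admissions_data = np.array([550, 1550, 400])
model = np.array([0.25, 0.6, 0.15])

admissions_proportions = admissions_data / admissions_data.sum()  # [0.22, 0.62, 0.16]
observed_stat = np.abs(admissions_proportions - model).mean()
print(observed_stat)  # ≈ 0.02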


Difficulty: ⭐️⭐️

The average score on this problem was 82%.


Problem 5.5

Now, we want to simulate the test statistic 10,000 times under the assumptions of the null hypothesis. Fill in the blanks below to complete this simulation and calculate the p-value for our hypothesis test. Assume that the variables admissions_data, admissions_proportions, model, and observed_stat are already defined as specified earlier in the question.

simulated_stats = np.array([]) 
for i in np.arange(10000):
    simulated_proportions = as_proportion(np.random.multinomial(__(a)__, __(b)__))
    simulated_stat = __(c)__
    simulated_stats = np.append(simulated_stats, simulated_stat)

p_value = __(d)__

What goes in blank (a)? What goes in blank (b)? What goes in blank (c)? What goes in blank (d)?

Answer: (a) admissions_data.sum() (b) model (c) np.abs(simulated_proportions - model).mean() (d) np.count_nonzero(simulated_stats >= observed_stat) / 10000

Recall, in np.random.multinomial(n, [p_1, ..., p_k]), n is the number of experiments and [p_1, ..., p_k] is a sequence of probabilities. The method returns an array of length k in which the ith element counts the number of occurrences of the ith event, where the probability of the ith event is p_i.

We want our simulated sample to have the same size as admissions_data, so we use admissions_data.sum() (the total number of tickets sold) in (a).

Since our null hypothesis is based on model, we simulate draws from the distribution in model, so we use model in (b).

In (c), we compute the mean of the absolute differences in proportions. np.abs(simulated_proportions - model) gives us a series of absolute differences, and .mean() computes the mean of the absolute differences.

In (d), we calculate the p-value. Recall, the p-value is the chance, under the null hypothesis, that the test statistic is equal to the value that was observed in the data or is even further in the direction of the alternative. np.count_nonzero(simulated_stats >= observed_stat) gives us the number of simulated statistics greater than or equal to observed_stat across the 10,000 simulations, so we divide it by 10000 to get the proportion of simulated statistics at least as large as observed_stat; this proportion is the p-value.
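Putting the four blanks together, here is a complete, runnable version of the simulation (a sketch; it assumes as_proportion from Problem 5.1):

import numpy as np

admissions_data = np.array([550, 1550, 400])
model = np.array([0.25, 0.6, 0.15])

def as_proportion(data):
    return data / data.sum()

observed_stat = np.abs(as_proportion(admissions_data) - model).mean()

simulated_stats = np.array([])
for i in np.arange(10000):
    # (a), (b): draw a sample of the same total size as our data, under the model.
    simulated_proportions = as_proportion(np.random.multinomial(admissions_data.sum(), model))
    # (c): mean of the absolute differences in proportions.
    simulated_stat = np.abs(simulated_proportions - model).mean()
    simulated_stats = np.append(simulated_stats, simulated_stat)

# (d): proportion of simulated statistics at least as large as the observed one.
p_value = np.count_nonzero(simulated_stats >= observed_stat) / 10000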


Difficulty: ⭐️⭐️

The average score on this problem was 79%.


Problem 5.6

True or False: the p-value represents the probability that the null hypothesis is true.

Answer: False

Recall, the p-value is the chance, under the null hypothesis, that the test statistic is equal to the value that was observed in the data or is even further in the direction of the alternative. It measures the strength of the evidence against the null hypothesis, which is different from “the probability that the null hypothesis is true.”


Difficulty: ⭐️⭐️⭐️

The average score on this problem was 64%.


Problem 5.7

The new statistic that we used for this hypothesis test, the mean of the absolute differences in proportions, is in fact closely related to the total variation distance. Given two arrays of length three, array_1 and array_2, suppose we compute the mean of the absolute differences in proportions between array_1 and array_2 and store the result as madp. What value would we have to multiply madp by to obtain the total variation distance between array_1 and array_2? Give your answer as a number rounded to three decimal places.

Answer: 1.5

Recall, the total variation distance (TVD) is the sum of the absolute differences in proportions, divided by 2. The mean of the absolute differences in proportions is that same sum divided by the number of groups (which is 3). Thus, to get the TVD, we multiply our statistic by 3 to recover the sum of the absolute differences in proportions, and then divide by 2, per the definition of the TVD. This gives \text{madp} \cdot 3 / 2 = \text{madp} \cdot 1.5.
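A quick numerical check of this relationship (a sketch, using the proportions from earlier parts of this problem):

import numpy as np

array_1 = np.array([0.22, 0.62, 0.16])  # observed proportions
array_2 = np.array([0.25, 0.6, 0.15])   # model

madp = np.abs(array_1 - array_2).mean()
tvd = np.abs(array_1 - array_2).sum() / 2
print(np.isclose(tvd, madp * 1.5))  # True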


Difficulty: ⭐️⭐️⭐️

The average score on this problem was 65%.



Problem 6

You survey 100 DSC majors and 140 CSE majors to ask them which video streaming service they use most. The resulting distributions are given in the table below. Note that each column sums to 1.

Service              DSC Majors   CSE Majors
Netflix              0.4          0.35
Hulu                 0.25         0.2
Disney+              0.1          0.1
Amazon Prime Video   0.15         0.3
Other                0.1          0.05

For example, 20% of CSE Majors said that Hulu is their most used video streaming service. Note that if a student doesn’t use video streaming services, their response is counted as Other.


Problem 6.1

What is the total variation distance (TVD) between the distribution for DSC majors and the distribution for CSE majors? Give your answer as an exact decimal.

Answer: 0.15

The TVD is the sum of the absolute differences in proportions, divided by 2:

\frac{|0.4-0.35| + |0.25-0.2| + |0.1-0.1| + |0.15-0.3| + |0.1-0.05|}{2} = \frac{0.05+0.05+0+0.15+0.05}{2} = 0.15
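In code, this is (a sketch, with the two columns of the table stored as arrays):

import numpy as np

dsc = np.array([0.4, 0.25, 0.1, 0.15, 0.1])
cse = np.array([0.35, 0.2, 0.1, 0.3, 0.05])

tvd = np.abs(dsc - cse).sum() / 2
print(round(tvd, 2))  # 0.15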


Difficulty: ⭐️⭐️

The average score on this problem was 89%.


Problem 6.2

Suppose we only break down video streaming services into four categories: Netflix, Hulu, Disney+, and Other (which now includes Amazon Prime Video). Now we recalculate the TVD between the two distributions. How does the TVD now compare to your answer to part (a)?

Answer: less than (a)

When Amazon Prime Video is merged into Other, its difference of |0.15 - 0.3| = 0.15 and Other’s difference of |0.1 - 0.05| = 0.05 are replaced by a single difference of |0.25 - 0.35| = 0.10, because the two differences point in opposite directions and partially cancel. The new TVD is \frac{0.05+0.05+0+0.10}{2} = 0.10, which is less than 0.15.


Difficulty: ⭐️

The average score on this problem was 93%.



Problem 7

The DataFrame bikes contains a sample of 500 bikes for sale locally. Columns are:

You want to know if there is a significant difference in the sale prices of "road" and "hybrid" bikes using a permutation test. The hypotheses are:


Problem 7.1

Using the bikes DataFrame and the difference in group means (in the order "road" minus "hybrid") as your test statistic, fill in the blanks so the code below generates 10,000 simulated statistics for the permutation test.

def find_diff(df):
    group_means = df.groupby("shuffled").mean().get("price")
    return group_means.loc["road"] - group_means.loc["hybrid"]

some_bikes = __(x)__
diffs = np.array([])
for i in np.arange(10000):
    shuffled_df = some_bikes.assign(shuffled = __(y)__)  
    diffs = np.append(diffs, find_diff(shuffled_df))

Answer:

(x): bikes[(bikes.get("style") == "road") | (bikes.get("style") == "hybrid")]
(y): np.random.permutation(some_bikes.get("style"))

In (x), we keep only the "road" and "hybrid" bikes, since those are the two groups whose prices we are comparing. In (y), we shuffle the "style" labels so that, under the null hypothesis, prices are randomly reassigned to the two groups.
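Putting the pieces together, the completed simulation looks like this (a sketch; it assumes the bikes DataFrame and the find_diff function defined above):

import numpy as np

# (x): keep only the two groups being compared.
some_bikes = bikes[(bikes.get("style") == "road") | (bikes.get("style") == "hybrid")]

diffs = np.array([])
for i in np.arange(10000):
    # (y): shuffle the style labels so prices are randomly assigned to groups.
    shuffled_df = some_bikes.assign(shuffled=np.random.permutation(some_bikes.get("style")))
    diffs = np.append(diffs, find_diff(shuffled_df))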


Difficulty: ⭐️⭐️⭐️

The average score on this problem was 54%.


Problem 7.2

Do large values of the observed statistic make us lean towards the null or alternative hypothesis?

Answer: Null Hypothesis


Difficulty: ⭐️⭐️⭐️

The average score on this problem was 63%.


Problem 7.3

Suppose the p-value for this test evaluates to 0.04. What can you conclude based on this? Select all that apply.

Answer: Fail to reject the null hypothesis at a significance level of 0.01, Reject the null hypothesis at a significance level of 0.05.

Since 0.04 > 0.01, the result is not statistically significant at the 0.01 level, but since 0.04 < 0.05, it is statistically significant at the 0.05 level.


Difficulty: ⭐️⭐️⭐️⭐️

The average score on this problem was 48%.



Problem 8

For this question and the next several problems, we will use data from the 2021 Women’s National Basketball Association (WNBA) season. In basketball, players score points by shooting the ball into a hoop. The team that scores the most points wins the game.

Kelsey Plum, a WNBA player, attended La Jolla Country Day School, which is adjacent to UCSD’s campus. Her current team is the Las Vegas Aces (three-letter code 'LVA'). In 2021, the Las Vegas Aces played 31 games, and Kelsey Plum played in all 31.

The DataFrame plum contains her stats for all games the Las Vegas Aces played in 2021. The first few rows of plum are shown below (though the full DataFrame has 31 rows, not 5):

Each row in plum corresponds to a single game. For each game, we have:

Consider the definition of the function diff_in_group_means:

def diff_in_group_means(df, group_col, num_col):
    # Mean of num_col within each group, as a Series indexed by the values of group_col.
    s = df.groupby(group_col).mean().get(num_col)
    # Mean for the False group minus mean for the True group.
    return s.loc[False] - s.loc[True]


Problem 8.1

It turns out that Kelsey Plum averages 0.61 more assists in games that she wins (“winning games”) than in games that she loses (“losing games”). Fill in the blanks below so that observed_diff evaluates to -0.61.

observed_diff = diff_in_group_means(plum, __(a)__, __(b)__)
  1. What goes in blank (a)?

  2. What goes in blank (b)?

Answers: 'Won', 'AST'

To compute the number of assists Kelsey Plum averages in winning and losing games, we need to group by 'Won'. Once doing so, and using the .mean() aggregation method, we need to access elements in the 'AST' column.

The second argument to diff_in_group_means, group_col, is the column we’re grouping by, and so blank (a) must be filled by 'Won'. Then, the third argument, num_col, must be 'AST'.

Note that after extracting the Series containing the average number of assists in wins and losses, we are returning the value with the index False (“loss”) minus the value with the index True (“win”). So, throughout this problem, keep in mind that we are computing “losses minus wins”. Since our observation was that she averaged 0.61 more assists in wins than in losses, it makes sense that diff_in_group_means(plum, 'Won', 'AST') is -0.61 (rather than +0.61).
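To see the sign convention concretely, here is a hypothetical toy example (toy is made-up data, not from plum):

import babypandas as bpd  # pandas behaves the same way here

toy = bpd.DataFrame().assign(Won=[True, True, False], AST=[5, 7, 4])
# Mean AST in losses (4) minus mean AST in wins (6):
print(diff_in_group_means(toy, 'Won', 'AST'))  # -2.0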


Difficulty: ⭐️

The average score on this problem was 94%.


Problem 8.2

After observing that Kelsey Plum averages more assists in winning games than in losing games, we become interested in conducting a permutation test for the following hypotheses:

  • Null Hypothesis: Kelsey Plum’s assist counts in winning games and in losing games come from the same distribution.
  • Alternative Hypothesis: Kelsey Plum’s assist counts in winning games are higher on average than her assist counts in losing games.

To conduct our permutation test, we place the following code in a for-loop.


won = plum.get('Won')
ast = plum.get('AST')
shuffled = plum.assign(Won_shuffled=np.random.permutation(won)) \
               .assign(AST_shuffled=np.random.permutation(ast))

Which of the following options does not compute a valid simulated test statistic for this permutation test?

Answer: diff_in_group_means(shuffled, 'Won', 'AST')

Since shuffled keeps the original 'Won' and 'AST' columns unchanged, diff_in_group_means(shuffled, 'Won', 'AST') simply recomputes the observed test statistic, which is -0.61. There is no randomness involved; each time we run the line diff_in_group_means(shuffled, 'Won', 'AST') we will see the same result, so this cannot be used for simulation.

To perform a permutation test here, we need to simulate under the null by randomly assigning assist counts to groups; here, the groups are “win” and “loss”.

  • Option 2: Here, assist counts are shuffled and the group names are kept in the same order. The end result is a random pairing of assists to groups.
  • Option 3: Here, the group names are shuffled and the assist counts are kept in the same order. The end result is a random pairing of assist counts to groups.
  • Option 4: Here, both the group names and assist counts are shuffled, but the end result is still the same as in the previous two options.

As such, Options 2 through 4 are all valid, and Option 1 is the only invalid one.


Difficulty: ⭐️⭐️⭐️

The average score on this problem was 68%.


Problem 8.3

Suppose we generate 10,000 simulated test statistics, using one of the valid options from the previous part. The empirical distribution of test statistics, with a red line at observed_diff, is shown below.

Roughly one-quarter of the area of the histogram above is to the left of the red line. What is the correct interpretation of this result?

Answer: Under the assumption that Kelsey Plum’s number of assists in winning games and in losing games come from the same distribution, and that she wins 22 of the 31 games she plays, the chance of her averaging at least 0.61 more assists in wins than losses is roughly a quarter. (Option 3)

First, we should note that the area to the left of the red line (a quarter) is the p-value of our hypothesis test. Generally, the p-value is the probability of observing an outcome as extreme as, or more extreme than, the observed outcome, under the assumption that the null hypothesis is true. The direction to look in depends on the alternative hypothesis; here, since our alternative hypothesis is that the number of assists Kelsey Plum makes in winning games is higher on average than in losing games, a “more extreme” outcome is one where the assists in winning games are even higher relative to losing games, i.e. where \text{(assists in wins)} - \text{(assists in losses)} is more positive, or equivalently, where \text{(assists in losses)} - \text{(assists in wins)} is more negative. As mentioned in the solution to the first subpart, our test statistic is \text{(assists in losses)} - \text{(assists in wins)}, so a more extreme outcome is one where this statistic is even more negative, i.e. to the left of the observed statistic.

Let’s first rule out the first two options.

  • Option 1: This option states that the probability that the null hypothesis (the number of assists she makes in winning and losing games comes from the same distribution) is true is roughly a quarter. However, the p-value is not the probability that the null hypothesis is true.
  • Option 2: The significance level is the formal name for the p-value “cutoff” that we specify in our hypothesis test. There is no cutoff mentioned in the problem. The observed significance level is another name for the p-value, but Option 2 did not contain the word observed.

Now, the only difference between Options 3 and 4 is the inclusion of “at least” in Option 3. Remember, to compute a p-value we must compute the probability of observing something as or more extreme than the observed, under the null. The “or more” corresponds to “at least” in Option 3. As such, Option 3 is the correct choice.
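In code, this left-tail area would be computed as follows (a sketch; simulated_diffs is a hypothetical array holding the 10,000 simulated statistics):

import numpy as np

# Proportion of simulated statistics at or to the left of the observed one.
p_value = np.count_nonzero(simulated_diffs <= observed_diff) / 10000
# Here, this evaluates to roughly 0.25.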


Difficulty: ⭐️⭐️⭐️

The average score on this problem was 70%.


