If σ_{1} and σ_{2} were known, then \( \overline{x}_{1} - \overline{x}_{2} \) would have a Normal distribution.
What if we do not know σ_{1} and σ_{2}?
Replace σ_{1} with s_{1} and σ_{2} with s_{2}, the sample standard deviations for the two samples.
That is, use s_{1} and s_{2} to estimate σ_{1} and σ_{2}.
The result is the two-sample t statistic, which has approximately a t distribution.
What do we substitute for µ_{1} – µ_{2} in the test statistic for testing H_{0}: µ_{1} = µ_{2}?
What do we substitute for µ_{1} – µ_{2} in a confidence interval?
How do we find the P-value for \(t = \frac{ \overline{x} _{1} - \overline{x} _{2} }{\sqrt{ \frac{ s _{1} ^{2} }{ n_{1} } + \frac{ s _{2} ^{2} }{ n_{2} } } } \) with df = the smaller of (n_{1} – 1, n_{2} – 1)?
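As a minimal sketch (not from the source), the statistic above and its conservative degrees of freedom can be computed with only Python's standard library. The group names and sample data below are hypothetical, chosen purely for illustration.

```python
from statistics import mean, stdev

def two_sample_t(x, y):
    """Two-sample t statistic for H0: mu1 = mu2 (so mu1 - mu2 = 0),
    with the conservative df = smaller of (n1 - 1, n2 - 1)."""
    n1, n2 = len(x), len(y)
    s1, s2 = stdev(x), stdev(y)              # sample standard deviations
    se = (s1**2 / n1 + s2**2 / n2) ** 0.5    # standard error of x̄1 - x̄2
    t = (mean(x) - mean(y)) / se             # 0 substituted for mu1 - mu2
    df = min(n1 - 1, n2 - 1)                 # conservative degrees of freedom
    return t, df

# hypothetical samples
group1 = [23.1, 25.4, 22.8, 26.0, 24.3]
group2 = [20.2, 21.5, 19.8, 22.1, 20.9, 21.3]
t, df = two_sample_t(group1, group2)
```

The P-value would then come from a t table (or software) using this `t` and `df`.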
How do we use the t table with df = the smaller of (n_{1} – 1, n_{2} – 1)?
What if n_{1} + n_{2} < 40?
When n_{1} + n_{2} ≥ 40, we can apply the Central Limit Theorem: the two-sample t procedures can be used even for clearly skewed distributions when n_{1} + n_{2} ≥ 40.
How do we carry out a two-sample t test for means?
For the conservative two-sample t procedure, degrees of freedom are the smaller of (n_{1} – 1) and (n_{2} – 1). What are the degrees of freedom for this example?
If the P-value < 0.05 and α = 0.05, should we reject H_{0}?
The confidence interval estimates µ_{1} – µ_{2}. What is this parameter in context?
The interval gives a range of plausible values for µ_{1} – µ_{2}. On the basis of this interval, can we say that µ_{1} – µ_{2} ≠ 0?
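As a sketch of the confidence-interval question, the code below builds a conservative-df interval for µ_{1} – µ_{2} and checks whether it excludes 0. The data are hypothetical, and the critical value t* = 2.776 is the standard t-table entry for 95% confidence with df = 4 (it would change with the confidence level and df).

```python
from statistics import mean, stdev

def two_sample_ci(x, y, t_star):
    """Confidence interval for mu1 - mu2: (x̄1 - x̄2) ± t* · SE.
    t_star must come from a t table with df = smaller of (n1 - 1, n2 - 1)."""
    n1, n2 = len(x), len(y)
    se = (stdev(x)**2 / n1 + stdev(y)**2 / n2) ** 0.5
    d = mean(x) - mean(y)
    return d - t_star * se, d + t_star * se

# hypothetical samples (df = min(5 - 1, 6 - 1) = 4)
group1 = [23.1, 25.4, 22.8, 26.0, 24.3]
group2 = [20.2, 21.5, 19.8, 22.1, 20.9, 21.3]
lo, hi = two_sample_ci(group1, group2, t_star=2.776)  # t* for 95%, df = 4

# if the whole interval is above or below 0, 0 is not a plausible
# value, and we may conclude mu1 - mu2 != 0 at that confidence level
excludes_zero = lo > 0 or hi < 0
```

This mirrors the equivalence between a 95% confidence interval and a two-sided test at α = 0.05: rejecting H_{0}: µ_{1} – µ_{2} = 0 corresponds to 0 falling outside the interval.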