Statistics deals with the organization, description, and analysis of quantitative data (Upton & Cook, 1996). The term can also refer to the indices derived from data using statistical approaches. An index that describes a sample is called a statistic, whereas the same index describing a population is called a parameter. The field of statistics therefore entails various concepts, some of which include the following:
Random Sampling
Random sampling is a sampling procedure that allows for the identification of a representative sample. Under this procedure, every member of the targeted population has an equal chance of being selected, irrespective of the sample's size (Upton & Cook, 1996). Random sampling therefore allows generalization from a sample to a large population with a statistically determinable margin of error, which in turn facilitates the use of inferential statistics.
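As a minimal illustration of the idea (the population values and sample size below are invented for demonstration, not drawn from the source), the following Python sketch draws a simple random sample from a small population.

```python
import random

# Hypothetical population of 10 household incomes (illustrative values only)
population = [32, 45, 51, 28, 60, 39, 47, 55, 41, 36]

# Draw a simple random sample of 4 elements; every element has an
# equal chance of being included, which is what makes the sample random
sample = random.sample(population, k=4)
print(sample)
```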
Probability Distribution
This is the mathematical characterization of a random phenomenon, given in the form of probabilities. The term may therefore apply to surveys or experiments. A probability distribution is defined over all possible outcomes of the particular random phenomenon under consideration.
Continuous probability distribution
This is one form of probability distribution, characterized by a cumulative distribution function that is continuous.
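For instance, a continuous uniform distribution on the interval from 0 to 1 (an assumed example, not taken from the source) has the cumulative distribution function F(x) = x for 0 ≤ x ≤ 1, with F(x) = 0 below the interval and F(x) = 1 above it. Because this function has no jumps, the distribution is continuous.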
Expected Value
In statistics, this refers to the long-run average value of a random variable, that is, the average that would be obtained if a particular experiment were repeated many times.
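For a discrete random variable X, this can be written as E[X] = Σ x · P(X = x), the sum of each possible value weighted by its probability. As an illustrative example (not taken from the source), a fair six-sided die has expected value (1 + 2 + 3 + 4 + 5 + 6) / 6 = 3.5, even though 3.5 can never be rolled on a single throw.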
Normal Distribution
This refers to an arrangement of a data set whereby most values are clustered in the middle, while the remaining values fall off symmetrically toward either extreme. The distribution can be represented graphically by a bell-shaped curve. In this distribution, the mode, mean, and median are all the same.
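For reference (this formula is standard and is not given in the source), the normal distribution with mean μ and standard deviation σ has the probability density function f(x) = (1 / (σ√(2π))) · exp(−(x − μ)² / (2σ²)), which produces the symmetric bell shape described above.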
Type I Error
In statistics, this error is committed when a true null hypothesis is rejected; that is, an effect is detected where none exists. For instance, in a study on pregnancy, the null hypothesis could be, "There is no difference in the likelihood of pregnancy between light-skinned women and dark-skinned women." Since this null hypothesis is true, rejecting it results in a Type I error.
Type II Error
This error occurs when a false null hypothesis is retained. Drawing on the study above, the null hypothesis could be, "There is no difference in the likelihood of pregnancy between women taking contraceptives and women who do not take contraceptives." This null hypothesis is false. Therefore, if it is not rejected, a Type II error is committed.
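In the conventional notation (not spelled out in the source), the two error rates are written as α = P(reject H0 | H0 is true) for a Type I error and β = P(fail to reject H0 | H0 is false) for a Type II error; the quantity 1 − β is the power of the test.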
t Variable/Score
In statistics, this refers to the ratio of an estimated parameter's departure from its notional (hypothesized) value to its standard error. It is vital in the testing of hypotheses.
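Written out (the notation here is the conventional one rather than the source's), the score is t = (estimate − hypothesized value) / standard error. For example, a sample mean x̄ tested against a hypothesized mean μ0 gives t = (x̄ − μ0) / (s / √n), where s is the sample standard deviation and n is the sample size.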
Null Hypothesis
Also referred to as a statistical hypothesis, a null hypothesis states that no difference or relationship exists between two variables. Its implication is that any observed difference or relationship is due to error or chance.
Research Hypothesis
This is also termed a non-directional alternative hypothesis. A research hypothesis acknowledges the existence of a difference or relationship, but the direction of the difference or relationship is not specified. An example would be, "There is a difference in the occurrence of pregnancies among women aged below twenty years and those aged above twenty years."
Alternative Hypothesis
This hypothesis states that a difference or relationship exists between two variables. The alternative hypothesis can also be directional or non-directional.
T-test
This is a special case of the Analysis of Variance (ANOVA) used to test whether a significant difference exists between two means. The two means must have been derived from two groups or samples, and the test is conducted at a particular level of probability (significance).
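As a minimal sketch of the procedure (the data here are invented for illustration and are not from the source), the Python snippet below compares the means of two small samples using an independent-samples t-test from SciPy.

```python
from scipy import stats

# Two hypothetical samples (made-up values for illustration)
group_a = [5.1, 4.9, 6.2, 5.5, 5.8]
group_b = [6.5, 7.1, 6.8, 7.4, 6.9]

# Independent two-sample t-test: tests whether the two group means differ
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# A p-value below the chosen significance level (e.g., 0.05) would suggest
# that the difference between the two means is statistically significant
```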
Chi-square distribution
This refers to the distribution of the sum of the squares of k independent standard normal random variables, where k is the number of degrees of freedom.
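In symbols (using the conventional notation rather than the source's), if Z1, Z2, …, Zk are independent standard normal random variables, then the sum Z1² + Z2² + … + Zk² follows a chi-square distribution with k degrees of freedom, often written χ²(k).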
The normal curve can also be referred to as a normal distribution. It is symmetrical and has one central peak that coincides with the mean of the data in question. The normal curve is bell-shaped, and the graph falls off evenly on both sides of the mean; thus, half (50%) of the distribution lies to the left of the mean and the other half (50%) to its right. The standard deviation controls the spread of a normal distribution: a smaller standard deviation implies that the data are more concentrated, and vice versa (Upton & Cook, 1996). The data are therefore expected to be denser within one standard deviation of the mean than out toward three standard deviations. Specifically, 68% of the distribution falls within one standard deviation of the mean, 95% within two standard deviations, and 99.7% within three standard deviations. These percentages are based on the "empirical rule" (Upton & Cook, 1996).
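As a quick, self-contained check of these percentages (the code is illustrative and not part of the source), the standard normal cumulative distribution function from SciPy can be used to compute the probability mass within one, two, and three standard deviations of the mean.

```python
from scipy.stats import norm

# Probability that a standard normal variable falls within k standard
# deviations of the mean: P(-k < Z < k) = CDF(k) - CDF(-k)
for k in (1, 2, 3):
    prob = norm.cdf(k) - norm.cdf(-k)
    print(f"within {k} standard deviation(s): {prob:.1%}")

# Expected output: roughly 68.3%, 95.4%, and 99.7%, matching the empirical rule
```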
This concept can be used to explain cases of social deviance in society. Most often, members of society are guided by a particular set of expectations or rules, so it is expected that most people will conform. In this respect, social deviance entails non-conformity to the existing norms, and very few people are likely to break away. In contemporary society, same-sex marriage is regarded as a form of social deviance. In light of the normal distribution, such couples are likely to be found out toward the extremes of the curve, near three standard deviations from the mean. This is justified because they are still few and continue to face rejection in most societies. Conversely, most couples are heterosexual and are likely to be concentrated within one standard deviation of the mean.
References
Upton, G., & Cook, I. (1996). Understanding statistics. Oxford University Press.