Hello friends! Greetings of the day to everyone! Today I am going to address a very pertinent issue in research: validity. Validity is an essential aspect of any research, because measurement errors can affect not only the quality of a study but also its overall findings. The main aim of assessing validity in research is to support replicability and the soundness of the data, indicating that the results are accurate. Today, I will introduce the general term “validity” and then focus on two fundamental concepts of validity: convergent and divergent validity.
The Validity Concept
Truman Lee Kelley formulated the validity concept in his 1927 book “Interpretation of Educational Measurements.” Kelley held that a test is valid if it measures what it is intended to measure. For example, a measure of an individual’s intelligence ought to measure intelligence and not some other variable such as memory. There are various types of validity, including internal, external, face, and construct validity. However, my focus is going to be on the two components of construct validity: convergent and divergent validity.
Cronbach and Meehl introduced construct validity in 1955. Construct validity concerns the degree to which a test captures a particular theoretical concept, and it overlaps with other aspects of validity (Cronbach & Meehl, 1955). Construct validity is not concerned with whether a test can measure a particular variable or trait; instead, it focuses on whether the interpretation of test scores is consistent with the nomological network linking theoretical and observational terms. When carrying out a construct validity test, the attribute being measured must first exist. To determine the construct validity of their variables, researchers must evaluate the convergent and divergent relationships of the measures against the properties, behaviors, and constructs of a specific nomological network. Construct validity is therefore made up of convergent and divergent (discriminant) validity.
Convergent Validity
Convergent validity supports construct validity when tests of similar constructs are highly correlated. If a researcher uses different methodologies, such as a survey and observation, to gather data on related constructs, convergent validity can be used to establish construct validity. The strength of the relationship between the results obtained from the two research approaches determines the extent of convergent validity (Chin & Yao, 2014). Convergence of test scores despite different data-gathering techniques indicates that the researcher is measuring the same construct.
In convergent validity, a researcher typically uses two measures. The correlation between the scores of the two assessment tools, which are considered to measure a similar construct, provides the evidence (Dmitrienko et al., 2007). For convergent validity to exist, the correlation between the two measuring tools must be moderate to high. Convergent validity can also be determined using the multitrait-multimethod matrix (MTMM) approach (Chin & Yao, 2014). The MTMM approach measures the correlations among multiple measurement methods and multiple constructs. Its primary aim is to analyze the same construct using several measuring tools; under the MTMM approach, convergent validity is established when a moderate to high correlation is attained (Furr, 2018; Campbell & Fiske, 1959).
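The idea of convergent validity as a moderate-to-high correlation between two measures of the same construct can be sketched numerically. The following is a minimal illustration on simulated data (the variable names, sample size, and noise levels are all invented for the example):

```python
# Minimal sketch: convergent validity as the correlation between two
# measures assumed to tap the same underlying construct. Simulated data.
import numpy as np

rng = np.random.default_rng(0)
n = 200
construct = rng.normal(size=n)  # latent trait shared by both measures

# Each measure = shared trait + measure-specific error
survey = construct + rng.normal(scale=0.5, size=n)       # e.g., survey scores
observation = construct + rng.normal(scale=0.5, size=n)  # e.g., observer ratings

# Pearson correlation between the two sets of scores
r = np.corrcoef(survey, observation)[0, 1]
print(f"convergent validity coefficient: r = {r:.2f}")
# A moderate-to-high r supports convergent validity.
```

Because both measures share the latent trait, the correlation comes out high; with independent measures it would fall near zero.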
Divergent Validity
Divergent validity supports construct validity when the constructs a researcher is interested in show no relationship to one another. However, to determine construct validity, a researcher must first establish convergent validity before testing for divergent validity (Hubley, 2014). The principal aim of determining divergent validity is to allow a researcher to discriminate between measures of dissimilar constructs. Divergent validity is also referred to as discriminant validity. In divergent validity, the correlations should be near zero or very low, falling below those observed when establishing convergent validity (Dmitrienko et al., 2007).
The correlation between the two measures’ scores can be either positive or negative. However, a high negative correlation on a construct being measured does not by itself indicate that divergent validity exists (Hubley, 2014). A researcher must first obtain the convergent validity coefficient from the same sample before determining divergent validity. Similar measures with a high correlation can therefore be said to show relatively high convergent validity, while different measures with relatively low correlations indicate divergent validity. It is thus evident that, for divergent validity to be established, a researcher must first assess convergent validity; if the correlations obtained are lower than those obtained for convergent validity, divergent validity is evidenced.
Methods of Evaluating Divergent and Convergent Validity
The degree to which measures show convergent and divergent validity is evaluated using three main approaches: sets of correlations, multitrait-multimethod matrices, and quantifying construct validity.
Correlation Sets
The sets-of-correlations approach allows a researcher to explore a wide range of criterion variables when determining convergent and divergent validity. The researcher first computes the correlations between the construct of interest and the criterion variables. Guided by the nomological network of the construct of interest, the researcher then “eyeballs” the correlations, judging the degree to which they match theoretical expectations (Furr, 2018). This procedure relies on the researcher’s judgment. Hill et al. (2004) determined convergent and divergent validity using this procedure. They asked participants to complete the Perfectionism Inventory (PI), which measures eight facets of perfectionism, along with twenty-three criterion variable measures. The researchers presented the correlations between the 23 criterion scales and the PI scales in a correlation matrix containing over 200 correlations, and used their judgment in interpreting these correlations to determine convergent and divergent validity.
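The computation behind this procedure can be sketched briefly: correlate a focal scale with several criterion variables and inspect the pattern. The scale names and data below are invented for illustration, not taken from Hill et al. (2004):

```python
# Hypothetical sketch of the "sets of correlations" approach: correlate a
# focal scale with several criterion variables and eyeball the pattern.
import numpy as np

rng = np.random.default_rng(1)
n = 150
focal = rng.normal(size=n)  # scores on the focal construct's measure

# Invented criterion variables: two theoretically related, one unrelated
criteria = {
    "related_criterion_A": 0.7 * focal + rng.normal(scale=0.7, size=n),
    "related_criterion_B": 0.6 * focal + rng.normal(scale=0.8, size=n),
    "unrelated_criterion": rng.normal(size=n),  # should correlate near zero
}

# One "row" of the correlation matrix: focal scale vs. each criterion
for name, scores in criteria.items():
    r = np.corrcoef(focal, scores)[0, 1]
    print(f"{name}: r = {r:+.2f}")
# The researcher then judges whether the high and low correlations fall
# where the nomological network predicts they should.
```

The judgment step itself remains informal; the code only produces the matrix of correlations to be eyeballed.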
Multitrait-Multimethod Matrices
Campbell and Fiske formulated the multitrait-multimethod matrix (MTMM) in 1959, building on Cronbach and Meehl’s work (Campbell & Fiske, 1959). To use this approach, researchers must obtain measures of several different traits, each measured using several different methods. The main aim of the MTMM method is to separate the trait variance and the method variance that affect the correlation between two measures. A strong correlation attributable to trait variance means the measures are similar because they measure the same trait. A strong correlation attributable to method variance means the similarity between the measures is produced by using the same measurement method. Conversely, a weak correlation indicates that the constructs being measured lack similarity in trait variance. Using the MTMM strategy, a researcher can distinguish four types of correlations: heterotrait-heteromethod, heterotrait-monomethod, monotrait-heteromethod, and monotrait-monomethod correlations (Furr, 2018). The monotrait-heteromethod correlations represent convergent validity, and they should be larger than the other correlations.
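A tiny MTMM layout can make these correlation types concrete. The sketch below simulates two unrelated traits, each measured by two hypothetical methods, and then computes a monotrait-heteromethod correlation (convergent evidence) and a heterotrait-heteromethod correlation (which should be low); all names and numbers are invented:

```python
# Hypothetical MTMM sketch: two traits x two methods, then two of Campbell
# and Fiske's correlation types computed from the simulated measures.
import numpy as np

rng = np.random.default_rng(2)
n = 300
trait1 = rng.normal(size=n)
trait2 = rng.normal(size=n)  # independent of trait1 by construction

# Each measure = trait signal + measure-specific error
measures = {
    ("trait1", "survey"):      trait1 + rng.normal(scale=0.5, size=n),
    ("trait1", "observation"): trait1 + rng.normal(scale=0.5, size=n),
    ("trait2", "survey"):      trait2 + rng.normal(scale=0.5, size=n),
    ("trait2", "observation"): trait2 + rng.normal(scale=0.5, size=n),
}

def r(a, b):
    return np.corrcoef(measures[a], measures[b])[0, 1]

# Monotrait-heteromethod (same trait, different methods): convergent validity
mono_hetero = r(("trait1", "survey"), ("trait1", "observation"))
# Heterotrait-heteromethod (different traits, different methods): should be low
hetero_hetero = r(("trait1", "survey"), ("trait2", "observation"))

print(f"monotrait-heteromethod:   {mono_hetero:+.2f}")
print(f"heterotrait-heteromethod: {hetero_hetero:+.2f}")
```

In a full MTMM analysis the remaining two correlation types (monotrait-monomethod and heterotrait-monomethod) would also be inspected to gauge method variance.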
Quantifying Construct Validity
The quantifying construct validity methodology is the most recent development in measuring convergent and divergent validity; Westen and Rosenthal (2003) formulated it. In this method, researchers determine the degree of “fit” between the actual set of correlations obtained and the convergent and divergent correlations they had predicted in advance. The primary reason for formulating quantifying construct validity was to provide a more objective and precise quantitative procedure for producing detailed evidence of convergent and divergent validity (Furr, 2018). This distinguishes it from the other two techniques, which rely far less on objectivity and precision. According to Westen and Rosenthal (2003, p. 609), the principal aim of this technique is to answer one question: “Does this measure predict an array of other measures in a way predicted by theory?”

Quantifying construct validity produces two types of results. The first is a “degree of fit” effect size, ranging between -1 and +1, between the correlation pattern predicted by the researchers and the actual correlation pattern (Furr, 2018). Large positive effect sizes indicate that the actual pattern matches the predicted pattern of convergent and divergent validity (Furr, 2018). The second result is a significance test, used to determine whether the degree of fit between predicted and actual correlations could have arisen by chance.

To carry out the quantifying construct validity procedure, a researcher completes three crucial phases. The first phase involves making clear and precise predictions of the convergent and divergent validity correlations (Furr, 2018), each prediction being based on a variable’s expected correlation with the measure of interest.
In the second phase, researchers gather data and compute the actual convergent and divergent validity correlations (Furr, 2018). In the third phase, they determine the degree of fit between the predicted and actual correlations (Furr, 2018). A close fit indicates that the validity of the intended interpretation is good; a weak fit indicates that the validity is poor.
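The three phases above can be sketched in simplified form: correlate the predicted correlation pattern with the observed one and treat the result as the degree-of-fit effect size. This is only an illustration of the idea, not Westen and Rosenthal’s full procedure (which uses contrast weights and also yields a significance test); the numbers are invented, and the Fisher z-transform is one common choice for putting correlations on a comparable scale:

```python
# Simplified, hypothetical sketch of the "degree of fit" idea behind
# quantifying construct validity: how well does the observed correlation
# pattern track the researcher's predicted pattern?
import numpy as np

predicted = np.array([0.60, 0.50, 0.40, 0.00, -0.10])   # theory-based predictions
actual    = np.array([0.55, 0.48, 0.30, 0.05, -0.08])   # observed correlations

# Fisher z-transform puts the correlations on a roughly linear scale
z_pred = np.arctanh(predicted)
z_act = np.arctanh(actual)

# The fit is itself a correlation, so it falls in [-1, +1]
fit = np.corrcoef(z_pred, z_act)[0, 1]
print(f"degree of fit effect size: {fit:.2f}")
# A large positive fit supports the predicted convergent/divergent pattern.
```

A separate significance test, as in phase three, would then ask whether a fit this large could plausibly have arisen by chance.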
Examples of Convergent and Divergent Validity Applications
Convergent Validity
Study construct: Sleep quantity
Consider a study of the relationship between sleep quantity and physical fitness, in this case, the impact of daily exercise on an individual’s sleeping behavior. The dependent variable is sleep quantity, making it the construct of interest. A self-completed survey is used to gather data from the participants every time they wake up in the morning. However, the reliability of this instrument is uncertain, since self-completed measures are prone to bias. To reduce bias, video monitoring is added to observe the participants as they sleep and record their sleeping patterns, so the study now has two data-gathering methodologies and two sets of scores. One is then required to establish the convergent validity between the self-completed survey scores and the video camera scores. A strong relationship between the two sets of scores indicates convergent validity, and thus that the two procedures are both valid measures of the construct of sleep quantity.
Divergent Validity
Construct 1: sleep quality
Construct 2: sleep quantity
In the first study, there was a strong relationship between the scores obtained using the two measurement techniques. Suppose the study now considers two constructs, sleep quality and sleep quantity, and the set of questions used to measure sleep quantity is also used to measure sleep quality. One cannot be sure whether sleep quality and sleep quantity belong to the same construct or to different ones. If they are two different constructs and the study’s methods treat them as if they were the same, the internal validity of the research is reduced by the introduction of a confounding variable. It is therefore necessary to determine whether sleep quality and sleep quantity are distinct constructs. The measurements from the first study are used again, but the self-completed survey now contains separate questions on sleep quality and sleep quantity, and during observation separate scores are recorded for each. First, convergent validity must be confirmed: the relationship between the survey and observation scores for the same construct should be strong. Then, to show that the two constructs are different, the correlations between the sleep quality scores and the sleep quantity scores, for both the survey and the observation, should be very low or absent; this represents divergent validity.
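The sleep example can be run end to end on simulated data: high within-construct correlations across methods (convergent validity) alongside near-zero cross-construct correlations (divergent validity). All scores below are invented for illustration; in a real study they would come from the survey and the video coding:

```python
# Hypothetical sketch of the sleep example: survey and video scores for
# both sleep quantity and sleep quality, with the two constructs simulated
# as independent latent variables.
import numpy as np

rng = np.random.default_rng(3)
n = 250
quantity = rng.normal(size=n)  # latent sleep quantity
quality = rng.normal(size=n)   # latent sleep quality, independent here

survey_quantity = quantity + rng.normal(scale=0.5, size=n)
video_quantity  = quantity + rng.normal(scale=0.5, size=n)
survey_quality  = quality + rng.normal(scale=0.5, size=n)
video_quality   = quality + rng.normal(scale=0.5, size=n)

def r(a, b):
    return np.corrcoef(a, b)[0, 1]

# Convergent validity: same construct, different methods -> strong
print(f"quantity, survey vs video:  {r(survey_quantity, video_quantity):+.2f}")
print(f"quality, survey vs video:   {r(survey_quality, video_quality):+.2f}")
# Divergent validity: different constructs, same method -> near zero
print(f"quantity vs quality, survey: {r(survey_quantity, survey_quality):+.2f}")
```

If the cross-construct correlations came out as high as the within-construct ones, the data would instead suggest the survey questions are tapping a single construct.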
Thank You!
References
Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56(2), 81.
Chin, C.-L., & Yao, G. (2014). Convergent validity. Encyclopedia of Quality of Life and Well-Being Research, 1275-1276. doi: 10.1007/978-94-007-0753-5_573
Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52(4), 281.
Dmitrienko, A., Chuang-Stein, C., & D'Agostino, R. B. (2007). Pharmaceutical statistics using SAS: A practical guide. Cary, NC: SAS Institute.
Furr, R. M. (2018). Psychometrics: An introduction. Thousand Oaks, CA: SAGE Publications.
Hill, R. W., Huelsman, T. J., Furr, R. M., Kibler, J., Vicente, B. B., & Kennedy, C. (2004). A new measure of perfectionism: The Perfectionism Inventory. Journal of Personality Assessment, 82(1), 80-91.
Hubley, A. M. (2014). Discriminant validity. Encyclopedia of Quality of Life and Well-Being Research, 1664-1667. doi: 10.1007/978-94-007-0753-5_751
Westen, D., & Rosenthal, R. (2003). Quantifying construct validity: Two simple measures. Journal of Personality and Social Psychology, 84(3), 608.