by Christos Giannoulis, PhD
Attributes are often measured using multiple variables with different upper and lower limits. For example, we may have five measures of political orientation, each with a different range of values.
Each variable is measured in a different way. The measures have a different number of categories and the low and high scores on each measure are different.
| Measure | Low Score | High Score | Variable Name |
|---|---|---|---|
| Trust in Government | 1 | 10 (high trust) | TRUST |
| Political Efficacy | 0 | 4 (high efficacy) | POLEF |
| Feeling towards President | 0 | 100 (positive feeling) | LEAD |
| Alienation from politics | 8 | 24 (not alienated) | ALIEN |
| Frequency of reading about politics per week | 0 | 20 (frequent) | POLR |
The different scales of the variables present two important problems.
1. Comparing Variables
It is very difficult to compare across these variables. The usual way of comparing across variables is to calculate the mean for each variable and to compare the means.
However, since each of the variables is measured on a different scale these means will be extremely difficult to compare.
For example, the mean on trust in government might be 4 and the mean for alienation might be 10. But because the scales have different lengths and different starting points, comparisons of the means are not meaningful.
Since each question is measured in different units, it is like comparing apples with oranges.
2. Creating Indices
When creating multi-item scales, items that have different lower and upper points will contribute differently to the final multi-item scale score if used in their raw form.
This means that some items will count for more in the computation of a final score. This is usually not what we want.
It is similar to having three pieces of assessment to arrive at a final mark for a subject at a university. One piece of work might be marked out of 80, another out of 10 and another out of 20.
If we simply added up raw scores, then the piece of work marked out of 80 would count for much more in the final mark.
If this is what was desired then all is well, but if each piece of work was meant to contribute equally to the final mark then we would need to adjust the items to equalize the contribution.
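The arithmetic behind this analogy can be sketched in a few lines; the piece names and scores below are invented for the example:

```python
# Toy illustration: three assessment pieces marked out of 80, 10, and 20
# (the names and scores are invented, not from the article).
raw_scores = {"essay": 40, "quiz": 5, "exam": 10}   # half marks on each piece
max_marks = {"essay": 80, "quiz": 10, "exam": 20}

# Raw sum: the essay dominates simply because its scale is longest.
raw_total = sum(raw_scores.values())                # 55 of a possible 110

# Equalized: convert each piece to a 0-100 scale before combining,
# so every piece contributes equally to the final mark.
percentages = {k: raw_scores[k] / max_marks[k] * 100 for k in raw_scores}
final_mark = sum(percentages.values()) / len(percentages)

print(raw_total)   # 55
print(final_mark)  # 50.0
```

Although the student scored half marks on every piece, the raw total is driven mostly by the essay; equalizing first gives each piece the same weight.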
How to make Variables more comparable
The solution to these problems is to convert the scales into a common measurement scale so that they can be compared. This can be achieved in two ways:
- Converting each scale to have the same lower and upper levels
- Standardizing the variables and expressing scores in standard deviation units (z-scores)
You are probably familiar with the latter option, so let me describe the former.
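For completeness, here is a minimal sketch of the z-score option using only the standard library; the sample TRUST values are hypothetical:

```python
# Minimal z-score sketch: express each value in standard deviation
# units from the sample mean. Uses only the standard library.
from statistics import mean, stdev

def z_scores(values):
    """Return each value's distance from the mean in standard deviation units."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

trust = [4, 6, 2, 8, 5]   # hypothetical TRUST scores on the 1-10 scale
standardized = z_scores(trust)
print([round(z, 2) for z in standardized])
```

After standardization every variable has a mean of 0 and a standard deviation of 1, which is what makes scores comparable across measures.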
How to Convert Variables to have the same Lower and Upper Limits
This solution involves adjusting the scale of each variable, “stretching” some measures and “squeezing” others. For any numerical scale the conversion is achieved using this formula:

Y = ((X − Xmin) / Xrange) × n

where Y is the adjusted variable, X is the original variable, Xmin is the minimum possible score on the original variable, Xrange is the difference between the maximum possible score and the minimum possible score on the original variable, and n is the upper limit of the rescaled variable.
This conversion can easily be accomplished with a variable transformation in any statistical software.
For example, let’s suppose we want all variables converted to a scale of 0-10. Let us convert POLEF (table above). From the table we see that the minimum possible score is 0 and the maximum possible score is 4. The range is therefore 4. Our formula is thus:

POLEFADJ = ((POLEF − 0) / 4) × 10
An individual with a score of 4 on POLEF would score (4/4) x 10 = 10 on POLEFADJ; a score of 0 on POLEF would convert to (0/4) x 10 = 0 on POLEFADJ, while a score of 2 on POLEF would become a POLEFADJ score of (2/4) x 10 = 5.
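The conversion formula translates directly into a small function; a sketch (the name `rescale` is my own, not from the article):

```python
def rescale(x, x_min, x_max, n=10):
    """Map x from its original scale [x_min, x_max] onto [0, n]."""
    return (x - x_min) / (x_max - x_min) * n

# POLEF runs from 0 to 4; convert to the 0-10 POLEFADJ scale.
print(rescale(4, 0, 4))   # 10.0
print(rescale(0, 0, 4))   # 0.0
print(rescale(2, 0, 4))   # 5.0
```

The same function handles scales that do not start at zero: for ALIEN (8-24), `rescale(8, 8, 24)` gives 0 and `rescale(24, 8, 24)` gives 10.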
Having converted all five variables to a range of 0-10, it becomes much easier to compare scores and averages across them… It may be possible to compare apples and oranges after all!
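As a closing sketch, the same conversion applied to all five measures at once, using the scale limits from the table above; the respondent's raw scores are invented for illustration:

```python
# Scale limits (min possible, max possible) taken from the table above.
limits = {
    "TRUST": (1, 10),
    "POLEF": (0, 4),
    "LEAD": (0, 100),
    "ALIEN": (8, 24),
    "POLR": (0, 20),
}

def rescale(x, x_min, x_max, n=10):
    """Map x from [x_min, x_max] onto [0, n]."""
    return (x - x_min) / (x_max - x_min) * n

# One hypothetical respondent's raw scores on the original scales.
raw = {"TRUST": 4, "POLEF": 2, "LEAD": 60, "ALIEN": 16, "POLR": 5}

adjusted = {name: rescale(score, *limits[name]) for name, score in raw.items()}
print({name: round(v, 2) for name, v in adjusted.items()})
```

Once every score sits on the same 0-10 scale, means and individual scores can be compared directly across the five measures.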