Sunday, 1 December 2013

A method for calculating item misplacement

I have been struggling to think of a way to calculate and compare the difference in the ordering of items between different Mokken scales, e.g. between different samples or between small and large samples from the same population. The problem is that you need a reference point, and that should be the definitive ordering of the items: something we can only know in theory because, in practice - unless we sample the whole population - we cannot know it.  So, what about a very large sample - in the region of 10,000 - that we can assume gives the ordering in the real population?

So, we have the reference point; how do we compare the ordering in sub-samples?  Counting the number of times an item is not in its usual place in the reference ordering seems sensible, but the problem is that when one item is out of place there must be at least two items out of place, as another item will have been moved in the hierarchy. The misplacement of any single item means that another is automatically misplaced, but we do not know which one was misplaced first. The solution is simple: count the total number of misplaced items, m, and subtract 1; the misplacement score - compared with a reference sample - is therefore m - 1. The '- 1' accounts for the item that has to move for any misplacement to occur. In this way it should be possible to compare different samples from the same population, or sub-samples of a very large sample, and to study, for example, the effect of taking different samples, a series of samples of the same size, or a series of samples of different sizes.

Admittedly, the point I am making is quite minor.  Whether or not you apply the '- 1' correction, any comparisons will be the same. However, while the minimum misplacement score is clearly zero, it is more sensible to have the next smallest value as 1, and the sequence then continues up to n - 1, where n is the number of items in the scale.  Therefore, for a scale of n items the maximum misplacement score is n - 1, because if all the items are displaced then m = n; again, this accounts for the fact that whenever items are misplaced, at least one of those misplacements had to occur automatically.
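The calculation above can be sketched in a few lines of code. This is a minimal illustration only: the function name and the toy orderings are my own, not part of any published method.

```python
def misplacement_score(reference, sample):
    """Misplacement score: items out of their reference position, minus 1.

    Any single displacement moves at least two items, so the raw count m
    is reduced by 1; an ordering identical to the reference scores 0, and
    the maximum for n items is n - 1 (when every item is out of place).
    """
    m = sum(1 for r, s in zip(reference, sample) if r != s)
    return max(m - 1, 0)

reference = ["A", "B", "C", "D", "E"]
swapped = ["A", "C", "B", "D", "E"]   # one swap: m = 2, score = 1
rotated = ["B", "C", "D", "E", "A"]   # all 5 items moved: score = n - 1 = 4

print(misplacement_score(reference, reference))  # 0
print(misplacement_score(reference, swapped))    # 1
print(misplacement_score(reference, rotated))    # 4
```

Note that a perfectly matching ordering and an ordering with a single swap differ by only one unit, which is exactly the behaviour the '- 1' correction is meant to give.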

Sunday, 13 January 2013

R syntax for Mokken Scaling Analysis

Installing R
1. Go to the CRAN website (cran.r-project.org).
2. Click on Windows.
3. Click on base.
4. Click on Download R for Windows (the most recent version).
5. Click on Install R for the first time.
6. Click on Download R 3.0.1 for Windows.
7. Save R-3.0.1-win.exe to your computer.
8. Run R-3.0.1-win.exe from your computer and choose all the default values in the installation wizard.

Installing MSA

1. Open R.
2. In the pull-down menu choose Packages, Install package(s), choose a CRAN mirror near you, and choose the package ‘mokken’. MSA is now installed on your computer and need not be installed again.
3. If that does not work then use the following syntax:

> install.packages("mokken", dependencies=TRUE, repos="")

Ignore any error messages and the mokken package should load; you will know it has loaded correctly if no errors are returned after using the first command below.

Using MSA

Open R and type:

> library(mokken)

Converting an SPSS file for use in R

> library(foreign)

> FileR <- data.frame(read.spss("C:/FileSPSS.sav"))

You may get some errors or warnings at this stage, which may have to be fixed before proceeding; then:

> fix(FileR)

This will show you the data as they appear in R, then:

> save(FileR, file = "C:/FileR.Rdata")

Once you have created an R file it can be loaded again by:

> load("C:/FileR.Rdata")

Generating scales
To partition items in the FileR database into Mokken scales type:

> aisp(FileR)

Scalability coefficients
To produce scalability coefficients for items and the overall scale(s) type:
> coefH(FileR)

Mean item scores
To produce the mean values for all of the items in the scale type:
> apply(FileR,2,mean)

To check monotonicity type:
> summary(check.monotonicity(FileR))

Plotting item step response functions
To plot item step response functions type:
> plot(check.monotonicity(FileR))
NB: if this does not work and you get:
Error in est - qnorm(1 - ...) * se : non-conformable arrays
In addition: Warning messages:
1: In (x - x^2)/n :
  longer object length is not a multiple of shorter object length
this is a problem in R and you should plot without confidence intervals:
> plot(check.monotonicity(FileR), ci = FALSE)

Invariant item ordering
To check invariant item ordering type:
> summary(check.iio(FileR))


To save the results for further use, or to check IIO without removing items that violate it, type:
> iio.results <- check.iio(FileR)
> summary(check.iio(FileR, item.selection = FALSE))

Generating pair plots
To generate pair plots:
> plot(check.iio(FileR))
The confidence intervals can be omitted by:
> plot(check.iio(FileR), ci = FALSE)
To select item pairs, e.g. the 1st, 3rd & 7th pairs:
> plot(check.iio(FileR), item.pairs = c(1, 3, 7))

Saving plots
To save plots to a file (e.g. as a pdf) in, e.g., drive C:\
> NameOfFigure = "FileR.pdf"
> setwd("C:")
> pdf(NameOfFigure)
> plot(check.iio(FileR), ask = FALSE)
> dev.off()

Without confidence intervals
> NameOfFigure = "FileR.pdf"
> setwd("C:")
> pdf(NameOfFigure)
> plot(check.iio(FileR), ci = FALSE, ask = FALSE)
> dev.off()

Additional information about plotting item pairs
# The complete command, where everything is set to default values
plot(check.iio(FileR), ci = TRUE, color.ci = c("orange", "yellow"), alpha.ci = .05, ask = TRUE)

# Because default values can be omitted, the above command equals
plot(check.iio(FileR))
# Without colors
plot(check.iio(FileR), ci = TRUE, color.ci = c("white", "white"), alpha.ci = .05, ask = TRUE)

# No more hitting Enters
plot(check.iio(FileR), ci = TRUE, color.ci = c("white", "white"), alpha.ci = .05, ask = FALSE)

# Only the third item pair (Pair1 = 1,2; Pair2 = 1,3; Pair3 = 1,4)
plot(check.iio(FileR), item.pairs = 3)

To check reliability type:
> check.reliability(FileR)

Selecting items to analyse
To select specific items you need to create a new file as follows, type:
> FileRy <- FileR[ ,c(1,2,3,4)] - this will select items 1, 2, 3 & 4
> FileRy <- FileR[ ,c(1,2,3:10)] - this will select items 1, 2, 3, 4, 5, 6, 7, 8, 9 & 10
In both cases you analyse FileRy

Selecting individuals for analysis
To select specific individuals (rows) you need to create a new file as follows, type:
> FileRx <- FileR[c(1:5), ] - this will select individuals 1, 2, 3, 4 & 5

Removing R files from memory
> rm(list = ls())

Person item fit for polytomous data (with thanks to Jorge Tendeiro)

Load PerFit from R packages

NB: Ncat= number of response categories; Blvl=percentage cutoff level

> library(PerFit)
> load("C:/FileR.Rdata")
> x.Gnormedpoly <- Gnormed.poly(FileR, Ncat)
> plot(x.Gnormedpoly)
> Gnormedpoly.out <- Gnormed.poly(FileR, Ncat)
> Gnormedpoly.cut <- cutoff(Gnormedpoly.out, Blvl=.01)
> flagged.resp(Gnormedpoly.out, Gnormedpoly.cut, scores=FALSE)$PFSscores

For example, for a data file (ItADL1PF) with five response categories:

> library(PerFit)
> load("G:/ItADL1PF.Rdata")
> x.Gnormedpoly <- Gnormed.poly(ItADL1PF, 5)
> plot(x.Gnormedpoly)
> Gnormedpoly.out <- Gnormed.poly(ItADL1PF, Ncat=5)
> Gnormedpoly.cut <- cutoff(Gnormedpoly.out, Blvl=.01)
> flagged.resp(Gnormedpoly.out, Gnormedpoly.cut, scores=FALSE)$PFSscores

When packages won’t load this syntax is useful:

install.packages("", repos=c("", ""))

RW 25 July 2016

Thursday, 21 April 2011

Mokken scaling and invariant item ordering (IIO)

From Guttman to Mokken scaling
My first encounter with scaling came while I was developing the Edinburgh Feeding Evaluation in Dementia (EdFED) Scale, when Ian Atkinson, then of The University of Edinburgh, suggested the use of Guttman scaling to see if the items formed a hierarchy.  The items in the scale - which had to be collapsed into categories - formed a Guttman scale, and this was replicated; Guttman scaling was also used to develop the Caring Dimensions Inventory (CDI-25).

Subsequently, Ian Deary - still at The University of Edinburgh, and who was instrumental in helping me carry out a multivariate analysis of the EdFED and also the CDI - met someone at a conference who suggested we should be using Mokken scaling.  We gathered more EdFED data and, combining this with existing data, I discovered that 6 behavioural items from the EdFED formed a Mokken scale.  The EdFED has since been translated into Chinese and the psychometric properties, including Mokken scaling, replicated in a

Further applications of Mokken scaling

More recently, with several colleagues, I have been engaged in applying Mokken scaling to a range of psychological instruments including: the NEO-FFI; the GHQ-30; the EPI; the Townsend ADL scale; the Oxford Happiness Inventory; the CORE-OM; the DSSI/sAD; and the Religious Involvement Inventory and the Spiritual Well-Being Scale.  However, in the process of publishing the above work, it emerged that my understanding of invariant item ordering (IIO) was incomplete.  The concept had first been drawn to my attention in the process of revising the NEO-FFI paper, when I was directed to a review of IIO by Sijtsma & Junker (1996).  The situation was compounded by the inclusion of a method for estimating IIO in MSP for Windows Version 5.0 using the diagnostics for the double-monotonicity model (DMM) - the non-intersection of item step response functions (ISRFs); but this only applies to dichotomous items, where the ISRFs are the same as the item response functions (IRFs).  My error was pointed out by Rob Meijer and, with Ian Deary, I replied (doi:10.1016/j.paid.2009.11.025).  It is obvious in the last few pages of the MSP for Windows 5.0 manual that the methods for estimating IIO in polytomous items were still being developed.

At around this time, a method for estimating IIO had been developed and was available in the R Project for Statistical Computing (‘R’), specifically the Mokken Scaling Analysis (MSA) in R.  The application of R and the estimation of IIO in polytomously scored items is explained in a landmark paper by Ligtvoet et al (2010) and expounded on further in relation to our recent applications of Mokken scaling by Sijtsma et al (2011).  For anyone in any doubt about what IIO is and how it can be estimated, these papers are obligatory reading.

What is IIO?
According to Ligtvoet et al (2010), IIO is 'An item ordering that is the same for all respondents' and 'the assumption of an IIO is both omnipresent and implicit in the application of many tests, questionnaires, and inventories', but also 'IIO research is new, and experience on how to interpret results has to accumulate as more applications become available.'  Ligtvoet et al (2010), as do Sijtsma et al (2011), show that even though the DMM applies to ISRFs it may not, necessarily, apply to their resulting IRFs.  Using MSA in R, Ligtvoet et al (2010) show how IIO can be estimated in a set of items and also how the accuracy of the IIO can be estimated using Htrans (analogous to Loevinger's coefficient H).  Htrans 'expresses the degree to which the scores of respondents have the same ordering as the item totals': the larger the value of Htrans, the more accurate the IIO, and this accuracy arises from IRFs that are far apart.
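As a toy illustration of the idea (entirely hypothetical data of my own; a real IIO analysis uses the checks in the mokken package), an invariant item ordering means that the items keep the same difficulty ordering in every subgroup of respondents:

```python
# Rows are respondents, columns are items scored 0-4.
# Two hypothetical subgroups: low scorers and high scorers.
low_group = [
    [0, 1, 2],
    [1, 2, 3],
    [0, 2, 3],
]
high_group = [
    [2, 3, 4],
    [1, 3, 4],
    [2, 2, 4],
]

def item_ordering(group):
    """Return item indices sorted by mean score, lowest (hardest) first."""
    n = len(group)
    means = [sum(row[j] for row in group) / n for j in range(len(group[0]))]
    return sorted(range(len(means)), key=lambda j: means[j])

print(item_ordering(low_group))   # [0, 1, 2]
print(item_ordering(high_group))  # [0, 1, 2] -> same ordering in both subgroups
```

Here both subgroups order the items identically, which is what IIO requires; if the orderings had differed, the item ordering would not be invariant across respondents.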

Two points arise:

First, the combination of serendipity and good colleagues, and the will to act on the advice of people you trust, can lead to new discoveries.  The road to Mokken scaling was illuminated by Ian Atkinson and Ian Deary, and the most recent papers have arisen thanks to colleagues - too numerous to mention but all acknowledged through co-authorship in the papers referred to above - willing to share their data and allow secondary analyses.  Rob Meijer's comments on our work and the willingness of L Andries van der Ark to walk me through the use of MSA in R have been instrumental in deepening my understanding of IIO, Mokken scaling and item response theory.  Their generosity in collaborating in a paper on Mokken scaling (under review) with me and several colleagues has been a lesson to me in how science works.

Second, a new and powerful method for investigating the psychometric properties of questionnaires is now available.  Its application to existing questionnaires is providing some interesting insights into old databases.  However, its potential in the development of new questionnaires remains to be explored.