
NINDS CDE Notice of Copyright
PROMIS Fatigue Short Form
Availability
Freely available at PROMIS Fatigue Short Form
Classification
Supplemental: MS, TBI
 
Exploratory: Myalgic encephalomyelitis/Chronic fatigue syndrome (ME/CFS)
Short Description of Instrument
Construct measured: Fatigue frequency, duration, and intensity, and the impact of fatigue on physical, mental, and social activities.
Generic vs. disease specific: Generic.
Means of administration: Self-report.
Intended respondent: Patient.
# of items: 95 in the full fatigue item bank; each short form administers a subset (e.g., the seven-item form).
# of subscales and names of sub-scales: N/A.
# of items per sub-scale: N/A.
Comments/Special Instructions
Scoring: Each question has five response options ranging in value from one to five. To find the total raw score, sum the values of the responses to each question. For example, for the seven-item form, the lowest possible raw score is 7 and the highest possible raw score is 35. A higher PROMIS T-score represents more of the concept being measured. For negatively worded concepts like fatigue, a T-score of 60 is one SD worse than average; by comparison, a fatigue T-score of 40 is one SD better than average. You can upload data to a free computer program called PROMIScore, which will score your data one person at a time or as a group. PROMIScore is particularly useful because it can calculate T-scores even when there are missing responses. The PROMIScore software and user manual are available for download at PROMIScore.
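The raw-score arithmetic described above can be illustrated with a short Python sketch (illustration only; the function name is hypothetical, and converting a raw score to a T-score still requires the published PROMIS conversion tables or PROMIScore):

    # Minimal sketch of the raw-score step described above; it does not
    # replace PROMIScore or the official PROMIS raw-score-to-T-score tables.
    def promis_fatigue_raw_score(responses):
        """Sum the 1-5 response values for a fully completed short form.

        responses: one integer per item, each between 1 and 5.
        For the seven-item form the total ranges from 7 to 35.
        """
        if any(r not in (1, 2, 3, 4, 5) for r in responses):
            raise ValueError("Each response must be an integer from 1 to 5.")
        return sum(responses)

    # Example: a completed seven-item form.
    print(promis_fatigue_raw_score([3, 2, 4, 3, 5, 2, 3]))  # prints 22

Note that this simple sum assumes every item was answered; as stated above, PROMIScore can still calculate T-scores when responses are missing.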
 
Background: The fatigue item bank evaluates a range of self-reported symptoms, from mild subjective feelings of tiredness to an overwhelming, debilitating, and sustained sense of exhaustion that likely decreases one's ability to execute daily activities and function normally in family or social roles. It assesses fatigue over the past seven days.
Rationale/Justification
Strengths/Weaknesses: The strength of the seven-item instrument lies in its focus on item content and its ability to assess the full range of fatigue measured by the fatigue item bank. When selecting a short form, the main difference between options is instrument length; the reliability and precision of short forms within a domain are highly similar. Longer short forms generally offer greater correlation (strength of relationship) with the full item bank, as well as greater precision. When choosing between computerized adaptive testing (CAT) and a short form, it is useful to consider the demands of computer-based assessment and the psychological, physical, and cognitive burden placed on respondents by the number of questions asked. Longer CAT administrations likewise offer greater correlation with the full item bank and greater precision. When evaluating precision, note that not all questions are equally informative; the flexibility of CAT to choose the more informative questions yields greater precision.
 
Psychometric Properties: The validity, reliability and sensitivity to change of PROMIS Fatigue short forms have not been established in multiple sclerosis patient populations.
 
Administration: There are two administration options for assessing fatigue: short forms and CAT. When administering a short form, instruct participants to answer all of the items (i.e., questions or statements) presented. With CAT, participant responses guide the computer's choice of subsequent items from the full item bank (95 items in total). Although items differ across respondents taking CAT, scores are comparable across participants. Some administrators may prefer to ask the same questions of all respondents, or of the same respondent over time, to enable more direct comparability across people or across time points. In these cases, or when paper administration is preferred, a short form is more desirable than CAT.
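As a rough sketch of the adaptive idea only (this is not the PROMIS CAT algorithm, and the item names and parameters below are invented), a CAT typically selects the unanswered item that is most informative at the respondent's current estimated fatigue level; a production CAT would use the calibrated item response theory parameters for all 95 bank items rather than the toy values shown here:

    import math

    # Toy illustration of adaptive item selection (not the PROMIS CAT algorithm).
    # Each item is (name, discrimination a, difficulty b) under a simple 2PL
    # item response model; the values below are invented for illustration.

    def item_information(theta, a, b):
        # Fisher information of a 2PL item at the latent fatigue level theta.
        p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
        return a * a * p * (1.0 - p)

    def next_item(theta, bank, answered):
        # Pick the unanswered item that is most informative at theta.
        remaining = [item for item in bank if item[0] not in answered]
        return max(remaining, key=lambda item: item_information(theta, item[1], item[2]))

    bank = [("item_1", 2.1, -0.5), ("item_2", 1.8, 0.3), ("item_3", 2.5, 1.0)]
    print(next_item(0.0, bank, answered={"item_1"}))  # ('item_2', 1.8, 0.3)

After each response the fatigue estimate would be updated and the selection repeated, which is why items can differ across respondents while scores remain comparable.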
References
Cella D, Riley W, Stone A, Rothrock N, Reeve B, Yount S, Amtmann D, Bode R, Buysse D, Choi S, Cook K, Devellis R, DeWalt D, Fries JF, Gershon R, Hahn EA, Lai JS, Pilkonis P, Revicki D, Rose M, Weinfurt K, Hays R; PROMIS Cooperative Group. The Patient-Reported Outcomes Measurement Information System (PROMIS) developed and tested its first wave of adult self-reported health outcome item banks: 2005-2008. J Clin Epidemiol. 2010 Nov;63(11):1179-1194.
 
Garcia SF, Cella D, Clauser SB, Flynn KE, Lad T, Lai JS, Reeve BB, Smith AW, Stone AA, Weinfurt K. Standardizing patient-reported outcomes assessment in cancer clinical trials: a patient-reported outcomes measurement information system initiative. J Clin Oncol. 2007;25(32):5106-5112. Review. Erratum in: J Clin Oncol. 2008;26(6):1018.

 

Document last updated June 2019
Recommended Instrument for
ME/CFS, MS, TBI