
Review | Open | Published:

Comparison of psychometric properties between usual-week and past-week self-reported physical activity questionnaires: a systematic review

Abstract

The aim was to critically appraise the methodological quality of studies and determine the psychometric qualities of Past-week and Usual-week Physical Activity Questionnaires (PAQs). Data sources were obtained from PubMed and Embase. The eligibility criteria for selecting studies included: 1) at least one psychometric property of PAQs was examined in adults; 2) the PAQs had a recall period of either a usual 7 days within the past 12 months (Usual-week PAQs) or the past 7 days (Past-week PAQs); and 3) PAQs were self-administered. Study quality was evaluated using the COSMIN taxonomy and the overall psychometric qualities were evaluated using pre-established psychometric criteria. Overall, 45 studies were reviewed to assess the psychometric properties of 21 PAQs, with the methodological quality of most studies showing good to excellent ratings. When the relationship between PAQs and other instruments (i.e., convergent validity) was compared between recall methods, Past-week PAQs appeared to have stronger correlations than Usual-week PAQs. For overall psychometric quality, the Incidental and Planned Exercise Questionnaire for the Usual-week (IPEQ-WA) and for the Past-week (IPEQ-W) had the greatest number of positive ratings. For all included PAQs, very few psychometric properties were assessed, and the majority of the overall qualities of psychometric properties received poor ratings, indicating the limitations of current PAQs. More research covering a greater spectrum of psychometric properties is required to gain a better understanding of the qualities of current PAQs.

Background

Increasing the level of physical activity (PA) is paramount for improving physical and psycho-social health across a wide range of populations [1]. In fact, physical inactivity is now considered to be one of the four leading risk factors for developing chronic disease and global mortality [2]. Subsequently, measuring the level of PA is important to ascertain at-risk populations and monitor interventions aimed at reducing chronic disease development. However, PA determination is only viable when implementing valid and reliable measures that: a) determine frequency, intensity and type of PA; b) identify individuals that meet health recommendations; and c) evaluate the effectiveness of various PA modalities on specific outcome measures [3].

Several objective measures of PA have been developed including accelerometers, pedometers and heart rate monitors [4]. Whilst these methods are considered valid and reliable for determining PA level [4], they are often too costly and/or cumbersome to use. Furthermore, the validity of accelerometer-based estimates of PA has also been called into question [5]. Prior to these objective measuring devices, subjective measures such as PA questionnaires (PAQs) were used to determine PA level and still remain the preferred method as they can be self-administered and convenient and cost-effective, particularly in large-scale clinical trials [6]. However, misreporting of PA is common with PAQs, particularly due to difficulties recalling the intensity and type of PA performed previously [7]. Subsequently, greater attention is needed to determine the quality of psychometric properties of a range of PAQs.

Currently, there are two main recall methods for determining previous PA level. The first identifies recent PA level over the past 7 days (i.e., Past-week PAQs) [8]. The second assesses average weekly PA level within the past 1–12 months (i.e., Usual-week PAQs) [9]. Both types of PAQs have advantages and disadvantages. For example, Usual-week PAQs can capture habitual PA patterns, minimising the inherent weekly variation in PA [10]. However, respondents may experience difficulty in recalling their PA patterns over a longer period of time, particularly at light-moderate intensities [11]. Conversely, Past-week PAQs result in more accurate recall of recent PA patterns and therefore may better represent objective measures [12]. However, Past-week PAQs do not account for week-to-week variability in PA level and thus may misclassify individuals as physically active/inactive. Therefore, Past-week and Usual-week PAQs capture distinct characteristics of PA, which researchers need to consider when selecting PAQs for their intervention. Delbaere et al. [13] compared different recall versions (i.e., Past-week [W] vs. average weekly PA over the past three months [WA]) of the Incidental and Planned Exercise Questionnaire (IPEQ) in older people, noting that the IPEQ-WA had better psychometric properties overall, with better internal consistency and higher test-retest reliability than the IPEQ-W. However, convergent validity against objective measures (e.g., accelerometers, pedometers) was not examined for each recall method of the IPEQ, despite objective measures being considered the best approach for establishing PAQ validity [14]. Furthermore, whilst Delbaere et al. [13] measured test-retest reliability, convergent validity, structural validity and internal consistency, they did not compare measurement error between the IPEQ-W and IPEQ-WA, and content validity was not addressed. In order to identify the limitations of PAQs due to different recall methods, and to assist practitioners and researchers with the selection of robust PAQs, all psychometric properties of PAQs should be evaluated.

The Consensus-based Standards for the selection of health Measurement Instruments (COSMIN) group developed a critical appraisal tool to evaluate the methodological quality of studies that examine the psychometric properties of health measurement instruments [15]. This appraisal tool, known as the COSMIN checklist, allows for determination of the quality of study design and statistical analyses on the validity, reliability and responsiveness of questionnaires [15]. Silsbury et al. [16] recently examined the methodological quality of studies examining the psychometric properties of ten selected self-reported PAQs using the COSMIN checklist. The authors reported fair-to-good test-retest reliability of PAQs and variable convergent validity against other objective measures. Whilst these findings provide insight into the usability of the ten selected PAQs, the authors did not provide a clear description of the inclusion/exclusion criteria used for selecting PAQs, nor did they consider PAQ recall methods, which introduces bias. Furthermore, appropriate search strategies for literature databases using ‘subject headings’ and ‘free text’ were not reported, limiting the replicability of the searches. Moreover, [16] did not interpret the psychometric quality of PAQs based on established quality criteria. Terwee et al. [17] developed quality criteria to interpret results from studies assessing the psychometric properties of questionnaires based on previously existing guidelines and consensus amongst experts. Furthermore, [18] suggested synthesising and combining results from the COSMIN rating of study quality and the [17] rating of psychometric quality to report the overall quality of psychometric properties of each questionnaire.

Indeed, previous studies have used similar quality criteria to review the psychometric quality of self-reported PAQs [19–21]. However, these review papers appear to have been derived from the same literature search and were separated according to PAQs for youth [20], adults [19] and the elderly [21]. Combining results of studies that have examined the psychometric qualities of PAQs amongst different population groups may provide a more holistic understanding of the usability of existing PAQs. Furthermore, the computerised search for these systematic reviews [19–21] was conducted in May 2009 and thus warrants an update considering the constantly growing body of literature in psychometrics. Importantly, none of the systematic reviews published to date has systematically compared the quality of psychometric properties between PAQs with different recall methods (e.g., usual-week versus past-week PAQs) using previously established quality criteria.

Therefore, the aims of this systematic review were to critically appraise the methodological quality of studies that have examined the psychometric properties of past-week and usual-week PAQs in adult and elderly populations using the COSMIN checklist to determine the overall psychometric quality for each PAQ, and to compare the quality of measurement properties between past-week and usual-week PAQs. Identification of recall differences would substantially assist practitioners and researchers with their selection and implementation of robust and high quality PAQs.

Methods

The methodology and reporting of this systematic review were based on the PRISMA guidelines, which enable transparent and complete reporting of systematic reviews [22].

Inclusion/exclusion criteria

The following inclusion criteria were applied: 1) studies that examined at least one measurement property of PAQs used in adults (i.e., ≥ 18 years of age); 2) studies that were written in English; 3) studies that examined PAQs with a recall period of either a usual 7 days of PA within the past 12 months (i.e., Usual-week PAQs) or the past 7 days (i.e., Past-week PAQs); 4) studies that examined self-administered PAQs; and 5) studies where the PAQ identified the following PA characteristics: duration, intensity and/or type of PA performed. Studies were excluded if: 1) questionnaires were based on physical function measures; 2) PAQs were administered as an interview; 3) results were published as a conference abstract, review or case report; or 4) questionnaires were translated into a language other than English.

Search strategy

A systematic literature search was conducted to identify all relevant studies examining the measurement properties of PAQs in adults. Two electronic databases (Medline and EMBASE) were used, with searches conducted between July 1st 2016 and July 15th 2016 using both free-text words and subject headings (Table 1). All primary sources (i.e., journal articles) up to July 2016 were considered as part of the search.

Table 1 Search terms and databases

From the search strategy, a total of 4056 abstracts were retrieved, including duplicates. Duplicates (n = 75) were removed, which resulted in 3981 abstracts that underwent further screening. A summary of the search process is presented in Fig. 1.

Fig. 1

Flowchart of included studies and physical activity questionnaires

Selection process

Two independent reviewers conducted the stepwise literature search. Firstly, all titles and abstracts were screened as either meeting the eligibility criteria (“yes”), potentially meeting the eligibility criteria (“maybe”) or not meeting the eligibility criteria (“no”). Following abstract screening, a random sample (40%) of the abstracts was reviewed to determine the inter-rater reliability between the two reviewers. A weighted kappa of 0.76 (95% CI: 0.71–0.82) was obtained, which was considered acceptable inter-rater reliability [23]. Following this confirmation, all corresponding original journal articles (both “yes” and “maybe”) were retrieved and further screening was undertaken based on the inclusion/exclusion criteria.
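
As an illustration of this agreement statistic, the sketch below implements a linearly weighted kappa for two raters over the ordered screening categories (“no” < “maybe” < “yes”). The category ordering, the linear weighting scheme and the example ratings are our assumptions for illustration, not details taken from the review.

```python
from collections import Counter

def weighted_kappa(ratings_a, ratings_b, categories):
    """Linearly weighted Cohen's kappa for two raters on ordinal categories."""
    idx = {c: i for i, c in enumerate(categories)}
    k, n = len(categories), len(ratings_a)
    # Observed joint proportions of rating pairs
    observed = [[0.0] * k for _ in range(k)]
    for a, b in zip(ratings_a, ratings_b):
        observed[idx[a]][idx[b]] += 1 / n
    # Expected proportions under chance agreement (products of marginals)
    pa = Counter(idx[a] for a in ratings_a)
    pb = Counter(idx[b] for b in ratings_b)
    expected = [[pa[i] * pb[j] / n ** 2 for j in range(k)] for i in range(k)]
    # Linear disagreement weights: 0 on the diagonal, 1 at maximal distance
    w = lambda i, j: abs(i - j) / (k - 1)
    disagreement = sum(w(i, j) * observed[i][j] for i in range(k) for j in range(k))
    chance = sum(w(i, j) * expected[i][j] for i in range(k) for j in range(k))
    return 1 - disagreement / chance

categories = ["no", "maybe", "yes"]            # hypothetical screening decisions
rater1 = ["yes", "no", "maybe", "yes", "no", "yes"]
rater2 = ["yes", "no", "yes", "yes", "no", "maybe"]
print(round(weighted_kappa(rater1, rater2, categories), 2))  # → 0.65
```

Perfect agreement returns 1.0; values around the review’s 0.76 would similarly indicate acceptable agreement.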

Methodological quality using COSMIN taxonomy

The methodological quality of included studies was assessed using the COSMIN taxonomy of measurement properties, with definitions for health-related patient-reported outcomes shown in Table 2. The COSMIN checklist consists of nine domains: internal consistency, reliability (test-retest reliability, inter-rater reliability and intra-rater reliability), measurement error (absolute measures), content validity, structural validity, hypothesis testing, cross-cultural validity, criterion validity and responsiveness [15]. Of these domains, responsiveness, cross-cultural validity and criterion validity were not assessed for the following reasons: responsiveness – determination of an instrument’s sensitivity to changes over time was beyond the scope of the current review; cross-cultural validity – questionnaires assessed in languages other than English were excluded during screening; and criterion validity – currently, there is no globally accepted ‘gold standard’ based on consensus for assessing PA level [24, 25]. Interpretability was not examined as this component is not considered a psychometric property. Each domain of the COSMIN checklist was assessed using scales consisting of 5 to 18 items that addressed issues of study design and statistical analyses. To determine the overall methodological quality per domain, [15] suggested reporting the lowest item rating within the domain using their 4-point rating system (i.e., excellent, good, fair and poor). However, as this scoring system does not account for subtle differences in the psychometric qualities of each study, a revised version was implemented as previously described [26]. The raw item scores were transformed into a percentage rating using the following formula:

$$ \mathrm{Total\ score\ of\ each\ domain} = \frac{\mathrm{Total\ score\ obtained} - \mathrm{Minimum\ score\ possible}}{\mathrm{Highest\ score\ possible} - \mathrm{Minimum\ score\ possible}} \times 100 $$
Table 2 Definitions for aspects of domains and measurement properties from the COSMIN checklist by Mokkink et al. (2010)

The final rating percentage for each domain was then qualitatively defined using the following categories: Poor = 0–25.0%, Fair = 25.1–50.0%, Good = 50.1–75.0%, Excellent = 75.1–100.0% [26]. Furthermore, all studies were appraised independently by two raters, with differences in ratings resolved via consensus.
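
As a sketch of this rescaling and banding, the snippet below applies the formula and the categories above; the 1–4 item scale and the example ratings are assumptions for illustration.

```python
def domain_percentage(item_scores, min_item=1, max_item=4):
    """Rescale a domain's summed item ratings to a 0-100% score.

    Mirrors: (total - minimum possible) / (highest - minimum possible) * 100.
    """
    total, n = sum(item_scores), len(item_scores)
    lowest, highest = n * min_item, n * max_item
    return (total - lowest) / (highest - lowest) * 100

def quality_band(pct):
    """Qualitative categories from [26]."""
    if pct <= 25.0:
        return "Poor"
    if pct <= 50.0:
        return "Fair"
    if pct <= 75.0:
        return "Good"
    return "Excellent"

ratings = [4, 3, 4, 2, 3]            # hypothetical 1-4 ratings for a 5-item domain
pct = domain_percentage(ratings)     # (16 - 5) / (20 - 5) * 100 ≈ 73.3
print(f"{pct:.1f}% -> {quality_band(pct)}")  # → 73.3% -> Good
```

Note how the rescaling anchors the worst possible item profile at 0% and the best at 100%, so the bands compare domains with different item counts on a common scale.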

Quality of the psychometric properties

To compare the strength of reliability (i.e., test-retest reliability) between Usual-week and Past-week PAQs, we calculated the weighted mean of correlation coefficients (i.e., r-values) using the following formula:

$$ \overline{x} = \frac{\sum_{i=1}^{n} w_i x_i}{\sum_{i=1}^{n} w_i} $$

Where $w_i$ is the sample size of each study and $x_i$ is the r-value of each study

The weighted means of the r-values were calculated to account for sample sizes varying between comparisons within studies or between studies. When the sample size of each comparison was identical, the plain (non-weighted) r-values were averaged. The mean r-values were also calculated to compare the strength of convergent validity between Usual-week and Past-week PAQs, and between PAQs compared with direct measures (e.g., accelerometers, pedometers, PA diaries) and PAQs compared with indirect measures (e.g., maximal oxygen consumption test [VO2max]). The strength of the r-values was interpreted using Cohen’s classifications: 0.10 as weak, 0.30 as moderate and 0.50 as strong [27].
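
The weighted mean above can be sketched as follows, weighting each study’s r-value by its sample size; the r-values and sample sizes below are hypothetical.

```python
def weighted_mean_r(r_values, sample_sizes):
    """Sample-size-weighted mean of correlation coefficients.

    When all sample sizes are identical this reduces to the plain average,
    matching the rule described in the text.
    """
    return (sum(r * n for r, n in zip(r_values, sample_sizes))
            / sum(sample_sizes))

def cohen_band(r):
    """Cohen's benchmarks: 0.10 weak, 0.30 moderate, 0.50 strong [27]."""
    r = abs(r)
    if r >= 0.50:
        return "strong"
    if r >= 0.30:
        return "moderate"
    if r >= 0.10:
        return "weak"
    return "negligible"

rs = [0.62, 0.45, 0.70]   # hypothetical test-retest r-values from three studies
ns = [120, 300, 80]       # corresponding sample sizes
mean_r = weighted_mean_r(rs, ns)   # (0.62*120 + 0.45*300 + 0.70*80) / 500
print(round(mean_r, 2), cohen_band(mean_r))  # → 0.53 strong
```

The larger study (n = 300) pulls the mean toward its weaker r-value, which is the point of weighting by sample size.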

We also classified the psychometric quality of each measurement property for each study as either “positive” (+), “conflicting” (±), “indeterminate” (?), “negative” (−), “not reported” (NR) or “not evaluated” (NE) using quality criteria as previously described (Table 3) [17, 28]. For example, if the reported intra-class correlation coefficient (ICC) was 0.9 (≥ 0.7 classified as acceptable), then the psychometric quality for that particular psychometric property of the study was classified as “positive”. Conversely, if the reported ICC was 0.6 (not acceptable, given that it is less than 0.7), then the psychometric quality of the study was classified as “negative”. If a number of reliability analyses had ICC values both above (i.e., ≥ 0.7) and below (i.e., < 0.7) the acceptable standard within the same study, then the psychometric quality of the study was classified as “conflicting”. Studies that received a poor COSMIN rating were excluded from further analysis and were classified as “not evaluated” (NE).
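
The rating rule described above can be sketched as a small function; the function name and the NR fallback are ours, while the 0.7 ICC threshold follows the criteria of [17].

```python
def rate_reliability(icc_values, threshold=0.7):
    """Classify a study's test-retest reliability per the criteria of [17].

    "+"  (positive)    : every ICC meets the acceptable threshold
    "-"  (negative)    : no ICC meets the threshold
    "±"  (conflicting) : a mix of acceptable and unacceptable ICCs
    "NR" (not reported): no ICCs available
    """
    if not icc_values:
        return "NR"
    acceptable = [icc >= threshold for icc in icc_values]
    if all(acceptable):
        return "+"
    if not any(acceptable):
        return "-"
    return "±"

print(rate_reliability([0.9]))         # → +  (the 0.9 example above)
print(rate_reliability([0.6]))         # → -  (the 0.6 example above)
print(rate_reliability([0.75, 0.6]))   # → ±  (mixed results in one study)
```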

Table 3 Modified criteria of psychometric quality rating based on Terwee et al. [17] and Cordier et al. [26]

To determine the overall quality per psychometric property for each PAQ, the methodological quality based on the COSMIN checklist and the psychometric quality based on [17] of each study were combined to determine the Level of Evidence [18], thus generating an overall psychometric quality rating.

Data items and synthesis of results

Relevant items from the COSMIN checklist and from the quality criteria by [17] and [18] were analysed for each included study. Results were assessed and reported using the following sequence: 1) the description of the systematic literature search; 2) the characteristics of the instruments and description of all studies included in this review; 3) the methodological quality of each study reporting on psychometric properties of included PAQs based on the COSMIN checklist; 4) the psychometric quality based on the criterion by [17] for each psychometric property per study, including a comparison of the magnitude of weighted r-values of test-retest reliability and convergent validity; 5) the overall rating of psychometric properties using the Levels of Evidence by [18] for each PAQ and its comparison between Usual-week and Past-week PAQs.

Results

Systematic literature search

A total of 3981 abstracts were screened based on the inclusion criteria after removal of duplicate abstracts from the two databases. Following screening, 255 original articles and their corresponding 76 PAQs were assessed for eligibility. Of these, 21 PAQs met the inclusion criteria, while 55 PAQs were excluded. Reasons for exclusion of PAQs included: recall period of only 24 h; single-item PAQs; no specific recall periods; recall periods of over 7 days; recall periods of less than 7 days; and a combination of various recall periods. Accordingly, the psychometric properties of 21 PAQs were evaluated using 44 of the corresponding original articles.

Included physical activity questionnaires

The characteristics of the 21 included PAQs and descriptions of the studies for the development and validation of PAQs are displayed in Tables 4 and 5, respectively. Seven PAQs assessed usual PA level over 7 days, with a 12-month recall period for three PAQs, a 3-month recall period for three PAQs and a 1-month recall period for one PAQ. Conversely, 14 PAQs assessed PA level over the past 7 days. The subscales of the majority of PAQs were separated by PA intensity (e.g., light, moderate and vigorous), although a number of other PAQs were categorised according to mode of activity (e.g., walking, stairs, transportation, occupational and yard activities).

Table 4 Characteristics of instruments assessing level of physical activity
Table 5 Description of studies for the development and validation of usual-week and past-week physical activity questionnaires

Psychometric properties of PAQs

Based on the COSMIN rating method for all 21 included PAQs (Table 6), none of the studies showed “poor” ratings and thus the psychometric qualities of all studies were rated. The most frequently reported psychometric property was hypothesis testing (all 21 PAQs), which ranged from good to excellent quality. This was followed by reliability testing (18 PAQs), which ranged from fair to excellent quality; content validity (7 PAQs), which ranged from fair to excellent quality; and internal consistency (6 PAQs), which ranged from fair to excellent quality. The least reported psychometric properties were structural validity (2 PAQs), with good quality, and measurement error (2 PAQs), ranging from good to excellent quality.

Table 6 Overview of the methodological quality assessment of usual-week and past-week physical activity questionnaires using the COSMIN checklist

Table 7 provides a comparison of the magnitude of the weighted mean r-values for test-retest reliability and convergent validity. The magnitude of the weighted mean r-values of PAQs was compared with direct measures (e.g., other PAQs, diaries or objective measures) or indirect measures (e.g., VO2max test). A further comparison was made between the magnitude of the weighted mean r-values for test-retest reliability of Usual-week and Past-week PAQs. The magnitude of the r-values for Usual-week and Past-week PAQs was comparable (r = 0.62), with similar sample sizes (n = 1071 and 901, respectively). Only one study (Stanford Usual Activity Questionnaire) compared test-retest reliability across both direct (accelerometer) and indirect (VO2max test) measures, with both objective measures showing higher test-retest reliability (r = 0.67 and 0.68, respectively) than the Stanford Usual Activity Questionnaire (subjective measure; r = 0.46). When comparing convergent validity between recall methods, the magnitude of the weighted mean r-values appeared greater for Past-week than Usual-week PAQs, particularly when PAQs were compared against direct measures, with a moderately strong relationship for Past-week (r = 0.33) versus a weak relationship for Usual-week (r = 0.20) PAQs. When examining the weighted mean r-values of PAQs compared against direct measures and indirect measures, similar results were found for Usual-week PAQs (r = 0.20 and 0.13, respectively) and when Usual-week and Past-week PAQs were combined (r = 0.25 and 0.22, respectively). However, there was a moderate relationship between Past-week PAQs and direct measures (r = 0.33) compared to a weak relationship between Past-week PAQs and indirect measures (r = 0.24).

Table 7 The weighted mean of the correlation coefficients (r-value) for reliability testing and validity of Past-week and Usual-week PAQs

Table 8 provides the quality of psychometric properties of Usual-week and Past-week PAQs based on the quality criteria set out by [17]. Table 9 summarises the overall rating of psychometric properties for each PAQ using the levels of evidence by [18]. Overall, the majority of psychometric properties showed “moderate negative” to “strong negative” ratings for both Usual-week and Past-week PAQs. However, the IPEQ-WA, SDANA, IPAQ-LF, IPEQ-W, OSPAQ, OSWEQ, SPAQ2 and TPAQ had no psychometric properties with “negative” ratings. Both the IPEQ-WA and IPEQ-W demonstrated “indeterminate” and “conflicting” ratings for internal consistency and reliability testing, respectively, with “moderate positive” ratings for structural validity and hypothesis testing. For the SPAQ2, “limited positive” and “moderate positive” ratings were reported for reliability testing and hypothesis testing, respectively. When compared between PAQ recall methods, Past-week PAQs had a greater proportion of “limited positive” to “strong positive” ratings (10 out of 36 ratings = 27.8%) than Usual-week PAQs (4 out of 20 ratings = 20.0%). However, Past-week PAQs also had a greater proportion of “moderate negative” to “strong negative” ratings (14 out of 36 ratings = 38.9%) than Usual-week PAQs (7 out of 20 ratings = 35.0%). Only a few studies reported on internal consistency, measurement error and structural validity. When compared between psychometric properties irrespective of PAQ recall method, content validity had the greatest proportion of PAQs with “limited positive” to “strong positive” ratings (5 out of 7 ratings = 71.4%), whereas reliability testing had the greatest proportion of PAQs with “moderate negative” to “strong negative” ratings (10 out of 18 ratings = 55.6%). Overall, only a few psychometric properties were reported, and the majority of ratings were “negative”.

Table 8 Quality of psychometric properties based on the criteria by Terwee et al. (2007) and Schellingerhout et al. (2011)
Table 9 Overall rating of psychometric properties for each PAQ using the levels of evidence by Schellingerhout et al. (2011)

Discussion

The current review examined the methodological quality of a large number of studies examining 7-day PAQs and the psychometric quality of the included PAQs. We identified 21 PAQs, of which seven were Usual-week PAQs and 14 were Past-week PAQs, leading to the retrieval of 44 corresponding original articles reporting on the psychometric properties of the included PAQs. According to the COSMIN taxonomy, reliability and hypothesis testing were the most commonly reported psychometric properties, while internal consistency, measurement error, content validity and structural validity were seldom examined. The methodological quality of the studies was good to excellent, although the overall quality of the majority of psychometric properties of PAQs showed “negative” ratings. According to the magnitude of the weighted mean r-values, Past-week PAQs appeared to have better convergent validity than Usual-week PAQs, although the overall psychometric qualities of both Past-week and Usual-week PAQs were weak. Despite weak overall psychometric qualities, the IPEQ-WA had the greatest number of “moderate positive” ratings with no “negative” ratings among Usual-week PAQs. Among the Past-week PAQs, the IPEQ-W had the greatest number of “moderate positive” ratings with no “negative” ratings, and the SPAQ2 had “limited positive” to “moderate positive” ratings with no “negative” ratings. The overall finding, however, is that a substantial number of psychometric properties were either not reported or showed “moderate negative” to “strong negative” ratings, irrespective of PAQ type.

Quality of studies using the COSMIN taxonomy

According to the COSMIN taxonomy, the reliability domain consists of internal consistency, reliability testing and measurement error [15]. Of these psychometric properties, reliability testing was reported for the majority of PAQs, in the form of test-retest reliability, with the exception of three PAQs (YPAS, Checklist Questionnaire and IPAQ-LF). Internal consistency was only detailed for six PAQs (IPEQ-WA, SDANA, CAQ-PAI, IPEQ-W, PASE and PAR). Most of these PAQs showed moderate to excellent methodological quality for reliability testing, which is in line with previously published systematic reviews that examined the methodological quality of self-reported PAQs in adults [19] and the elderly [21]. However, our findings contrast with those reported by [16], where half of the ratings for the methodological quality of test-retest reliability were ‘fair’. These discrepancies could be due to the current review incorporating the modified COSMIN criteria by [26], which account for subtle differences in the psychometric quality of each study. Given that only a few studies reported on internal consistency, with 4 out of 7 COSMIN ratings scored as “indeterminate”, determining the quality of this psychometric property for Usual-week and Past-week PAQs is not possible in the current review.

Undoubtedly, the greatest deficiency in the reliability domain was the lack of examination of measurement error, which was only reported for two PAQs (EPIC PAQ and PASE) based on two studies [29, 30]. Not knowing the measurement error of a PAQ means that we cannot say with confidence that the reported PA level of a person is accurate (i.e., a true reflection of the construct being measured). A framework to improve the accuracy of PAQs has been published [10], although further studies are needed to determine the measurement error of popular PAQs to provide practitioners and researchers with robust measures.

With respect to validity, hypothesis testing was reported for all PAQs with good to excellent study quality. The majority of hypothesis testing involved studies assessing the convergent validity of PAQs by comparing their properties with other comparator instruments (e.g., accelerometers). These results differ from those reported by previous reviews that examined the psychometric properties of PAQs in adults and the elderly [16, 19, 21], which reported poor to fair study quality. Again, these discrepancies may be attributed to differences in the types of criteria used to assess the psychometric qualities of PAQs. Content validity was seldom reported (only seven PAQs), although the study quality ranged from good to excellent. Structural validity was only assessed for two PAQs, with good study quality. In the current review, the quality of structural validity was not assessed for the majority of studies given that the underlying constructs of PAQs were not assessed using statistical analyses to determine the uni-dimensionality of subscales (e.g., factor analysis, principal component analysis, Rasch analysis). Only the IPEQ [13] incorporated factor and Rasch analyses to determine the overall structure and measurement properties of the instrument. Consequently, caution should be taken, as assessment of internal consistency and structural validity is only relevant when instruments form a reflective model (i.e., when items are indicative of the same underlying construct), rather than a formative model (i.e., when items together form the construct). When exploring the underlying constructs of various PAQs, future research should address whether studies are based on a formative or reflective model.

Quality of psychometric properties

A key aim of the current review was to examine the differences between Usual-week and Past-week PAQs. Previously, different recall versions of the IPEQ were examined in one study [13], with the IPEQ-WA (i.e., Usual-week PAQ) exhibiting greater test-retest reliability than the IPEQ-W (i.e., Past-week PAQ). This is not surprising, given that Usual-week PAQs control for week-to-week variation in PA patterns [10]. Interestingly, our findings showed comparable test-retest reliability between Usual-week and Past-week PAQs according to the magnitude of the weighted mean r-values. This discrepancy between [13] (i.e., differences in test-retest reliability between the IPEQ-W and IPEQ-WA) and the current review (i.e., similar test-retest reliability between Usual-week and Past-week PAQs) is possibly due to differences in the acceptable cut-offs for test-retest reliability. For example, an ICC of ≥ 0.6 was considered acceptable by [13], whereas in the current review an ICC of < 0.7 (based on the criteria by [17]) was below the acceptable cut-off and was therefore rated as “negative”.

Whilst comparable test-retest reliability was reported between Usual-week and Past-week PAQs in the current review, Past-week PAQs exhibited stronger convergent validity than Usual-week PAQs when compared against direct measures (e.g., accelerometers). Such findings are expected, since the recall period of Past-week PAQs typically coincides with the period over which data are collected from direct measures. Consequently, Past-week PAQs may report actual PA patterns more accurately than Usual-week PAQs. Whilst the magnitudes of the weighted r-values of PAQs against direct measures and against indirect measures were similar for Usual-week PAQs (both in the weak range), there was a moderate relationship between Past-week PAQs and direct measures and a weak relationship between Past-week PAQs and indirect measures. Accordingly, while it would be expected that individuals who report higher levels of physical activity would demonstrate greater physical fitness, determining the validity of PAQs with indirect measures may not be as appropriate as with direct measures, given that the dimensions of the measures differ [31] (e.g., two measures that both report level of PA will be more similar than a measure of PA level and a measure of physical fitness).

For the overall psychometric qualities, only minor differences were evident between the PAQs. However, for each recall method, the strongest PAQ according to psychometric quality was the IPEQ-WA for Usual-week PAQs and the IPEQ-W for Past-week PAQs, given that 4 out of 6 psychometric properties were evaluated, of which structural validity and hypothesis testing had “moderate positive” results. However, internal consistency and reliability had “indeterminate” and “conflicting” results, respectively, warranting further research into the psychometric properties of the IPEQ-WA and IPEQ-W. Furthermore, the SPAQ2 received positive ratings for reliability testing and hypothesis testing, demonstrating good validity and reliability for a Past-week PAQ. However, only two psychometric properties were assessed for the SPAQ2, which appears to be a common limitation of all included PAQs. Consequently, future studies should assess other psychometric properties to determine the overall quality of PAQs.

While a majority of PAQs included reliability testing and hypothesis testing, irrespective of recall method, these psychometric properties also received the greatest number of “moderate negative” to “strong negative” ratings. These findings are in line with other systematic reviews of the psychometric qualities of self-reported PAQs, even though those reviews were smaller in scope [16, 19, 21]. Interestingly, the findings of the current systematic review, and of others [16, 19, 21], conflict with the interpretations of validity and reliability values offered by the authors of a majority of the included studies. This is because many of those authors interpreted test-retest reliability and convergent validity as acceptable whenever associations were statistically significant, with minimal regard for the strength of the relationship. According to previously established and accepted criteria [17, 18, 26], acceptable test-retest reliability requires a correlation (r or rho) of at least 0.8, or an ICC of at least 0.7. Furthermore, convergent validity of a questionnaire is acceptable if the correlation with its comparator instrument is statistically significant (p ≤ 0.05) and at least moderate in strength (r ≥ 0.5) [17, 18, 26]. Accordingly, whilst the included studies reported statistically significant associations for both reliability testing and hypothesis testing, the results were classified as “negative” in the current review when the magnitude of the association did not meet the psychometric criteria (i.e., r ≥ 0.5). Consideration of the strength of the relationship is essential, given that a large sample will yield statistically significant associations even when those associations are weak, as occurred in a number of studies included in the current review.
Indeed, an appropriate sample size must be met for studies exploring the psychometric properties of instruments in order to reach clinically relevant conclusions, given that results from a limited sample may not be generalisable to a wider population [32]. Furthermore, future studies should interpret correlations based on their magnitude, rather than statistical significance alone (i.e., p ≤ 0.05), when determining the validity of PAQs [32]. Subsequently, interpretations of the validity and reliability of PAQs should consider both the statistical significance and the corresponding magnitude of the association between measured variables.

Limitations

There are a number of limitations that should be elaborated upon. First, PAQs with recall timeframes other than 7 days were outside the scope of this systematic review and may have different psychometric properties. Second, the PAQs in the current review were limited to those that were self-reported and used by English-speaking adults; future studies may compare recall methods of PAQs in other populations (e.g., children, individuals from non-English-speaking backgrounds) and with other PA collection methods (e.g., PAQs with recall timeframes other than 7-day periods, PAQs administered as interviews). Third, the PAQs selected for the current review assess PA as energy expenditure; it is important to acknowledge that PA level can also be influenced by social, physical and policy environments [33, 34], so further research is warranted to analyse the psychometric properties of PAQs that account for these factors. Finally, while evaluation of responsiveness was beyond the scope of the current review, comparison of this psychometric property between different PAQ types may support the suitability of PAQs to assess PA level.

Conclusion

In conclusion, the current review identified that most PAQs did not report on several psychometric properties. Based upon established psychometric criteria, the overall quality of PAQs showed multiple “negative” ratings, indicating that current 7-day PAQs are rather weak and that caution should be taken when interpreting PA level derived from these PAQs. When comparing recall methods, Past-week PAQs showed a stronger correlation with direct measures than Usual-week PAQs, suggesting that Past-week PAQs may be a more accurate measure of PA patterns. However, minimal differences were noted between the Usual-week and Past-week PAQs for overall psychometric quality. While the IPEQ-W and IPEQ-WA demonstrated the strongest psychometric properties with positive ratings, followed by the SPAQ2, a substantial number of psychometric properties were still not assessed, which limits the usability of these PAQs. To resolve the issues identified in the current review, future studies are encouraged to investigate a greater range of psychometric properties for those 7-day PAQs that are promising (e.g., IPEQ-WA, IPEQ-W and SPAQ2). Further investigation is also warranted for all 7-day PAQs with “negative” ratings, for example by incorporating item response theory.

Abbreviations

AAS:

Active Australia survey

CAQ-PAI:

College alumnus questionnaire physical activity index

COSMIN:

Consensus-based standards for the selection of health measurement instrument

EPAQ2:

EPIC physical activity questionnaire 2

EPIC PAQ:

EPIC physical activity questionnaire

GPPAQ:

General practice physical activity questionnaire

ICC:

Intra-class correlation coefficient

IPAQ-LF:

International physical activity questionnaire – long form

IPAQ-SF:

International physical activity questionnaire – short form

IPEQ-W:

Incidental and planned exercise questionnaire for the past-week

IPEQ-WA:

Incidental and planned exercise questionnaire for the usual-week

IPEQ:

Incidental and planned exercise questionnaire

NE:

Not evaluated

NHS II:

Nurses’ health study II

OSPAQ:

Occupational sitting & physical activity questionnaire

OSWEQ:

Online self-reported walking and exercise questionnaire

PA:

Physical activity

PAQ:

Physical activity questionnaire

PAR:

Stanford 7-day physical activity recall

PASE:

Physical activity scale for the elderly

SDANA:

Seventh-day Adventists and non-Adventists

SPAQ2:

Scottish physical activity questionnaire

TPAQ:

Transport physical activity questionnaire

W:

Past-week physical activity

WA:

Average weekly physical activity

YPAS:

Yale physical activity survey

References

1. Cunningham GO, Michael YL. Concepts guiding the study of the impact of the built environment on physical activity for older adults: a review of the literature. Am J Health Promot. 2004;18:435–43.
2. Bauer UE, Briss PA, Goodman RA, Bowman BA. Prevention of chronic disease in the 21st century: elimination of the leading preventable causes of premature death and disability in the USA. Lancet. 2014;384:45–52.
3. Rennie KL, Wareham NJ. The validation of physical activity instruments for measuring energy expenditure: problems and pitfalls. Public Health Nutr. 1998;1:265–71.
4. Ainsworth B, Cahalin L, Buman M, Ross R. The current state of physical activity assessment tools. Prog Cardiovasc Dis. 2015;57:387–95.
5. Bornstein DB, Beets MW, Byun W, McIver K. Accelerometer-derived physical activity levels of preschoolers: a meta-analysis. J Sci Med Sport. 2011;14:504–11.
6. Lee PH, Macfarlane DJ, Lam TH, Stewart SM. Validity of the International Physical Activity Questionnaire Short Form (IPAQ-SF): a systematic review. Int J Behav Nutr Phys Act. 2011;8:115.
7. Durante R, Ainsworth BE. The recall of physical activity: using a cognitive model of the question-answering process. Med Sci Sports Exerc. 1996;28:1282–91.
8. Pettee Gabriel K, McClain JJ, Schmid KK, Storti KL, Ainsworth BE. Reliability and convergent validity of the past-week Modifiable Activity Questionnaire. Public Health Nutr. 2011;14:435–42.
9. Matthews CE, Ainsworth BE, Hanby C, Pate RR, Addy C, Freedson PS, Jones DA, Macera CA. Development and testing of a short physical activity recall questionnaire. Med Sci Sports Exerc. 2005;37:986–94.
10. Ainsworth BE, Caspersen CJ, Matthews CE, Masse LC, Baranowski T, Zhu W. Recommendations to improve the accuracy of estimates of physical activity derived from self report. J Phys Act Health. 2012;9 Suppl 1:S76–84.
11. Bernstein M, Sloutskis D, Kumanyika S, Sparti A, Schutz Y, Morabia A. Data-based approach for developing a physical activity frequency questionnaire. Am J Epidemiol. 1998;147:147–54.
12. Blair SN, Haskell WL, Ho P, Paffenbarger Jr RS, Vranizan KM, Farquhar JW, Wood PD. Assessment of habitual physical activity by a seven-day recall in a community survey and controlled experiments. Am J Epidemiol. 1985;122:794–804.
13. Delbaere K, Hauer K, Lord SR. Evaluation of the incidental and planned activity questionnaire (IPEQ) for older people. Br J Sports Med. 2010;44:1029–34.
14. Kim Y, Park I, Kang M. Convergent validity of the international physical activity questionnaire (IPAQ): meta-analysis. Public Health Nutr. 2013;16:440–52.
15. Terwee CB, Mokkink LB, Knol DL, Ostelo RW, Bouter LM, de Vet HC. Rating the methodological quality in systematic reviews of studies on measurement properties: a scoring system for the COSMIN checklist. Qual Life Res. 2012;21:651–7.
16. Silsbury Z, Goldsmith R, Rushton A. Systematic review of the measurement properties of self-report physical activity questionnaires in healthy adult populations. BMJ Open. 2015;5:e008430.
17. Terwee CB, Bot SD, de Boer MR, van der Windt DA, Knol DL, Dekker J, Bouter LM, de Vet HC. Quality criteria were proposed for measurement properties of health status questionnaires. J Clin Epidemiol. 2007;60:34–42.
18. Schellingerhout JM, Verhagen AP, Heymans MW, Koes BW, de Vet HC, Terwee CB. Measurement properties of disease-specific questionnaires in patients with neck pain: a systematic review. Qual Life Res. 2012;21:659–70.
19. van Poppel MN, Chinapaw MJ, Mokkink LB, van Mechelen W, Terwee CB. Physical activity questionnaires for adults: a systematic review of measurement properties. Sports Med. 2010;40:565–600.
20. Chinapaw MJ, Mokkink LB, van Poppel MN, van Mechelen W, Terwee CB. Physical activity questionnaires for youth: a systematic review of measurement properties. Sports Med. 2010;40:539–63.
21. Forsen L, Loland NW, Vuillemin A, Chinapaw MJ, van Poppel MN, Mokkink LB, van Mechelen W, Terwee CB. Self-administered physical activity questionnaires for the elderly: a systematic review of measurement properties. Sports Med. 2010;40:601–23.
22. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JP, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ. 2009;339:b2700.
23. Cicchetti DV. Guidelines, criteria, and rules of thumb for evaluating normed and standardized assessment instruments in psychology. Psychol Assess. 1994;6:284–90.
24. Aparicio-Ugarriza R, Mielgo-Ayuso J, Benito PJ, Pedrero-Chamizo R, Ara I, Gonzalez-Gross M, Group ES. Physical activity assessment in the general population; instrumental methods and new technologies. Nutr Hosp. 2015;31 Suppl 3:219–26.
25. Godfrey A, Rochester L. Body-worn monitors: a lot done, more to do. J Epidemiol Community Health. 2015;69:1139–40.
26. Cordier R, Speyer R, Chen YW, Wilkes-Gillan S, Brown T, Bourke-Taylor H, Doma K, Leicht A. Evaluating the psychometric quality of social skills measures: a systematic review. PLoS One. 2015;10:e0132299.
27. Cohen J. Statistical power analysis for the behavioral sciences. Hillsdale: Lawrence Erlbaum Associates; 1988.
28. Cordier R, Chen YW, Speyer R, Totino R, Doma K, Leicht A, Brown N, Cuomo B. Child-report measures of occupational performance: a systematic review. PLoS One. 2016;11:e0147751.
29. Cust AE, Smith BJ, Chau J, van der Ploeg HP, Friedenreich CM, Armstrong BK, Bauman A. Validity and repeatability of the EPIC physical activity questionnaire: a validation study using accelerometers as an objective measure. Int J Behav Nutr Phys Act. 2008;5:33.
30. DePew ZS, Garofoli AC, Novotny PJ, Benzo RP. Screening for severe physical inactivity in chronic obstructive pulmonary disease: the value of simple measures and the validation of two physical activity questionnaires. Chron Respir Dis. 2013;10:19–27.
31. Wareham NJ, Jakes RW, Rennie KL, Mitchell J, Hennings S, Day NE. Validity and repeatability of the EPIC-Norfolk Physical Activity Questionnaire. Int J Epidemiol. 2002;31:168–74.
32. Walter SD, Eliasziw M, Donner A. Sample size and optimal designs for reliability studies. Stat Med. 1998;17:101–10.
33. Prince SA, Reed JL, Martinello N, Adamo KB, Fodor JG, Hiremath S, Kristjansson EA, Mullen KA, Nerenberg KA, Tulloch HE, Reid RD. Why are adult women physically active? A systematic review of prospective cohort studies to identify intrapersonal, social environmental and physical environmental determinants. Obes Rev. 2016;17:919–44.
34. Link BG, Phelan J. Social conditions as fundamental causes of disease. J Health Soc Behav. 1995;Spec No:80–94.
35. Pols MA, Peeters PH, Ocke MC, Slimani N, Bueno-de-Mesquita HB, Collette HJ. Estimation of reproducibility and relative validity of the questions included in the EPIC Physical Activity Questionnaire. Int J Epidemiol. 1997;26 Suppl 1:S181–9.
36. Belanger C, Speizer FE, Hennekens CH, Rosner B, Willett W, Bain C. The nurses’ health study: current findings. Am J Nurs. 1980;80:1333.
37. Singh PN, Tonstad S, Abbey DE, Fraser GE. Validity of selected physical activity questions in white Seventh-day Adventists and non-Adventists. Med Sci Sports Exerc. 1996;28:1026–37.
38. Sallis JF, Haskell WL, Wood PD, Fortmann SP, Rogers T, Blair SN, Paffenbarger Jr RS. Physical activity assessment methodology in the Five-City Project. Am J Epidemiol. 1985;121:91–106.
39. Dipietro L, Caspersen CJ, Ostfeld AM, Nadel ER. A survey for assessing physical activity among older adults. Med Sci Sports Exerc. 1993;25:628–42.
40. Australian Sports Commission. Active Australia physical activity survey 1997. Canberra: Australian Sports Commission; 1999.
41. Paffenbarger Jr RS, Wing AL, Hyde RT. Physical activity as an index of heart attack risk in college alumni. Am J Epidemiol. 1978;108:161–75.
42. Masse LC, Fulton JE, Watson KB, Tortolero S, Kohl 3rd HW, Meyers MC, Blair SN, Wong WW. Comparing the validity of 2 physical activity questionnaire formats in African-American and Hispanic women. J Phys Act Health. 2012;9:237–48.
43. General practice physical activity questionnaire (GPPAQ). https://www.gov.uk/government/publications/general-practice-physical-activity-questionnaire-gppaq.
44. Tudor-Locke C, Ainsworth BE, Thompson RW, Matthews CE. Comparison of pedometer and accelerometer measures of free-living physical activity. Med Sci Sports Exerc. 2002;34:2045–51.
45. Cust AE, Armstrong BK, Smith BJ, Chau J, van der Ploeg HP, Bauman A. Self-reported confidence in recall as a predictor of validity and repeatability of physical activity questionnaire data. Epidemiology. 2009;20:433–41.
46. Chau JY, Van Der Ploeg HP, Dunn S, Kurko J, Bauman AE. Validity of the occupational sitting and physical activity questionnaire. Med Sci Sports Exerc. 2012;44:118–25.
47. Taylor N, Lawton R, Conner M. Development and initial validation of the determinants of physical activity questionnaire. Int J Behav Nutr Phys Act. 2013;10:74.
48. Washburn RA, Smith KW, Jette AM, Janney CA. The Physical Activity Scale for the Elderly (PASE): development and evaluation. J Clin Epidemiol. 1993;46:153–62.
49. Timperio A, Salmon J, Crawford D. Validity and reliability of a physical activity recall instrument among overweight and non-overweight men and women. J Sci Med Sport. 2003;6:477–91.
50. Lowther M, Mutrie N, Loughlan C, McFarlane C. Development of a Scottish physical activity questionnaire: a tool for use in physical activity interventions. Br J Sports Med. 1999;33:244–9.
51. Adams EJ, Goad M, Sahlqvist S, Bull FC, Cooper AR, Ogilvie D, iConnect consortium. Reliability and validity of the transport and physical activity questionnaire (TPAQ) for assessing physical activity behaviour. PLoS One. 2014;9:e107039.
52. Espana-Romero V, Golubic R, Martin KR, Hardy R, Ekelund U, Kuh D, Wareham NJ, Cooper R, Brage S, NSHD scientific and data collection teams. Comparison of the EPIC Physical Activity Questionnaire with combined heart rate and movement sensing in a nationally representative sample of older British adults. PLoS One. 2014;9:e87085.
53. Golubic R, Martin KR, Ekelund U, Hardy R, Kuh D, Wareham N, Cooper R, Brage S, NSHD scientific and data collection teams. Levels of physical activity among a nationally representative sample of people in early old age: results of objective and self-reported assessments. Int J Behav Nutr Phys Act. 2014;11:58.
54. Wareham NJ, Jakes RW, Rennie KL, Schuit J, Mitchell J, Hennings S, Day NE. Validity and repeatability of a simple index derived from the short physical activity questionnaire used in the European Prospective Investigation into Cancer and Nutrition (EPIC) study. Public Health Nutr. 2003;6:407–13.
55. Wolf AM, Hunter DJ, Colditz GA, Manson JE, Stampfer MJ, Corsano KA, Rosner B, Kriska A, Willett WC. Reproducibility and validity of a self-administered physical activity questionnaire. Int J Epidemiol. 1994;23:991–9.
56. Singh PN, Fraser GE, Knutsen SF, Lindsted KD, Bennett HW. Validity of a physical activity questionnaire among African-American Seventh-day Adventists. Med Sci Sports Exerc. 2001;33:468–75.
57. Jacobs Jr DR, Ainsworth BE, Hartman TJ, Leon AS. A simultaneous evaluation of 10 commonly used physical activity questionnaires. Med Sci Sports Exerc. 1993;25:81–91.
58. Resnicow K, McCarty F, Blissett D, Wang T, Heitzler C, Lee RE. Validity of a modified CHAMPS physical activity questionnaire among African-Americans. Med Sci Sports Exerc. 2003;35:1537–45.
59. Brown WJ, Burton NW, Marshall AL, Miller YD. Reliability and validity of a modified self-administered version of the Active Australia physical activity survey in a sample of mid-age women. Aust N Z J Public Health. 2008;32:535–41.
60. Ainsworth BE, Berry CB, Schnyder VN, Vickers SR. Leisure-time physical activity and aerobic fitness in African-American young adults. J Adolesc Health. 1992;13:606–11.
61. Ainsworth BE, Leon AS, Richardson MT, Jacobs DR, Paffenbarger Jr RS. Accuracy of the college alumnus physical activity questionnaire. J Clin Epidemiol. 1993;46:1403–11.
62. Albanes D, Conway JM, Taylor PR, Moe PW, Judd J. Validation and comparison of eight physical activity questionnaires. Epidemiology. 1990;1:65–71.
63. Bassett Jr DR, Cureton AL, Ainsworth BE. Measurement of daily walking distance-questionnaire versus pedometer. Med Sci Sports Exerc. 2000;32:1018–23.
64. Strath SJ, Bassett Jr DR, Swartz AM. Comparison of the college alumnus questionnaire physical activity index with objective monitoring. Ann Epidemiol. 2004;14:409–15.
65. Washburn RA, Goldfield SR, Smith KW, McKinlay JB. The validity of self-reported exercise-induced sweating as a measure of physical activity. Am J Epidemiol. 1990;132:107–13.
66. Ahmad S, Harris T, Limb E, Kerry S, Victor C, Ekelund U, Iliffe S, Whincup P, Beighton C, Ussher M, Cook DG. Evaluation of reliability and validity of the General Practice Physical Activity Questionnaire (GPPAQ) in 60–74 year old primary care patients. BMC Fam Pract. 2015;16:113.
67. McKeon M, Slevin E, Taggart L. A pilot survey of physical activity in men with an intellectual disability. J Intellect Disabil. 2013;17:157–67.
68. Kaleth AS, Ang DC, Chakr R, Tong Y. Validity and reliability of community health activities model program for seniors and short-form international physical activity questionnaire as physical activity assessment tools in patients with fibromyalgia. Disabil Rehabil. 2010;32:353–9.
69. Tierney M, Fraser A, Kennedy N. Criterion validity of the International Physical Activity Questionnaire Short Form (IPAQ-SF) for use in patients with rheumatoid arthritis: comparison with the SenseWear Armband. Physiotherapy. 2015;101:193–7.
70. Warner ET, Wolin KY, Duncan DT, Heil DP, Askew S, Bennett GG. Differential accuracy of physical activity self-report by body mass index. Am J Health Behav. 2012;36:168–78.
71. Allison MJ, Keller C, Hutchinson PL. Selection of an instrument to measure the physical activity of elderly people in rural areas. Rehabil Nurs. 1998;23:309–14.
72. Ewald B, McEvoy M, Attia J. Pedometer counts superior to physical activity scale for identifying health markers in older adults. Br J Sports Med. 2010;44:756–61.
73. Garfield BE, Canavan JL, Smith CJ, Ingram KA, Fowler RP, Clark AL, Polkey MI, Man WD. Stanford Seven-Day Physical Activity Recall questionnaire in COPD. Eur Respir J. 2012;40:356–62.
74. Granger CL, Parry SM, Denehy L. The self-reported Physical Activity Scale for the Elderly (PASE) is a valid and clinically applicable measure in lung cancer. Support Care Cancer. 2015;23:3211–8.
75. Harada ND, Chiu V, King AC, Stewart AL. An evaluation of three self-report physical activity instruments for older adults. Med Sci Sports Exerc. 2001;33:962–70.
76. Martin KA, Rejeski WJ, Miller ME, James MK, Ettinger Jr WH, Messier SP. Validation of the PASE in older adults with knee pain and physical disability. Med Sci Sports Exerc. 1999;31:627–33.
77. Washburn RA, Ficker JL. Physical Activity Scale for the Elderly (PASE): the relationship with activity measured by a portable accelerometer. J Sports Med Phys Fitness. 1999;39:336–40.
78. Washburn RA, McAuley E, Katula J, Mihalko SL, Boileau RA. The physical activity scale for the elderly (PASE): evidence for validity. J Clin Epidemiol. 1999;52:643–51.
79. Zalewski KR, Smith JC, Malzahn J, VanHart M, O’Connell D. Measures of physical ability are unrelated to objectively measured physical activity behavior in older adults residing in continuing care retirement communities. Arch Phys Med Rehabil. 2009;90:982–6.


Acknowledgements

The authors would like to acknowledge Dr Peter Fowler for assistance with retrieving original articles and Ms Colette Thomas for assistance with psychometric analyses.

Funding

No funding was required for this manuscript.

Availability of data and materials

Not applicable.

Authors’ contributions

KD analysed and interpreted the data and prepared the manuscript; RS conducted the search strategy, assisted with abstract screening and edited the manuscript; ASL conducted abstract screening and psychometric evaluation with KD and edited the manuscript; RC finalised the psychometric evaluation and edited the manuscript. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Consent for publication

Not applicable.

Ethics approval and consent to participate

Not applicable.

Author information

Correspondence to Kenji Doma.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


Keywords

  • Physical activity questionnaires
  • Recall methods
  • Psychometrics
  • Validity
  • Reliability