Citation:

Yarrow L, Remig VM, Higgins MM. Food safety educational intervention positively influences college students' food safety attitudes, beliefs, knowledge and self-reported practices. J Environ Health. 2009;71(6):30-35.


PubMed ID: 19192742
Study Design:
Non-randomized trial
Class:
C
NEUTRAL: See Research Design and Implementation Criteria Checklist below.
Research Purpose:

To explore relationships among food safety attitudes, beliefs, knowledge and self-reported practices of current college students in health and non-health majors and to determine whether an educational intervention could improve those variables of interest.

Inclusion Criteria:
  • College students enrolled in one of two courses (clinical nutrition or public relations campaigns)
  • Senior level or graduate level students
  • Students living in a house or apartment rather than residence halls or Greek housing.
Exclusion Criteria:
  • Students not enrolled in one of two courses (clinical nutrition or public relations campaign)
  • Students who were not seniors or graduate students
  • Students who lived in residence halls or Greek housing.

 

Description of Study Protocol:

Recruitment

Participants were recruited into the study through in-class invitations.

Design

Non-randomized trial in which college students completed a food safety questionnaire (FSQ) before an educational intervention consisting of three interactive modules and then again after the intervention, to determine changes in food safety attitudes, beliefs, knowledge and self-reported practices. The FSQ was administered through the university's online survey system. Subjects completed the FSQ in the following time order:

  • Pre-intervention (prior to viewing educational food safety modules)
  • Post-intervention (up to one week after module completion)
  • Post-postintervention (five weeks after module completion). 

Tests assessed food safety knowledge and self-reported food safety behaviors.

Dietary Intake/Dietary Assessment Methodology

Not applicable

Blinding used

None mentioned by authors

Intervention

Three interactive instructional modules were developed (using SoftChalk, 2002, a lesson-building program for creating interactive Web lessons) and pilot-tested. The modules delivered food safety instruction with clip art, animated graphics, flash card activities, crossword puzzles, audio clips and hyperlinks to the Web. Each module required 30 to 60 minutes to complete, followed by a 10- to 15-minute post-test. Module content:

  • Module 1 covered food safety history and recommended food handling guidelines
  • Module 2 covered review of food safety literature, common food safety beliefs and practices and industry requirements
  • Module 3 covered older adults' foodborne illness risks and preferred food safety handling practices. 

Statistical Analysis

  • Food safety questionnaire responses were analyzed after each administration of the questionnaire
  • Statistical analyses included Wilcoxon signed-rank, Friedman, Mann-Whitney U, McNemar, Cochran Q, Chi-square and Spearman's rho tests (a computational sketch follows this list)
  • Cronbach's alpha was used to test internal consistency reliability of the food safety questionnaire
  • Most FSQ response options were seven-point Likert scales with assigned values.
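
Neither the analysis code nor the raw data appear in this summary, so the following is only a minimal sketch, under assumed data, of how the paired, repeated-measures and between-group comparisons and Cronbach's alpha could be computed with SciPy/NumPy. Every score, item count and group label in the example is invented for illustration.

```python
# Minimal illustrative sketch (not the authors' code): running the kinds of
# nonparametric tests reported in the paper on simulated seven-point Likert
# FSQ responses.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_students, n_items = 59, 21                      # assumed counts, for illustration only

# Simulated attitude-index scores (sum of items scored 1-7) at three time points.
items_pre = rng.integers(1, 8, size=(n_students, n_items))
pre = items_pre.sum(axis=1)                       # pre-intervention index
post = pre + rng.integers(0, 3, size=n_students)  # post-intervention index
late = post - rng.integers(0, 2, size=n_students) # post-postintervention (five weeks)

# Paired change between two administrations: Wilcoxon signed-rank test.
w_stat, w_p = stats.wilcoxon(pre, post)

# Change across all three administrations: Friedman test.
f_stat, f_p = stats.friedmanchisquare(pre, post, late)

# Health majors vs. non-health majors at one time point: Mann-Whitney U test.
is_health = np.array([True] * 38 + [False] * 21)  # hypothetical group labels
u_stat, u_p = stats.mannwhitneyu(pre[is_health], pre[~is_health])

# Internal consistency of a multi-item Likert index: Cronbach's alpha.
def cronbach_alpha(item_scores: np.ndarray) -> float:
    """item_scores: respondents x items matrix."""
    k = item_scores.shape[1]
    item_var = item_scores.var(axis=0, ddof=1).sum()
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

alpha = cronbach_alpha(items_pre.astype(float))
print(f"Wilcoxon P={w_p:.3f}, Friedman P={f_p:.3f}, Mann-Whitney P={u_p:.3f}, alpha={alpha:.2f}")
```

The remaining reported tests (McNemar, Cochran Q, Chi-square and Spearman's rho) apply to the categorical and correlational FSQ items and have standard counterparts in SciPy and statsmodels; they are omitted from the sketch for brevity.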

 

Data Collection Summary:

Timing of Measurements

The food safety questionnaire was completed prior to viewing educational modules (pre-intervention), up to one week after module completion (post-intervention) and five weeks after module completion (post-postintervention).

Dependent Variables

  • Food-safety attitudes
  • Food-safety beliefs
  • Knowledge about food safety
  • Self-reported food safety practices, including high risk food intake. 

Independent Variables

  •  Completion of three educational modules on food safety-related subjects
  •  Prior completion of a food safety-related or nutrition course by health majors.

Control Variables

None mentioned

 

Description of Actual Data Sample:
  • Initial N: 71
  • Attrition (final N):
    • 59 (38 females and 21 males)
    • Drop-outs: 21 of 32 non-health majors remained in the study, while 38 of 39 health majors remained; data were eliminated for students who did not view the modules
  • Age: 21 to 49 years
  • Ethnicity: Not reported
  • Other relevant demographics:
    • Students were either health majors (N=38) or non-health majors (N=21)
    • Of the 38 health majors, 29 held a job as a food server, 24 held a job as a food preparer (cook) and 22 had food safety certification
    • Of the 21 non-health majors, 15 held a job as a food server, eight held a job as a food preparer (cook) and six had food safety certification
  • Anthropometrics: Not reported 
  • Location: Participants were students at Kansas State University.
Summary of Results:

Key Findings

  • Self-reported safe food practices became more frequent over time, with scores increasing from 19 to 21 of 27 possible points (P≤0.001)
  • Students became less likely to prepare food for others if they had diarrhea (P≤0.001) and more likely to use food thermometers (P≤ 0.01)
  • The reported changes can be attributed to health majors' improvements in not preparing food for others if they had diarrhea (P≤0.002), thermometer use (P≤0.006) and not leaving cooked items out for use later in the day, such as at a buffet or party (P=0.046)
  • Non-health majors did not improve in self-reported practices: even non-health majors whose food safety beliefs and knowledge improved after the intervention showed no improvement in self-reported risky behaviors, such as not using thermometers and eating “risky foods”
  • As a total group and subgroups, no significant changes occurred among the students' self-reported practices for food sanitation, hygiene, storage, thawing or high-risk food intake
  • Health majors scored higher than non-health majors for all indices in each time period except for high risk food intake (P≤0.001).

 Other Findings

  • As a group, students' food safety attitudes improved from 114.5 to 122.2 out of 147 possible points (P≤0.001) from pre-test to post-posttest
  • Students' FSQ belief index scores increased from 85.8 to 97.6 of 119 possible points (P≤0.001) from Time 1 to Time 3, representing more positive food safety beliefs after the intervention
  • Immediately after the intervention, students' FSQ score for total knowledge increased (P≤0.001), with scores changing from 11.2 to 12.6 out of 14 possible points (see the note on index maxima below).
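
Note: This summary does not state how many items make up each index, but the reported maxima are arithmetically consistent with the seven-point Likert scaling described above; for example, 21 attitude items scored 1 to 7 would give a maximum of 21 × 7 = 147, 17 belief items would give 17 × 7 = 119, and a 14-point knowledge maximum suggests 14 one-point items. These item counts are inferred for illustration, not reported here.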

 

Author Conclusion:
  • Interactive food safety education resulted in improved food safety knowledge and beliefs for college students
  • The greatest improvements were seen in students who viewed food safety as important to their professions
  • Improving college students' attitudes about food safety may be a first step toward influencing their food safety behaviors
  • Because college students' behaviors place them at increased risk for foodborne illness, more educational interventions that address food safety are needed
  • The educational modules had a positive impact on food safety knowledge immediately after the intervention; however, at post-postintervention, non-health majors' food safety knowledge showed no improvement, indicating that they did not retain the newly acquired information five weeks after the intervention
  • Even after food safety beliefs and knowledge improved with exposure to the intervention, non-health majors were not more inclined to change their risky behaviors (e.g., to start using food thermometers or to eat fewer risky foods).
Reviewer Comments:

Authors noted these weaknesses of the study:

  • Non-representative small sample of college students
  • Internal validity threats related to testing and mortality (drop-out rate): students may have become sensitized to food safety issues through repeated testing (though both groups had the same exposures), and non-health majors had a higher drop-out rate
  • Possible external validity threats include the interaction of testing and treatment (intervention); although all subjects received the intervention in the same order, performance on earlier modules and tests could have affected test performance after later modules
  • Reactivity could pose a threat because the incentive to complete the required steps may have differed between health and non-health majors.

Other Comments

  • Regarding the fourth bullet above, non-health majors may not have viewed the education as important to their professions
  • Prior nutrition education courses taken by health majors could influence scores on all variables. 

     


Research Design and Implementation Criteria Checklist: Primary Research
Relevance Questions
  1. Would implementing the studied intervention or procedure (if found successful) result in improved outcomes for the patients/clients/population group? (Not Applicable for some epidemiological studies)
Yes
  2. Did the authors study an outcome (dependent variable) or topic that the patients/clients/population group would care about?
Yes
  3. Is the focus of the intervention or procedure (independent variable) or topic of study a common issue of concern to nutrition or dietetics practice?
Yes
  4. Is the intervention or procedure feasible? (NA for some epidemiological studies)
Yes
 
Validity Questions
1. Was the research question clearly stated?
Yes
  1.1. Was (were) the specific intervention(s) or procedure(s) [independent variable(s)] identified?
Yes
  1.2. Was (were) the outcome(s) [dependent variable(s)] clearly indicated?
Yes
  1.3. Were the target population and setting specified?
Yes
2. Was the selection of study subjects/patients free from bias?
No
  2.1. Were inclusion/exclusion criteria specified (e.g., risk, point in disease progression, diagnostic or prognosis criteria), and with sufficient detail and without omitting criteria critical to the study?
Yes
  2.2. Were criteria applied equally to all study groups?
Yes
  2.3. Were health, demographics, and other characteristics of subjects described?
No
  2.4. Were the subjects/patients a representative sample of the relevant population?
No
3. Were study groups comparable?
No
  3.1. Was the method of assigning subjects/patients to groups described and unbiased? (Method of randomization identified if RCT)
No
  3.2. Were distribution of disease status, prognostic factors, and other factors (e.g., demographics) similar across study groups at baseline?
No
  3.3. Were concurrent controls used? (Concurrent preferred over historical controls.)
No
  3.4. If cohort study or cross-sectional study, were groups comparable on important confounding factors and/or were preexisting differences accounted for by using appropriate adjustments in statistical analysis?
N/A
  3.5. If case control or cross-sectional study, were potential confounding factors comparable for cases and controls? (If case series or trial with subjects serving as own control, this criterion is not applicable. Criterion may not be applicable in some cross-sectional studies.)
N/A
  3.6. If diagnostic test, was there an independent blind comparison with an appropriate reference standard (e.g., "gold standard")?
N/A
4. Was method of handling withdrawals described?
???
  4.1. Were follow-up methods described and the same for all groups?
Yes
  4.2. Was the number, characteristics of withdrawals (i.e., dropouts, lost to follow up, attrition rate) and/or response rate (cross-sectional studies) described for each group? (Follow up goal for a strong study is 80%.)
Yes
  4.3. Were all enrolled subjects/patients (in the original sample) accounted for?
Yes
  4.4. Were reasons for withdrawals similar across groups?
???
  4.5. If diagnostic test, was decision to perform reference test not dependent on results of test under study?
N/A
5. Was blinding used to prevent introduction of bias?
No
  5.1. In intervention study, were subjects, clinicians/practitioners, and investigators blinded to treatment group, as appropriate?
No
  5.2. Were data collectors blinded for outcomes assessment? (If outcome is measured using an objective test, such as a lab value, this criterion is assumed to be met.)
No
  5.3. In cohort study or cross-sectional study, were measurements of outcomes and risk factors blinded?
N/A
  5.4. In case control study, was case definition explicit and case ascertainment not influenced by exposure status?
N/A
  5.5. In diagnostic study, were test results blinded to patient history and other test results?
N/A
6. Were intervention/therapeutic regimens/exposure factor or procedure and any comparison(s) described in detail? Were intervening factors described?
???
  6.1. In RCT or other intervention trial, were protocols described for all regimens studied?
N/A
  6.2. In observational study, were interventions, study settings, and clinicians/provider described?
N/A
  6.3. Was the intensity and duration of the intervention or exposure factor sufficient to produce a meaningful effect?
???
  6.4. Was the amount of exposure and, if relevant, subject/patient compliance measured?
Yes
  6.5. Were co-interventions (e.g., ancillary treatments, other therapies) described?
???
  6.6. Were extra or unplanned treatments described?
???
  6.7. Was the information for 6.4, 6.5, and 6.6 assessed the same way for all groups?
???
  6.8. In diagnostic study, were details of test administration and replication sufficient?
N/A
7. Were outcomes clearly defined and the measurements valid and reliable?
???
  7.1. Were primary and secondary endpoints described and relevant to the question?
Yes
  7.2. Were nutrition measures appropriate to question and outcomes of concern?
N/A
  7.3. Was the period of follow-up long enough for important outcome(s) to occur?
???
  7.4. Were the observations and measurements based on standard, valid, and reliable data collection instruments/tests/procedures?
Yes
  7.5. Was the measurement of effect at an appropriate level of precision?
???
  7.6. Were other factors accounted for (measured) that could affect outcomes?
No
  7.7. Were the measurements conducted consistently across groups?
Yes
8. Was the statistical analysis appropriate for the study design and type of outcome indicators?
Yes
  8.1. Were statistical analyses adequately described and the results reported appropriately?
Yes
  8.2. Were correct statistical tests used and assumptions of test not violated?
Yes
  8.3. Were statistics reported with levels of significance and/or confidence intervals?
Yes
  8.4. Was "intent to treat" analysis of outcomes done (and as appropriate, was there an analysis of outcomes for those maximally exposed or a dose-response analysis)?
N/A
  8.5. Were adequate adjustments made for effects of confounding factors that might have affected the outcomes (e.g., multivariate analyses)?
N/A
  8.6. Was clinical significance as well as statistical significance reported?
No
  8.7. If negative findings, was a power calculation reported to address type 2 error?
N/A
9. Are conclusions supported by results with biases and limitations taken into consideration?
Yes
  9.1. Is there a discussion of findings?
Yes
  9.2. Are biases and study limitations identified and discussed?
Yes
10. Is bias due to study’s funding or sponsorship unlikely?
???
  10.1. Were sources of funding and investigators’ affiliations described?
???
  10.2. Was the study free from apparent conflict of interest?
???