How to Select and Use Various Assessment Methods in Your Program

Version 2023.1

This section provides guidance on the selection and use of a variety of assessment methods and tools, such as surveys, inventories, and classroom observation, to measure progress and drive future action. The section includes (1) general guidance on how to engage in actionable assessment and specific guidance on assessment methods for (2) departmental function and initiatives, (3) teaching effectiveness, and (4) student learning. For more details, see the supplements on Possible Key Performance Indicators for Departments; How to Design Surveys, Interviews, and Focus Groups; and How to Analyze, Visualize, and Interpret Departmental Data. This section is organized somewhat differently from other sections in the EP3 Guide, with a focus on implementation strategies for specific assessment methods rather than general actionable practices. While the guidance in this section on how to assess student learning provides many useful tools for assessment of program-level student learning outcomes (defined below), it does not directly address how to design and conduct such assessment. For that, see the section on How to Assess Student Learning at the Program Level.

Program-Level Student Learning Outcomes

Statements describing what your students should be able to do as a result of completing your degree program. Outcomes emphasize the integration and application of knowledge rather than coverage of material, and are observable, measurable, and demonstrable. They use specific, active verbs (e.g., “identify,” “develop,” “communicate,” “demonstrate”) rather than “understand.” Program-level student learning outcomes are often abbreviated as program-level SLOs or as PLOs, and are also known as program-level learning goals. The term “outcomes” is becoming preferred over “goals” or “objectives” because it makes it clearer that these are defined expectations upon completion of the program, rather than aspirational goals that may or may not be achieved. Examples include:

  • Identify, formulate, and solve broadly defined technical or scientific problems by applying knowledge of mathematics and science and/or technical topics to areas relevant to the discipline
  • Develop and conduct experiments or test hypotheses, analyze and interpret data, and use scientific judgment to draw conclusions
  • Communicate scientific ideas and results in written and oral form according to professional standards and norms
  • Demonstrate and exemplify an understanding of ethical conduct in scientific and professional settings

Program-level student learning outcomes generally focus on overall program outcomes, in contrast to course-level student learning outcomes, which are specific to the knowledge and skills addressed in individual courses. Accreditation standards typically require program-level student learning outcomes to be defined separately for each degree program (e.g., BA, BS, or minor), even though there will often be considerable overlap among these sets of outcomes. For more details, see the section on How to Assess Student Learning at the Program Level. For examples, see the supplement on Sample Documents for Program-Level Assessment of Student Learning or the PhysPort expert recommendation How do I develop student learning outcomes for physics courses?


Benefits

Using a variety of assessment methods to measure your program’s outcomes enables you to better understand your department’s strengths and areas for growth and to engage in a cyclic process of program improvement. Knowing which assessment methods are appropriate for your needs and how to use each method effectively enables your program to get useful data to guide your efforts. The recommendations in this section will help ensure your assessment is actionable.

The Cycle of Reflection and Action

Effective Practices

  1. Engage in actionable assessment

  2. Assess departmental function and initiatives

  3. Assess teaching effectiveness

  4. Assess student learning

Resources within the EP3 Guide

Tracking departmental metrics

Rubrics

  • Rubrics on PhysPort: Includes several research-based rubrics available for free download along with information about how to use them.
  • P. Arcuria and M. Chaaban, Best Practices for Designing Effective Rubrics: A short article that discusses best practices for designing rubrics, getting started, and evaluating your rubric.
  • University of Nebraska-Lincoln Center for Transformative Teaching, How to Design Effective Rubrics: Discusses five steps for designing effective rubrics.
  • R. C. Hauhart and J. E. Grahe, Designing and Teaching Undergraduate Capstone Courses, Jossey-Bass (2014): Provides an extensive survey and a wealth of information about capstone courses across many disciplines, including a chapter on using the capstone course for assessment, which includes information on rubrics and program assessment.
  • University of Colorado Boulder Center for Teaching & Learning, Rubrics: Includes an overview of rubrics, steps for creating a rubric, types of rubrics, best practices for designing and implementing rubrics, and sample rubrics.
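
If your department uses a shared analytic rubric to score student work (for example, capstone reports or lab notebooks) for program assessment, tabulating the scores is straightforward to automate. The Python sketch below is only an illustration: the criteria, the four-level scale, the benchmark, and the scores are hypothetical placeholders, not drawn from any of the resources above.

    # Minimal sketch: aggregating analytic-rubric scores across student work.
    # The criteria, performance levels, benchmark, and scores are illustrative
    # assumptions, not taken from any resource listed above.
    from statistics import mean

    CRITERIA = ["experimental design", "data analysis", "written communication"]

    # One dict per student artifact, scored 1 (beginning) through 4 (exemplary)
    # by a faculty reviewer using the shared rubric.
    scores = [
        {"experimental design": 3, "data analysis": 4, "written communication": 2},
        {"experimental design": 2, "data analysis": 3, "written communication": 3},
        {"experimental design": 4, "data analysis": 4, "written communication": 3},
    ]

    TARGET = 3  # hypothetical departmental benchmark: "proficient" or better

    for criterion in CRITERIA:
        values = [s[criterion] for s in scores]
        share_at_target = 100 * sum(v >= TARGET for v in values) / len(values)
        print(f"{criterion:>24}: mean {mean(values):.2f}, "
              f"{share_at_target:.0f}% at or above target")

In practice, the scores would come from a spreadsheet export, and the summary would be reported alongside the rubric itself so readers can interpret the performance levels.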

Surveys, interviews, and focus groups

  • How to Design Surveys, Interviews, and Focus Groups
  • How to Analyze, Visualize, and Interpret Departmental Data
  • Pew Research Center, Writing Survey Questions: Provides useful guidance for developing survey questions, choosing between open-ended and closed-ended questions, question wording, and question order. (A sketch at the end of this resource list illustrates one way to tabulate responses to both kinds of questions.)

    Open-Ended Question

    A survey question in which the respondent can form their own answer. Also known as an open-response question or short-answer question. Open-ended questions always give qualitative data, but may be coded into themes, which can then be treated quantitatively. Open-ended questions take more time to answer and to analyze than closed-ended questions but may provide richer insight into respondents’ thinking. It is often best to use a mix of open-ended and closed-ended questions to provide a balance between quality and efficiency.

    Closed-Ended Question

    A survey question in which the respondent must select from a set of answer choices. Also known as a closed-response question or multiple-choice question. Closed-ended questions can include yes/no or true/false questions, rating scales, choose-all-that-apply checklists, or other sets of options. Closed-ended question responses can be treated categorically or numerically, so they can be considered quantitative data. Closed-ended questions are relatively quick to answer and to analyze, but offer limited information compared to open-ended questions. It is often best to use a mix of open-ended and closed-ended questions to provide a balance between quality and efficiency.
  • University of Washington Office of Educational Assessment, Tips for writing questionnaire items: A very useful overview of survey concepts.
  • S. B. Robinson and K. F. Leonard, Designing quality survey questions, Sage Publications, Inc. (2019): A comprehensive text about practical survey design, including a very good checklist.
  • The STEM Learning and Research Center (STELAR): Provides a variety of tools to measure aspects of students’ experiences in STEM, such as confidence and interest in STEM or experiences with particular course materials. These tools can be used to inform survey design.
  • S. A. Jacob and S. P. Furgerson, “Writing Interview Protocols and Conducting Interviews,” The Qualitative Report 17, 1–10 (2012): This overview for students learning to conduct qualitative research is a very useful guide to interviewing.
  • CU Science Education Initiative and Carl Wieman Science Education Initiative, SEI Research Interview Guide (2018): This handout, from S. V. Chasteen and W. J. Code, “The Science Education Initiative Handbook” (2018), documents good practices for interviews of STEM faculty and students.
  • W. K. Adams and C. E. Wieman, “Development and validation of instruments to measure learning of expert-like thinking,” International Journal of Science Education 33(9), 1289–1312 (2011): Provides a guide for interviewing students to uncover their thinking about concepts.
  • S. D. H. Evergreen, Effective Data Visualization: The Right Chart for the Right Data, 2nd edition, Sage Publications, Inc. (2019) and S. D. H. Evergreen, Presenting Data Effectively: Communicating Your Findings for Maximum Impact, Sage Publications, Inc. (2017): A series of books and online guides with clear advice on quantitative data visualization using Excel, plus a data visualization checklist.
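
As the definitions above note, closed-ended responses can be treated numerically, and open-ended responses can be coded into themes and then counted. The Python sketch below illustrates one way to summarize both; the question, rating scale, theme codes, and responses are hypothetical placeholders, not recommendations from the resources above.

    # Minimal sketch: summarizing closed-ended and coded open-ended responses.
    from collections import Counter

    # Closed-ended item on a 5-point agreement scale, one response per student.
    likert = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]
    scale = {1: "Strongly disagree", 2: "Disagree", 3: "Neutral",
             4: "Agree", 5: "Strongly agree"}
    counts = Counter(likert)
    n = len(likert)

    print("Q1 (closed-ended), hypothetical prompt about research opportunities:")
    for value in sorted(scale, reverse=True):
        print(f"  {scale[value]:>17}: {counts.get(value, 0):2d} "
              f"({100 * counts.get(value, 0) / n:.0f}%)")
    print(f"  Agree or strongly agree: {100 * sum(v >= 4 for v in likert) / n:.0f}%")

    # Open-ended item: a reviewer reads each response and assigns one or more
    # theme codes; the codes can then be counted like any categorical variable.
    coded_themes = [
        ["advising", "course availability"],
        ["advising"],
        ["community"],
        ["course availability", "community"],
    ]
    theme_counts = Counter(code for response in coded_themes for code in response)
    print("Q2 (open-ended) theme counts:", dict(theme_counts.most_common()))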

Measuring teaching effectiveness

Peer review of teaching

  • University of Kansas Center for Teaching Excellence, Peer Evaluation: Provides an overview of best practices for peer review of teaching and peer review protocols.

Teaching portfolios

  • University of Kansas Center for Teaching Excellence, Benchmarks for Teaching Effectiveness: Provides sources of evidence to use for evaluating different dimensions of teaching, guidelines for using this evidence to document teaching quality, a sample portfolio, and tools for departments to use in annual reviews of teaching through a Guide for Evaluating Teaching and a Benchmarks Evaluation Form.
  • Vanderbilt University Center for Teaching, Teaching Portfolios: Provides guidelines and uses of teaching portfolios, as well as samples.

Teaching reflection

  • University of Georgia Scientists Engaged in Education Research Center, Teaching Evaluation Resources: See the section on Instructor (Self) Voice for a Comprehensive guide to self-reflection that provides detailed information on the instructional staff self-reflection process (what, when, where, and how, plus a template, a rubric for evaluation, and examples of written self-reflections); a template for self-reflection on teaching, which includes templates for reflecting on teaching for next year and on teaching last year; and a list of Self-Reflection Resources with links to self-reflection templates and forms from a variety of institutions.

    Instructional Staff

    Faculty, instructors, adjuncts, teaching staff, and others who serve as instructors of record for courses. This term does not include instructional support staff who support the teaching of courses.

Classroom observations

  • University of Georgia Scientists Engaged in Education Research Center, Teaching Evaluation Resources: See the section on Peer Voice for examples of peer observation forms and a list of Peer Observation Resources with links to examples of teaching observation forms from a variety of universities.
  • C. A. Paul, A. Madsen, S. McKagan, and E. C. Sayre, Which observation protocol should I use to observe teaching?, PhysPort (2021): Expert recommendation from PhysPort discussing and comparing commonly used observation protocols, with links to observation protocols on PhysPort and information on how to access and implement each protocol.

    Observation Protocol

    A tool (online or paper-and-pencil) that helps observers focus on specific aspects of the classroom they are watching. A wide variety of observation protocols are available in the PhysPort assessments database, and choosing one to use can help instructional staff articulate the specific teaching goals they are interested in noticing and improving. Observation protocols measure aspects of teaching such as how engaged students are, how much time the instructor spends on different actions during class, and/or how reformed the teaching is. Observation protocols provide a useful guide for conducting classroom observations, but observers need time to learn to use them.
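
Many observation protocols have observers record which activities occur during each short time interval of a class, so summarizing an observation is largely a matter of counting codes. The Python sketch below is a protocol-agnostic illustration; the activity codes and intervals are invented, and a real analysis should follow the conventions of whichever protocol you adopt.

    # Minimal sketch: tallying interval-coded classroom observation data.
    # The activity codes and intervals below are invented placeholders.
    from collections import Counter

    # One set per 2-minute interval, listing the codes the observer marked;
    # an interval can carry more than one code.
    intervals = [
        {"lecturing"},
        {"lecturing", "posing question"},
        {"group work"},
        {"group work", "instructor circulating"},
        {"whole-class discussion"},
        {"lecturing"},
    ]

    code_counts = Counter(code for interval in intervals for code in interval)
    n_intervals = len(intervals)

    print(f"Observed intervals: {n_intervals} (about {2 * n_intervals} minutes)")
    for code, count in code_counts.most_common():
        print(f"  {code:>22}: present in {100 * count / n_intervals:.0f}% of intervals")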

Student feedback forms for formative assessment

Student evaluations of teaching

Exams and homework

  • University of Waterloo, Centre for Teaching Excellence, Exam Questions: Types, Characteristics, and Suggestions: Describes many different exam question types and gives tips on how to write each type of question well.
  • Stephanie Chasteen, Group Exams: A short article on the effectiveness of group exams and implementation details for how to give and grade them.
  • Just-in-Time Teaching (JITT): A teaching method in which students answer questions online before class, promoting preparation for class and encouraging them to come to class with a “need to know.” Instructional staff use the responses to fine-tune their presentations and incorporate student quotes into the class.
  • Carnegie Mellon University Eberly Center, Considerations for Oral Assessment Approaches: Discusses four considerations instructional staff should take into account when giving their students oral assessments, and ways to make oral assessments more effective.

Student self-assessments

  • Cornell University Center for Teaching Innovation, Self-Assessment: Includes an introduction to student self-assessment, and ideas on getting started.
  • Carnegie Mellon University Eberly Center for Teaching Excellence & Educational Innovation, Exam Wrappers: Gives basic information about exam wrappers (sets of written reflection questions students answer after they complete an exam), and provides examples of exam wrappers for exams and homework assignments in different disciplines.
  • The Education Hub, How to successfully introduce self-assessment in your classroom: Gives an overview of self-assessment, ideas on how to set up successful self-assessment, pitfalls to avoid, and tools to use.

Research-based assessments

The references below provide evidence for the recommendations throughout this section [1–5] and for the specific recommendations about teaching portfolios [6, 7], teaching reflection [3, 8, 9], classroom observations [9, 10], student feedback forms [11, 12], student evaluations of teaching [13–15], inventories and tracking departmental metrics [4, 5, 16], exams and homework [9, 17, 18], rubrics of student performance [18–21], student self-assessments [17, 19, 20], and research-based assessments [18, 22, 23].

  1. J. Diamond, M. Horn, and D. H. Uttal, Practical Evaluation Guide: Tools for museums and other informal educational settings, 3rd Edition, Rowman & Littlefield Publishers (2016).
  2. E. J. Davidson, Actionable evaluation basics: Getting succinct answers to the most important questions, 2nd Edition, Real Evaluation Ltd (2013).
  3. K. Hogan and V. Sathy, Inclusive Teaching: Strategies for promoting equity in the college classroom, West Virginia University Press (2022).
  4. G. D. Kuh, S. O. Ikenberry, N. A. Jankowski, T. Reese Cain, P. T. Ewell, P. Hutchings, and J. Kinzie, Using Evidence of Student Learning to Improve Higher Education, Jossey-Bass (2015).
  5. A. Kezar, How Colleges Change: Understanding, Leading, and Enacting Change, 2nd Edition, Routledge (2018).
  6. M. Kaplan, The Teaching Portfolio, CRLT Occasional Papers No. 11, The Center for Research on Learning and Teaching, University of Michigan (1998).
  7. G. C. Weaver, A. E. Austin, A. F. Greenhoot, and N. D. Finkelstein, “Establishing a Better Approach for Evaluating Teaching: The TEval Project,” Change: The Magazine of Higher Learning 52(3), 25–31 (2020).
  8. University of Georgia Scientists Engaged in Education Research Center, Instructor Self-Reflection On Teaching: Provides detailed information on the instructional staff self-reflection process, including what, when, where, and how, along with a template, a rubric for evaluation, and examples of written self-reflections.
  9. K. Bain, What the best college teachers do, Harvard University Press (2004).
  10. J. A. Fletcher, “Peer observation of teaching: A practical tool in higher education,” The Journal of Faculty Development 32(1), 51–64 (2018).
  11. C. A. Hurney, N. L. Harris, S. C. B. Prins, and S. E. Kruck, “The Impact of a Learner-Centered, Mid-Semester Course Evaluation on Students,” The Journal of Faculty Development 28(3), 55–61 (2014).
  12. L. Mandouit, “Using student feedback to improve teaching,” Educational Action Research 26(5), 755–769 (2018).
  13. R. J. Kreitzer and J. Sweet-Cushman, “Evaluating Student Evaluations of Teaching: a Review of Measurement and Equity Bias in SETs and Recommendations for Ethical Reform,” Journal of Academic Ethics (2021).
  14. B. Uttl, C. A. White, and D. W. Gonzalez, “Meta-analysis of faculty’s teaching effectiveness: Student evaluation of teaching ratings and student learning are not related,” Studies in Educational Evaluation 54, 22–42 (2017).
  15. H. A. Hornstein, “Student evaluations of teaching are an inadequate assessment tool for evaluating faculty performance”, Cogent Education 4(1), 1304016 (2017).
  16. D. Dormant, The Chocolate Model of Change, Diane Dormant, Lulu (2011).
  17. M. Svinicki and W. J. McKeachie, McKeachie’s Teaching Tips: Strategies, Research, and Theory for College and University Teachers, 14th Edition, Cengage Learning (2014).
  18. R. M. Felder and R. Brent, Teaching and Learning STEM: A Practical Guide, Jossey-Bass (2024).
  19. D. Wiliam, Embedded Formative Assessment, Solution Tree (2017).
  20. S. A. Ambrose, M. W. Bridges, M. DiPietro, M. C. Lovett, M. K. Norman, and R. E. Mayer, How Learning Works: Seven Research-Based Principles for Smart Teaching, Jossey-Bass (2010).
  21. Y. M. Reddy and H. Andrade, “A review of rubric use in higher education,” Assessment & Evaluation in Higher Education 35(4), 435–448 (2010).
  22. A. Madsen, S. B. McKagan, and E. C. Sayre, “Resource Letter RBAI-1: Research-Based Assessment Instruments in Physics and Astronomy,” American Journal of Physics 85(4), 245–264 (2017): A paper that describes and compares most of the research-based assessment instruments of physics and astronomy content.
  23. A. Madsen, S. B. McKagan, E. C. Sayre, and C. A. Paul, “Resource Letter RBAI-2: Research-based assessment instruments: Beyond physics topics,” American Journal of Physics 87(5), 350–369 (2019): A paper that describes and compares most of the research-based assessment instruments of attitudes and beliefs about physics, epistemologies and expectations, the nature of physics, problem solving, self-efficacy, reasoning skills, and lab skills.

This material is based upon work supported by the National Science Foundation under Grant Nos. 1738311, 1747563, and 1821372. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
