Engage in actionable assessment
- Proceed with a sense of purpose. Bring together a team or working group to identify important questions and design your assessment. People are more likely to value what they help to create. Ask, “What do we want to know?”; “For what purpose?”; and “How will we use the results?” Avoid leaping into data gathering without a clear sense of this purpose. To quote statistician John Tukey, “better an approximate answer to the right question…than an exact answer to the wrong question.” If your department lacks assessment expertise, try to build this expertise in your department over time and/or consult assessment partners outside your department, such as administrators, agencies, your teaching and learning center, and your office of institutional research.
- Determine who in your department will lead, plan, conduct, and analyze assessments. Consider what person or combination of people are best positioned to do this work effectively. For example, consider what knowledge department members have of the issues at hand and whether they have sufficient experience and longevity to effectively promote the integration of assessment into the department culture. Ensure that this work is appropriately recognized and rewarded and that it is distributed equitably to avoid overburdening people.
- Determine who can provide insight into the important questions you have identified, either as designers of assessments or as interviewees or survey respondents. Get input from a wide variety of relevant stakeholders, which may include pre-tenure faculty, tenured faculty, non-tenure-track instructional staff, other staff, advisors, mentors, postdocs, members of other departments that rely on your courses and/or have connections with your programs, alumni, employers, graduate program directors, students in different years of your program, students who left the program, students who participated in research, and students who did not. Be sure to actively engage and consult people from marginalized groups. At the same time, avoid overburdening new faculty or people from marginalized groups.
- Design assessment plans for use rather than for curiosity, and collect only the data necessary to address the issues being investigated. A common mistake is to get carried away with assessment and try to collect and analyze all possible data or ask all possible questions. The pros and cons of each assessment in this section will help identify the best methods based on who is involved in providing, collecting, and analyzing the data. Triangulate among multiple data sources and methods, and obtain a mix of quantitative and qualitative data. Consider short, quick assessments to get adequate answers efficiently and reduce assessment fatigue.
- Work to collect data that are useful, plausible, credible, and accurate enough, while recognizing that it’s unlikely that any data will be 100% accurate. Use short, frequent assessments that give quick results about impact, rather than relying solely on large, end-of-project assessments.
- Listen to department members as assessments are designed, and then listen to the results of the assessments. Listen actively during interviews and focus groups, and read responses to surveys with an open mind. All assessments are opportunities to gain insight into the experiences and perspectives of different people; take full advantage of this opportunity.
- Interpret and work to make sense of the data you collect, rather than just analyze them. Turn the data into knowledge, and use this knowledge to tell the story of a departmental success or challenge. A common mistake is to collect and analyze data and then let them sit on a shelf. Another is to get lost in details and lose sight of bigger questions. Write up a short report that lays out key findings and how they answer your initial questions, rather than report all possible findings. Then discuss the findings with the team in charge of assessment; building understanding from information is a social process.
- Ask, “What have we learned? What should we do about it? Who needs to see these results?” Use the insights to collaboratively identify action items. Avoid leaving the next steps unclear.
Assess departmental function and initiatives
Overview: An inventory is an itemized list of all the resources, practices, policies, mechanisms, or other features of a particular aspect of your department, e.g., recruiting, retention, teaching, career preparation, or departmental climate. An inventory is a way to take stock of the current landscape, often to help inform decision making.
Pros: An inventory is fairly easy to do. An inventory helps identify strengths, resources, and needs in your department, pointing to possible next steps. Inventories can support faculty or other stakeholders in developing a sense of ownership and engagement in departmental change by providing a framework for department members to collectively identify what is actually happening in your department. Inventories provide excellent baseline data for improvements.
Cons: An inventory is not a well-defined assessment instrument, so it may be hard to know where to start or how to use the results.
Possible uses: Identify all the mechanisms by which your department currently supports an inclusive culture and additional mechanisms by which it could do so. Identify all the data collected by your department or institution on student learning and where they are stored. Identify the EP3 practices your department uses or could use in a particular area such as recruiting, career preparation, or undergraduate research.
Tips: Inventories can be particularly powerful when used as part of the work of a collective action team or task force. The team can contribute to generating the inventory and engage in conversations about the results to identify next steps.
Use your driving questions (“What do we want to know? For what purpose?”) to determine what to include in your inventory.
Create a shared location for your inventory results (such as a departmental shared drive) for transparency, and so that they can be used again or referenced in the future.
Create a structure for your inventory, however informal. Possible structures include a document with bulleted lists, a spreadsheet with checkboxes, and tables. Tables or spreadsheets allow you to easily track multiple dimensions of an area, e.g., people, mechanisms, and courses associated with different learning outcomes.
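As a concrete sketch, a multi-dimension inventory can be kept as a simple spreadsheet. The snippet below (the areas, mechanisms, people, and course number are entirely hypothetical) writes inventory rows to CSV so the file can live on a departmental shared drive:

```python
import csv
import io

# Hypothetical inventory rows: each row tracks one mechanism along several
# dimensions (area, people involved, associated courses, documentation status).
rows = [
    {"area": "community building", "mechanism": "peer mentoring program",
     "people": "faculty advisor, student officers", "courses": "",
     "documented": "yes"},
    {"area": "computational skills", "mechanism": "numerical methods labs",
     "people": "instructional staff", "courses": "PHYS 230",
     "documented": "no"},
]

# Write the inventory to CSV; in practice this would go to a shared file.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=rows[0].keys())
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

A spreadsheet like this doubles as a conversation starter: blank cells (such as the missing documentation above) point directly at gaps to discuss.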
Use informal interviews to populate the inventory, if appropriate. For example, ask students to list all the ways they can meet other students; ask instructional staff to list all the ways they help students gain computational skills.
If you are identifying existing mechanisms for accomplishing particular goals, list both policies and procedures. Policies are more formal and more general. Procedures are how things are actually done; they may or may not be written down. Reviewing relevant documents such as department meeting agendas may help identify procedures.
If you are identifying people, consider formal and informal gatekeepers (who control information and access to resources), stakeholders (who have a special interest in a particular area), and experts (who have specialized knowledge in the area).
Consider the following as items and resources that could be included in an inventory: EP3 practices, advising and mentoring practices, which courses address particular learning outcomes, career expertise within the program, faculty research areas, undergraduate research opportunities, courses or types of courses, non-curricular offerings, teaching practices, students licensed to teach, policies supporting inclusive culture, student community-building efforts, internship opportunities, employment opportunities, financial support available to students, career development resources, teaching certification pathways, professional development opportunities for faculty and staff, performance evaluation mechanisms, schedules, meetings, department rituals, department spaces, social events, and external resources or partners.
Aim for good enough: Consider starting with an incomplete inventory and using it as a living document and a conversation starter, rather than waiting until your inventory is complete before using it. Return to the inventory periodically or at the end of a change effort to consider whether gaps have been addressed.
Overview: Department metrics are quantitative data about department performance. Metrics are tracked in order to measure key performance indicators (KPIs) that provide insight into how well your department is performing in a particular area. A KPI may be directly related to a single department metric (e.g., number of students graduated this year) or derived from more than one metric (e.g., the average time to degree for students in the major over the past three years). Once a department identifies a KPI, the metrics informing that KPI must be periodically tracked, analyzed, and used for improvement.
Pros: Tracking metrics enables systematic analysis and understanding of the current performance of your department and measures improvement. Relevant metrics may already be tracked by your institution or department for performance review, accreditation, or other purposes. Metrics and KPIs can be very persuasive to faculty, administrators, and other stakeholders in creating impetus for change or providing evidence of impact.
Cons: Determining and tracking useful metrics can be time consuming. It can be difficult to determine the causes of changes to KPIs, which are often influenced by external factors beyond intentional department changes. For example, enrollment and graduation rates might change due to social and political changes in society, or the disciplinary background of students enrolled in a service course might change when another department changes the physics requirements for their major. There can be a long delay between making a change and seeing improvement in tracked KPIs. Departments may lack people with the skills to collect and interpret data about KPIs.
Possible uses: Use a database of student entrance and exit dates to track how many students complete the major and minor each year and the average time it takes to complete each degree. Use a database of course enrollments to track the average student-to-instructor ratio in lower- and upper-division courses.
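The first use above amounts to a small calculation once the entrance and exit dates are in hand. The sketch below shows one way it might look; the student records and field names are hypothetical, and the 365.25-day year is a simplifying assumption:

```python
from datetime import date

# Hypothetical student records: entrance/exit dates plus completion status.
students = [
    {"name": "A", "entered": date(2019, 9, 1), "exited": date(2023, 5, 15), "completed": True},
    {"name": "B", "entered": date(2020, 9, 1), "exited": date(2024, 5, 15), "completed": True},
    {"name": "C", "entered": date(2021, 9, 1), "exited": date(2022, 5, 15), "completed": False},
]

graduates = [s for s in students if s["completed"]]
num_graduated = len(graduates)  # metric: students graduated

# KPI derived from two metrics per student: average time to degree in years.
avg_years = sum(
    (s["exited"] - s["entered"]).days / 365.25 for s in graduates
) / num_graduated

print(f"{num_graduated} graduates, average time to degree {avg_years:.1f} years")
```

The same pattern (filter records, then aggregate) covers most enrollment-derived KPIs, whether computed in a script or as spreadsheet formulas.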
Tips: Rather than identifying every possible metric to track, ask, “What are you trying to understand, and what will help you to best understand it?” Choose a small set of KPIs; starting with 10 to 15 will be sufficient for most departments, although the appropriate number will depend on the size of your department, number of sub-programs, and how many of the KPIs are already being collected for external reporting or are easy to get from existing data sources. See the supplement on Possible Key Performance Indicators for Departments for ideas.
Work with your office of institutional research. Determine what data they are collecting and what reports they can provide to you.
Be strategic: Determine what data will be collected, how often they will be collected, and how often they will be compared to data from the previous time period. Determine who will collect the data, who will analyze them, how the results will be reported, and to whom.
Have a documented plan for consistent tracking that is built into your departmental process, rather than rely on a single motivated volunteer. List responsibilities and timelines for tracking data within position and role descriptions and departmental timelines.
Create a shared, well-organized, secure location for the data, such as a departmental shared drive with sensible folder and file names. This supports transparency and sustainability.
Create one or more well-organized spreadsheets or databases to collect data and calculate KPIs.
Track student data on a term-by-term basis to allow fine-grained understanding of student movement through courses and into and out of the major and minor.
Select a few of the most relevant KPIs for your department. Possible KPIs include the number of students in the major, graduation rates, faculty-staff ratios, student demographics, and passing rates. See the supplement on Possible Key Performance Indicators for Departments for a more detailed list.
Disaggregate data by demographics, major, or other critical subgroupings to understand the experiences of different groups. For guidance on how to do so respectfully and protect anonymity, see the Guidelines for Demographic Questions in the supplement on How to Design Surveys, Interviews, and Focus Groups.
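A minimal sketch of disaggregation, using hypothetical records and subgroup labels, and suppressing very small subgroups as one simple way to help protect anonymity:

```python
from collections import defaultdict

# Hypothetical per-student records; field names and groupings are illustrative.
records = [
    {"major": "physics", "group": "first-generation", "completed": True},
    {"major": "physics", "group": "continuing-generation", "completed": True},
    {"major": "physics", "group": "first-generation", "completed": False},
    {"major": "physics", "group": "continuing-generation", "completed": True},
]

# Tally completion per subgroup; suppress subgroups below a minimum size
# (here n < 2, purely for demonstration) to avoid identifying individuals.
MIN_N = 2
tallies = defaultdict(lambda: [0, 0])  # group -> [completed, total]
for r in records:
    tallies[r["group"]][1] += 1
    tallies[r["group"]][0] += int(r["completed"])

for group, (done, total) in sorted(tallies.items()):
    if total < MIN_N:
        print(f"{group}: n too small to report")
    else:
        print(f"{group}: {done}/{total} completed ({100 * done / total:.0f}%)")
```

A real minimum reporting size should follow your institution's guidance; the point is that suppression is applied before results are shared, not after.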
Compare these data to those of other STEM disciplines at your institution and of physics departments at comparable institutions, to the extent possible or appropriate.
Move beyond data collection and analysis, and engage in sensemaking to ensure the data you have collected support departmental improvement. Ensure that decision makers review the data, make sense of them, and turn them into knowledge. Consider using a collaborative team to review and interpret the data and make concrete recommendations from them.
Overview: A rubric is a tool that identifies criteria for performance and different levels of performance within each of those criteria. A departmental rubric may be used to assess a particular aspect of your department, a course, or the work of faculty or students. A rubric may be a simple checklist of criteria that may be met or not met, or it may be a grid-like structure with criteria listed vertically and performance levels listed horizontally.
Pros: Rubrics identify consistent criteria in hard-to-measure areas such as teaching quality, qualification for a job, departmental climate, and student understanding of a complex physics topic. Rubrics generate a shared understanding about what is important or valued. The process of determining rubric ratings can support shared understanding of areas of strength and areas for growth. Rubrics can be used to evaluate improvement over time. Using rubrics can reduce bias because they identify explicit criteria for evaluation. Rubrics are commonly used in evaluation of student work and program accreditation processes, so they may be familiar to faculty and administrators.
Cons: Rubric ratings are subjective. Rubric results are not easily quantifiable and thus may be difficult to interpret.
Possible uses: Guide peer evaluation of teaching. Provide criteria for merit evaluation of faculty or other staff. Address bias in hiring by providing clear evaluation criteria that candidates can meet with a wide variety of accomplishments. Assess your departmental climate for equity, belonging, and inclusion. Evaluate the aggregate student mastery of a learning outcome on a particular assignment.
Tips: Rubrics best support departmental decision-making when the rubrics’ ratings are collaboratively determined and used to foster dialogue among department members. In the case of rubrics that are locally created or adapted, collaborative generation of a rubric is more likely to lead to shared understanding and agreement on the criteria for excellence.
Build on the work of others. An internet search for the rubric of interest will likely yield several examples. Some existing rubrics in higher education are listed in Resources below.
Identify examples of the types of evidence that could be used to evaluate criteria on the rubric. These may be listed in the rubric cell that aligns with that rubric level. For examples, see the supplement on Sample Documents for Program-Level Assessment of Student Learning and/or rubrics on PhysPort.
Consider using a checklist as an alternative to a rubric. Checklists include criteria and a yes/no choice for each criterion, and are formatted as a list instead of a grid. They can be quicker to create and use since they don’t include different levels of performance, but they don’t provide as much information about what it would look like to meet a criterion.
Keep rubrics as simple as possible while identifying meaningful differences. It is difficult for raters to accurately evaluate criteria on more than three performance levels, so in most cases, avoid using more than three to four performance levels. Use more than four levels only if all are needed to identify meaningful and distinguishable differences. You can also create rubrics that have different items with different numbers of levels, or that mix checklist items with items with more levels, if more levels are needed only for certain items.
Determine whether to use general or specific performance levels. For example, three general levels that apply to all criteria might be “emerging,” “developing,” and “proficient,” while three specific levels for instructor use of learning outcomes might be “learning outcomes are not explicit,” “learning outcomes are explicit,” and “learning outcomes are presented in the syllabus” (from Aguirre et al., 2017). Specific levels better support shared understanding and are easier to rate consistently but take more time to develop. If using general levels, list examples of each level. An example might be “Developing: Writing is somewhat unfocused or underdeveloped, and/or lacks clarity in segments of report. Problems with the use of language may interfere with the reader’s ability to understand the content. There are many grammatical errors.”
Practice applying the rubric to a few test cases with a partner or a team. Discuss discrepancies among scores to recalibrate understanding of the levels.
If you apply numerical scores to rubric levels (e.g., “0” for the lowest level, “1” for the next level, and so on), avoid assigning too much meaning to these scores. An alternative approach to quantifying results is to identify the percentage of items that are rated at satisfactory levels or above.
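The percentage-at-satisfactory approach can be computed directly from the ratings. In the sketch below, the criteria and ratings are hypothetical, and the level names reuse the "emerging/developing/proficient" example from above:

```python
# Hypothetical ratings of one piece of work against a three-level rubric.
SATISFACTORY = {"developing", "proficient"}

ratings = {
    "clarity of writing": "developing",
    "use of evidence": "proficient",
    "data visualization": "emerging",
    "interpretation": "proficient",
}

# Report the share of criteria rated satisfactory or above, rather than
# averaging numeric level scores, which would imply false precision.
satisfactory = sum(1 for level in ratings.values() if level in SATISFACTORY)
pct = 100 * satisfactory / len(ratings)
print(f"{satisfactory}/{len(ratings)} criteria satisfactory ({pct:.0f}%)")
```

This keeps the summary tied to the rubric's own language ("three of four criteria at developing or above") instead of an arbitrary numeric average.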
Overview: Surveys, questionnaires, and feedback forms are written questions about a topic sent to a potential set of respondents. Surveys can include closed-ended and/or open-ended questions and thus can provide both quantitative and qualitative data. Surveys can range from informal to formal. Feedback forms represent a particular type of survey that collects respondents’ opinions and reactions.
Pros: Surveys allow gathering of data from a large number of respondents. Compared to interviews and focus groups, survey data collection is efficient and less subject to the bias or influence of an interviewer. Survey data can be collected anonymously, providing protection for respondent privacy and resulting in more valid and honest answers.
Cons: The value of a survey is heavily dependent on the quality of the questions, and writing good survey questions can be difficult and time consuming. Unlike in an interview, there is no way to clarify ambiguous answers. While surveys avoid the bias or influence of an interviewer, the wording of questions can still result in biased responses. Only a fraction of those invited to take a survey will respond; if those who do respond are different from those who did not (“nonresponse bias”), the results may not be generalizable. Analysis and interpretation of survey responses can be time consuming, and transforming the data into knowledge and understanding requires care.
Possible uses: Exit survey to graduating students to solicit feedback on the program. Faculty survey about attitudes toward particular career paths for physics students. Live poll or online form to gather participant feedback about a departmental event.
Tips: Use surveys in connection with interviews or focus groups to get a full picture of a complex topic. For less complex topics or quicker feedback, make use of informal polls or feedback forms. In either case, keep the survey short and focused.
If possible, partner with experts outside your department, such as the leaders of relevant campus offices (e.g., your office of institutional research, campus climate office, office of equity and inclusion, or human resources office) or an external consulting firm to develop, administer, and analyze surveys. Recognize that trained outside experts are likely to be more effective than department members at ensuring safety and anonymity and collecting useful feedback while avoiding the potential for negative consequences. However, recognize that non-physicists may need guidance on understanding and exploring physics-department-specific issues or challenges.
If survey results will be used to contribute to research or generalized knowledge beyond departmental improvement, contact your institutional review board (IRB) to determine if it requires approval and consent. Involving an IRB is not necessary for surveys that are used only for departmental improvement.
Determine whether to use a short, informal survey or feedback form or a more formal, carefully developed survey. A short, informal survey can provide rapid results with little analysis, while a more carefully developed, formal survey can produce more meaningful results.
When possible, use or adapt existing surveys, particularly those that have been developed with careful research. For example, see how to use research-based assessments of concepts or skills and research-based assessments of student attitudes or beliefs below. For examples of surveys of departmental climate, see Resources in the section on Departmental Culture and Climate.
Make sure questions are clear and not leading. For more advice on writing survey questions, see the supplement on How to Design Surveys, Interviews, and Focus Groups.
When developing or adapting surveys, always have a colleague or expert review the questions in advance. When possible, pilot test surveys with a few initial respondents. Pilot testing saves time and aggravation by addressing problematic questions early.
Identify the population to be surveyed (e.g., “all students who have taken introductory physics this year”) and how to reach a sample of this population. Make sure the sample size is large and diverse enough to allow key question(s) to be addressed.
If possible, offer an incentive to participate, such as extra credit points, food, a gift card, or a T-shirt. This will increase participation and help compensate people for their time. A raffle for an incentive can also increase participation rates.
You can administer surveys online (most common) or on paper. Many institutions have secure software with useful features for creating and sending out online surveys, e.g., Qualtrics. Paper surveys can be helpful with a captive audience such as attendees at a career event, but they require data entry in order to be analyzed.
Send an invitation to participate and explain how the information will be used, the purpose of the survey (e.g., to inform future initiatives or improve a particular aspect of your department), and how respondents can find out about the results of the survey and/or actions taken. These explanations are likely to increase response rates. Make clear whether the information to be collected is confidential or anonymous; explain that the survey is voluntary; explain any incentives you are offering; and provide a link to the survey, the approximate time it will take to complete the survey, and a deadline for response. It is common to provide one or two follow-up reminders, removing those who have already responded from the reminder if the survey is not anonymous.
Aim for surveys that take no longer than five minutes to complete, in order to increase response rate. Longer surveys are possible with highly motivated samples of respondents, or when offering a significant incentive for completion. Some professional survey software will give an estimated completion time; pilot testing is also useful for estimating completion time.
Use open-ended questions on surveys only when you can’t get the information you need with a closed-ended question. Recognize that an open-ended question will likely take 30 to 60 seconds longer to answer and significantly longer to analyze than a closed-ended question, depending on the length and complexity of the question text and expected answers.
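As a back-of-the-envelope check on survey length, question counts can be converted into a rough completion estimate. The per-question times below are assumptions to revise after pilot testing, not measured values:

```python
# Rough completion-time estimate for a draft survey. Both constants are
# assumptions: a typical closed-ended question and the midpoint of the
# 30-to-60-second extra time an open-ended question tends to add.
CLOSED_SECONDS = 10
OPEN_EXTRA_SECONDS = 45

def estimated_minutes(num_closed, num_open):
    """Estimate total completion time in minutes for a draft survey."""
    total_seconds = (num_closed * CLOSED_SECONDS
                     + num_open * (CLOSED_SECONDS + OPEN_EXTRA_SECONDS))
    return total_seconds / 60

# A 12-question draft with two open-ended items:
print(f"{estimated_minutes(10, 2):.1f} minutes")
```

An estimate like this makes the five-minute target concrete while drafting: each open-ended question costs roughly as much time as five closed-ended ones.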
At the beginning of the survey, state again how the information will be used and whether it is confidential or anonymous.
For guidance on how to respectfully design and analyze demographic questions and protect anonymity, see the Guidelines for Demographic Questions in the supplement on How to Design Surveys, Interviews, and Focus Groups.
For tips on analyzing survey data, see the supplement on How to Analyze, Visualize, and Interpret Departmental Data.
Share survey results and/or actions taken with the community that was surveyed for transparency, and to build trust for future assessment efforts.
Overview: An interview is a one-on-one conversation guided by questions posed by the interviewer and answered by the interview participant. Interviews provide primarily descriptive or qualitative data, rather than quantitative data, although results can be categorized to provide quantitative data. Interviews can range from informal to highly structured.
Pros: Interviews can provide rich details about individual experiences, especially around complex topics. They are an excellent way to gather in-depth, detailed information and/or understand how people make decisions. They also provide a more personal interaction with the interview participant than techniques like surveys and an opportunity to ask follow-up questions to clarify responses.
Cons: Interviews sample a small number of people; thus responses are less representative or more idiosyncratic than survey responses. Strong sentiments expressed by very few participants may lead to skewed perceptions. Interview data cannot be completely anonymous, so a plan must be made to protect participant confidentiality. Interviews can be time consuming to conduct and analyze. Interviewers may inadvertently ask leading questions or introduce bias, particularly if they are not trained to identify potential leading questions and biases ahead of time. Interviews can be more intimidating than surveys for participants, especially if they involve sensitive topics. Systematic analysis of interviews can be time consuming; less systematic analysis can identify key issues or concerns but is more subject to bias or misleading results.
Possible uses: Exit interviews with students completing the major, an internship, or another experience. Interviews with students about their experiences with advising, a course, their career pathway, or a department initiative. Interviews to gather opinions from faculty about coursework in the major, possible curricular changes, or experiences with teaching.
Tips: Interviews can be valuable complements to surveys: Surveys can identify key areas that can be explored further in interviews, and interviews can identify themes that can be tested for generality within a population through a survey. Even a few informal conversational interviews can elucidate issues and provide general impressions. Interviews can guide the design of surveys administered more broadly.
If possible, partner with experts outside your department, such as the leaders of relevant campus offices (e.g., your office of institutional research, campus climate office, office of equity and inclusion, or human resources office) or an external consulting firm to develop, administer, and analyze interviews. Recognize that trained outside experts are likely to be more effective than department members at creating trust with interview participants, ensuring safety and anonymity, and collecting useful feedback while avoiding the potential for negative consequences. However, recognize that non-physicists may need guidance on understanding and exploring physics-department-specific issues or challenges.
If the interview results will be used to contribute to research or generalized knowledge beyond departmental improvement, contact your institutional review board (IRB) to determine if it requires approval and consent. Involving an IRB is not necessary for interviews that are used only for departmental improvement.
Determine how informal or structured your interviews will be. Informal interviews are conversational, with questions emerging during discussion. Semi-structured interviews identify topics of discussion and a few predetermined questions. These are the most common formats a department is likely to use. By contrast, fully structured interviews plan the questions and the order strictly in advance.
Develop a list of questions (“prompts”) and possibly a list of follow-ups to those questions (“probes”) to guide the interviewer. Five to ten questions should be sufficient for most one-hour interviews. Such an interview protocol could be very informal, or it could be a more formal set of scripts for the interviewer to read. For the purposes of departmental assessment, an informal protocol is usually sufficient.
Make sure the questions are clear and not leading. For more advice on writing questions, see the supplement on How to Design Surveys, Interviews, and Focus Groups.
Have a colleague or expert review the questions in advance. Test them in a few pilot interviews. This will help ensure that questions are clear and not confusing.
Start with questions that build rapport with the participant, especially for student interviews. Interviewers can share a little bit about themselves at the start to establish rapport and ask students about their background and future plans.
Place the most personal or sensitive questions last, after the participant is likely to be more at ease. Make sure there is a clear reason for asking any personal or sensitive questions.
Determine who will conduct the interviews. Consider how to best mitigate any power differentials between the interviewer and the participant, such as in the case of a faculty member interviewing a student. For example, avoid choosing interviewers who are teaching a class that participants are currently taking; make sure students know their future grades won’t depend on what they share; and let participants know they can “pass” on any question and that they can share information with the interviewer “off the record” (unless the interviewer is a mandatory reporter and a participant shares information that must be reported).
Identify the group from which to recruit, and the number of interviews to conduct. Typically after four to six interviews with members of a distinct population, themes and ideas will begin to repeat themselves. Aim for a balance between doing too many and too few interviews to ensure that you gain a sufficient understanding of the issues at hand without ending up with an overwhelming amount of data.
Recruit a purposeful sample of participants that represents the diversity within the group. Prioritize including interview participants from marginalized groups.
Determine whether to conduct interviews in person or virtually, whether to allow participants to choose the modality themselves, and whether to record the interview. In-person interviews may allow for a more focused and personal experience. Virtual interviews may be more convenient, may be more comfortable for some participants because they can participate in their own space, and are easier to record and auto-transcribe. Consider recording interviews to facilitate later analysis. If interviews will be recorded, ask for consent to record and let participants know how the recording will be used.
If possible, offer an incentive to participate, such as extra credit points, food, a gift card, or a T-shirt. This helps compensate people for their time and is likely to increase participation.
Send an invitation to participate that explains how the information will be used, the long-term goal of the interviews (e.g., to inform future initiatives or improve a particular aspect of your department), and how participants can find out about the results of the interviews and/or actions taken. These explanations are likely to increase response rates. Make clear that the interview is voluntary, and explain any incentives you are offering.
Open the interview by explaining the purpose of the interview, how the information will be used, and how confidentiality will be ensured. If the interview will address topics that might be covered by mandatory reporting, be clear about the circumstances under which the interviewer would be required to report something shared during the interview.
Listen intently to the participant. Be open and curious. Talk as little as possible, but feel free to offer empathy. Let pauses happen. Ensure that the participant feels heard.
Ask follow-up questions to get more information or seek elaboration. “That’s helpful; could you tell me more?” is a good all-purpose follow-up.
Periodically paraphrase your understanding of the interviewee’s statements to ensure you’re getting it right.
Take notes during the interview. To enable active listening, these notes may need to be very brief and include only key points. If you are recording or auto-transcribing, focus your notes on things you will need during the conversation, such as follow-up questions that arise in your mind as the interviewee is speaking.
Take time to summarize the takeaways in writing immediately after the interview.
Thank the interviewee after the interview, e.g., by sending an email.
Consider offering the opportunity for the interviewee to comment on or revise a summary of the interview.
Review a transcript or video of the interview for possible improvements to interviewing behavior, focusing on things such as interruption, talking over the interviewee, leading statements or inflections, or unnecessary commentary.
For tips on analyzing interview data, see How to Analyze, Visualize, and Interpret Departmental Data.
Share the aggregated results of the interviews and/or actions taken with the community for transparency, and to build trust for future assessment efforts.
Overview: A focus group is a type of interview conducted with a group of participants who are encouraged to discuss and respond to each other’s ideas. Focus group participants are chosen to be similar in some specific way, such as experiences, demographics, or interests.
Pros: Focus groups provide the opportunity to hear multiple perspectives on an issue in a time-efficient manner. Ideas can emerge from focus groups that would be unlikely to emerge in an individual interview due to the interactions among participants as they react to one another’s ideas. Like interviews, focus groups offer rich details about experiences, more personal interaction than surveys, and opportunities to ask questions to clarify responses. Students may be more relaxed and less inhibited in a focus group than in an individual interview.
Cons: Focus groups are not appropriate for sensitive topics, as confidentiality cannot be ensured. Focus groups are also not appropriate for polarizing topics. Focus group members may be reluctant to share perspectives that are in conflict with what other members of the group have expressed. Focus group moderation requires additional skill to facilitate group dialogue. Focus groups cannot address as many questions or topics as interviews, due to the time required to hear multiple perspectives. Like interviews, focus groups also sample a small number of people and can be time consuming to conduct and to analyze, and interviewers must take care to avoid leading questions and address bias. Focus groups are more difficult to schedule than individual interviews.
Possible uses: Focus group with graduating seniors on their experiences with advising and mentoring. Informal discussion with your departmental student group about members’ experiences in the major. An expert panel is a type of focus group in which people with particular expertise (such as local industry representatives or high school teachers) are invited to share knowledge with your department to inform departmental efforts related to, e.g., career preparation or teacher preparation.
Tips: Use focus groups to answer questions for which it will be useful to hear multiple viewpoints and discuss disagreements, or when aiming to generate useful solutions to a problem. For example, use a focus group to understand the range of experiences students have with campus advising and brainstorm possible solutions to challenges, but use interviews to learn about student career paths and examples of individual experiences with campus advising to pursue those paths. Avoid using focus groups just to gather feedback from more than one person at a time.
Read the implementation strategies for how to use interviews above, since much of the advice also applies to focus groups.
Aim for two to five participants for a one-hour focus group. For a broader range of perspectives you can include up to 10 participants, but if there are more participants, provide more time (e.g., two hours), ask fewer questions, and take extra care to ensure that everyone gets a chance to speak. If participants are likely to have a lot to say on a topic, aim for fewer participants. This is often the case for faculty participants.
Determine the number of questions you can ask in a focus group by determining the number of participants and how much time you have, and by thinking about how much time you want each participant to spend answering each question and how much time you want for discussion. For a one-hour focus group, one to five questions is likely sufficient.
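The time budgeting above can be sketched as a rough calculation. All numbers here (answer length, discussion time, time reserved for introductions and wrap-up) are illustrative assumptions, not prescriptions:

```python
# Rough focus-group planning sketch; every default below is an assumption.
def max_questions(total_minutes, n_participants,
                  minutes_per_answer=2, discussion_minutes_per_question=5,
                  intro_wrapup_minutes=10):
    """Estimate how many questions fit in a focus group session."""
    usable = total_minutes - intro_wrapup_minutes
    per_question = n_participants * minutes_per_answer + discussion_minutes_per_question
    return max(usable // per_question, 0)

# A one-hour session with five participants leaves room for only a few questions.
print(max_questions(60, 5))  # → 3
```

Under these assumptions, even a two-hour session with 10 participants only fits four questions, which is why larger groups call for fewer questions.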
Address confidentiality with participants directly. Remind participants to keep what they hear and learn in the focus group confidential, but advise them that their confidentiality cannot be guaranteed. Thus, they should not say anything that they do not want shared outside the group. Remind participants that they can “pass” on any question.
Advise interviewers to reflect on their own positionality (e.g., their race, ethnicity, gender, and/or role in your department) and that of focus group members, and to consider how these factors may influence interactions within the focus group.
Consider starting with a list of ground rules for the conversation to encourage positive and respectful group dynamics, turn taking, and active listening.
Ask participants to introduce themselves and answer a question that allows them to weigh in on the topic of the focus group and get to know each other.
Choose focus group questions that are conversational in nature and generate discussion among participants rather than individual answers. A good approach is to ask participants to discuss and explore a puzzling issue. For example, ask the group to identify the challenges your department faces in doing X, and then rank them in importance.
Recognize that a good individual interview question is not always a good focus group question. Avoid questions that result in people giving answers directly to the moderator, e.g., “What was your first involvement with X?”
Assess teaching effectiveness
- Conduct regular reviews of teaching effectiveness for all instructional staff members, and use the information collected to provide support and mentoring around teaching.
- Assess teaching effectiveness by using multiple methods listed below and triangulating among multiple data sources that provide the perspectives of the instructional staff member being assessed, students, and peers.
- Work to address and account for the biases and limitations inherent in particular measures of teaching effectiveness, especially student evaluations of teaching, as discussed below.
- Use evidence of student learning to measure teaching effectiveness. See below for methods to assess student learning, which can provide important evidence about teaching practices as part of a complete teaching portfolio.
- Conduct regular peer review of teaching using research-based principles such as the guidelines from the University of Kansas Center for Teaching Excellence. Use a structured format for instructional staff members to review each other’s teaching by observing classes, having conversations with course instructional staff members about their teaching, and reviewing course artifacts such as syllabi, student learning outcomes, assessments, student work, and teaching materials. Use peer review to gain a holistic view of instructional staff members’ teaching, give feedback, and contribute to their departmental merit review. Provide guidance on how to conduct holistic evaluations of teaching, including explicit criteria for classroom observations and review of teaching portfolios. See how to use departmental rubrics above and how to use teaching portfolios and classroom observations below.
- Use and advertise resources provided by your teaching and learning center as appropriate. For example, some teaching and learning centers offer expert observations of courses when requested by individual instructional staff members, provide suggested observation protocols or training on classroom observations, and/or partner with departments to develop assessment materials or plans for mentoring around teaching.
- Consider conducting focus groups with students in a particular course to provide information about students’ perceptions of the course effectiveness and experiences within the course. If possible, assign someone other than the course instructor, e.g., someone from your teaching and learning center, to conduct such focus groups.
Overview: A teaching portfolio or dossier is a compilation of materials to provide evidence of teaching quality. It is a key form of data compiled by instructional staff members about their teaching. A portfolio may include a teaching statement, description of teaching accomplishments, sample course materials, student outcomes, supporting evidence, and annotations of that evidence. Teaching portfolios may be electronic or hard copy. They can be developed over time, with instructional staff adding to a portfolio every term.
Pros: Portfolios enable triangulation of multiple measures of teaching effectiveness beyond student evaluations (see below for details about how these are biased and limited), providing rich and authentic assessment. Portfolios provide instructional staff with opportunities to reflect upon teaching, improve their teaching, and track that improvement. Portfolios are public representations of teaching as a scholarly activity. Portfolios are flexible, making them adaptable for a wide variety of instructional staff and contexts. Portfolios empower instructional staff to engage in evaluation of their own teaching. Identifying departmental criteria and guidelines for portfolios can support shared understanding of teaching excellence.
Cons: There is no single format for a teaching portfolio, so it may be unclear what should go into one. The volume of information in a portfolio can be dense and daunting, as well as difficult and time consuming to assemble and to systematically and consistently evaluate. (Rubrics can make it easier to evaluate portfolios; see how to use departmental rubrics above.)
Possible uses: An instructional staff member documenting teaching effectiveness for promotion or tenure. A graduate student developing a portfolio of teaching experience to support job applications.
Tips: As a department, establish guidelines on what to include in a teaching portfolio, criteria for evaluating teaching, and the appropriate credit to give to instructional staff for the work of assembling a portfolio. When developing these criteria, consider existing frameworks of what constitutes teaching excellence. See Resources below.
Encourage instructional staff to be selective in choosing materials to go into a teaching portfolio. A portfolio is a compilation not of every piece of evidence, but of the most relevant pieces of evidence. Treat it as a scholarly argument about the quality of one’s teaching.
Consider the following as pieces of evidence that could be included in a portfolio: a list of courses taught and courses developed; syllabi; instructional methods used; samples of instructional and assessment materials; samples of de-identified student work; peer reviews of teaching; teaching reflections; student learning assessments (including research-based assessments of concepts or skills); student evaluations of teaching; engagement in faculty development experiences; publications; involvement in educational research or scholarship; contributions to departmental curriculum development or educational improvement efforts; and engagement in equity, diversity, and inclusion efforts in your department and beyond.
Include discussion of inclusive teaching in a teaching statement, and provide evidence of inclusive teaching throughout the portfolio.
Annotate materials to describe the context and significance of each item.
Organize the portfolio into meaningful, transparent categories.
Make portfolios electronic and online for easier sharing, and for inclusion of multimedia materials.
When reviewing teaching portfolios, identify criteria for evaluation and consider using a rubric to consistently apply those criteria. See how to use departmental rubrics above for further information and examples of rubrics used for merit review.
To reflect on and improve a particular course rather than teaching as a whole, and to document the course’s effectiveness, consider using a course portfolio, which focuses on a particular course. See Western Washington University’s How to Prepare a Course Portfolio for guidance.
Overview: Teaching reflection involves instructional staff members examining and thinking critically about their teaching practice in order to improve it.
Pros: Promoting teaching reflection in your department enables greater awareness and insights about teaching practice for instructional staff. Teaching reflection supports a mindset that teaching can be incrementally improved over time, leading to improved teaching effectiveness for individual instructional staff members and for your department as a whole, which leads to improved student learning and positive classroom experiences. Carefully examining teaching challenges can enable instructional staff to make appropriate changes that reduce or eliminate the challenges.
Cons: Teaching reflection can be time intensive. There is no single clear strategy for engaging in teaching reflection. Teaching reflection isn’t always valued by departments.
Possible uses: Personal work towards continuous improvement of teaching. Addressing specific teaching challenges through targeted reflection and actions. Gathering evidence of teaching effectiveness for promotion and tenure. Reflecting on use of inclusive teaching strategies and areas for growth. Generating ideas to incorporate into a teaching statement, which is often required for promotion, tenure, and awards.
Tips: Embed teaching reflection into your department culture to support effective teaching in all courses and support all instructional staff in growing as teachers. Encourage teaching reflection to emphasize that learning to teach is a process. Emphasize that gathering data on teaching leads to positive change and isn’t done just to comply with external requirements.
Develop focal questions to guide teaching reflection, such as, “Do my teaching practices support students in learning?”; “Does my teaching influence how students think, act, or feel?”; “What are my students’ goals, values, and strengths?”; “How does my teaching connect to these, and how could it do so more strongly?”; “How am I measuring the success of my teaching?”; and “Are there ways that my teaching may be creating harm for some students?”
Reflect on teaching on both short and long time scales. After each class, make notes or short voice recordings about what went well and what did not go as well. Use these notes to improve the course and its materials when teaching it in subsequent terms. Generate and submit a written reflection on teaching at key points in the promotion and tenure process (e.g., annually) to identify where improvements have been made and plan what to change in the future.
Identify and reflect on specific teaching challenges to guide targeted improvement. Start with challenges that have been brought up in peer evaluations, student feedback, or other evaluations. Gather informal evidence (e.g., interactions with students in office hours, notes after teaching a class, and short written feedback from students) or formal evidence (e.g., exams, projects, surveys, and research-based assessments) to help you understand and respond to teaching challenges. Use classroom observations from a colleague or teaching and learning center staff member, a video recording of classes, or a focus group of students to obtain valuable feedback. Evidence does not have to be formal to be useful for addressing a challenge.
As part of reflection on teaching a course, review learning objectives, student work, methods used in assessing that work, and levels of learning expected.
Reflect on how the course supports inclusive teaching. One way to do this is to record one or more classes and watch them while reflecting on how you used inclusive teaching practices. Helpful questions to ask yourself include, “Are my teaching practices inclusive of students from all backgrounds and identities?”; “Who is my teaching working for and who is left out?”; and “What recommendations from the literature on inclusive teaching could I bring further into my teaching?”
Write down a list of changes made in a course each term, look at how those changes influenced student learning, and write down your plans for what to change and keep the same in the future.
Consider using a written template or structure for teaching reflection. It is helpful if this structure is agreed upon by the whole department, so that there is consistency in the teaching reflection process and how it is used in promotion and tenure.
Use a teaching portfolio or dossier to bring together evidence, analyses, decisions, and self-reflections on teaching.
Overview: Classroom observations involve an observer watching an instructional staff member teach and providing feedback to help them improve their teaching. Observations can be part of formal teaching evaluation or can be used for formative assessment and instructional staff self-reflection.
Pros: Classroom observation is a way to learn about what is actually happening in the classroom, enabling connections between classroom practice and student learning outcomes. Classroom observations can foster reflective teaching by helping instructional staff discuss and think carefully about the elements of their class and how they helped (or didn’t help) meet course-level student learning outcomes. Classroom observations can be valuable for evaluating teaching as well as for formative assessment. Peer observations can lead to a supportive departmental culture for teaching, especially when this practice is valued by departmental leadership.
Cons: Classroom observations are time consuming. An observation provides only a snapshot in time of what is happening in class, though repeated observations can give a fuller picture. Without clear and agreed-upon goals for the observation and an observer who is skilled in evaluating these goals, the results can be vague, inconsistent, and subjective. Training, clear guidance, and criteria for observations can enable more effective observation but increase the time commitment. Instructional staff can feel threatened by classroom observations, especially if they haven’t experienced many before. Power differentials between observers and instructional staff can negatively impact observations and how they are used.
Possible uses: Observation by an administrator or peer observer of an instructional staff member’s course for personnel review (for, e.g., tenure, promotion, and/or post-tenure review). Observation by two instructional staff members of each other’s courses to give formative feedback, repeated several times during the term. Observation by an expert in education of an instructional staff member’s teaching to provide feedback, possibly with an observation protocol and a recording of the course session.
Tips: Assign teaching mentors to new faculty and other instructional staff, and encourage instructional staff to regularly invite colleagues to observe their teaching with a particular focus. These observations can be a very positive experience for the observer and observed, and can help an instructional staff member gain new perspectives about their teaching.
Encourage observers to focus a classroom observation on only one or two of the many aspects of a course, e.g., student engagement, inclusive teaching practices, time spent on different activities, and/or use of student-centered teaching practices. Whether the observation is for formative or summative purposes, agree ahead of time on the goals and focus of the observation, so that the feedback the observer provides is valuable for its intended purpose.
Use an observation protocol to generate more systematic data. Consider using the same protocol repeatedly over time to track changes. See the PhysPort assessments database for an extensive collection of observation protocols. Check whether your teaching and learning center has developed an observation protocol or can assist your department in developing one.
As a department, identify criteria for observations or agree to use a pre-existing protocol or form, to ensure quality and consistency in observations.
If using an observation protocol or form, discuss the protocol with the observation team in advance and consider practicing using it together. Some protocols include training materials that should be completed before the observation.
Before the observation, have a meeting between the instructional staff member and the observer to discuss the context and goals of the class, the format of the observation, and the kinds of feedback that are desired and/or will be provided. Determine who will have access to the findings from the observation. If the purpose of the observation is formative feedback, the findings might be shared only between the observer and instructor.
After the observation, have another meeting between the instructional staff member and the observer to discuss the observations (using notes or data from the protocol) and reflect on what went well in the class and what could be improved. Draft action items that the instructional staff member could implement in subsequent classes.
Think carefully about the strengths and background of the observer when determining the criteria for the observation. Be aware of biases that an observer is likely to bring. Peers within physics understand the nuances of physics and are well qualified to assess disciplinary content, learning objectives, syllabi, assignments, and student work.
Be aware of power differentials between observers and instructional staff. For example, a more junior person might have trouble giving genuine feedback to a more senior person. Also consider the similarity or difference in the teaching practices of the observer and instructional staff member. When observations involve ratings, peers have been found to give higher ratings to colleagues who teach the way they do than to colleagues who do not.
Consider using multiple classroom observations to build a more complete picture of the instructional staff member’s teaching. These could be completed by the same observer for consistency or by different observers to get a variety of perspectives.
Consider using a pair of observers who can prepare a shared set of observations and feedback.
Recognize the limitations of classroom observations. No classroom observation captures every important aspect of teaching.
Consider using recordings of a class session to offer the instructional staff member an opportunity to observe and reflect on their own teaching.
Overview: Student feedback forms are a type of survey that asks students questions about a particular course during the term, rather than at the end of the term. (See below for end-of-term student evaluations of teaching.) These surveys are typically given informally for purposes of formative assessment. Student feedback forms may be developed by individual instructional staff members, institutions, teaching and learning centers, or departments.
Pros: Student feedback forms enable gathering of data from a large number of students efficiently and anonymously. Feedback from students is a powerful way to improve teaching effectiveness. Using informal student feedback forms early in the term provides an opportunity to act on the feedback during the course itself. The results from simple feedback forms can be relatively easy to interpret.
Cons: The value of the feedback will depend upon the quality of the questions; keeping feedback forms simple can mitigate this challenge. Negative feedback from students can be demoralizing to instructional staff. Response rates may be low, making interpretation difficult.
Possible uses: Brief written prompt at the end of class inviting feedback on today’s class session. Informal course feedback survey one month into the term, and again at the end of the term.
Tips: Use feedback forms to improve teaching and rapport with students. To make the best use of a feedback form, discuss results with the class and indicate changes being made.
Make feedback forms anonymous to protect students and solicit honest feedback. For this reason, course management systems may not be the best format for collecting this feedback.
Use questions such as, “What is going well?”; “What is not going well?”; and “What suggestions do you have for the instructor?” Consider asking students to provide feedback on the class pacing, reading, group work, or lectures; to estimate the number of hours they spend on the course per week; or to rate their learning or aspects of the course. Ask students to explain the reasoning behind such ratings.
If students are particularly vocal about criticizing an aspect of a course such as the use of active learning, use feedback forms to determine how prevalent the criticism is, and share the results with the class. Criticism often comes from a vocal but small minority of students. If that is the case, sharing feedback results will both assure these students that you are listening and help them realize that they are not speaking for the majority of students, which in turn may encourage them to be less critical.
Recognize that student feedback can be biased and can even cause harm to the instructor. For example, it may contain misogynist, racist, and/or harassing messages for the instructor. Harmful messages are particularly likely with anonymous feedback because students may feel more free to say hurtful things. To mitigate this risk, consider having another instructor read through the feedback in advance and remove or help process harmful comments. Put a departmental procedure in place for managing harmful messages so the responsibility does not lie solely with the instructional staff member.
Read through the responses and analyze any numerical data. Note the number of times certain problems are mentioned, and typical ratings. Pay attention to all feedback but do not be overly swayed by outliers. Write down thoughts and ideas generated during this analysis.
Recognize that people are more likely to pay attention to negative feedback than positive feedback, and that this tendency may lead instructional staff and supervisors to think that student feedback is worse than it is. Use quantitative counts of negative and positive feedback to get a more accurate representation of student views of a course.
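The kind of tallying described above can be done with a short script. This is a minimal sketch: it assumes comments have already been hand-coded by sentiment and theme, and all of the data, themes, and ratings below are invented for illustration:

```python
from collections import Counter
from statistics import mean, median

# Hypothetical hand-coded feedback: (sentiment, theme, 1-5 rating).
# The coding scheme and every entry here are invented for illustration.
feedback = [
    ("negative", "pacing", 3), ("positive", "group work", 5),
    ("negative", "pacing", 2), ("positive", "clarity", 4),
    ("positive", "group work", 5), ("negative", "workload", 3),
    ("positive", "clarity", 5), ("positive", "pacing", 4),
]

sentiments = Counter(s for s, _, _ in feedback)
themes = Counter(t for _, t, _ in feedback)
ratings = [r for _, _, r in feedback]

print(sentiments)             # here, positive comments outnumber negative ones
print(themes.most_common(3))  # which issues come up most often
print(f"mean {mean(ratings):.1f}, median {median(ratings)}")
```

Even a tally this simple can correct the impression left by a few vivid negative comments, since the counts show how representative each concern actually is.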
Discuss with students the survey results, what was learned, and any changes being made as a result of their feedback. Point out positive comments as well as common concerns, along with the numbers of students who made them. Students appreciate seeing that an instructor is working to improve their teaching and helping them succeed.
Collect feedback through an online questionnaire or through informal paper-based mechanisms, such as in a one-minute paper or on index cards with one side for general or positive feedback and the other side for suggestions. Feedback can also be collected through a group collaborative activity, e.g., group discussion or online brainstorming session. Feedback on a particular class meeting can be collected in a daily exit ticket or quick poll. A simple online feedback survey may also be kept continuously open to provide ongoing feedback.
Overview: Student evaluations of teaching are a type of survey that asks students questions about a particular course at the end of the term. They are typically mandatory, are administered centrally by your institution at the end of the semester, and use a standardized format in all courses. They are often used to evaluate instructional staff for promotion, tenure, and merit review. (See above for student feedback forms for formative assessment.)
Pros: Surveys allow gathering of data from a large number of students consistently, efficiently, and anonymously. If surveys are designed well, feedback from students can be an important element of evaluating teaching.
Cons: Ratings on student evaluation forms are not correlated with student learning, and should not be used as a sole or primary measure of teaching quality. As with all surveys, the value of student evaluation forms depends heavily on the quality of the questions. Student evaluation of instructors is heavily subject to bias, thus penalizing certain groups of instructional staff. Response rates on student evaluation forms are often low, reducing validity of the results. Negative feedback from students can be demoralizing to instructional staff and have repercussions for promotion and tenure.
Possible uses: Formal end-of-term institutional student evaluation form.
Tips: Do not rely only on student evaluations to evaluate teaching; use them in combination with other measures. Review the standardized forms as a department to understand potential for bias, and decide how to best use the results to inform teaching evaluation.
Use results from student evaluations in combination with multiple other measures of teaching effectiveness, as described in the rest of this section. Student evaluations should not be the primary factor in annual merit review of teaching.
When possible, use questions that promote and reward research-based instructional practices (e.g., “Does the instructor provide opportunities for students to practice and get feedback during class?”) rather than lecture (e.g., “Does the instructor explain the concepts clearly?”).
Review the standardized forms used by your institution against current frameworks for effectively and equitably evaluating teaching. See Resources below. If the standardized forms do not effectively or equitably evaluate teaching, advocate for changing them. If the forms cannot be changed, focus departmental analysis on the questions that are most aligned with departmental values. See Resources below for examples of productive frameworks for modifying student evaluation forms.
Review the distribution of results (not just the mean values) from student evaluations. Were the ratings clustered in the middle, or spread throughout the rating scale? This will provide information about the degree to which the course impacted different students differently.
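To illustrate why the distribution matters, consider two courses with identical means but very different stories. The ratings below are invented for illustration:

```python
from collections import Counter
from statistics import mean

# Invented ratings on a 1-5 scale: same mean, very different distributions.
course_a = [3, 3, 3, 3, 3, 3, 3, 3]  # clustered in the middle
course_b = [1, 1, 1, 1, 5, 5, 5, 5]  # polarized: the course worked well for
                                     # some students and poorly for others

for name, ratings in [("A", course_a), ("B", course_b)]:
    dist = Counter(ratings)
    bars = " ".join(f"{r}:{'#' * dist.get(r, 0)}" for r in range(1, 6))
    print(f"course {name}: mean {mean(ratings):.1f}  {bars}")
```

Both courses have a mean of 3.0, but the bimodal distribution in course B signals an equity issue that the mean alone would hide.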
Review written responses and analyze them systematically. See the supplement on How to Analyze, Visualize, and Interpret Departmental Data for ideas on analysis.
Use results from student evaluations for formative as well as summative assessment. For example, what sorts of concerns do students consistently raise in a course? Do student ratings around a particular issue (e.g., clarity and accessibility of materials) improve over time?
Recognize and take into account when making tenure, promotion, and other evaluation decisions that student evaluations are not correlated with student learning and may penalize instructional staff for using research-based instructional practices. See Evidence below for details.
Recognize and take into account when making tenure, promotion, and other evaluation decisions that student evaluations are often correlated with factors beyond an instructor’s control, such as the size of a class, whether the class is required or elective, and whether it is in person or online.
Recognize and take into account when making tenure, promotion, and other evaluation decisions that instructional staff from marginalized groups are more likely to receive negative student evaluations. For example, students are more likely to rate women instructional staff as less competent than their male colleagues, comment on their clothing, or make sexually harassing comments. Students also give lower ratings, on average, to instructional staff who are people of color, have accents, and/or have Asian last names. See Evidence in the section on Equity, Diversity, and Inclusion for details.
Recognize that student feedback can cause harm to the instructor. For example, it may contain misogynist, racist, and/or harassing messages for the instructor. Harmful messages are particularly likely with anonymous feedback because students may feel more free to say hurtful things. To mitigate this risk, consider having another instructor read through the feedback in advance and remove or help process harmful comments. Put a departmental procedure in place for managing harmful messages, so the responsibility does not lie solely with the instructional staff member.
Enhance response rates to the extent possible. Provide class time for completion of the evaluation. Emphasize the importance of the evaluation, and reassure students that results are confidential. Offer participation points for students who complete the evaluation or an incentive if the class completion rate is over some threshold. See Resources below for more ideas for increasing response rates.
Recognize that people are more likely to pay attention to negative feedback than positive feedback, and that this tendency may lead those reading written evaluations to think that student feedback is worse than it is. Use quantitative counts of negative and positive feedback to get a more accurate representation of student views of a course.
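The counting suggested above can be done by hand or with a small script once comments have been sorted by a human reader. Below is a minimal sketch; the categories, comments, and counts are illustrative, not from any real evaluation.

```python
# Tally written comments that a reader has already labeled, so isolated
# negative remarks can be seen in proportion to the whole. All data here
# is hypothetical.
from collections import Counter

comments = [
    ("positive", "The group work really helped me learn."),
    ("negative", "The pacing felt too fast."),
    ("positive", "Office hours were great."),
    ("neutral", "More practice problems, please."),
]

tally = Counter(label for label, _ in comments)
total = sum(tally.values())
# Report fractions, e.g. "25% of comments were negative", rather than
# letting one vivid negative comment dominate the impression.
summary = {label: count / total for label, count in tally.items()}
```

Even this simple proportion gives a more accurate picture than an unstructured read-through, in which a single harsh comment tends to loom large.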
Assess student learning
Overview: Exams and homework are sets of questions that assess students’ knowledge and skills. Exams have a variety of formats, including written, oral, open-book, closed-book, individual, and group. Exams are usually used for summative assessment of students’ knowledge to produce a score, whereas homework is often used as formative assessment to help students practice using new knowledge and skills. Selected exam and homework questions can be valuable direct evidence for assessing program-level student learning outcomes.
Pros: Exams and homework allow students to demonstrate certain kinds of skills and knowledge, such as detailed problem solving. They also enable instructional staff to assess certain course-level and program-level learning outcomes.
Cons: Exams are often stressful for students, and students may not be able to demonstrate all of their understanding and skills in this high-stakes environment. Students can become overwhelmed and discouraged by excessively difficult homework assignments.
Possible uses: An exam designed to enable students to demonstrate certain aspects of their learning and compare that learning to benchmarks for grading purposes. Exam questions designed to align with course-level learning outcomes and serve as an indicator of the degree to which a course supports achievement of these outcomes. A group exam to build a collaborative feedback mechanism into the assessment. Homework designed to provide students with feedback on their learning and help them identify gaps and improve.
Tips: Ensure that exams and homework assess the intended learning outcomes. When creating exam and homework questions, map out the outcomes that each question assesses. This alignment ensures both that homework and exams are assessing critical knowledge and skills, and that instructional staff can understand how well the course helped students achieve each goal and objective. When possible, map learning outcomes to questions on exams and homework assignments. Avoid using the course grade or overall exam score when reporting on assessment outcomes, as multiple factors contribute to grades.
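The mapping described above can be kept as a simple table that reports class performance per outcome rather than per exam. The outcome labels, question numbers, and scores below are hypothetical, purely to illustrate the bookkeeping.

```python
# Map each learning outcome to the exam/homework questions that assess it,
# then report class-average performance by outcome instead of by overall
# exam score. All labels and numbers are illustrative.
outcome_map = {
    "LO1: Apply conservation of energy": ["Exam1-Q2", "HW3-Q1"],
    "LO2: Interpret graphical data": ["Exam1-Q4"],
}

# Class-average fraction correct on each mapped question (hypothetical).
scores = {"Exam1-Q2": 0.82, "HW3-Q1": 0.74, "Exam1-Q4": 0.61}

by_outcome = {
    outcome: sum(scores[q] for q in qs) / len(qs)
    for outcome, qs in outcome_map.items()
}
```

Reporting `by_outcome` rather than a single exam average makes it visible which outcomes the course is and is not supporting well.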
Overview: A rubric is a tool that identifies performance criteria for an assignment or skill and describes different levels of performance within each of those criteria, e.g., from novice to advanced. Rubrics of student performance typically address different aspects of an assignment or skill. A rubric is usually designed in a grid-like structure, with criteria listed vertically and performance levels listed horizontally. Rubrics can be used for many kinds of assignments, including projects, oral presentations, and essays. Using rubrics to evaluate direct evidence of student learning is a key component of assessing program-level student learning outcomes.
Pros: Rubrics provide students and instructional staff with clear criteria for success on an assignment or skill. Rubrics thus increase objectivity, consistency, and fairness and reduce bias in the grading of assignments, especially when there are multiple graders, e.g., teaching assistants. Using a rubric can speed up grading. Rubrics provide feedback that enables students to reflect on their performance and improve it. Rubrics help students include specific features in their work. Rubrics can help instructional staff shape their teaching.
Cons: It can be difficult to differentiate how the rubric’s performance levels apply to actual student work. Research is inconclusive as to whether rubrics help students learn. It can be time-consuming to create a new rubric for each assignment.
Possible uses: Rubric given to students at the beginning of an assignment to help them understand expectations for their work. Rubric to provide scores and feedback on how students achieved different levels of performance on an assignment. Rubric to articulate the expectations of student achievement for each program-level learning outcome, used to evaluate direct evidence of student learning. Such a rubric provides direct evidence of whether each outcome is being met by students in your program at a developmentally appropriate level.
Tips: Create the rubric in advance and share it when assigning the work it applies to, so that students have a clear understanding of what success looks like. Or have students create their own rubrics, as in this lesson plan on developing a rubric for laboratory notebooks. In either case, have students use the rubric to assess their own work mid-way through the assignment, so they can understand their progress and improve their work.
Read the implementation strategies for how to use departmental rubrics above, since much of the advice there also applies to rubrics of student performance.
Rubrics can be developed to assess a particular assignment, or they can be more general and can be used over time to assess skills, e.g., scientific reasoning, lab skills, and/or critical thinking. General rubrics can be reused over time and can help students understand the components of the skills they are working to develop.
When designing a rubric, begin with the learning outcomes for the assignment and determine at least two observable criteria that demonstrate these outcomes. From there, divide each criterion into levels of performance, and draft a description of each level. Use three to five levels of performance for each criterion. Ensure that the levels of performance are distinguishable, observable, and measurable.
Rubrics can enable students to give meaningful feedback on peers’ work. This peer feedback is most commonly used as formative feedback to improve the work.
Modify an existing rubric for the type of assignment or skill you want to assess, instead of starting from scratch. See Resources below for example rubrics.
If needed, assign different weights to different aspects assessed in the rubric based on their relative importance when determining a student’s overall grade. The grid format of the rubric does not imply that each row on the rubric must have the same weight.
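The weighting described above amounts to a simple weighted average over the rubric rows. The criteria, weights, and performance levels below are hypothetical, just to make the arithmetic concrete.

```python
# Weighted rubric score: each criterion (rubric row) gets a weight
# reflecting its importance, and each is scored on a 1-4 performance
# scale. All names and numbers here are illustrative.
weights = {"analysis": 0.5, "clarity": 0.3, "mechanics": 0.2}
levels = {"analysis": 3, "clarity": 4, "mechanics": 2}  # out of 4

# Normalize each row to [0, 1], then combine with the weights; weights
# should sum to 1 so the grade is also on a 0-1 scale.
grade = sum(weights[c] * levels[c] / 4 for c in weights)
```

With weights that sum to 1, the result can be read directly as a fraction of the maximum possible score, even though the rubric grid itself gives every row equal visual space.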
See the supplement on Sample Documents for Program-Level Assessment of Student Learning for examples of how rubrics of student performance are used in program-level assessment of student learning.
Overview: In student self-assessment, students assess their own learning and performance. Student self-assessment also includes self-grading, in which students assess their work based on a rubric and determine a score. Student self-assessments can be used as indirect evidence for assessing program-level student learning outcomes.
Pros: Self-assessment teaches students how to examine their own learning and performance so they can understand what they have learned, where their knowledge gaps are, and to what extent their learning process is working. Because it improves students’ learning process, self-assessment can increase student achievement. Self-assessment supports students in learning about their own learning, an aspect of metacognition, which has been shown to be an important predictor of student success. Self-assessment also acknowledges students’ own agency and responsibility for their learning. Self-assessment is a skill that benefits students beyond a single course. Self-grading enables students to reflect on the strengths and weaknesses of their work and assign themselves a score, which can reduce the instructor’s grading burden.
Cons: Students can overestimate their performance when self-assessment contributes to their overall course grade. Less confident students may underestimate their performance. (Training and practice in self-assessment and self-grading can mitigate this.)
Possible uses: A student completes a rubric to assess their performance on a project at a mid-way point, and improves the project based on their own self-assessment results. A student answers self-assessment questions at the end of class and gains a better understanding of what questions to ask during the next class.
Tips: Teach students how to self-assess their learning and their work, and give them many opportunities to practice in small chunks over time.
Support students in figuring out how to use self-assessment results to improve their learning. Have short, individual conversations with students about using self-assessments to improve learning.
Encourage a growth mindset. Help students avoid becoming discouraged if they aren’t doing well according to their own self-assessment. Support them in seeing and building on their strengths.
Consider using self-assessment on many different time scales. Ask students to reflect briefly at the end of each class using prompts such as, “Today I learned…”; “One thing I’m not sure about is…”; and “The main thing I want to find out more about is…”. Prompt students to self-assess before, during, and at the end of an assignment or project, possibly using a rubric. Ask students to self-assess their learning, skills, or attitudes at the end of a course.
Make self-assessment low-stakes. For example, give participation points rather than a grade.
Before they learn a new skill or concept, have students self-assess their knowledge, skills, or experiences. Have students answer questions such as, “How familiar am I with this concept?” or “Have I done this before?” This information can also help instructional staff shape their teaching.
Consider using the Student Assessment of their Learning Gains (SALG). The SALG is a standardized, research-based self-assessment that can help students reflect on their learning and how a course supported this learning.
Structure assignments so that students self-assess as part of the assignment. Include questions in the assignment such as, “What did you do well? Give examples.” and “How could this work be improved?”
When appropriate, before students start an assignment or project, give them a rubric (see how to use rubrics of student performance above) that will be used to assess their performance, to establish clear expectations. Have students use the rubric to self-assess their work at a mid-way point to identify areas for growth and improvement, and again at the end.
Use “exam wrappers” (see Resources below), which are sets of written reflection questions students answer after they complete an exam. These questions help students self-assess their exam performance and connect it to their learning process.
Use student learning portfolios as a way for students to self-assess and demonstrate their learning in a course.
Use peer assessment as part of the self-assessment process. Students can use peer feedback on their work to reflect on their own learning process and identify strengths and areas for growth.
A student learning portfolio is a compilation of materials that provides evidence of student achievement of course-level or program-level learning outcomes. Student learning portfolios provide an authentic opportunity for students to reflect on their achievements in a course or program of study. Student learning portfolios can be used as direct evidence for assessing program-level student learning outcomes. See how to use teaching portfolios above for further description of portfolios and their uses, as the same advice applies to student work.
Overview: A research-based assessment of concepts or skills is a standardized assessment of subject-specific knowledge that has been rigorously developed and tested through research. These assessments are intended to measure the learning of the class as a whole rather than individual students.
Pros: The use of standardized assessments allows comparison of results across courses and institutions and over time. Because the assessment questions are based on students’ ideas and have been rigorously tested, the questions are clear to students, test ideas students actually have using their own language, and measure the intended concept or skill. They are usually multiple-choice, which makes them quick and easy to score. Online tools are available to make scoring and analysis even quicker and easier.
Cons: Because these assessments are standardized, the concepts or skills covered may not align well with the concepts and skills covered in a particular class. When classes have small numbers of students, it can be difficult to interpret the results and make comparisons. The research on these assessments has been conducted with students who are not representative of the full physics student population, so the findings may not be generalizable to all student populations.
Possible uses: Give an assessment at the start and end of a course to understand how the course influences the assessment results. Compare results between two versions of a course (e.g., before and after making changes to the course) to understand how the two versions did and did not support learning. Give an assessment at the end of a course and examine sub-categories of questions based on concepts or individual questions, to identify strengths and gaps in your instruction to inform future instruction.
Tips: Research-based assessments of concepts or skills are one piece of evidence demonstrating students’ learning. Triangulate these results with the results of other assessments to get a fuller picture of students’ learning and teaching effectiveness.
See the PhysPort assessments database for an extensive collection of such assessments, including the Force Concept Inventory (FCI), the Force and Motion Conceptual Evaluation (FMCE), the Brief Electricity and Magnetism Assessment (BEMA), the Conceptual Survey of Electricity and Magnetism (CSEM), and dozens more assessments for many topics and levels of physics.
Give students a small amount of participation credit for completing the assessment, but do not assign points to individual students based on scores. Students should do their best without feeling the pressure of a high-stakes assessment.
Give the pre-test before teaching the relevant material and the post-test after teaching the material. Some research-based assessments of concepts are used only as a post-test, because the concepts covered and terminology used are unfamiliar to students before the course.
Follow the test developers’ guidelines around test security. Many of these research-based assessments of concepts or skills have taken a long time to develop, and the utility of the instruments will be compromised if students have unsupervised access to the questions.
Only some research-based assessments of concepts and skills can be given online. Check the developers’ instructions before giving an assessment online. See Administering research-based assessments online for more information.
If you give an assessment online, participation rates may be lower than if you give it in class. To increase participation, send several email reminders over the course of the week, including personalized reminders to students who have not yet completed the assessment. Consider using the tools available in your course learning management system, such as pop-up messages or announcements.
Look at students’ results in aggregate or by groups or demographics, but don’t look at the results of individual students. For guidance on how to respectfully design and analyze demographic questions and protect anonymity, see the Guidelines for Demographic Questions in the supplement on How to Design Surveys, Interviews, and Focus Groups.
Look at pre-test scores to get a sense of which concepts or skills students have already developed and which they need more support with. Look at post-test scores to understand how the course helped students learn and where course improvement is needed.
Look at students’ results by concept cluster or individual question to get a sense of which concepts students understood well, and which need improvement.
Consider calculating the normalized gain in students’ scores between pre- and post-tests, and compare this to published normalized gains for the assessment you are using for different types of courses and institutions, to get a sense of how your course compares to others. Also consider calculating the effect size of the change, and compare it to standard ranges of effect size to determine whether the change is “large”, “medium”, or “small”. The PhysPort Data Explorer will do these calculations automatically. Note that effect size is a better measure in many cases, but normalized gain is more commonly used, making comparisons easier.
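For readers who want to do the arithmetic by hand rather than through the PhysPort Data Explorer, a minimal sketch follows, assuming percent scores (0–100) and using the standard definitions: class-average normalized gain g = (post − pre) / (100 − pre), and Cohen’s d with a pooled standard deviation. The score lists are hypothetical.

```python
# Sketch of pre/post comparison for a research-based assessment.
# Scores are percentages (0-100); the data below is illustrative only.
import statistics

def normalized_gain(pre_mean, post_mean, max_score=100):
    """Class-average normalized gain: g = (post - pre) / (max - pre)."""
    return (post_mean - pre_mean) / (max_score - pre_mean)

def cohens_d(pre, post):
    """Effect size: difference of means over the pooled standard deviation."""
    pooled_sd = ((statistics.stdev(pre) ** 2 + statistics.stdev(post) ** 2) / 2) ** 0.5
    return (statistics.mean(post) - statistics.mean(pre)) / pooled_sd

pre = [35, 40, 45, 50, 55]    # hypothetical pre-test scores
post = [60, 70, 65, 80, 75]   # hypothetical post-test scores

g = normalized_gain(statistics.mean(pre), statistics.mean(post))
d = cohens_d(pre, post)
```

A common rule of thumb for Cohen’s d treats roughly 0.2 as small, 0.5 as medium, and 0.8 as large; published normalized gains for a given instrument provide the comparison baseline for g.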
Keep written records of changes made to the course, and examine how these are related to changes in normalized gain and/or effect size over time.
Overview: Research-based assessments of student attitudes or beliefs usually use a Likert scale to measure students’ self-reported attitudes and beliefs about their abilities, or about physics and their physics courses. These assessments can also measure other aspects of attitudes and beliefs, such as students’ views about the nature of science, attitudes toward problem solving, or confidence in their abilities. These surveys are intended to measure how different teaching practices impact students’ beliefs and attitudes by providing data on average shifts in these beliefs and attitudes during the course. These surveys often examine how closely students’ attitudes and beliefs about physics align with those of experts. Researchers have found that without explicit attention to attitudes and beliefs, in most introductory physics classes, students’ attitudes and beliefs become less expert-like by the end of the term, so having no impact on them is a significant achievement.
Pros: Since students’ attitudes and beliefs about physics impact their learning, measuring and working to improve students’ beliefs and attitudes can positively impact their learning. Because the assessment questions have been rigorously tested, the questions are clear to students, test ideas students actually have using their own language, and measure the intended concept or skill.
Cons: These assessments include a desired (“expert-like”) answer; this desired answer is based on the subjective opinions of a group of physicists. The opinions of this group of physicists may not be representative of physicists as a whole, or these opinions may be biased views of what constitutes science expertise due to the lack of diversity among physicists as a whole. Student responses are self-reported and thus may or may not correspond to the ways they actually think. Many instructional staff find that their students’ beliefs and attitudes become less expert-like during the course, which can be discouraging. Positively impacting student attitudes and beliefs is difficult though not impossible.
Possible uses: Give an assessment at the start and end of a course to understand how the course influences students’ “expert-like” beliefs and attitudes. Compare results between two versions of a course (e.g., before and after making changes to the course) to understand how the two versions did and did not support students in developing “expert-like” beliefs. Give an assessment at the end of the course, and examine sub-categories of beliefs and attitudes to identify strengths and gaps in instruction to inform future instruction.
Tips: Recognize and address how students’ beliefs about their learning impact their learning. For example, if students believe that physics is about memorizing facts, they may not learn as much as if they think it’s about problem solving. Spend time understanding, reflecting on, and improving students’ attitudes and beliefs to support them in being more successful in the current course and in future courses or careers in science. Start by selecting one or two aspects of students’ attitudes and beliefs.
See the PhysPort assessments database for an extensive collection of such assessments, including the Colorado Learning Attitudes about Science Survey (CLASS), the Maryland Physics Expectations Survey (MPEX), the Student Assessment of their Learning Gains (SALG), the Sources of Self-Efficacy in Science Courses-Physics (SOSESC-P), and many more.
Ensure that the assessment you use is appropriate for your course. For example, many of these assessments assume that a physics course has a particular structure, and they may not be appropriate for a course that does not include problem solving, equations, and/or exams.
Give the pre-test at the beginning of the term and the post-test at the end of the term to measure the “shift” in students’ beliefs and/or attitudes over the course of the term. Students can usually complete these assessments quickly, so they can be given at the end of class or as part of a homework assignment.
Keep track of changes made to the course over time in order to understand how these changes may influence students’ beliefs and attitudes.
Look at students’ results in aggregate or by groups or demographics, but don’t look at the results of individual students.
Give students a small amount of credit for completing the assessment, to encourage participation.
Since there is no “correct” answer on assessments of beliefs and attitudes, don’t worry about test security, and consider giving these assessments online. See Administering research-based assessments online for more information.
If you give an assessment online, participation rates may be lower than if you give it in class. To increase participation, send several email reminders over the course of the week, including personalized reminders to students who have not yet completed the assessment.
Learn how to appropriately score and interpret the assessments you use. Most of these assessments ask students to rate statements using a five-point Likert scale that ranges from “strongly agree” to “strongly disagree.” The most common way to score these surveys is to collapse students’ responses into two categories, depending on whether they are or are not aligned with the “expert-like” response. Information about scoring can be found on each assessment’s PhysPort page, and many of these surveys can be automatically scored using the PhysPort Data Explorer.
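The “collapse” scoring described above can be sketched as follows, assuming responses coded 1–5 (strongly disagree to strongly agree) and a scoring key giving the expert-like direction for each item. The function name, items, and data are illustrative; always follow the actual scoring instructions on the assessment’s PhysPort page.

```python
# Collapse Likert responses into aligned / not aligned with the
# expert-like answer; neutral (3) responses are typically excluded.
# All item names and data here are hypothetical.

def fraction_expert_aligned(responses, expert_direction):
    """Fraction of non-neutral responses matching the expert-like direction.

    responses: list of (item, rating) pairs, rating in 1..5.
    expert_direction: dict mapping item -> "agree" or "disagree".
    """
    aligned = 0
    scored = 0
    for item, rating in responses:
        if rating == 3:       # skip neutral responses
            continue
        scored += 1
        agrees = rating >= 4  # 4 or 5 counts as agreement
        if agrees == (expert_direction[item] == "agree"):
            aligned += 1
    return aligned / scored if scored else 0.0

key = {"Q1": "agree", "Q2": "disagree"}                   # hypothetical key
data = [("Q1", 5), ("Q1", 2), ("Q2", 1), ("Q2", 3)]       # hypothetical data
frac = fraction_expert_aligned(data, key)
```

Comparing this fraction between pre- and post-test administrations gives the “shift” in expert-like attitudes that these surveys are designed to measure.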