Amie Baisley assesses student assessments


Amie Baisley, Ph.D., is an instructional associate professor in the Department of Engineering Education whose research focuses on student assessment and persistence.

Most people understand that assessments are key to determining how well students are learning the subject matter. But there is much more to assessment than handing out quizzes or assigning papers.

Amie Baisley, Ph.D., the Thomas O. Hunter Rising Star Instructional Associate Professor in the Department of Engineering Education (EEd), focuses on assessment strategies, as well as engineering student persistence – particularly during the first two years.  

Her research has led to the development of new assessment tools, including mastery-objective rubrics, a grade dashboard and a sentiment analysis tool to identify student tone in written reflections. 
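The article does not describe how the sentiment analysis tool is built. For readers curious what such a pass over written reflections might look like in principle, here is a minimal, purely illustrative sketch that scores tone with the off-the-shelf VADER analyzer from NLTK; the sample reflections and thresholds are assumptions, not Baisley's actual system.

```python
# Illustrative only: score the tone of short written reflections with
# NLTK's off-the-shelf VADER analyzer (not Baisley's actual tool).
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
analyzer = SentimentIntensityAnalyzer()

reflections = [  # hypothetical student reflections
    "I finally understand support reactions after my second attempt.",
    "I felt lost on the geometry and ran out of time.",
]

for text in reflections:
    scores = analyzer.polarity_scores(text)  # neg/neu/pos/compound values
    compound = scores["compound"]
    tone = "positive" if compound > 0.05 else "negative" if compound < -0.05 else "neutral"
    print(f"{tone:>8}  {compound:+.2f}  {text}")
```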

The old way  

Baisley became interested in assessment when she realized that traditional teaching, grading and assessment methods, particularly in second-year engineering courses, were not fully capturing how students learn today or supporting their conceptual development.

Baisley contends that a percentage score on an exam reveals little about what students actually understand, nor does it give them actionable feedback.

“A single exam score does not distinguish between conceptual misunderstandings and procedural errors, nor does it guide students toward specific areas in their problem-solving approach for improvement. That testing system can discourage students, leave them with no opportunity to learn from errors and limit persistence after failure,” she said.  

The new way 

That realization led her to rethink how her second-year courses are taught and graded.

“These courses form the foundation for many advanced engineering courses, so I wanted a system that clearly defined expectations, identified core concepts for each course and allowed students and me to track learning progress,” she explained.

This involved creating a student-centered learning environment that focused on foundational problem-solving skills and mastery-based grading.  

“The grading reform wasn’t driven by poor test scores but by a desire to better understand what a test score reflected about student learning,” she said. “The mastery-based grading system makes it possible to explicitly track conceptual growth across the semester, provide detailed individualized feedback and close the feedback loop between learning, reflection and progress. 

“It also provides multiple opportunities to revisit concepts, apply feedback and demonstrate growth to help students learn from mistakes and stay motivated throughout the semester.”   

How it works 

Originally, the mastery objectives were closely evaluated across the first several courses to ensure the core concepts were fundamental and repeatable across all problems. Now, these objectives and the scoring rubric remain consistent each semester to provide a stable framework to measure mastery.  

“New assessment problems are created each semester to assess the concepts in fresh contexts based on the needs of the current student cohort and emerging directions in the field. At the end of each semester, the mastery results are analyzed to confirm that the assessments continue to promote professional-level problem-solving skills and support students’ conceptual development,” Baisley said. 

Her courses are organized into two-week modules, which means a new assessment problem is created and tested every other week. Students are assessed on a single problem using the course-specific mastery objectives and a standardized scoring rubric. 

The system allows multiple opportunities for students to show growth while creating a low-stakes, feedback-focused testing environment. Students are encouraged to make mistakes, learn from detailed feedback and apply that feedback on subsequent assessments.  

This allows students to strengthen problem-solving skills and demonstrate mastery through multiple applications rather than a single high-stakes performance.  

“This cycle promotes persistence, metacognition and deeper learning of the course’s core objectives. Tracking each student’s progress over time provides a reliable picture of their strengths and areas for improvement, enabling individualized support throughout the semester,” Baisley explained.  
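The article describes the system only at this level, but the basic bookkeeping is easy to picture: per-objective rubric scores recorded for each biweekly assessment, with growth over the semester counting toward mastery. The sketch below is a hypothetical illustration; the objective names, the 0-4 rubric scale, the mastery cutoff and the "best attempt counts" rule are all assumptions, not Baisley's actual rubric or dashboard.

```python
# Hypothetical sketch of mastery tracking across biweekly assessments.
# Objective names, the 0-4 rubric scale and the mastery cutoff are
# illustrative assumptions only.
from collections import defaultdict

OBJECTIVES = ["support reactions", "geometry", "solution strategy"]
MASTERY_THRESHOLD = 3  # assumed cutoff on a 0-4 rubric scale

# One entry per two-week module: objective -> rubric score for one student.
attempts = [
    {"support reactions": 2, "geometry": 3, "solution strategy": 1},
    {"support reactions": 4, "geometry": 3, "solution strategy": 2},
    {"support reactions": 4, "geometry": 4, "solution strategy": 3},
]

best = defaultdict(int)
for module_scores in attempts:
    for objective, score in module_scores.items():
        # Later demonstrations of growth count; early stumbles are not averaged in.
        best[objective] = max(best[objective], score)

for objective in OBJECTIVES:
    status = "mastered" if best[objective] >= MASTERY_THRESHOLD else "developing"
    print(f"{objective:>18}: best {best[objective]}/4  ({status})")
```

Under this kind of rule, a student who scores poorly on an objective early in the semester can still earn full mastery credit by demonstrating it on a later assessment, which is the low-stakes, feedback-driven dynamic the quotes above describe.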

What do the students think?  

Connor Riley, a former statics student, said Baisley’s assessments were unique because they emphasized the problem-solving process and the reasoning behind each step.

“Dr. Baisley’s assessment style forces students to slow down a little more in the solution process to critically analyze the problem and think about what approach is necessary for the given beam structure, as many of the points for assessments lie in defining the types of support reactions, the required geometry, and the solution strategy needed to solve for the unknown values,” he said.   

He said this assessment style gives students a better understanding of what is being asked of them.  

“Toward the end of the semester, I would say, students view the assessment style as a fair reflection of their learning and understanding, most simply because at this point, the students have had time to digest what the assessment system asks of them,” he added. 

The future  

Baisley collaborates with peers in EEd and at other universities on assessment design and evaluation practices.

“The conversations lead to valuable insights about how teaching style, grading practices and pedagogical design influence student engagement, learning outcomes and overall understanding,” she said. 

Baisley said the goal is to use these different tools to build a holistic picture of student understanding, engagement and growth in each course. Her teaching goals include creating student-centered, engaging environments that establish strong foundations in the fundamentals.