Data in Teacher Evaluation
This page outlines how formative and summative teacher effectiveness assessment data can be used to improve student achievement, educator effectiveness, and human capital decisions. The assessment of teaching and the assessment of student learning can be viewed as two synergistic activities that aim to benefit both the quality of student learning and the professional development of the teacher. Assessing learning alone is not sufficient, because students' ultimate success also depends on their motivation and commitment to learning. Similarly, assessing only teaching behaviors and course activities is not sufficient, because qualities of the teacher may be appreciated by students yet not be optimally helpful to their learning and growth.
II. Formative Assessment & Summative Assessment
The two types of data are distinguished by how the data is used, not by the format of the assessment. The same assessment may serve either a formative or a summative purpose, though some methods of assessment are better suited to one purpose than the other.
Formative teacher effectiveness assessments yield data that is used to improve teaching practice. For instance, the Charlotte Danielson observation rubric yields detailed data on how well an educator explains a particular subject; when that data is communicated to teachers, they can use it to improve their teaching practice. Charlotte Danielson, the Teacher Advancement Project (TAP), Kim Marshall, and others have developed classroom observation rubrics that yield formative assessment data.
Summative teacher effectiveness assessments yield data that is used to measure end-of-instructional-unit success by comparing that success against a benchmark. For instance, a value-added assessment measures a teacher's contribution to end-of-year student growth by comparing end-of-year student performance against beginning-of-year student levels. When this data is communicated to teachers, they can use it to determine whether their teaching practice is actually increasing student growth or whether another factor is contributing to the growth. Summative assessment data is often a product of student growth measures (value-added tests, student learning objectives, pre- and post-tests, etc.) but can also be a product of observations (e.g., the Charlotte Danielson rubric).
The table below compares formative and summative assessments.
III. Formative Assessment Case Study
Project COACH is an innovative alternative teacher evaluation model currently used in select Tennessee districts, such as Bradley County and Hamilton County. In short, in contrast to the state model, which requires at most four unannounced 15-minute observations, Project COACH requires up to eight brief (at least 10 minutes), unannounced mini-observations. Each observation is followed almost immediately by a feedback conference that provides the educator with focused action steps.
The District Management Council found that 100% of principals and 84% of teachers agree that the verbal feedback provided or received through the Project COACH mini-observation process has been helpful in improving classroom instruction.
It provides teachers with immediate professional development feedback that they can incorporate into their practice right away.
The unannounced observations alleviate some of the stress that accompanies a looming announced observation.
The immediate feedback in Project COACH observations allows teachers to converse with their evaluators and address teachers' concerns about the evaluation.
IV. Summative Assessment Case Study
Delaware Performance Appraisal System II (DPAS II) is Delaware’s statewide educator evaluation system. As a statewide system, DPAS II establishes consistent educator and student performance expectations and outcomes across all schools and incorporates both student growth measures and Charlotte Danielson based observation measures.
Leaders in Delaware consider DPAS II to be the fulcrum of the state's educator human capital life cycle, which includes recruitment, selection, pre-service training, placement, professional development, and retention or separation. As they partner with new teacher talent pipelines (e.g., Teach for America or the University of Delaware's STEM residency program), they can use DPAS II data to drive their human capital decisions. For example, if DPAS II assessments show that secondary math teachers from the University of Delaware's STEM residency program are overwhelmingly the strongest secondary math teachers, then Delaware leaders have a data-driven business case to recruit more of those teachers.
The graphic below outlines how DPAS II drives the human capital life cycle.