A Multi-Method Approach for Assessing Changes in Teacher Performance in a Life Sciences Teacher Institute
Authors: Vicki May, Carl Hanssen, Phyllis Balcerzak

3. Design, Data & Analysis

Multiple methods are being used to examine the hypotheses. First, teacher content tests are administered at the start and end of each summer institute, with follow-up assessments administered annually. For example, Cohort 1 took the Year 1 and Year 2 content exams at the start and end of the Year 1 and Year 2 summer institutes, respectively. In addition, Cohort 1 retook the Year 1 content test at the end of the Year 2 summer institute as a measure of content retention, giving three measurement points on the Year 1 exam for Cohort 1: a pre-test, a post-test, and a retention test. Cohort 2 also took the Year 2 content exam before participating in the Year 2 summer institute, serving as a control for comparison with Cohort 1's Year 2 pre/post results.

The content tests were developed by a team of science educators and biologists from Washington University's Biology Department with expertise in the content areas covered during summer instruction. The assessment items were selected, modified, or written according to a framework that aligns each item with the learning objectives and instruction of each Institute content course. Items are coded by subject area (e.g., plants, ecology, evolution), relative cognitive demand (fact/comprehension versus application/multi-step process questions), and connection to global issues.

Before the test was administered to all teachers, a longer version was piloted with a small group of teachers, six from each of the three cohorts. The pilot data were reviewed using accepted psychometric techniques, including reliability analysis, form comparisons, item difficulty calculations, and foil (distractor) analysis. The revised test was then administered to teachers before and at the completion of their summer residence coursework, and test quality is monitored through item analyses each time it is administered.
Teachers who have not yet begun the professional development coursework take the test as a control group for comparison with those who have. This ongoing testing allows more rigorous analyses than within-cohort pre/post comparisons alone.

Second, student content tests are administered to students in institute participants' classes at the start and end of each school year.  This yields a pre-post test measure that allows us to examine increases in student content knowledge.
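The pre/post comparison described above can be sketched as follows; the scores and sample size are fabricated for illustration, and a real analysis would of course use the full student dataset:

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical pre/post content-test scores for five students, in matched order.
pre  = [12, 15,  9, 14, 11]
post = [16, 18, 12, 15, 14]

gains = [b - a for a, b in zip(pre, post)]
mean_gain = mean(gains)
# Paired t statistic: mean gain divided by its standard error.
t_stat = mean_gain / (stdev(gains) / sqrt(len(gains)))
print(f"mean gain = {mean_gain:.2f}, paired t = {t_stat:.2f}")
```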

Third, a student survey is administered at the end of each school year.  A critical element of this survey asks students to evaluate teacher performance as related to the use of inquiry-based teaching methods. The Institute leadership team and the project evaluators developed the survey.

Finally, classroom observations of institute teachers are planned to begin in January 2009. Members of the external evaluation team and the project leadership team will conduct these observations, enabling the project team to document evidence of teachers using inquiry-based methods in the classroom. For teachers who have not yet attended the summer institute, observations will also allow us to evaluate changes in teacher performance that may be attributable to institute participation. The classroom observation protocol comprises items derived from both the Horizon Research (2000) Local Systemic Initiative Classroom Observation Protocol and the Reformed Teaching Observation Protocol (2000), and was constructed around the specific goals of the Institute. Videotaped lessons will be used for internal validation.

Data analysis proceeds in several steps. First, descriptive results from the teacher content tests provide evidence of changes in content knowledge during a teacher's participation in the institute. Second, descriptive results from the student content tests provide evidence of changes in student content knowledge; hierarchical linear modeling will be used to evaluate the effect of teacher content knowledge gains on student content knowledge gains, controlling for student demographics, including grade level and course type (e.g., AP Biology vs. general science). Third, student survey results will provide a descriptive analysis of student perceptions of teacher performance, which will be correlated with teacher performance measured through classroom observations.
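The teacher-to-student linkage at the heart of this analysis can be sketched as below. A full analysis would fit a hierarchical linear model (students nested within teachers, with random intercepts); this simplified pooled regression on fabricated gains only illustrates the student-on-teacher slope of interest:

```python
import numpy as np

# Hypothetical data: nine students nested under three teachers;
# each teacher's content-knowledge gain is repeated for their students.
teacher_gain = np.array([2.0, 2.0, 2.0, 5.0, 5.0, 5.0, 8.0, 8.0, 8.0])
student_gain = np.array([3.0, 4.0, 2.0, 6.0, 5.0, 7.0, 9.0, 8.0, 10.0])

# Pooled OLS slope: change in student gain per point of teacher gain.
slope, intercept = np.polyfit(teacher_gain, student_gain, 1)
print(f"slope = {slope:.2f}, intercept = {intercept:.2f}")
```

In practice the nesting matters: pooled OLS understates the uncertainty because students who share a teacher are not independent, which is precisely why the design calls for hierarchical linear modeling.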

Overall, our goal is to use the multiple data sources described above to triangulate findings related to our hypotheses. These results will be used to evaluate the LSGC goals and objectives to improve teacher content knowledge and teacher use of inquiry-based teaching methods.