FEBRUARY 2016
Peering Around the Corner:
Analyzing State Efforts to Link Teachers to the Programs That Prepared Them
Ashley LiBetti Mitchel and Chad Aldeman
IDEAS | PEOPLE | RESULTS
Table of Contents
Introduction
State Challenges and Trade-offs
Colorado
Delaware
Florida
Georgia
Louisiana
Massachusetts
New Jersey
North Carolina
Ohio
Rhode Island
Tennessee
Endnotes
Acknowledgments
Introduction
Schools increasingly rely on new teachers to staff their classrooms. A generation ago,
the modal teacher had 15 years’ teaching experience, meaning that if you asked
teachers how many years they had taught, the most common answer would be 15.
Today, the most common answer is five years.1 And the proportion of teachers who are
new to the field will only increase as the baby boomer generation retires. Some forecasts
estimate that half of the nation’s teachers could retire in the next 10 years.2
This demand for new teachers creates obvious challenges for the education field, but it
also means that states have a unique opportunity to leverage their authority over teacher
preparation and certification to raise the overall level of teacher quality and effectiveness.
States, programs, and schools have long focused on the inputs of teacher preparation — the
rules for candidates and the preparation programs they attend — because inputs were thought
to predict teacher effectiveness, and because they were often the best option available. But in
the early 2000s, policymakers began trying to evaluate preparation programs on the basis of
graduate outcomes.3 No longer would policymakers have to impose rules that were essentially
best guesses about what would make an effective teacher; they could measure which teachers
were effective and then use information about the teachers’ training to shape policy decisions.
Louisiana and Tennessee were the first states to try out this idea. In 2000, Louisiana started
looking at preparation programs through their outcomes data. Between 2003 and 2006, the
state began linking preparation programs to the student-learning data of their recent completers,
and made the data available to the public in 2007. Louisiana's work suggested that it was possible
to discern program quality from completer outcomes. Tennessee began a similar initiative in
2007, when the state passed legislation that required an annual report on preparation program
outcomes. Louisiana’s and Tennessee’s efforts laid the foundation for national interest in linking
outcomes to preparation programs. U.S. Secretary of Education Arne Duncan pushed to expand
these models nationwide. The $4.35 billion Race to the Top program prompted a number of
states to begin linking programs to outcomes. Not all of the winning states promised teacher
preparation reforms along the lines of Louisiana and Tennessee, but many of them did, including
Florida, Georgia, North Carolina, Massachusetts, Ohio, and Rhode Island.
In 2011, the U.S. Department of Education took another step to encourage states to link
preparation programs to outcomes. The department announced that it would begin the
process of regulating Title II and Title IV of the Higher Education Act4 to address teacher-preparation accountability and reporting. Title II affects how states and institutions
report on the quality of preparation programs and requires states to identify their low-performing programs. Title IV includes student-aid programs like TEACH Grants, a loan-forgiveness program for teachers attending “high-quality” preparation programs. During
the rulemaking process, the department pushed to include completer outcomes in states’
definitions of program quality, and to use those definitions to determine which programs
were “high-quality” in the context of TEACH Grants.
The regulation, which is still making its way toward a final rule, would require states to
assess preparation programs on three performance outcomes: student learning (measured
by student-growth or teacher-evaluation results); employment (placement and retention
rates, especially in high-need schools); and survey outcomes (of completers and employers).
Although the rule is pending, if the final rule looks like the proposed version, all states will
be required to link completer outcomes to preparation programs, beginning in April 2019,
and to report the data publicly.
Researchers are still debating how to track results and define a successful preparation
program, but preparation programs will never be able to improve unless states track their
results. Measuring and publishing completer-outcomes data bolsters programs’ continuous
improvement efforts, giving them deeper insights into the information they already have.
Working with the data also builds technical capacity and allows researchers to study the policies
of teacher-preparation programs and gauge their effectiveness over time. And in the absence of
rigorous state accountability systems, public completer outcomes data give potential candidates
and employers useful information that they can use to choose programs and hire teachers.
State policymakers considering this work would be wise to learn from early implementation
efforts. This report reviews the challenges and trade-offs that states face in their efforts to
link completer outcomes to preparation programs. After reviewing the challenges and trade-offs, the report looks at seven important questions for 11 states that have attempted to link
outcomes to programs, based on the most recent information we could find.5 What follows is
our distillation of those lessons and what policymakers can learn from these early adopters.
State Challenges and Trade-offs
States linking outcomes to programs use the outcomes in different ways: Few states
differentiate programs by performance levels or use completer outcomes in their
program-approval processes. Other states plan to use outcomes for program
accountability. And every state we profile publicly reports completer data for transparency
and continuous improvement purposes.
States that take on this work in the future should consider previous states’ efforts and start
by defining how they intend to use completer outcomes. This critical decision should inform
how each state addresses the many challenges and trade-offs that will follow. Once states
decide on their goals, they will face logistical, conceptual, and political challenges in linking
outcomes to preparation programs. Logistical challenges include determining the minimum
n-size, or sample size, that institutions must meet before their performance data are
publicly reported. Conceptual challenges include figuring out if, and how, to use outcomes
to differentiate programs by performance levels. And states are likely to encounter
substantial political challenges for any outcomes they plan to use for program approval or
transparency purposes.
Determining Program-Quality Measures
Early in this process, states must determine which measures of program quality they value.
States will select different measures depending on what they’re trying to achieve, but they
generally choose to measure outcomes in four areas: teacher effectiveness, employment
(such as placement and retention rates), certification and licensure, and employer
and completer satisfaction. States tracking outcomes for continuous improvement or
transparency purposes will likely include a wide range of data to give programs and
consumers as much information as possible. States using outcomes as part of the program-approval process have to carefully consider which outcomes they want to hold programs
accountable for and may focus on fewer measures.
States determining which completer outcomes to measure must consider these trade-offs:
• Reporting a wide range of measures for transparency and continuous improvement
purposes gives programs, consumers, and the state low-stakes information about
completer performance.
• Tracking fewer measures, for any purpose, is less expensive and requires less capacity—
from states to collect data and from programs to report data.
• Holding programs accountable for fewer measures sends the message that those
measures are the most important. This can encourage programs to focus on
only those measures while ignoring others, or to attempt to “game” the subset of
selected measures.
• Holding programs accountable for many measures allows states to create a more
rigorous accountability system, but this may become overly complicated or difficult
for programs to navigate.
Defining the Sample of Completers
When measuring the performance of completers, states must determine which completers to
link back to programs and for what time period. Some completers follow a linear path: They
start teaching soon after completing a preparation program and remain in the classroom
for several years. In this scenario, it makes sense to link the completer’s outcomes to the
preparation program that trained her.
Other scenarios are less clear. Some completers graduate from an undergraduate teaching
program but then go on to earn a master’s degree before teaching. Others complete a
preparation program but don’t immediately begin teaching. In some cases, completers
don’t go directly from the program to teaching but do eventually teach. Which of these
completers should be linked back to the preparation program, and how much responsibility
does the program bear?
In each of these scenarios, states must also determine for how long after completion to
track the graduates and hold programs accountable for completer outcomes.
States determining the sample of completers must consider these trade-offs:
• Excluding nonlinear completers may severely reduce the sample size of completers
included in analyses or public reports.
• Including nonlinear completers may increase the sample size, but it adds a layer of
complexity.
• Outcomes for linear completers link most directly to program quality and are easiest to
justify to providers.
• Tracking completers for multiple years increases the number of observed completer
outcomes and indicates how, or if, the completer improves over time. But research
suggests that preparation effects fade over time, and it’s unclear what outcomes can be
reasonably attributed to a program after several years.
Minimum N-Size
States must determine the minimum n-size that can be reported or analyzed. In the context
of teacher preparation, the n-size is the minimum number of program completers that can
be included in a statistical analysis of program effectiveness. The larger the n-size, the more
confident states can be that the results truly reflect the program and are not just random
noise. Matt Kraft, an assistant professor of education at Brown University, estimates
that an n-size of between 100 and 375 completers per program is the minimum number
necessary to discern nonrandom differences among preparation programs.6
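To see why such large n-sizes are needed, consider a minimal simulation sketch (our illustration, not Kraft’s analysis). Every “program” below has identical true quality, and completer scores are drawn from a standard normal distribution as a simplifying assumption, so any spread in observed program means is pure sampling noise:

```python
# A minimal simulation (our illustration, not from the report): every
# "program" has the same true quality, so differences in observed
# program means are sampling noise alone.
import random
import statistics

random.seed(1)

def observed_program_mean(n_completers: int) -> float:
    """Mean observed score for one program whose true effect is zero."""
    return statistics.mean(random.gauss(0, 1) for _ in range(n_completers))

for n in (5, 25, 100, 375):
    # Distribution of observed means across 1,000 identical programs.
    means = [observed_program_mean(n) for _ in range(1000)]
    print(f"n-size {n:3d}: ~95% of identical programs land within "
          f"+/-{1.96 * statistics.stdev(means):.2f} of the true mean")
```

The spread of observed means shrinks roughly with the square root of the n-size, which is why differences among typically sized programs are so hard to distinguish from noise.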
So a larger n-size is better from a statistical perspective, but working with a larger n-size
is not always possible. As of November 2015, there were 26,589 teacher-preparation
programs at 2,171 colleges, universities, and other providers across the country, meaning
that the average institution offers about 12 distinct programs, many of which prepare
very few educators each year.7 Certain types of outcomes may further limit the sample of
completers. For example, only a small percentage of completers find and keep education
jobs. And most states are limited to collecting data on completers who work at public
schools within the state. Completers who teach at in-state private schools or at schools
outside the state are excluded from the preparation program’s observations, though efforts
are underway to expand states’ access to out-of-state data.
States can increase n-sizes by “rolling up” observations for multiple years of outcomes data
and completers. A state looking at completer effectiveness, for example, could include
multiple years of cohorts (for example, the graduates of 2012, 2013, and 2014) in its analysis
and then collect its results over multiple years (2013, 2014, and 2015). In this way, a program
would be responsible for the performance of its 2012 graduates in 2013, 2014, and 2015.
Another way to increase n-size is to roll up observations of “equivalent programs.” For
example, instead of looking separately at the data for Spanish-, French-, German-, and
Latin-language completers, a state might decide that the completers’ outcomes combined
reflect the quality of an institution’s world languages programs. Several states have taken
this further and not even looked at individual programs. Instead they focused on overall
institution-level results.
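As a concrete illustration of both roll-ups, the hypothetical sketch below first pools cohort years and then groups the language programs into one “world languages” cell. The column names and data are invented for illustration, not a real state data schema:

```python
# Hypothetical sketch of both roll-up strategies; data and column
# names are invented for illustration.
import pandas as pd

records = pd.DataFrame({
    "program":      ["Spanish", "French", "German", "Latin", "Spanish"],
    "cohort_year":  [2012, 2013, 2014, 2013, 2014],
    "outcome_year": [2013, 2014, 2015, 2015, 2015],
    "score":        [0.10, 0.25, -0.05, 0.30, 0.00],
})

# Roll-up 1: pool the 2012-2014 cohorts observed in 2013-2015.
pooled = records.query(
    "cohort_year >= 2012 and cohort_year <= 2014 "
    "and outcome_year >= 2013 and outcome_year <= 2015"
)

# Roll-up 2: treat all language programs as one "world languages" cell.
rolled_up = (pooled.assign(program_group="world languages")
             .groupby("program_group")["score"]
             .agg(["count", "mean"])
             .rename(columns={"count": "n_size", "mean": "mean_score"}))
print(rolled_up)  # one cell with n_size 5 instead of four tiny cells
```

The trade-off, as the next list notes, is that each pooling step moves the reported number further from any single program and year.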
States making n-size decisions must consider these trade-offs:
• A low minimum n-size may be inadequate for analysis and jeopardize
completers’ privacy.
• A high minimum n-size may limit the state’s ability to share the data, and it may
encourage providers to offer smaller programs with fewer completers so that they can
avoid public reporting.
• Rolling up different data across years or programs may undermine any conclusions
and reduce transparency for potential employers or candidates.
Programs vs. Institutions
States must decide whether they will report or analyze data at the program or at the
institution level. Candidates enroll in an institution, for example, the College of Education
at a public university, and complete one or more programs, such as a biology, physical
education, or English program, that prepare them for subject-area certification.
States making program and institution decisions must consider these trade-offs:
• Linking outcomes to institutions produces a larger sample size of completers, making
it more likely that the institution will surpass the minimum n-size requirements for
accountability and public reporting.
• Institutions have authority over individual programs and may be better equipped to
make structural changes on the basis of state feedback.
• Linking outcomes to programs allows for more precise reporting and more
targeted feedback.
• The performance of specific programs can’t be masked by overall institution
performance.
Program Differentiation
States must determine whether they will give programs summative ratings or place
institutions into performance bands. If so, they must determine how they will set
thresholds to differentiate among performance levels. Federal regulations require states to
differentiate programs by at least four performance levels, but they allow states discretion
in setting the thresholds for the performance bands.
States making program-differentiation decisions must consider these trade-offs:
• Most research on completer outcomes has found little meaningful variation between
the quality of different preparation programs or institutions.
• Some research suggests that there are nonrandom differences in the tails of the
distribution (the highest- and lowest-performing programs). Lessons from value-added
teacher-evaluation efforts also suggest that there may be limited variability in the
programs that are not the highest or lowest performing.
• If performance thresholds are too broad, programs will be lumped together; the
thresholds will fail to distinguish the mid-level programs from the highest- and lowest-performing programs. This prevents state policymakers from rewarding top programs
and supporting low performers, reduces incentives to improve, prevents other
providers from identifying best practices, and limits information about program quality
to potential employers or prospective students.
• States that decide to differentiate programs should be careful not to arbitrarily
set inflexible performance thresholds and then force program performance to fit
those thresholds.
Challenges with Specific Outcomes
States will also encounter issues when linking specific outcomes to preparation programs.
Below, we outline some of the issues with measuring completer effectiveness, placement,
and retention and with measuring completer and employer satisfaction.
Completer Effectiveness
Completer effectiveness measures how effective a completer is as the lead teacher of a
classroom. States can measure completer effectiveness with three types of outcomes data:
student learning, observations, or overall evaluation ratings. Student learning scores are
generally measured using completers’ value-added scores, which are based on student
performance on standardized assessments. Observations are part of most districts’
teacher-evaluation models. In most cases, a school administrator observes a teacher
several times throughout the school year and rates her performance against a rubric.
Evaluation ratings are completers’ summative ratings on their district’s teacher-evaluation
models, which may include student learning; observations; and other elements, like levels of
professionalism or family engagement.
States using completer-effectiveness data must consider these trade-offs:
• States that link student learning to programs, particularly for accountability
purposes, are likely to experience the strongest pushback from providers.
Opponents are critical of the heavy reliance on standardized assessments, and
they question the quality of those assessments.
• Classroom-observation results are more politically palatable, but are often conducted
inconsistently across school districts. Historically, the vast majority of teachers have
received inflated evaluation ratings in systems that rely heavily on observation outcomes.
• Using overall effectiveness scores presents both sets of challenges mentioned above.
• States should also consider whether it’s appropriate to link back to the preparation
program other factors that may be included in a teacher’s overall evaluation ratings,
like levels of professionalism and family engagement.
• Most states can access effectiveness data only for completers who are employed in state.
Completer Placement and Retention
Completer placement measures the completer’s employment outcomes. Placement
outcomes can include three types of measures: employment, subject-area employment,
and school employment. Employment measures whether the completer is employed in a
school, signaling the employment prospects of completers. States can include only teaching
positions or other school-based positions in this measure. States measure subject-area
employment in response to concerns about an undersupply of teachers in certain subject
areas, such as science, technology, engineering, and math. This measure tells providers
that they should prepare candidates to address unmet needs in the state. Similarly, states
measure school employment to assess whether completers are teaching in certain types
of schools, such as high-poverty schools, and to communicate to providers that placing
candidates in higher-need schools should be a priority.
States measure retention by looking at a completer’s year-to-year persistence, often in her
initial placement. States must determine how many years of retention, if any, they want to
attribute to the preparation program.
States using completer placement and retention data must consider these trade-offs:
• Not all in-school positions are the same. When collecting data, the state must decide
whether it values all positions — for example, in-field teacher, out-of-field teacher, and
substitute teacher — equally.
• To publish strong placement outcomes, providers may overcompensate. They may
push a candidate to an undersupplied subject area or to a high-need school, even if
it’s not a good fit.
• States must determine what constitutes “persisting.” How will the state count a
completer who is still teaching, but not in her original school or district?
• States must also determine whether it’s appropriate to link a completer’s employment
choices to her preparation program.
• Most states can access placement and retention data only for in-state public
school employers.
Completer and Employer Satisfaction
States can measure satisfaction by using completer and employer surveys. Surveys include
questions about factors that are not measured by effectiveness, placement, and retention
data — such as whether the completer believes the program prepared her well for teaching
or whether the employer will again hire from a program. States must determine which
entity — the state agency or the providers — will design, deliver, and analyze completer and
employer surveys.
States using satisfaction data must consider these trade-offs:
• Providers administering the surveys may — unintentionally or intentionally — affect the
quality and accuracy of responses. For example, this may happen because of the way
they write the survey questions or collect responses.
• States must take into account the likelihood of a low response rate and the
nonresponse bias that may result.
Colorado
What completer outcomes does the state track?
Colorado statute8 requires that the state annually report the outcomes of preparation-program completers in six performance areas:
• Student academic growth
• Student achievement
• Placement
• Mobility
• Retention
• Performance evaluation
The state is finalizing how it will measure, track, and report completer outcomes in these
performance areas.
Does the state track those outcomes at the program or at the
institution level?
Colorado plans to track outcomes for each performance measure at the institution level.
How does the state measure each type of outcome?
The state plans to track and report outcomes for all novice teachers, defined as teachers who have completed an educator-preparation program in the previous three years. The outcomes for all novice teachers will be disaggregated and reported in comparison with the outcomes for three other categories of teachers:
• Experienced teachers: teachers who completed an in-state or out-of-state preparation program more than three years before
• In-state novice teachers: teachers who completed an in-state educator-preparation program in the previous three years
• Out-of-state novice teachers: teachers who completed an out-of-state educator-preparation program in the previous three years
Colorado is finalizing its business rules for measuring each performance area. As of October 2015, the state plans to use these definitions to track and report completer outcomes:
• Student growth: median growth percentile for novice completers from the program
• Student achievement: percentage of novice completers’ students who are meeting state benchmarks
• Teacher effectiveness: percentage of novice completers who fall into each effectiveness rating on the state’s evaluation system
• Placement: percentage of novice completers who are employed in a full-time instructional role in a Colorado public school
• Retention: percentage of novice completers who have continued in a full-time instructional role in a Colorado public school
• Mobility: percentage of novice completers who have stayed in their initial placement in a Colorado public school

What n-size does the state use in tracking and reporting outcomes?
Colorado plans to use a minimum n-size of five for public reporting of institution-level data for each performance area. Completers must have a minimum number of tested students in order to be included in the n-size for the student-achievement and student-growth performance areas. A completer must have at least 16 tested students for the student-achievement performance area and at least 20 tested students for the student-growth performance area.

Does the state use outcomes data to differentiate providers by performance?
Colorado does not plan to use the outcomes data to differentiate providers by performance.

Does the state use outcomes data to make consequential decisions, such as whether to approve programs?
Colorado does not plan to use the outcomes data to make consequential decisions, such as decisions about program approval.

How does the state make the information meaningful to the public?
Colorado statute requires that the annual report be made public. The state is collecting data and preparing them for public release.
Delaware
What completer outcomes does the state track?
In 2013, Delaware passed legislation9 that requires the state to collect and publicly
report completer outcomes in five domains:
• Recruitment
• Candidate performance
• Placement
• Retention
• Graduate performance
Each domain comprises two to four metrics. The state first published these outcomes
in 2015.10 Eventually, the state will also collect and report data on one additional
domain, perceptions.
Does the state track those outcomes at the program or at the
institution level?
Delaware tracks and reports completer outcomes at the program level. The state plans
to produce institution-level reports in 2016.
How does the state measure each type of outcome?
Recruitment
• Nonwhite candidate enrollment: proportion of nonwhite completers from among program graduates from the previous five years who worked in public education in Delaware.
• SAT score: average cumulative SAT score (on a scale of 2400) for the most recent incoming class of the program. This also includes ACT scores converted to their SAT equivalents.

Candidate performance
• General Knowledge Test Scores: average General Knowledge Test scores (Praxis I scores in math, reading, and writing) for all completers from the previous five years who worked in public education in Delaware.
• Performance-assessment score: average performance-assessment scores for all completers who worked in public education in Delaware. As of 2015, this metric has not yet been calculated or reported.

Placement
• Placement in Delaware: proportion of completers who began working in public education in Delaware within one year of completing the program. On the 2015 scorecard, this metric evaluates 2013 completers.
• Placement in High-Need Schools: proportion of completers from the previous five years who began working in a public school in Delaware that the state has identified as high need.

Retention
• Beyond-year-one retention rate: proportion of completers from the previous five years who continued working in public education in Delaware for any length of time beyond their first year.
• Beyond-year-three retention rate: proportion of completers from the previous five years who continued working in public education in Delaware for more than three consecutive years.

Graduate performance
• Student Improvement Component ratings: proportion of completers from the previous five years who were rated “Exceeds” on the Student Improvement Component of the state evaluation system, the Delaware Performance Appraisal System (DPAS II).11
• Observation scores: average observation score for completers who worked in public education in Delaware over the previous five years.
• Student-growth outcomes: student-achievement results for program completers over the previous five years who taught English, math, or social studies in public education in Delaware. Student-achievement results are measured using the Delaware Comprehensive Assessment System.
• Overall performance-evaluation ratings: proportion of completers over the previous five years who received “Highly Effective” as their summative rating on the DPAS II.

Perceptions
• Candidate survey: results from a completer survey administered within one year of completers starting work in the Delaware school system. This metric was not calculated for 2015. The Delaware Department of Education will administer this survey for the first time in spring 2016.
• LEA 360: results from a survey that asks a district representative (e.g., the completer’s first-year mentor) to assess completer readiness in several key performance factors in the completer’s first year. This metric was not calculated for 2015. The Delaware Department of Education will administer this survey for the first time in spring 2016.

Delaware is building its capacity to track outcomes for all completers, including those who are employed out of state, for the Recruitment, Candidate Performance, and Placement domains.

What n-size does the state use in tracking and reporting outcomes?
Programs receive a scorecard if they have 10 or more completers who have been working in Delaware over the previous five years.

Does the state use outcomes data to differentiate providers by performance?
Delaware differentiates programs by four performance tiers on the basis of completer outcomes. Tier 1 is the highest rating and Tier 4 is the lowest. Programs receive an overall tier rating and a tier rating for each performance domain.

Does the state use outcomes data to make consequential decisions, such as whether to approve programs?
The state did not use outcomes as part of any formal regulatory processes in 2015. For 2016, the reports are scheduled to be reproduced with formal consequences. These consequences will include, at minimum, a probationary period for programs rated in the lowest tier of performance.12 These biennial reports are intended to be the main proxy for ongoing program review.

How does the state make the information meaningful to the public?
The Delaware scorecards are easy to read and interpret. The state publishes a state-level summary scorecard, with high-level information about the programs. The state-level summary links to program-level scorecards, which lay out specific information about the program’s performance in each domain, such as the final tier, earned points, possible points, and actual performance compared with the minimum state standard and the state target.
Florida
What completer outcomes does the state track?
Florida statute13 requires the state to collect data on six performance measures:
• Placement rate of program completers
• Retention rate of program completers
• Results of district evaluations of program completers
• Achievement of pre-k through 12th-grade students of completers
• Achievement of students of completers by subgroups
• Production of teachers in critical shortage areas
Does the state track those outcomes at the program or at the
institution level?
Florida tracks outcomes for each performance measure at the program level, for each
certification subject area.
How does the state measure each type of outcome?
• Placement rate of program completers: percentage of program completers who are employed full-time or part-time in instructional positions in a Florida public school in the first or second academic year after program completion. Completers employed in a private or out-of-state school may also be included if their employment is verified.
• Retention rate of program completers: average number of years that program completers are employed in full-time or part-time instructional positions in a Florida public school. The employment must occur during a five-year period that begins with initial employment in the first or second academic year after program completion. Completers employed in a private or out-of-state school may also be included if their employment is verified.
• Results of district evaluations of program completers: annual summative evaluation ratings for the most recent academic year for program completers from the previous three academic years.
• Achievement of pre-k through 12th-grade students of completers: the performance of pre-k through 12th-grade students who are assigned to in-field program completers from the previous three years and who received a student-learning growth score from the most recent academic year for which results are available. Student-learning growth scores are based on the performance of students in grades 4 through 10 on statewide standardized assessments in math and English language arts.
• Achievement of pre-k through 12th-grade students of completers by subgroups: the performance, aggregated by student subgroup, of pre-k through 12th-grade students who are assigned to in-field program completers. The definition of “student subgroup” is taken from the federal Elementary and Secondary Education Act. This score is based on in-field program completers from the previous three years who received a student-learning growth score in the most recent academic year for which results are available. Student-learning growth scores are based on the performance of students in grades 4 through 10 on statewide standardized assessments in math and English language arts.
• Production of teachers in critical shortage areas: specific certification in high-need content areas and high-priority locations that the State Board of Education annually defines.

What n-size does the state use in tracking and reporting outcomes?
Florida legislation14 requires programs to meet minimum n-size requirements to receive an Annual Program Performance Report (APPR). Programs must have three or more completers in the selected cohort time period for the Placement or Retention performance metric, and two or more completers who received an annual evaluation for the Annual Evaluation performance metric.
Does the state use outcomes data to differentiate providers by performance?
Florida recently released program outcomes data for the first time, through the APPR. The APPR outlines four performance levels (1 through 4) for each metric using the program’s outcomes. For example, this is how a program’s Placement Rate performance is defined:

Level 4: Placement rate is at or above the 68th percentile of all equivalent programs across the state.
Level 3: Placement rate is at or above the 34th percentile and below the 68th percentile of all equivalent programs across the state.
Level 2: Placement rate is at or above the 5th percentile and below the 34th percentile of all equivalent programs across the state.
Level 1: Placement rate is below the 5th percentile of all equivalent programs across the state.

Level 4 is the highest rating for all performance metrics except Critical Teacher Shortage. Critical Teacher Shortage, a bonus metric, is defined below:

Bonus Performance Metric: The critical teacher shortage program increased the number of program completers compared to the year before, with a minimum of 2 completers in each year.

Does the state use outcomes data to make consequential decisions, such as whether to approve programs?
Florida statute15 requires the state to incorporate completer outcomes into the continuing program-approval process. Programs must be reapproved every five years. During the approval process, programs receive a summative score of between 1.0 and 4.0. Half of a program’s summative score is based on on-site visits and the program’s average APPR scores over the previous five years.

Programs may receive one of three approval ratings, on the basis of their summative score:
• Summative score below 2.4: approval denied
• Summative score between 2.4 and 3.5: full approval
• Summative score above 3.5: approval with distinction
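Read as rules, the published cut points amount to two simple mappings. The sketch below is our hypothetical rendering in Python, not Florida’s actual implementation; in particular, how the state treats scores that fall exactly on a boundary is an assumption on our part:

```python
def placement_level(percentile: float) -> int:
    """Map a program's placement-rate percentile (relative to all
    equivalent programs statewide) to an APPR performance level.
    Boundary handling follows the table's "at or above" wording."""
    if percentile >= 68:
        return 4
    if percentile >= 34:
        return 3
    if percentile >= 5:
        return 2
    return 1

def approval_rating(summative_score: float) -> str:
    """Map a 1.0-4.0 summative score to an approval rating. The report
    only says "below", "between", and "above", so the treatment of
    scores exactly at 2.4 or 3.5 here is our assumption."""
    if summative_score < 2.4:
        return "approval denied"
    if summative_score <= 3.5:
        return "full approval"
    return "approval with distinction"

print(placement_level(70), placement_level(10))    # 4 2
print(approval_rating(2.3), approval_rating(3.6))  # approval denied, approval with distinction
```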
How does the state make the information
meaningful to the public?
The first APPRs are publicly available. Users can
access certification-subject-area reports one at a
time. There is no way to compare program results
without downloading multiple files and manually
comparing them.
Georgia
What completer outcomes does the state track?
Georgia collects data on its nine Teacher Preparation Program Effectiveness Measures
(TPPEMs). These indicators include completer and program performance. Three of the
indicators measure completer performance pre-service, or while still enrolled in the
preparation program. The state will begin collecting data on the TPPEMs in the 2015-16
academic year. The full set of measures will be available for the first time in 2018. These are
the nine TPPEMs:
• Teacher effectiveness measures of program completers
• Success rates of induction certificate teachers
• Candidate performance on state-approved content assessments
• Candidate performance on edTPA
• Completion rates
• Retention rates
• Employment yield rate
• Survey of employed completers
• Employer survey
Does the state track those outcomes at
the program or at the institution level?
Georgia tracks these outcomes at the program level.
How does the state measure each type of
outcome?
• Teacher effectiveness measures of program
completers: program-completer performance on
the state’s teacher-effectiveness system, which
includes three components: Teacher Assessment
on Performance Standards (TAPS), the growth and
academic achievement of the completer’s students,
and surveys of instructional practice.
>> TAPS: evaluator observations of teacher
practice16
>> Student growth and academic achievement:
student-growth percentile and value-added
measure for teachers of state-tested subjects,
or approved student-learning objectives, using
district achievement growth measures for
teachers of nontested subjects
>> Surveys of instructional practice: student
surveys, administered in grade bands 3-5, 6-8,
and 9-1217
• Success rates of induction certificate teachers: As of 2014, beginning teachers in Georgia work as “Induction Teachers” for the first three years. After completing the induction period, teachers can earn a professional certificate. This measure tracks the rate at which a program’s completers successfully pass the induction phase.
• Candidate performance on state-approved content assessments: Completers must attempt all tests in the state-approved assessment before August 31 of the completion year. The better of two attempts for each program completer will be used to calculate the content-assessment pass rate for the program.
• Candidate performance on edTPA: Programs must require candidates to take the edTPA at some point before program completion. Completer scores are used to calculate the program’s content-assessment measure.
• Completion rates: annual completion rates, measured by comparing the number of candidates enrolled in a program with the number of candidates who successfully completed the program.
• Retention rates: program completers and nontraditional candidates who, during the reporting year, continued their employment as a teacher beyond their first year teaching in a Georgia public or public charter school.
• Employment yield rate: used in conjunction with annual employment data, the yield rate represents the number of program completers, or in the case of nontraditional programs, candidates who earn the Induction Certificate, are employed by a Georgia pre-k through 12 public school or public charter school, and are placed in in-field teaching positions.
• Survey of employed completers: an annual statewide survey of program completers who are employed in a Georgia public school or public charter school. The goal of the survey is to assess whether completers are adequately prepared to translate theory into practice and whether the program gave them the essential knowledge, skills, and dispositions they need to be effective in the classroom.
• Employer survey: an annual statewide survey of employers of those completers who are working in a Georgia public school or public charter school. The goal of the survey is to gauge employer satisfaction; identify what qualities are most desirable in a teacher when making a hiring decision; and determine what knowledge, skills, and dispositions are essential in a teacher.
What n-size does the state use in tracking and reporting outcomes?
The minimum n-size for reporting and tracking outcomes is 10. Georgia tracks and reports outcomes for aggregated cohorts. An aggregated cohort is a group of candidates completing a defined, state-approved program between September 1 and August 31. If the number of candidates in an aggregated cohort is below 10, multiple years (up to three years) will be combined to create a cohort of at least 10 candidates.

Does the state use outcomes data to differentiate providers by performance?
Georgia is still deciding how it will differentiate programs by performance, though it expects to use four performance levels: exemplary, effective, at risk of low performing, and low performing.18

Does the state use outcomes data to make consequential decisions, such as whether to approve programs?
Georgia will use its TPPEM data to complement the state’s program-approval process, which takes place every seven years, though the consequences for low performance and accolades for high performance have not been determined. The state will likely use the data to make inferences and encourage program adjustments between full program-approval reviews.

How does the state make the information meaningful to the public?
The TPPEM data are not yet publicly available, but the state plans to publish the data online. This will most likely happen in 2018, when all TPPEM measures are collected.
Louisiana
What completer outcomes does the state track?
Louisiana is examining new types of outcomes as it tracks and reports them on its online
Teacher Preparation Data Dashboards. The state is collecting information on indicators
in four categories: Candidate Selection Profile, Knowledge and Skills for Teaching of
Completers, Program Productivity and Alignment to State Needs of Completers, and
Performance as Classroom Teachers. These indicators are modeled on 2020 Effectiveness
Indicators, a framework that proposes a set of annual, publicly reported indicators for
alternative and traditional educator-preparation programs to ensure transparency for
stakeholders and facilitate continual-improvement efforts.19
Louisiana is also revamping its preparation-accountability process. It currently reports
on licensure pass rates only for public universities, for accountability purposes. The data
dashboards were voluntarily developed by the state’s providers in collaboration with the
Board of Regents and with support from the Louisiana Department of Education. The
providers decided together which data they would voluntarily publish. So while the format
of the data dashboard follows the 2020 Effectiveness Indicators, only some data are
available, and some indicators are not published in the suggested format.20
Below are the indicators on the 2014 data dashboards. In this section and the next, indicators for which data are not available are marked with an asterisk.

Candidate Selection Profile
• Academic strength*
• Teaching promise*
• Candidate/completer diversity

Knowledge and Skills for Teaching of Completers
• Content and pedagogical knowledge
• Teaching skill*
• Completer rating of program*

Program Productivity and Alignment to State Needs of Completers
• Entry and persistence in teaching*
• Placement and persistence in high-need subjects and schools*

Performance as Classroom Teachers
• Impact on K-12 students
• Demonstrated teaching skill
• Overall impact and demonstrated teaching skill
• State value-added and overall evaluation scores
• K-12 student perceptions*

Does the state track those outcomes at the program or at the institution level?
The state tracks outcomes at the institution level.

How does the state measure each type of outcome?
Candidate Selection Profile
• Academic strength*: percentage of completers who pass the Praxis skills assessment; median GPA of candidates at admission and completion of the program; and the number of completers who started but did not complete the program within six years
• Teaching promise*: percentage of accepted program candidates whose score on a rigorous and validated “fitness for teaching” assessment demonstrates a strong promise for teaching
• Candidate/completer diversity: number of candidates who enrolled and completed the program; number of candidates enrolled in the program, by gender and racial/ethnic subgroup

Knowledge and Skills for Teaching of Completers
• Content and pedagogical knowledge: percentages of completers who passed the Praxis content assessments, the Praxis professional knowledge assessment, and all assessments
• Teaching skill*: number of hours of clinical experience prior to and during student teaching, and the percentage of completers who meet state licensing requirements
• Completer rating of program*: state- or nationally developed program-completer survey of teaching preparedness and program quality, by cohort, upon program completion and at end of the first year of full-time teaching
Program Productivity and Alignment to State Needs of Completers
• Entry and persistence in teaching*: percentage and number of completers who began teaching in the year following program completion; percentage and number of completers who obtained a teaching license; and percentage and number of completers from five years ago who have persisted in teaching each year since then
• Placement and persistence in high-need subjects and schools*: number and percentage of completers, by cohort, who are employed and persisting in teaching in low-performing, low-income, or remote rural schools or in high-need subjects one to five years after program completion, including in-state and out-of-state placements

Performance as Classroom Teachers
• Impact on K-12 students: mean student-outcome score and number of scores for all new teachers with less than two years of teaching in the previous academic year; percentage and number of student-outcome scores for new teachers in the previous academic year, by teacher-effectiveness levels
• Demonstrated teaching skill: mean professional practice score and number of scores for all new teachers with less than two years of teaching in the previous academic year; percentage and number of professional practice scores for new teachers in the previous academic year, by teacher-effectiveness levels
• Overall impact and demonstrated teaching skill: mean overall evaluation score and number of scores for all new teachers with less than two years of teaching in the previous academic year; percentage and number of overall evaluation scores for new teachers in the previous academic year, by teacher-effectiveness levels
• State value-added and overall evaluation scores: mean value-added score and number of scores for new teachers in grades 4-8 with less than two years of teaching, by content area (mathematics, science, social studies, and language arts/reading); percentage and number of value-added scores by content areas and teacher-effectiveness levels
• K-12 student perceptions*: K-12 student surveys about completers’ teaching practice during first three years of full-time teaching, using valid and reliable statewide instruments

What n-size does the state use in tracking and reporting outcomes?
The minimum n-size for tracking and reporting outcomes is 25.

Does the state use outcomes data to differentiate providers by performance?
Louisiana does not use outcomes data to differentiate institutions by performance level, but it previously did so. For example, as part of the Programmatic Intervention accountability model, the state determined performance levels on the basis of value-added scores. As part of the new preparation-accountability system, the state is considering using outcomes to differentiate providers by performance level.

Does the state use outcomes data to make consequential decisions, such as whether to approve programs?
Louisiana does not use outcomes data to make consequential decisions about institutions, but it previously did so. As part of the new preparation-accountability system, the state is considering how to integrate outcomes data and use them to make consequential decisions.
How does the state make the
information meaningful to the public?
Louisiana publishes the institution-level data
dashboards on the Board of Regents website.
Institutional reports from previous years are also
available. In 2014, the state provided a “fact book”
with the data dashboards, which includes historical
context, as well as institution-level trend data for
many indicators.
Massachusetts
What completer outcomes does the state track?
According to Massachusetts legislation,21 the state Department of Elementary and
Secondary Education (DESE) must publish an annual report with information on each
organization that is approved to offer an educator-preparation program in the state. In
Preparation Program Profiles, the state publishes data on these areas:
• Candidate enrollment
• Program completers
• Massachusetts Tests for Educator Licensure (MTEL)22 pass rate
• Employment
• Faculty and staff
• Job placement and retention rates
Massachusetts is revising the types of data it collects from sponsoring organizations and
programs. When these changes go into effect, the state will collect and report data on two
additional types of outcomes:
• In 2016: educator-evaluation ratings
• In 2017: Survey results of program completers’ supervising practitioners and their
hiring principals
Does the state track those outcomes at
the program or at the institution level?
Massachusetts reports outcomes at the sponsoring-organization (institution) level and the program level.
How does the state measure each type of outcome?
Current Profiles
• Candidate enrollment: total unduplicated number of candidates enrolled, and number and percentage of candidates enrolled, by gender and racial/ethnic subgroup.
• Program completers: total number of candidates meeting all requirements of the preparation program (e.g., instruction/coursework and practicum), whether or not a candidate has taken and passed state tests or assessments for licensure or has been endorsed for licensure by the program. This count includes candidates who complete two or more programs during the same year.
• Massachusetts Tests for Educator Licensure (MTEL) pass rate: percentage and number of candidates who took the MTEL and achieved a score equal to or higher than the passing score established by the state, by assessment type and candidate status. Candidate status includes all program completers, candidates who have not completed nonclinical coursework, and candidates who have completed the nonclinical coursework but have not completed the clinical component.
• Employment: number of program completers each year, for the previous three years and overall, who are employed in a Massachusetts public school. These data are also represented by program type, program level, and program subject area. This measure also includes the five Massachusetts public school districts that employ the highest percentage of program completers.
• Retention rate: percentage of employed completers who were employed for a second consecutive year. These data are also represented by program type, program level, and program subject area. This measure also includes the five Massachusetts public school districts that employ the highest percentage of program completers.
• Faculty and staff: total number of full-time program faculty and staff, overall and by gender and racial/ethnic subgroup, and the number of candidates per faculty member.
• Job placement rate: percentage of completers employed in a Massachusetts public school within one, two, and three years of completing a preparation program.
• Retention rate: percentage of employed completers who stayed for two, three, and four years.

Additional Outcomes Data in New Profiles
• Educator-evaluation ratings: percentage of completers by summative rating; by ratings on each component of the state’s evaluation system, including impact on student growth; and percentage of completers who have earned professional teacher status.23
• Surveys of program completers and their principals: response rate and response by question to state-administered surveys for enrollees, non-practicum completers, program completers, district personnel, and new educators.

What n-size does the state use in tracking and reporting outcomes?
The minimum n-size for tracking and reporting outcomes is six.
Does the state use outcomes data to differentiate providers by performance?
Massachusetts does not use outcomes data to differentiate providers by performance level.

Does the state use outcomes data to make consequential decisions, such as whether to approve programs?
Massachusetts launched a new program-approval process in the 2014-15 academic year. Through this process, the state includes outcomes data as one piece of evidence in program-approval decisions.24

How does the state make the information meaningful to the public?
Massachusetts publishes completer outcomes data in two places on the DESE website: the statewide profile section, where a range of state-, district-, and school-level information is available, and the section that offers individual educator-preparation profile data reports.
The educator-preparation profiles provide in-depth information about each institution, on the basis of the data mentioned above and other indicators required by legislation. The statewide reports allow stakeholders to compare all institutions by several indicators, including employment and retention by program, program characteristic, and year.
New Jersey
What completer outcomes does the state track?
New Jersey tracks and publicly reports completer outcomes through its Educator
Preparation Provider Annual Reports. The state adds new public metrics each year.
The 2015 version of the report includes the following outcomes:
• Certification and licensure rates
• Hiring rate
• Persistence rate
• School placement by:
>> School classification
>> District factor group
• Classroom assignments by teacher-shortage area
• Compensation
• Praxis II scores
The state expects that future versions of the report will include completer-evaluation data,
more robust persistence and hiring data, and teacher-candidate-survey data.25
Does the state track those outcomes at
the program or at the institution level?
New Jersey tracks all of these outcomes at the institution level. In addition, the state reports program-level employment outcomes (i.e., the number of certified and employed completers) for the five largest programs at each institution.

How does the state measure each type of outcome?
• Certification and licensure rate: number of completers receiving New Jersey certification or licensure, subject areas of endorsement, number of endorsements, and percentage employed.
• Hiring rate: percentage of completers from the previous two years who were employed in a New Jersey public school as of that fall.
• Persistence rate: percentage of completers employed in a New Jersey public school in one year who continued employment in the following year.
• School placement by:
>> School classification: number and percentage of completers who are employed at a Priority, Focus, or Reward school, as defined by New Jersey’s Elementary and Secondary Education Act (ESEA) waiver,26 compared with the percentage of teachers employed in each category of school statewide.
>> District factor group: number and percentage of completers who are employed at a school in each of the state’s eight district factor groups. The state defines district factor groups by a number of variables that approximate the community’s socioeconomic status.27 The percentage of completers employed in a single district factor group is compared with the percentage of completers employed in each district factor group statewide. (The sketch after this list illustrates this kind of comparison.)
• Classroom assignments by teacher-shortage area: number of completers who received an endorsement in a teacher-shortage area and who are employed in a teacher-shortage position.
• Compensation: the average starting salary of completers employed in a New Jersey public school, by region.
• Praxis II scores: the average scaled score on the Praxis II, by content area, compared with the average scaled score for the state.
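The district-factor-group comparison above is a distribution-versus-distribution check. The sketch below is a minimal, hypothetical Python illustration of that arithmetic; the record fields, the sample data, and the group labels are invented for the example rather than taken from New Jersey’s reports.

    from collections import Counter

    def group_shares(teachers):
        """Percentage of teachers employed in each district factor group."""
        counts = Counter(t["dfg"] for t in teachers)
        total = sum(counts.values())
        return {g: round(100 * n / total, 1) for g, n in counts.items()}

    # Invented sample data: one program's completers vs. a statewide pool.
    program = group_shares([{"dfg": "A"}, {"dfg": "A"}, {"dfg": "CD"}, {"dfg": "J"}])
    statewide = group_shares([{"dfg": "A"}, {"dfg": "B"}, {"dfg": "CD"},
                              {"dfg": "GH"}, {"dfg": "I"}, {"dfg": "J"}])

    for group in sorted(set(program) | set(statewide)):
        print(f"DFG {group}: program {program.get(group, 0.0)}% "
              f"vs. statewide {statewide.get(group, 0.0)}%")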
What n-size does the state use in
tracking and reporting outcomes?
The minimum n-size for tracking and reporting
outcomes is 10.
Does the state use outcomes data to
differentiate providers by performance?
New Jersey does not differentiate programs or
institutions by performance level on the basis of
outcomes data.
Does the state use outcomes data to
make consequential decisions, such as
whether to approve programs?
New Jersey does not yet use outcomes data to make
consequential decisions, but it expects to eventually use
outcomes data to inform parts of the approval process.
How does the state make the
information meaningful to the public?
The state improves the Educator Preparation Provider Annual Report with each iteration. Currently, the
reports are available only as PDF files and cannot be
easily compared across institutions or programs. The
state plans to make these documents more accessible
to different stakeholder groups, such as program
deans and potential employers.
North Carolina
What completer outcomes does the state track?
North Carolina legislation28 requires educator-preparation programs to submit annual
performance reports on the criteria below. The state compiles much of this information
on an online dashboard available to the public.
• Demographics and academic profile of entering candidates
• Graduation rate
• Time-to-graduation rate
• Licensure assessment scores and pass rate
• Licensure rate
• Employment rate
• Retention rate
• Completer satisfaction
• Employer satisfaction
• Completer effectiveness
Does the state track those outcomes at the program or at the institution level?
North Carolina tracks outcomes at the institution level.

How does the state measure each type of outcome?
• Demographics and academic profile of entering candidates: number of full-time and part-time enrolled candidates, by gender and racial/ethnic subgroup, and mean scores from several academic criteria, including SAT, ACT, and GPA.
• Graduation rate: percentage of candidates who completed the program.
• Time-to-graduation rate: number of completers, by full- or part-time status and by the number of semesters it took for them to complete the program (range: three or fewer semesters through eight semesters).
• Licensure assessment scores and pass rate: average completer scores on professional and content-area examinations for the purpose of licensure.
• Licensure rate: percentage of completers receiving initial licenses.
• Employment rate: percentage of completers hired as teachers.
• Retention rate: percentage of completers remaining in teaching for four years.
• Completer satisfaction: results from a common survey of completer satisfaction.
• Employer satisfaction: results from a common survey of employer satisfaction.
• Completer effectiveness: summary of evaluation data for beginning teachers (teachers with less than three years of experience and a Standard Professional 1 license) by each component in the state’s evaluation system. This measure includes sample size and percentage of completers in each performance level.

What n-size does the state use in tracking and reporting outcomes?
The minimum n-size for tracking and reporting outcomes is five.

Does the state use outcomes data to differentiate providers by performance?
The state does not use outcomes data to differentiate providers by performance level.

Does the state use outcomes data to make consequential decisions, such as whether to approve programs?
The state does not use outcomes data to make consequential decisions, including program-approval decisions.

How does the state make the information meaningful to the public?
Although North Carolina requires preparation providers to submit information on the outcomes listed above, the information is not published in one place. The state provides some of the information, such as the completer-effectiveness ratings, on an online dashboard,29 but other information, such as completer time-to-graduation rates, is available in a separate IHE Performance Report.30 Potential candidates or employers who are interested in the outcomes data for a specific institution would have to look through two separate reports to get most of the information. Some of the required information, such as completer-retention data, is missing from the dashboards.
Ohio
What completer outcomes does the state track?
Ohio tracks and publicly reports completer outcomes in annual Educator Performance
reports. The reports include data on these outcomes:
• Licensure test scores or pass rate
• Evaluation results of program completers
• edTPA assessment results
• Value-added data
• Candidate academic measures
• Field and clinical experiences
• Pre-service teacher survey results
• Resident educator survey results
• Resident educator persistence
• Excellence and innovation initiatives
• National accreditation
Does the state track those outcomes at
the program or at the institution level?
Ohio tracks outcomes at the institution and program
levels. Different outcomes are tracked at different levels:
• At the program level, Ohio reports licensure test
scores, candidate academic measures, field and
clinical experiences, pre-service teacher survey
results, and resident educator survey results.
• At the institution level, Ohio reports evaluation results of program completers, edTPA assessment results, licensure pass rate, value-added data, candidate academic measures, field and clinical experiences, pre-service teacher survey results, resident educator survey results, national accreditation, resident educator persistence, and excellence and innovation initiatives.

How does the state measure each type of outcome?
• Licensure test scores or pass rate
>> Program-level report: cut score for passing the required assessment, number of completers tested, average scaled score (the average of completers’ best scores), and the number and percentage of completers who passed.
>> Institution-level report: number of completers tested and the pass-rate percentage of completers from the previous year.
• Evaluation results of program completers: number of completers in each evaluation performance level, for each year over the previous four years.
• edTPA assessment results: institutional average score.
• Value-added data: number of completers with effective licensure dates over the previous four years; number of those completers who were employed as teachers and who had value-added data. Number and percentage of those completers in each value-added performance classification. Number and percentage of those completers by school characteristic (grade span, school type, overall grade level of building, minority enrollment, poverty enrollment).
• Candidate academic measures: Both the program- and institution-level reports include nearly two dozen academic criteria, such as SAT writing subscore, Praxis II score, GRE composite score, and GPA. The number and average score are reported for admitted candidates, enrolled candidates, and completers by their degree level (undergraduate, postbaccalaureate, or graduate).
• Field and clinical experiences: Both the program- and institution-level reports provide the minimum and maximum number of clinical hours required, the average number of weeks required to teach full time as a student teacher, and the percentage of candidates who complete student teaching.
• Pre-service teacher survey results: Both the program- and institution-level reports provide the results of a completer survey. The survey was developed by the Ohio Board of Regents and a committee of representatives from Ohio institutions. The survey is given to all teacher candidates. Number of and response rate for respondents, question language, and the average number of institution responses are reported.
• Resident educator survey results: The survey was developed by the Ohio Board of Regents and a committee of representatives from Ohio institutions. The survey is given only to completers who participate in the state’s Resident Educator program.31 Number of and response rate for respondents, question language, and average number of institution responses are reported.
• Resident educator persistence: number of newly
hired teachers who entered the Resident Educator
program each year for the previous four years.
Number and percentage of Resident Educators
who are persisting to the next year.
• Excellence and innovation initiatives: narrative
descriptions of up to three initiatives that are
“geared to increase excellence and support
innovation in the preparation of Ohio educators.”
Each initiative includes the purpose, goal, number
of participants, strategy, a demonstration of
impact, and information about any external
recognition.
• National accreditation: accrediting agency, date
of last accreditation, and accreditation status.
What n-size does the state use in tracking and reporting outcomes?
The minimum n-size for reporting outcomes is 10.

Does the state use outcomes data to differentiate providers by performance?
The state does not use outcomes data to differentiate programs or institutions by performance level. The state notes when any program’s or institution’s performance on a measure is below the normal distribution, but that information is not shared publicly; it is available only to preparation programs.

Does the state use outcomes data to make consequential decisions, such as whether to approve programs?
Per Ohio legislation, the state can make consequential decisions only on the basis of licensure pass rates. Until the state sets thresholds for other metrics, the outcomes are purely informational.

How does the state make the information meaningful to the public?
Each program and institution report is available publicly on the state website. All materials are available in individual PDF reports that are not easily compared across institutions or programs. The state hopes to develop an online, interactive dashboard where stakeholders can look at data across a number of programs or sort institutions by specific variables.
Rhode Island
What completer outcomes does the state track?
Through its Educator Preparation Indices, Rhode Island publicly reports completer
outcomes in three categories: Educating Rhode Island, Entering the Profession in Rhode
Island, and Admission and Progression. The state also publishes a summary with provider
information in these three categories.
Educating Rhode Island
• Employed completers
• Completer employment details
• Educator effectiveness
• Educator-effectiveness details
Entering the Profession in Rhode Island
• Certified completers
• Certification details
Admission and Progression
• Professional test data
• GPA at admission
• GPA at completion
Does the state track those outcomes at
the program or at the institution level?
Rhode Island publicly reports outcomes at the
program, institution, and degree (undergraduate or
postgraduate) level. Different outcomes are collected
at different levels.
• Employed completers, employment details,
educator effectiveness, educator-effectiveness
details, certified completers, and certification
details are tracked at the institution level.
• Professional test data are collected at the
program level.
• GPA at admission and GPA at completion are
collected at the degree level.
How does the state measure each type of outcome?

Educating Rhode Island
• Employed completers: number of completers from the previous two years who were employed as regular or substitute teachers in Rhode Island public schools, and the total number of newly hired regular and substitute teachers in the state in the previous two years.
• Completer employment details: employment of completers by certification area and by school accountability level, district or local education agency name, and grade span (elementary, middle, or high school). Completers and newly hired educators from the previous two years are included. This report card denotes critical certification areas or areas where the Rhode Island Department of Education has historically issued emergency credentials at the request of LEAs struggling to fill open positions.
• Educator effectiveness: percentage of completers over the previous two years who were rated at each performance level of the state’s evaluation system. These data are compared with data for all Rhode Island program completers and all Rhode Island educators from the same time period.
• Educator-effectiveness details: percentage of completers over the previous two years who performed at each performance level on both components of the Rhode Island educator-effectiveness evaluation. The two components are Professional Practice and Personal Foundation, and Student Learning. On the Professional Practice component, completers can earn a score of 1 through 4, with 1 being the lowest and 4 the highest. On the Student Learning component, completers can earn Minimal Attainment, Partial Attainment, Full Attainment, or Exceptional Attainment. Student Learning is measured using the completer’s Student Learning Objectives. These data are compared with data for all Rhode Island program completers and all Rhode Island educators from the same time period.

Entering the Profession in Rhode Island
• Certified completers: total number of completers over the previous three years, and the total number of those completers who achieved Rhode Island certification.
• Certification details: demographics of certified completers over the previous two years compared with demographics of all certified completers in Rhode Island from the same time period. This measure also includes institution-level data for program completers by certificate area from the previous two years.
Admission and Progression
• Professional test data: number of completers who took and passed each content-area assessment. Also includes the average score of all institution completers in that content area. These data are compared with the statewide average score and the statewide pass rate. All data are for the previous two years.
• GPA at admission: minimum GPA required at admission and median GPA of accepted individuals, by undergraduate and postgraduate degree level, from the previous two years.
• GPA at completion: minimum GPA required for completion and median GPA of completers, by undergraduate and postgraduate degree level, from the previous two years.

What n-size does the state use in tracking and reporting outcomes?
The minimum n-size for tracking and reporting outcomes is 10.

Does the state use outcomes data to differentiate providers by performance?
Rhode Island does not differentiate programs or institutions by performance levels on the Educator Preparation Indices. The state does, however, plan to eventually include certain outcomes in its performance-review process.32
Completer performance data and placement data will, as of the 2016-17 academic year, be used to inform a program’s rating on Standards 4.1 (Evaluation Outcomes) and 4.2 (Employment Outcomes) in the Rhode Island performance-review process.
For Standard 4.1, program performance is determined by comparing the performance data of completers on the different elements of the educator evaluation (overall effectiveness rating, professional practice, student learning, professional responsibility scores) with the aggregate performance distribution for all recent completers on those same elements. A program’s rating is determined by its recent completers’ performance relative to other recent program completers in the state.
For Standard 4.2, the state will set cutoff points for aggregate placement rates but has not yet done so.

Does the state use outcomes data to make consequential decisions, such as whether to approve programs?
The state plans to use completer-evaluation data and placement data to inform the program rating on Standards 4.1 and 4.2 of the performance-review process. It is unclear how much weight these outcomes will have on a program’s or institution’s overall performance review. The state plans to use outcomes at different levels to inform the program-approval process.

How does the state make the information meaningful to the public?
The outcomes data are available on the state Department of Education’s website. The website encourages future educators to explore the information in the indices. Any stakeholder can easily search for data by institution, and an FAQ section accompanies each set of data points.
The content of the indices, however, is not as clear as it could be. For example, placement rate could be represented as a percentage as well as a discrete number, and the indices could pair percentage numbers with the stacked bar charts that depict educator-effectiveness outcomes.
Tennessee
What completer outcomes does the state track?
Since 2007, Tennessee legislation has required the state to track and publicly report data for
four outcomes as part of the state’s Teacher Preparation Report Cards. The outcomes are:
• Placement rate
• Retention rate
• Assessment average score and pass rate
• Teacher effect data
In October 2014, Tennessee’s State Board of Education passed Educator Preparation Policy
5.504, which changed the way the state approves educator-preparation providers. As part
of that policy, the state will collect information from all educator-preparation programs for
an annual report that is separate from, but includes much of the same data as, the Teacher
Preparation Report Cards.33 The metrics required for the annual report are:
• Recruitment and selection
• Placement
• Retention
• Completer satisfaction
• Employer satisfaction
• Completer outcomes
• Completer impact
Does the state track those outcomes at
the program or at the provider34 level?
Tennessee publicly reports outcomes at the provider
level as part of the Teacher Preparation Report Cards.
The annual reports will attempt to track outcomes at the program level, but the state expects to encounter challenges in meeting the minimum n-size and may have to aggregate programs by type for review.
How does the state measure each type of
outcome?
Report Card Outcomes
• Placement rate: percentage of completers from
the previous four years who teach in a Tennessee
public school and who started their job within
one or two years of program completion.
• Retention rate: percentage of completers from the previous four years who teach in a Tennessee public school and have been at their job for three consecutive years. This measure also includes the percentage of completers who have been teaching in a Tennessee public school for three out of the previous four years. (A short code sketch after this list illustrates the placement- and retention-rate arithmetic.)
• Assessment average score and pass rate:
candidates’ average scores on Praxis II
core reading, core math, and core writing
assessments, as well as on the Praxis II
Principles of Learning and Teaching assessment.
Also includes the overall pass rate for these
assessments.
• Teacher effect data: value-added data for
all completers with one to three years of
experience (also known as beginning teachers),
according to the Tennessee Value-Added
Assessment System, as compared with all
teachers statewide and other beginning
teachers during the same time period. For the
2014 report card, teacher effect data are based
on one-year t-value estimates of teacher effects
for the 2013-14 school year. The performance
of the institution’s beginning teachers is noted
as either positive or negative, as compared
with the other groups for each subject. The
results are provided for apprentice- and
transitional-license teachers separately and
together. The state also provides a statewide
distribution of apprentice and transitional
teachers (again separately and together). The
statewide distribution denotes the percentage
of beginning teachers who are in the top and
bottom performance quintiles statewide and
whether there is a statistically significant
difference between those teachers and other
teachers statewide.
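To make the placement- and retention-rate definitions above concrete, here is a minimal Python sketch of the arithmetic. It is a hypothetical illustration, not Tennessee’s reporting code: the record layout and helper names are invented, and a real system would add n-size suppression and data cleaning.

    from dataclasses import dataclass, field

    @dataclass
    class Completer:
        completion_year: int
        years_teaching: set = field(default_factory=set)  # years taught in a TN public school

    def cohort(completers, report_year):
        """Completers from the previous four years."""
        return [c for c in completers
                if report_year - 4 <= c.completion_year < report_year]

    def placement_rate(completers, report_year):
        """Share who began teaching within one or two years of completion."""
        group = cohort(completers, report_year)
        placed = [c for c in group
                  if any(y - c.completion_year in (1, 2) for y in c.years_teaching)]
        return 100 * len(placed) / len(group) if group else None

    def retention_rate(completers, report_year):
        """Share who have taught for three consecutive years."""
        group = cohort(completers, report_year)
        retained = [c for c in group
                    if any({y, y + 1, y + 2} <= c.years_teaching
                           for y in c.years_teaching)]
        return 100 * len(retained) / len(group) if group else None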
Annual Report Outcomes35
• Recruitment and selection: performance against
identified recruitment goals.
• Placement: number and percentage of candidates
placed in Tennessee public schools in the three
years immediately following program completion.
• Retention: number and percentage of placed
completers who remain working in Tennessee
public schools in the third and fifth years following
placement.
• Completer satisfaction: results from a completer
satisfaction survey, delivered within 12 months
of program completion and again after the third
year of teaching. The Tennessee Department of
Education administers the survey to program
completers.
• Employer satisfaction: results from an employer
satisfaction survey. All primary partner LEAs and
LEAs employing more than 25 percent of the
completer cohort will be surveyed. The Tennessee
Department of Education will administer the
survey to employers.
• Completer outcomes: includes outcomes on
components such as graduation rate, first-time
pass rate on required content assessments,
and the ability of completers to meet licensing
requirements.
• Completer impact: completer performance as
measured by evaluation results, including overall
evaluation scores, observation scores, and
student-growth scores.
The state is determining the number of completer
cohorts to include in this indicator.
What n-size does the state use in
tracking and reporting outcomes?
For the state Teacher Preparation Report Cards, the
minimum n-size for tracking and reporting placement
rate, retention rate, and assessment pass rate is five.
The minimum n-size for tracking and reporting teacher
effect outcomes data is 10. The state is determining
the minimum n-size for annual reports.
Does the state use outcomes data to differentiate providers by performance?
Tennessee does not differentiate providers by performance levels on the basis of outcomes data, but the Teacher Preparation Report Cards provide some comparative information. The value-added data for each provider’s completers are compared with the data for completers of other programs in the state and with teachers statewide.

Does the state use outcomes data to make consequential decisions, such as whether to approve programs?
Tennessee does not use outcomes data to make consequential decisions, including provider- and program-approval decisions. According to Educator Preparation Policy 5.504, however, the state will begin using the data from annual reports in the approval process in the 2017-18 academic year.36

How does the state use transparency to shape the teacher pipeline?
The Teacher Preparation Report Cards from 2010 through 2014 are publicly available on the Tennessee Higher Education Commission website. The state has made each iteration of the Report Card more user friendly than the last. The 2014 version, for example, is the first to provide institution reports, the state profile, and the executive summary in separate documents. The 2014 institution reports also provide teacher effect data as the percentage of completers in the highest and lowest performance quintiles and statistical significance, rather than as estimates of the completers’ average value-added scores, as previous versions had done. (The 2014 estimates are available in a technical appendix.) The state hopes to eventually create a public, interactive system.
Annual reports will be produced only for program use.
Endnotes
1 See figure 6 in this report: http://cpre.org/sites/default/files/workingpapers/1506_7trendsapril2014.pdf.
2 “Nation’s Schools Facing Largest Teacher Retirement Wave in History,” National Commission on Teaching & America’s Future, last modified June 24, 2011, http://nctaf.org/announcements/nations-schools-facing-largest-teacher-retirement-wave-in-history/.
3 For a review of the evolution of teacher quality reforms, see http://bellwethereducation.org/sites/default/files/JOYCE_Teacher%20Effectiveness_web.pdf.
4 “Department of Education; 34 CFR Chapter VI; Negotiated Rulemaking Committee, Negotiator Nominations and Schedule of Committee Meetings—Teacher Preparation and TEACH Grant Programs,” 76 Federal Register 207 (26 October 2011), pp. 66248-66249, http://www.gpo.gov/fdsys/pkg/FR-2011-10-26/pdf/2011-27719.pdf.
5 For each state, we conducted extensive desk research and interviewed at least one person at the state education agency who works in educator preparation. Most interviewees were heavily involved with the preparation-program approval process. We synthesized our research into a template and then sent the state profile back to the interviewees to review for accuracy. Each state profile has been reviewed by at least one person in the state education agency. The information provided here is accurate, to our knowledge, at the time of publishing. These states are continually making progress on these efforts, however; each state will have the most up-to-date information.
6 Dr. Matthew Kraft, interview with the authors, July 10, 2015.
7 These are the latest numbers, as of November 5, 2015, from https://title2.ed.gov.
8 Colo. Rev. Stat. § 22-2-112(q)(III) (2013).
9 Del. Code tit. 14, § 12-1 (2014); http://legis.delaware.gov/LIS/lis147.nsf/vwLegislation/SB+51/$file/legis.html?open.
10 Delaware Department of Education, 2015 Delaware Educator Preparation Program Reports, accessed November 10, 2015, http://www.doe.k12.de.us//cms/lib09/DE01922744/Centricity/Domain/398/ED_PREP_SUMMARY_TABLE_PORT.pdf.
11 Delaware Department of Education, DPAS-II Guide (Revised) for Teachers, last modified August 2015, http://www.doe.k12.de.us/cms/lib09/DE01922744/Centricity/Domain/375/DPAS_II_Guide_for_Teachers_2015-16.pdf.
12 Del. Code tit. 14, § 290-7 (2014), http://regulations.delaware.gov/AdminCode/title14/200/290.pdf.
13 § 1004.04(3)(e), Fla. Stat. (2015).
14 Fla. Admin. Code R. 6A-5.066 (2015).
15 § 1004.04(3)(e), Fla. Stat. (2015).
16 “Teacher Assessment on Performance Standards Reference Sheet Performance Standards and Sample Performance Indicators,” Georgia Department of Education, last modified July 16, 2012, https://www.gadoe.org/Curriculum-Instruction-and-Assessment/Special-Education-Services/Documents/IDEAS 2013 Handouts 2/TAPS Standards Indicators.pdf.
17 Georgia Department of Education, Teacher Keys Effectiveness System: Implementation Handbook, last modified July 1, 2015, https://www.gadoe.org/School-Improvement/Teacher-and-Leader-Effectiveness/Documents/TKES%20Handbook%20-713.pdf.
18 Representative from Georgia Professional Standards Commission, interview with the author, August 12, 2015.
19 Developed by Michael Allen, Edward Crowe, and Charles Coble, from Teacher Preparation Analytics: http://www.suny.edu/media/suny/content-assets/documents/teachny/2-6-15/TPA-Report-Evidence-Based-System-TeacherPrep.pdf.
20 A complete description of the definitions and intended format of the 2020 Effectiveness Indicators is available at http://www.regents.la.gov/assets/docs/2014/11/Allen-Coble-Crow-KEI-Final-Version-FINAL-6-10.pdf.
21 603 CMR 7.03.
22 “About the Test,” Massachusetts Tests for Educator Licensure, accessed November 1, 2015, http://www.mtel.nesinc.com/about.asp.
23 Professional teacher status is awarded to teachers who are rated proficient or exemplary on all four Standards of Effective Teaching Practice and for their summative performance rating during their most recent evaluation. For more information see http://www.doe.mass.edu/edeval/resources/implementation/RatingEdPerformance.pdf.
24 Massachusetts Department of Elementary and Secondary Education, Guidelines for Program Approval, last modified November 2015, http://www.doe.mass.edu/edprep/ProgramApproval.pdf. A guide to sources of evidence is available in appendix B; Massachusetts Department of Elementary and Secondary Education, Review Evaluation Tool, accessed October 1, 2015, http://www.doe.mass.edu/edprep/evaltool/Overview.pdf.
25 New Jersey Department of Education, 2015 Educator Preparation Provider Annual Reports, last modified August 2015, http://www.state.nj.us/education/educators/rpr/preparation/providers/2015/overview.pdf.
26 “Priority and Focus Schools,” State of New Jersey Department of Education, accessed October 1, 2015, http://www.state.nj.us/education/rac/schools/.
27 NJ Department of Education, District Factor Groups: Executive Summary, accessed August 2, 2015, http://www.state.nj.us/education/finance/rda/dfg.pdf.
28 N.C. Gen. Stat. § 115C-296(b) (2014).
29 “North Carolina Institution of Higher Education Educator Preparation Program Report Cards,” Public Schools of North Carolina, August 1, 2015, http://apps.schools.nc.gov/pls/apex/f?p=141:5:1457486023468901.
30 “IHE Educator Preparation Program Performance Reports,” Public Schools of North Carolina, August 1, 2015, http://www.ncpublicschools.org/ihe/reports/.
31 “Resident Educator Program,” Ohio Department of Education, July 7, 2015, http://education.ohio.gov/Topics/Teaching/Resident-Educator-Program.
32 “PREP-RI: Performance Review for Educator Preparation-Rhode Island,” Rhode Island Department of Education, July 10, 2015, http://www.ride.ri.gov/TeachersAdministrators/EducatorCertification/PerformanceReviewforEducatorPreparation-RI.aspx#32031096-providers-planning-for-a-visit.
33 Tennessee State Board of Education, Tennessee Educator Preparation Policy, last modified October 31, 2014, https://www.tn.gov/assets/entities/sbe/attachments/5-504_EducatorPreparationPolicy_10-31-14.pdf. See page 11 for a complete list of the metrics that will be included in the annual reports.
34 Tennessee distinguishes between provider and program, rather than using the term “institution.” A provider is an organization like Vanderbilt University, the University of Tennessee, or Teach for America, while a program is an area of preparation (e.g., elementary education, middle-grades math).
35 These outcomes come directly from Tennessee’s educator-preparation policy. The state has convened an implementation working group to develop and refine metrics related to each of these topics. For this reason, these outcomes may change.
36 Tennessee State Board of Education, Tennessee Educator Preparation Policy. See appendix F for a complete timeline.
Acknowledgments
The authors would like to thank the Joyce Foundation for providing funding for
this paper. The authors would also like to thank the state contacts who shared their
time and expertise with us. Any inaccuracies belong to the authors alone.
About the Authors
Ashley LiBetti Mitchel
Ashley LiBetti Mitchel is a senior analyst at Bellwether Education Partners. She can
be reached at [email protected]
Chad Aldeman
Chad Aldeman is an associate partner at Bellwether Education Partners. He can
be reached at [email protected]
About Bellwether Education Partners
Bellwether Education Partners is a nonprofit dedicated to helping education
organizations in the public, private, and nonprofit sectors become more effective
in their work and achieve dramatic results, especially for high-need students. To do
so, we provide a unique combination of exceptional thinking, talent, and hands-on
strategic support.
© 2016 Bellwether Education Partners
This report carries a Creative Commons license, which permits noncommercial re-use of content when
proper attribution is provided. This means you are free to copy, display and distribute this work, or include
content from this report in derivative works, under the following conditions:
Attribution. You must clearly attribute the work to Bellwether Education Partners, and provide a link back
to the publication at http://bellwethereducation.org/.
Noncommercial. You may not use this work for commercial purposes without explicit prior permission
from Bellwether Education Partners.
Share Alike. If you alter, transform, or build upon this work, you may distribute the resulting work only
under a license identical to this one.
For the full legal code of this Creative Commons license, please visit www.creativecommons.org. If you
have any questions about citing or reusing Bellwether Education Partners content, please contact us.