Standard 2.2: Moving Toward Target

2.2a. Assessment System

The Kremen Learning Assessment System to Sustain Improvement (KLASSI) is our unit-wide assessment system that collects and analyzes data on applicant qualifications, candidate and graduate performance, and unit operations in order to evaluate and improve our unit and programs. Our initial preparation programs are at, or are moving toward, target in the following areas.

Through a variety of activities, our unit regularly evaluates, with the involvement of the professional community, the capacity and effectiveness of KLASSI, which incorporates candidate proficiencies as detailed in professional and state standards and reflects our conceptual framework. Data from assessments are shared at the periodic meetings of the Kremen School Professional Advisory Committee, and input is solicited from committee members about actions that might be taken to strengthen unit programs. Representatives from the school regularly attend Beginning Teacher Support and Assessment (BTSA) meetings, enabling us to collaborate with BTSA partners in our service area to improve our programs. Publications and presentations enable us to collaborate with the national professional community: five articles have been published and presentations made at national conferences to share results from our assessment system.

We use multiple assessments conducted at multiple points prior to program completion, and post-completion as practitioners in the field, to inform and drive decisions about candidate performance. Our primary system for assessing teaching performance prior to program completion in our elementary and secondary teacher preparation programs is the Fresno Assessment of Student Teachers (FAST), the only Teacher Performance Assessment locally designed and approved by the California Commission on Teacher Credentialing (CTC). The FAST, which consists of four separate tasks, is embedded across courses, allowing for the logical integration of both formative and summative evaluations of candidate mastery of the Teacher Performance Expectations (TPEs) at strategic points within the program. The FAST measures each TPE twice, using a different format in a different teaching context each time. Three of the four tasks have an accompanying rubric that generates a discrete score for each TPE evaluated by a given task. The Teaching Sample Project is scored by sections aligned with identified TPEs. Scores range from one to four, and a score of four (exceeds expectations) has been informally described by our BTSA partners as representing the level of performance expected following the induction period.
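To illustrate how rubric scores of this kind can be organized for analysis, the brief Python sketch below models one candidate's FAST record as a simple data structure and confirms that each TPE is measured by two tasks. The task labels, task-to-TPE mapping, and scores shown are hypothetical placeholders for illustration only, not the official FAST blueprint.

# Illustrative sketch only: models one candidate's FAST rubric scores and
# verifies that each Teaching Performance Expectation (TPE) is measured by
# exactly two tasks, as the assessment design intends. The mapping and the
# scores below are hypothetical placeholders, not the official FAST blueprint.
from collections import Counter

# Hypothetical record: task name -> {TPE id: score on the 1-4 rubric scale}
candidate_scores = {
    "task_1": {"TPE 1": 3, "TPE 4": 2, "TPE 9": 3},
    "task_2": {"TPE 1": 4, "TPE 6": 3, "TPE 11": 3},
    "task_3": {"TPE 4": 3, "TPE 6": 2, "TPE 13": 3},
    "teaching_sample_project": {"TPE 9": 3, "TPE 11": 4, "TPE 13": 3},
}

# Count how many tasks evaluate each TPE; the design calls for exactly two.
tpe_counts = Counter(tpe for scores in candidate_scores.values() for tpe in scores)
for tpe, count in sorted(tpe_counts.items()):
    status = "ok" if count == 2 else "check blueprint"
    print(f"{tpe}: measured {count} time(s) -> {status}")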

In working toward a culture of evidence relative to teacher preparation, all elementary and secondary teacher preparation programs on the 23 California State University (CSU) campuses have, since 1999, participated in surveys of program graduates at the end of their first year of professional teaching and of each graduate’s employment supervisor during that year. The Systemwide Evaluation of Professional Teacher Preparation Programs compiles evidence about the extent to which graduates are prepared for their most important teaching responsibilities and the extent to which CSU professional coursework and fieldwork were professionally valuable and helpful to them during their initial year of K-12 teaching (CTQ, 2009).

Each campus receives an annual report from the CSU Center for Teacher Quality (CTQ) with survey results pertinent to the previous year’s graduates and their supervisors. The report also includes, for comparison purposes, a summary of all data collected since the inception of the surveys along with parallel system-wide results. This unique service allows us to track the effects of program changes designed to improve performance. The dean presents the data to program faculty, who discuss the implications of the results and consider how the data inform decisions to improve the program. Examples of changes to programs based on survey results include a stronger emphasis on teaching English Learners and students with special needs and a revision of the Single Subject (secondary) program.

The unit regularly examines the validity and utility of the data generated through our assessments, and we make modifications to keep them current, relevant, and aligned with changes in assessment technology and professional standards. We understand that the usefulness of our performance assessment for licensure and program improvement depends on the degree to which the assessments are valid and the scoring is reliable. Evaluating the validity of FAST, its ability to accurately and fairly measure the teaching skills of teacher candidates, is critical. However, scores can be no more valid than they are reliable; reliability coefficients represent a ceiling to validity measures (Huck, 2008). An in-depth study was conducted regarding the reliability of our performance assessments. Measured against any published standard for performance assessments that we identified, the level of inter-rater reliability achieved was higher than the norm. In addition to reliability, FAST was examined for validity based on Frederiksen and Collins (1989), who proposed examining “directness, scope, and transparency” (p. 30) as criteria for the validity of a performance assessment, rather than construct or criterion-related validity. FAST content explicitly represents the 13 TPEs identified by the CTC and addresses the entirety of the California TPEs, which were established by policy makers, teachers, teacher educators, and administrators based on a statewide job analysis (Pecheone & Chung, 2006). The rubrics and scoring criteria for each of the tasks and all TPEs are provided to teacher candidates and repeatedly reviewed throughout the course of the program.

Tasks are subject to an in-depth review and analysis on a rotating basis: one task is reviewed each semester, so every task is evaluated within each two-year cycle. A minimum of 15% of responses to each task are double-scored to determine inter-rater reliability, and these data are used to evaluate scorer training and calibration. Data generated by tasks under review are analyzed by gender, ethnicity, and self-reported English language proficiency. This systematic examination and periodic review helps assure that FAST maintains its high level of reliability and continues to yield useful information.
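The agreement statistics produced by double-scoring can be computed directly from the paired ratings. The following minimal Python sketch, using hypothetical paired scores on the one-to-four rubric scale, reports exact and adjacent (within one point) agreement between the two raters; it is offered as an illustration of the kind of check involved, not as the unit’s actual analysis code.

# Minimal sketch of an inter-rater agreement check on double-scored task
# responses. The paired (rater 1, rater 2) scores below are hypothetical;
# an actual analysis would read the double-scored sample for the task under review.
pairs = [(3, 3), (2, 3), (4, 4), (3, 3), (1, 2), (3, 4), (2, 2), (4, 3)]

exact = sum(1 for a, b in pairs if a == b) / len(pairs)              # identical scores
adjacent = sum(1 for a, b in pairs if abs(a - b) <= 1) / len(pairs)  # within one rubric point

print(f"Exact agreement:    {exact:.0%}")
print(f"Adjacent agreement: {adjacent:.0%}")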

The validity of our primary post-program-completion assessment derives from the alignment between the evaluation questions and (1) California standards for grades K-12 in all curriculum areas, (2) California Standards for Accreditation of Professional Teacher Preparation, (3) California Teaching Performance Expectations, (4) California Standards for the Teaching Profession, and (5) standards adopted for institutional accreditation by the National Council for Accreditation of Teacher Education (CTQ, 2009). Each year the data set yields the percent of respondents who gave specified answers to each item and includes reliability estimates in the form of confidence intervals based on the number of respondents and the homogeneity of their responses. Substantively related evaluation questions are grouped into composites. The confidence intervals for the composite scores for the system and for individual campuses generally range from 0 to 2 percentage points at the 90% confidence level.
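One common way such interval estimates are obtained is the standard confidence interval for a proportion, whose width shrinks as the number of respondents grows and as responses become more uniform. The short Python sketch below illustrates how margins of error on the order of one to two percentage points arise under those conditions; the response percentages and respondent counts are hypothetical and are not drawn from the CTQ data set.

# Sketch of a 90% margin of error for a survey composite expressed as the
# percentage of respondents giving a specified answer. Inputs are hypothetical.
import math

def margin_of_error(p: float, n: int, z: float = 1.645) -> float:
    """Half-width of a z-based 90% confidence interval for a proportion p with n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Larger samples and more homogeneous responses (p near 0 or 1) both narrow the interval.
for p, n in [(0.85, 4000), (0.90, 700), (0.90, 150)]:
    print(f"p = {p:.0%}, n = {n:4d}: +/- {margin_of_error(p, n):.1%}")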

In order to establish fairness, accuracy, and consistency of our assessment procedures and unit operations, our unit conducts and publishes thorough studies, and we make changes in our practice consistent with the results of these studies.

The CTC Assessment Design Standards (CTC, 2006) required TPAs to be valid, fair, and at least as rigorous as the state passing standards. At its June 5, 2008 meeting, the Commission approved FAST as an alternative TPA model. Faculty in teacher education or single subject content areas, master teachers, student teaching supervisors, and local BTSA support providers score all FAST projects. Each assessor is trained, periodically tested, and must meet calibration standards annually in order to score candidate performances.

Basic analyses were completed to identify differential effects in relation to candidates’ ethnic group or gender. No differences were great enough to affect candidates’ overall passing scores on any task. Where differences do appear, faculty conduct ongoing examination and review to ensure that they do not stem from insensitivity to the lower-scoring group.
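A basic differential-effects check of the kind described above can be as simple as disaggregating passing rates by a demographic variable and examining the gap between groups. The Python sketch below does this for a small set of hypothetical pass/fail records; any notable gap would then be referred to faculty for the review described above.

# Sketch of a basic differential-effects check: disaggregate passing rates on a
# task by a demographic variable and report the gap between groups.
# The records below are hypothetical, not actual candidate data.
from collections import defaultdict

records = [
    {"group": "A", "passed": True},  {"group": "A", "passed": True},
    {"group": "A", "passed": False}, {"group": "A", "passed": False},
    {"group": "B", "passed": True},  {"group": "B", "passed": True},
    {"group": "B", "passed": True},  {"group": "B", "passed": False},
]

totals, passes = defaultdict(int), defaultdict(int)
for r in records:
    totals[r["group"]] += 1
    passes[r["group"]] += int(r["passed"])

rates = {g: passes[g] / totals[g] for g in totals}
for g in sorted(rates):
    print(f"Group {g}: {rates[g]:.0%} passing (n = {totals[g]})")
print(f"Gap between highest and lowest group: {max(rates.values()) - min(rates.values()):.0%}")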

2.2b. Data Collection, Analysis, and Evaluation

KLASSI provides regular and comprehensive data on candidate performance at each stage of our programs, extending into a completer’s first year of practice, as well as on program quality and unit operations. Data based on multiple assessments from both internal and external sources are systematically collected. Assessment data come from our candidates, graduates, faculty, and completers’ employers, as well as other members of our professional community. The data are disaggregated by program and regularly and systematically compiled, aggregated, summarized, analyzed, and reported publicly for the primary purpose of improving candidate performance, program quality, and our unit operations.

All credential programs are required to submit a Biennial Report to the CTC. The purpose of the biennial report is for every credential preparation program to demonstrate to the CTC how it uses candidate, completer, and program data to guide ongoing program improvement activities. In addition, the biennial reports help move accreditation away from prior years’ “snapshot” approach to a process in which accreditation is part of a continual evaluation system. The biennial report process recognizes that effective practice means program personnel are engaged constantly in the process of evaluation and program improvement.

Data from the surveys of program graduates and their employers (Elementary/Secondary Teacher Preparation) are presented by the Dean to faculty of each program and to the advisory board. Data reported in the Biennial Reports are shared with program faculty annually.

Advising: The Education Student Services Center annually surveys candidates regarding the effectiveness of advising. Results are compiled periodically, reviewed by the advising staff, and discussed at KSOEHD Executive Committee meetings.

Candidate Complaints: Our unit has a system for effectively maintaining records of formal candidate complaints and documentation of their resolution. Candidate complaints within the Kremen School are handled through the Associate Dean’s office. Students, faculty, and/or administrators complete a form that documents the complaint and its resolution. The Associate Dean stores this evidence and reviews it for any unit needs.

University Grievance Procedures: The University has in place well-defined policies for student rights, grade protest, and review processes for student petitions. The process for grade protest is outlined in the Academic Policy Manual, the Faculty Handbook, the General Catalog, the Schedule of Courses, and on a handout from the Office of the Dean of Student Affairs. The policy detailing the student academic petition process is available in the General Catalog and on a handout from the Office of the Vice President for Student Affairs. The Dean of Student Affairs and the Student Grievance Board handle all formal grievances with the exception of matters related to grading.

Career Services/Placement:
The unit has a very effective career services/placement professional who conducts workshops on interviewing and resume writing, informs students of position openings, and organizes an annual job fair. An annual report contains data on the activities of Career Services, which provide input on the hiring status of candidates and graduates.

2.2c. Use of Data for Program Improvement

The unit is continually searching for stronger relationships in our evaluations and making revisions as necessary. We not only make changes based on data, but also systematically study the effects of those changes to assure that our programs are strengthened without adverse consequences. Our candidates and faculty regularly review their performance and develop plans for improvement based on data. For example, data collected from FAST tasks and graduate/employer surveys indicated that candidate preparation for teaching English Learners was only minimally acceptable. Program improvement efforts were implemented through changes to course syllabi, which resulted in an increase in candidates’ mean scores on the TPE that measures teaching English Learners.

The CTC requires each credential program to submit a Biennial Report that includes sections on Candidate Competence and Program Effectiveness Data; Analyses and Discussion of Candidate and Program Data; and Use of Assessment Results to Improve Candidate and Program Performance. The most recent report for the Multiple Subject (elementary) program included a focus on teaching English Learners, based on data from FAST and from the Chancellor’s survey. A program-specific plan to address this area included increased emphasis on English Learner teaching strategies.

An Evaluation and Needs Assessment Survey administered by the Special Education faculty indicated the need to improve candidates’ ability to design and implement positive behavioral support plans and interventions based on observation and assessment data. The program continues to require candidates to design and implement a positive behavior support plan, and it will develop ABA plans, including Single Subject Design, in course assignments and during the Final Practicum.

Because of a changing, less experienced candidate population, the Early Childhood Specialist program was modified to provide more skill-specific direct instruction and fieldwork opportunities for leadership and advocacy activities by initiating a new ECE leadership class (LEE240: Leadership in ECE) and by encouraging enrollment in an elective class that emphasizes these skills.

The Education Administration program has developed a data collection and analysis process to use candidate signature/fieldwork work products as a data set for continuous course and program improvement. A sampling of work products from every instructor for every EAD course has been uploaded to Blackboard. Data sets will be used to inform conversations and next steps in Professional Learning Community course-alike meetings.

Sustaining target-level performance requires an ongoing, systematic effort. Biennial Reports and SOAPs ensure that data from multiple sources for each program are analyzed and reported, that program faculty make informed decisions regarding changes based on these data, and that the impact of changes on candidate performance is reviewed.

