Category C: Assessment of Candidate Competence

Standard 10: Planning, Organizing, Providing and Leading Literacy Instruction

Standard 10A: Instruction

Candidates work effectively with children from a variety of ethnic, cultural, gender, linguistic, and socio-economic backgrounds and provide specialized instruction that meets the specific needs of PreK, elementary, and adolescent learners, as well as students with reading difficulties. Candidates employ an advanced level of proficiency in the practice and adaptation of instructional routines and strategies, especially for students with extreme reading difficulty.
Candidates select, plan and implement culturally responsive curriculum based on assessed needs and literacy experiences of students in a target population.
Candidates demonstrate the ability to facilitate the implementation of the state- and/or district-adopted literacy curricula at classroom, school, and/or district levels.

The program provides candidates multiple opportunities to plan and implement lessons with students from various age, grade, and demographic groups. The program uses specific measures to ensure that candidates are competent in planning and teaching lessons with students from these diverse backgrounds. Specific course assignments and clinical experiences require candidates to plan and deliver lessons to students from culturally and linguistically diverse backgrounds. Candidates’ competence is measured through the Teaching Strategies journal and the Clinical Experience Matrix (See LEE 215: Language Issues in Reading, p. 4; LEE 230: Supervised Teaching of Reading/Language Arts, p. 14).

The program uses multiple measures through which candidates demonstrate competence in selecting, planning, implementing, and adapting instructional strategies for culturally and linguistically diverse students and students with extreme reading difficulties. In LEE 224: Assessment & Development of Reading Abilities, candidates complete two assessment projects. These projects are evaluated using rubrics to determine candidates’ competence in selecting appropriate literacy assessments for different students across PK-Adult ranges. The rubrics for these projects also evaluate candidates’ competence in administration and interpretation of results (See LEE 224: Assessment & Development of Reading Abilities, pp. 8-9). The Case Study Report is used as a summative measure of candidates’ competence in summarizing assessment results, using assessment results to guide instruction, and reporting the results in ways that are meaningful to parents, classroom teachers, and administrators. The candidates prepare a case study report, which includes the assessment tools and results, an analysis of the results, and instructional recommendations, for parents, teachers, and administrators. Reports are evaluated and scored using a rubric as “exceeds expectations” (90-100), “meets basic expectations” (80-89), or “needs improvement” (below 80) based on the ability to administer, score, and analyze assessment tools and to use assessment results and literacy research to guide the design of differentiated instruction for struggling readers (See Student Outcomes Assessment Plan, Appendix 2).

In addition, clinical field experience courses include multiple measures of candidates’ comprehensive understanding of the assessment/instruction cycle. Candidates are required to complete supervised clinical field experiences diagnosing and tutoring K-Adult students who demonstrate reading achievement below expected performance for their respective age levels. The Clinical Experience Matrix is used during on-site observations and analysis of tutoring materials, lessons, and case reports. The matrix is designed to document candidates’ competence in selecting and administering appropriate assessment measures, analyzing results, and using the results to guide instruction to accelerate student learning. The final case study summary reports resulting from the clinical field experiences are summative measures that evaluate candidates’ competence in determining appropriate intervention placements and reporting the results in ways that are meaningful to parents, teachers, and administrators (See LEE 230: Supervised Teaching of Reading/Language Arts, p. 4; LEE 234: Clinical Experiences in Reading Assessment & Instruction, p. 4).

Candidates’ competence in facilitating the implementation of state-adopted curricula is measured through classroom-based peer mentoring/coaching field experiences. In LEE 254: Supervised Field Experiences in Reading, candidates collaborate with a colleague in three peer-coaching cycles, each consisting of pre-consultation, observation/modeling, and debriefing consultation. The candidates prepare presentations for two of the cycles. Presentations include lessons learned about the coaching process, critical reflective insights about professional growth, and plans for future goals. Presentations are evaluated and scored using a rubric as “excellent” (31-50), “fair” (11-30), or “poor” (≤ 10) based on the ability to critically analyze coaching experiences and to reflectively assess professional growth (See Student Outcomes Assessment Plan, Appendix 6).

Candidates have an advanced level of knowledge about, and can advocate for resources to support, students’ acquisition of the critical aspects of the multiple digital literacies and 21st Century skills necessary for success in today’s global economy.

The program includes multiple measures of candidates’ knowledge of digital literacies, competence in facilitating student and teacher use of such 21st Century skills, and ability to advocate for resources to support students’ acquisition of these skills. The Theory to Practice Project is one measure of candidates’ competence in instructional application of multiple digital literacies. The written report and subsequent presentation linked with this project are evaluated on a rubric to determine candidates’ competence in applying research on effective practices within instructional contexts (See Student Outcomes Assessment Plan, Appendix 1).  The Clinical Experience Matrix is a second measure of candidates’ application of effective digital literacy instruction. Candidates are required to include multiple digital literacies as components of the tutoring instructional lessons. The Matrix is used to determine candidates’ competence in using technology to facilitate student learning and to support students’ acquisition of digital literacy skills (See LEE 230: Supervised Teaching of Reading/Language Arts, p. 14; LEE 234: Clinical Experiences in Reading Assessment & Instruction, p. 13).

Candidates’ ability to advocate for digital literacy resources is measured through rubrics evaluating the Case Study Report and the Program Evaluation Report. In LEE 224: Assessment & Development of Reading Abilities, candidates are required to include in their case study reports recommendations for utilizing technology as an instructional tool and/or as a strategy to facilitate underlying literacy processes (See LEE 224: Assessment & Development of Reading Abilities, p. 11). Candidates are also required to analyze technology resources and their instructional uses, and then use research to support suggested recommendations for improvements as a key component of their Literacy Program Evaluation Reports (See Rubric LEE 254: Supervised Field Experiences in Reading, p. 9).
 

Standard 10: Planning, Organizing, Providing and Leading Literacy Instruction

Standard 10B: Assessment/Research

Candidates critically analyze and interpret research; identify appropriate research design and methodology; and recognize research that is current, confirmed, reliable and replicable.

In LEE 244: Research for Reading Professionals, candidates review research from the emergent reading, comprehension, and English Learner fields of literacy and construct a Literature Review in wiki-page and/or proposal format. In the reviews, candidates summarize and critically analyze the research design, methods, and conclusions. Literature Reviews are evaluated and scored using a rubric as “craftsman” (87-100), “good” (74-86), or “satisfactory” (below 74) based on the ability to summarize and synthesize research studies (See Student Outcomes Assessment Plan, Appendix 3).

Candidates select, administer, analyze, summarize and communicate results of diagnostic literacy assessments and provide appropriate intervention, including strategic and intensive, with beginning readers and students who have reading difficulties, and can reflect upon, monitor and adjust instruction over an extended period of time.

The program uses multiple measures through which candidates demonstrate competence in selecting and administering assessments, and analyzing and reporting assessment results. In LEE 224: Assessment & Development of Reading Abilities, candidates complete two assessment projects. These projects are evaluated using rubrics to determine candidates’ competence in selecting appropriate literacy assessments for different students across PK-Adult ranges. The rubrics for these projects also evaluate candidates’ competence in administration and interpretation of results (See LEE 224: Assessment & Development of Reading Abilities, pp. 8-9). The Case Study Report is used as a summative measure of candidates’ competence in summarizing assessment results, using assessment results to guide instruction, and reporting the results in ways that are meaningful to parents, classroom teachers, and administrators. The candidates prepare a case study report, which includes the assessment tools and results, an analysis of the results, and instructional recommendations, for parents, teachers, and administrators. Reports are evaluated and scored using a rubric as “exceeds expectations” (90-100), “meets basic expectations” (80-89), or “needs improvement” (below 80) based on the ability to administer, score, and analyze assessment tools and to use assessment results and literacy research to guide the design of differentiated instruction for struggling readers (See Student Outcomes Assessment Plan, Appendix 2).

In addition, the program uses multiple measures through which candidates demonstrate competence in planning, implementing, and monitoring literacy instruction that is based on formal and informal assessments. In clinical field experiences, candidates analyze assessment results to develop case study reports, develop an intervention plan, and implement these intervention plans. The final case study summary reports resulting from the clinical field experience are summative measures that evaluate candidates’ competence in determining appropriate intervention placements and reporting the results in ways that are meaningful to parents, teachers, and administrators (See LEE 230: Supervised Teaching of Reading/Language Arts, p. 4; LEE 234: Clinical Experiences in Reading Assessment & Instruction, p. 4). The Clinical Experience Matrix is used during on-site observations and analysis of tutoring materials, lessons, and case reports. The matrix is designed to document candidates’ competence in selecting and administering appropriate assessment measures, analyzing results, and using the results to guide instruction to accelerate the literacy development of early readers, English Learners, and students with reading difficulties (See LEE 230: Supervised Teaching of Reading/Language Arts, p. 14; LEE 234: Clinical Experiences in Reading Assessment & Instruction, p. 13).

Candidates know and use theories and research related to adult learning as they inform professional development on literacy acquisition at the school or district level.

Candidates are required to apply research on adult learning theory to the design of professional development on literacy acquisition at the school level. Course assignments require candidates to conduct a multilevel (teachers and administrators) analysis of interventions for struggling readers and interventions for English Learners (See Course Schedule LEE 224: Assessment & Development of Reading Abilities, p. 6; LEE 215: Language Issues in Reading, p. 5). Based on the findings of these analyses, candidates provide administrators with recommended revisions of intervention components and professional development needs to enhance the effectiveness of the programs. These assignments are completed and analyzed as formative assessments within the individual courses.

The Literacy Program Evaluation Report is used as a summative measure of candidates’ capacity to apply adult learning theory to the design of professional development on literacy acquisition at the school level. Candidates complete a literacy program evaluation report, which involves an intensive, comprehensive examination of a school-wide and/or particular grade-level literacy program. Candidates collect multiple sources of qualitative and quantitative data to examine student achievement, intervention procedures, classroom instruction, and instructional resources. Candidates combine the findings of this analysis with the research on adult learning theory to provide specific recommendations to administrators on the content and structure of professional development activities to enhance the effectiveness of the programs. This component of the report is analyzed using a rubric to determine candidates’ competence in using evidence from the program evaluation data to identify professional development needs and in using adult learning theory research to support recommended professional development structures (See Student Outcomes Assessment Plan, Appendix 5).

Candidates can facilitate collaborative processes with teachers and administrators for designing, implementing, and evaluating action research projects, case studies, and/or state or federal programs.

Candidates’ competence in facilitating collaborative processes with teachers for designing and implementing state and/or federal programs is measured through classroom-based peer mentoring/coaching field experiences. In LEE 254: Supervised Field Experiences in Reading, candidates collaborate with a colleague in three peer-coaching cycles, each consisting of pre-consultation, observation/modeling, and debriefing consultation. The candidates prepare presentations for two of the cycles. Presentations include lessons learned about the coaching process, critical reflective insights about professional growth, and plans for future goals. Presentations are evaluated and scored using a rubric as “excellent” (31-50), “fair” (11-30), or “poor” (≤ 10) based on the ability to demonstrate effective collegial mentoring in literacy instruction (See Student Outcomes Assessment Plan, Appendix 6).

Standard 10: Planning, Organizing, Providing and Leading Literacy Instruction

Standard 10C: Professional Development and Leadership

Candidates demonstrate their capacity to identify areas of growth as a professional and to select resources and opportunities to stay current with the teaching profession and with the professional community of other specialists, including those at the community level (such as social agencies, after-school programs, etc.).

The program provides candidates multiple opportunities to demonstrate their capacity to identify personal areas of professional growth. The program requires three separate clinical supervised experiences: small-group tutoring; intensive individual intervention; and literacy coaching. In all three supervised experiences, candidates are required to reflect on their experiences and identify areas for continued growth. For example, discussion boards and lesson reflections are used as formative assessments to capture candidates’ reflective decision-making across time as well as identification of future goals (See LEE 254: Supervised Field Experiences in Reading, p. 3; LEE 230: Supervised Teaching of Reading/Language Arts, p. 4; LEE 234: Clinical Experiences in Reading Assessment & Instruction, p. 4). The Clinical Experience Matrix is also used to determine candidates’ competence in self-analysis and self-adjustment (See LEE 230: Supervised Teaching of Reading/Language Arts, p. 14; LEE 234: Clinical Experiences in Reading Assessment & Instruction, p. 13). In addition, candidates deliver presentations about their coaching experiences. Presentations include lessons learned about the coaching process, critical reflective insights about professional growth, and plans for future goals. Presentations are evaluated and scored using a rubric as “excellent” (31-50), “fair” (11-30), or “poor” (≤ 10) based on the ability to critically analyze coaching experiences and to reflectively assess professional growth (See Student Outcomes Assessment Plan, Appendix 6).

These measures provide a variety of evidence on candidates’ capacity to identify areas of growth and select appropriate resources to address these areas. In addition, candidates are required to attend the César Chávez Conference on Literacy and Educational Policy and the Dual Language Conference. These conferences contain multiple sessions on various areas of literacy research, practices, and policies. Candidates report on their sessions and experiences through collaborative discussion board assignments. Attendance at these sessions demonstrates candidates’ ability to stay current with the teaching profession and the broader community of social agencies, after school programs, and policymakers (See Chavez Conference 2013; Chavez Conference 2012; Dual Language Conference 2012).

Candidates demonstrate advanced professional competencies in reading and literacy development, curriculum, instruction, and assessment, including a deep, rich and interconnected understanding of Program Standards 2, 3, 7 and 8.
Candidates analyze instructional practices and evaluate student assessment data at grade, school, or district levels to plan and provide guidance, coaching, and/or professional development to strengthen appropriate practices as needed. Candidates also work collaboratively with students and their families, teachers, administrators, specialists, and other interested stakeholders to design, implement, and evaluate a comprehensive literacy plan or a specific component of that plan.

The program uses multiple measures through which candidates demonstrate competence in evaluating and strengthening the culture of literacy at a classroom, grade, or school level. The Theory to Practice project and the Teaching Strategies journal are two measures used to determine candidates’ competence in identifying classroom-level instructional practices that impede or support students’ literacy development. These major assignments require candidates to analyze personal classroom practices and reflect on how the practices align or conflict with current research on literacy development, with particular emphasis on first and second language acquisition. Candidates submit written reports and make presentations detailing their analysis and plan of action to strengthen the culture of literacy to better support student learning (See LEE 213: Teaching the Language Arts K-12, p. 3; LEE 215: Language Issues in Reading, p. 4).

The Teaching Strategies journal reflections are utilized as a formative assessment. The ongoing nature of the journals provides a measure of candidates’ continued growth in their ability to identify classroom factors that support the development and sustainability of a culture of literacy. The Theory to Practice project is utilized as a summative assessment. Projects are evaluated and scored using a rubric as “exceeds expectations” (90-100), “meets basic expectations” (80-89), or “needs improvement” (below 80) based on the ability to compare and contrast literacy theories and apply the theoretical perspectives in effectively designing literacy instruction that meets the needs of struggling readers and English Learners (See Student Outcomes Assessment Plan, Appendix 1).

Standard 10: Planning, Organizing, Providing and Leading Literacy Instruction

Standard 10D: Program Evaluation

Candidates critically examine the relevant research and recommendations of experts in the field and incorporate that information when generating and communicating to stakeholders the results of reliable and informative evaluations of current literacy practices, including program strengths and weaknesses and program effects on various aggregate student populations. Candidates utilize that information to develop a plan for improving literacy learning that includes communication about the planned changes to all interested stakeholders and a process for implementing and evaluating those changes.

The program provides multiple opportunities for candidates to demonstrate competence in sharing assessment results with various audiences, including teachers, parents, and administrators. Several opportunities are provided for candidates to develop competence in reporting assessment results for individual students. In LEE 224: Assessment & Development of Reading Abilities, candidates prepare an individual case study report based on an analysis of results across literacy domains. The Case Study Report is used as a summative measure of candidates’ competence in summarizing assessment results, using assessment results to guide instruction, and reporting the results in ways that are meaningful to parents, classroom teachers, and administrators. The candidates prepare a case study report, which includes the assessment tools and results, an analysis of the results, and instructional recommendations, for parents, teachers, and administrators. Reports are evaluated and scored using a rubric as “exceeds expectations” (90-100), “meets basic expectations” (80-89), or “needs improvement” (below 80) based on the ability to administer, score, and analyze assessment tools and to use assessment results and literacy research to guide the design of differentiated instruction for struggling readers (See Student Outcomes Assessment Plan, Appendix 2).

Additionally, candidates are required to complete supervised clinical field experiences diagnosing and tutoring K-Adult students. Candidates complete final case study summary reports resulting from the tutoring experiences. These reports are evaluated as summative measures to determine candidates’ competence in reporting the results in ways that are meaningful to parents, teachers, and administrators (See LEE 230: Supervised Teaching of Reading/Language Arts, p. 4; LEE 234: Clinical Experiences in Reading Assessment & Instruction, p. 4).

Multiple opportunities are also provided for candidates to develop competence in reporting assessment results at broader classroom and school levels. Candidates are required to conduct a multilevel (teachers and administrators) analysis of interventions for struggling readers and interventions for English Learners (See Course Schedule LEE 224: Assessment & Development of Reading Abilities, p. 6; LEE 215: Language Issues in Reading, p. 5). Based on the findings of these analyses, candidates provide administrators with recommended revisions of intervention components and professional development needs to enhance the effectiveness of the programs. These assignments are completed and analyzed as formative assessments within the individual courses. Candidates also complete a literacy program evaluation report, which involves an intensive, comprehensive examination of a school-wide and/or particular grade-level literacy program. Candidates collect multiple sources of qualitative and quantitative data to examine student achievement, intervention procedures, classroom instruction, and instructional resources. Based on the findings of the program evaluation, candidates provide administrators with recommended revisions of intervention components, instructional practices, and professional development needs to enhance the effectiveness of the programs. Reports are evaluated and scored using a rubric as “excellent” (90-105), “fair” (63-89), or “poor” (21-62) based on the ability to provide clear analysis that accurately reflects the data, summarize areas of strength/weakness, draw conclusions for refinements supported by the research literature, and clearly communicate the conclusions to administrators and teachers (See Student Outcomes Assessment Plan, Appendix 5).
