The Impact of Cognitive Performance of English Language Learners, TF Miller

The Impact of Cognitive Performance of English Language Learners

By
Tyler F. Miller

A Research Paper
Submitted in Partial Fulfillment of the
Requirements for the
Master of Science Degree
in
School Psychology

Approved: 2 Semester Credits

Carlos Dejud, Ph.D.

The Graduate School
University of Wisconsin-Stout
May, 2011
The Graduate School
University of Wisconsin-Stout
Menomonie, WI
Author: Miller, Tyler F.
Title: The Impact of Cognitive Performance of English Language Learners
Graduate Degree/Major: MS School Psychology
Research Adviser: Carlos Dejud, Ph.D.
Month/Year: May, 2011
Number of Pages: 53
Style Manual Used: American Psychological Association, 6th edition

Abstract

This literature review investigated the cognitive performance of English Language Learners and the outcomes associated with cognitive ability scores. Assessing for intelligence is complex and can be controversial, especially when the test taker is culturally or linguistically diverse. The change in demographics has presented school psychologists with a host of challenges and barriers to ensuring culturally sensitive assessments with diverse students. A number of landmark court cases and ethical standards have come about in response to school districts and their school psychologists who engaged in biased or discriminatory cognitive assessment practices. Despite persisting barriers preventing valid, culturally sensitive assessment practices, researchers and experts in the field of school psychology have set forth frameworks that school psychologists can utilize to ensure valid assessment practices. In addition, research has recognized the use of nonverbal batteries as viable measures of cognitive ability. One battery in particular, the Universal Nonverbal Intelligence Test, has
proven to be a reliable, valid, and culturally sensitive measure of intelligence that school psychologists can use with culturally and linguistically diverse students.
Acknowledgments

First and foremost, I want to thank the University of Wisconsin-Stout for providing me with the opportunity to better my life. Secondly, I would like to thank Dr. Carlos Dejud for his time, energy, motivation, and patience throughout this entire process. This thesis would not be what it is without your guidance and knowledge. Additionally, I need to thank my boss, my mentor, and friend, Bob Hebl. You have taught me so many invaluable life lessons over the years and have impacted my life much more than you will ever know. I would also like to thank my family. You have been extremely supportive, encouraging, and loving throughout graduate school. Along those lines, I want to thank my fiancée, Kari, for her ongoing love and support. We have sacrificed a great deal the last two years pursuing our education, but now it is time to pursue our life together. Finally, I want to thank the school psychology cohort of 2011. We entered graduate school as individuals but left as a family. I cannot express in words how important you all have been and the impact you have made. I could not imagine going through graduate school with a finer group of people.
Table of Contents

Abstract
Chapter I: Introduction
    Statement of the Problem
    Purpose of the Study
    Research Objectives
    Definition of Terms
    Assumptions of the Study
    Limitations
Chapter II: Literature Review
    History of Intelligence
    Change in Demographics
    Law and Ethics
    Culturally Sensitive Frameworks For Cognitive Assessment With CLD Students
    Current Barriers When Assessing ELL Students
    Cognitive Assessment Instruments
    The Universal Nonverbal Intelligence Test
Chapter III: Summary, Critical Analysis, and Recommendations
    Summary
    Critical Analysis
    Recommendations
References
Chapter I: Introduction

Assessment of intelligence has remained synonymous with psychologists throughout the last century (Hu & Oakland, 1991). Intelligence tests were initially constructed to distinguish those with severe intellectual deficits from those who were intellectually superior (Kaufman, 2000). Currently, intelligence tests have evolved to become a widely accepted psychoeducational instrument used to determine special education placement, diagnose disabilities, and target interventions. In fact, the largest users of intelligence tests are school psychologists (Valencia & Suzuki, 2001). School psychologists select and implement intelligence tests for a multitude of reasons. For instance, assessing intelligence aids in, but is not limited to, academic decision-making, planning for intervention, as well as educational and vocational placement (Groth-Marnat, 2003). With the inception of the Stanford-Binet, regarded as the first modern-day intelligence test, the array of intelligence tests has vastly increased (Stinnett, Havey, & Oehler-Stinnett, 1994).

Also increasing is the culturally and linguistically diverse (CLD) student population in our nation's public schools. Today's schools bear little resemblance to the schools of three decades prior. Over the past 30 years, some racial and ethnic groups have decreased, some have held steady, and some have rapidly increased. For example, the White population has dramatically decreased while the country has witnessed a sudden spike in the Hispanic population. At the present time, nearly 4 million students are considered Limited English Proficient (LEP), with half of LEP students enrolled in kindergarten through fourth grade. With the sudden influx of English Language Learners (ELL), school psychologists are facing significant barriers when attempting to assess for cognitive ability (Zehler et al., 2003; U.S. Census Bureau, 2009; U.S. Department of Education, 2009; O'Bryon & Rogers, 2010).
Traditional, norm-referenced intelligence tests, such as the Stanford-Binet and the Wechsler series, are no longer appropriate within the realm of CLD students, as these tests were primarily normed using middle-class, English-speaking individuals (Jacob & Hartshorne, 2007). Assessing the cognitive ability of ELLs using instruments such as the Stanford-Binet or one from the Wechsler series ultimately becomes an assessment of the child's expressive and receptive language skills and may wrongfully depress overall test performance (Rhodes, Ochoa, & Ortiz, 2005). Test selection, administration and interpretation without sensitivity to a student's culture and language may mistakenly brand students with stigmatizing labels, lead to higher dropout rates and overrepresent certain races and ethnicities in special education (Valencia & Suzuki, 2001; Artiles & Ortiz, 2002).

Perhaps the most salient criticism of special education has been the complexity of overrepresentation and underrepresentation of CLD students receiving special education services (Coutinho & Oswald, 2000; Artiles, Rueda, Salazar, & Higareda, 2005; Salend & Duhaney, 2005). Guiberson (2009) reports that erroneous special education placements may manifest in a variety of ways, which include overrepresentation, underrepresentation and misidentification. Overrepresentation occurs when a given minority group possesses a higher percentage of students in special education when compared to the group's percentage of the entire student population (Salend & Duhaney, 2005). Conversely, underrepresentation transpires when nonwhite students with significant educational needs are overlooked and fail to receive opportune services (Guiberson, 2009). Sattler (2008) notes that underrepresentation can also occur when minority students who do not have a disability, but rather are intellectually gifted, fail to receive adequate services. Further, misidentification occurs when students who possess genuine disabilities are misdiagnosed with another form of disability (Guiberson, 2009).
Aside from overrepresentation, underrepresentation and misdiagnosis, ELL students are at a greater risk for academic failure, grade retention, and dropping out of school, and often fail to receive early childhood assistance (Artiles & Ortiz, 2002; O'Bryon & Rogers, 2010). In fact, fewer than 50 percent of ELL students are exposed to early childhood programs. Moreover, ELL students, compared to their non-ELL peers, are 15-20 percent more likely to drop out of school (Artiles & Ortiz, 2002). Equally troublesome, Sepulveda (2010) reports that 50 percent of Hispanic students, many of whom are ELL, fail to graduate high school. Of those who do graduate from high school, nearly 50 percent are ill-equipped to enter the collegiate arena (Sepulveda, 2010). The increase of the Hispanic population and the academic success of Hispanic students, or lack thereof, have captured the attention of President Obama and other influential policy makers, prompting several initiatives to prepare the Hispanic school-age population for the future workforce (Sepulveda, 2010).

Despite a wealth of research and governmental initiatives targeting ELL student success, school psychologists remain unequipped with the skills necessary to assess these students. With the monumental shift in demographics, especially the steady increase of the ELL population, school psychologists have inappropriately engaged in and continue to employ inadequate testing practices that have led to misguided interventions, faulty diagnoses and inaccurate educational placements (Rhodes, Ochoa, & Ortiz, 2005). Because the complexity of assessing CLD students has remained a lingering concern, a framework of ethical codes, principles and standards from the American Educational Research Association (AERA), American Psychological Association (APA), National Council on Measurement in Education (NCME) and National Association of School Psychologists (NASP) has been developed to
ensure ethical and culturally competent testing practices (AERA, 1999; APA, 2002; NASP, 2010).

Despite these ethical codes, principles and standards, assessing CLD students has remained a complex process (Rhodes, Ochoa, & Ortiz, 2005). In fact, a significant percentage of school psychologists are unacquainted with, or disagree as to, what constitutes best practice when assessing the intelligence of CLD students (Bainter & Tollefson, 2003; Kranzler, Flores, & Coady, 2010; McCloskey & Athanasiou, 2000; O'Bryon & Rogers, 2010; Ochoa, Riccio, Jimenez, Garcia de Alba, & Sines, 2005; Valencia & Suzuki, 2001). In addition, school psychologists face a host of constraints when assessing CLD students, which include: the lack of bilingual school psychologists, a lack of culturally competent training programs, the misuse of interpreters, insufficient knowledge of second language acquisition and a sparse selection of reliable and valid language proficiency and intelligence tests (Newell et al., 2010; O'Bryon & Rogers, 2010; Ochoa, Rivera, & Ford, 1997; Sattler, 2008; Schon, Shaftel, & Markham, 2008).

American schools are in dire need of trained bilingual school psychologists to conduct intellectual assessments in multiple languages (O'Bryon & Rogers, 2010); however, there is a severe shortage of trained bilingual school psychologists ready to conduct cognitive assessments (Curtis et al., 2008). The need for bilingual psychologists is evident, but the growth in bilingual school psychologists has not mirrored the growth of the CLD population. As of 2005, 92.6 percent of practicing school psychologists were Caucasian, while Caucasians represented only 74 percent of the total population (Curtis et al., 2008; U.S. Census Bureau, 2009). An additional shortcoming of intellectual assessment amongst CLD students is the lack of culturally sensitive training programs, as a significant percentage of school psychologists fail to receive adequate training from their respective graduate institutions in the area of bilingual assessment (O'Bryon &
10 Rogers, 2010; Ochoa, Rivera, & Ford, 1997). Although best practice would encourage schools to employ a bilingual psychologist, there are avenues for monolingual school psychologists to explore when conducting cognitive assessments (O'Bryon & Rogers, 2010). Currently, there are more intelligence tests to choose from than ever before. With that said, the majority of intelligence tests were created from test developers who are of European background and contain a certain extent of cultural and linguistic bias when used with CLD students (Suzuki, Prevost, & Short, 2008). Several nonverbal intelligence tests have been created to minimize these biases and ensure reliable and valid results. These tests include: the Kaufman Assessment Battery for Children (KABC -II), Second Edition; Differential Ability Scales, Second Edition (DAS - 2), Comprehensive Test of Nonverbal Intelligence (CTONI), Test of Nonverbal Intelligence, Third Edition (TONI - 3), Leiter International Performance Scale, Revised (Leiter-R), Raven's Progressive Matrices (Ravens) and the Wechsler Nonverbal Scale of Ability (WNV; Garcia-Vazquez, Crespi, & Riccio, 2010; Sattler, 2008). Although a wide assortment of nonverbal batteries exists, many come with caveats. For example, many of these batteries are one-dimensional, in that they only provide one type of task (e.g., matrices), are restricted to measuring nonverbal intelligence as opposed to general intelligence, and some continue require previously acquired knowledge or prior cultural experiences (Bracken & McCallum, 1998; Garcia-Vazquez, Crespi & Riccio, 2010). One nonverbal battery that was piloted utilizing CLD students, remained cognizant of potential cultural and linguistic biases, and taps general intelligence is the Universal Nonverbal Intelligence Battery (UNIT; Bracken & McCallum, 1998). Currently, the UNIT is touted as an adequate nonverbal assessment that school psychologists can select when assessing the intelligence of CLD students (Athanasiou, 2000).
Statement of the Problem

With the increasing cultural and linguistic diversity of the nation's students, school psychologists are encountering numerous challenges when attempting to assess the cognitive abilities of CLD students. Traditional, norm-referenced cognitive assessments contain culturally and linguistically loaded questions that are often inappropriate for many CLD students and may yield intellectual scores that are not reflective of the child's true reasoning abilities. A substantial percentage of school psychologists remain unequipped to assess CLD students for a number of reasons, such as the lack of bilingual school psychologists, failure to receive training in culturally sensitive assessment practices during graduate school, lack of knowledge of second language acquisition and acculturation, and the administration of inappropriate intelligence tests.

Purpose of the Study

The purpose of this literature review is to investigate how United States demographics have changed within the last three decades. In addition, the literature review will examine what methods school psychologists are using to assess CLD students, frameworks to follow when conducting culturally sensitive assessments and potential barriers to conducting sound assessments. Finally, the review of literature will examine what types of cognitive assessments exist and which are most appropriate for CLD students. Data will be collected through a comprehensive literature review during the Spring of 2011.

Research Objectives

The following research objectives are addressed in this literature review:

1. To explore the origin of intelligence testing and the functions intelligence tests have served and continue to serve.

2. To examine the changing demographics of school-age children.
3. To investigate what landmark cases and ethical guidelines have shaped the way in which psychologists assess CLD students.

4. To highlight multiple frameworks in which to conduct culturally sensitive assessments.

5. To determine what barriers inhibit reliable and valid cognitive assessments of ELL students.

6. To examine what cognitive assessments exist and which are most appropriate to use with ELL students.

Definition of Terms

To understand the content area of this literature review, the following terms have been defined and will be used:

Cognitive Assessments - Norm-referenced tests that follow standardized directions, which produce an overall intelligence quotient (Kranzler, 1997). The terms intelligence tests, intellectual assessment and cognitive batteries will be used interchangeably with cognitive assessments.

Cultural Loading - The degree to which a subtest inherently contains or demands a test taker to have previous cultural knowledge in order to provide an adequate response (Vazquez-Nuttall et al., 2007). This term will also be used interchangeably with cultural bias.

Culturally and Linguistically Diverse (CLD) - The term Culturally and Linguistically Diverse (CLD) refers to students ages 3-21 whose primary or home language is other than English (Colorado Department of Education, 2009).

English Language Learner - "Students who are not native English speakers and are not reclassified as fluent in English" (Frisby, 2008, p. 535).
Hispanic or Latino - A person of Mexican, Puerto Rican, Cuban, South or Central American, or other Spanish culture or origin, regardless of race (Aud, Fox, & KewalRamani, 2010).

Intelligence - Faculties such as mental properties, reasoning with and without words, adjustment to one's environment and higher-order thinking skills (Sattler, 2008). The term cognitive ability will be used interchangeably with intelligence.

Limited English Proficient (LEP) - The No Child Left Behind (NCLB) Act of 2001 defines Limited English Proficient as someone:

(A) who is aged 3 through 21;

(B) who is enrolled or preparing to enroll in an elementary school or secondary school;

(C)(i) who was not born in the United States or whose native language is a language other than English; (ii)(I) who is a Native American or Alaska Native, or a native resident of the outlying areas; and (II) who comes from an environment where a language other than English has had a significant impact on the individual's level of English language proficiency; or (iii) who is migratory, whose native language is a language other than English, and who comes from an environment where a language other than English is dominant; and

(D) whose difficulties in speaking, reading, writing, or understanding the English language may be sufficient to deny the individual (i) the ability to meet the State's proficient level of achievement on State assessments described in section 1111(b)(3);
(ii) the ability to successfully achieve in classrooms where the language of instruction is English; or (iii) the opportunity to participate fully in society (Section 9101).

Linguistic Demand - The requisite expressive and receptive language skills to respond to a task (Vazquez-Nuttall et al., 2007).

Overrepresentation - A disproportionate representation of a minority group of students that is higher than that of the minority group's prevalence in the general population of students (Salend & Duhaney, 2005).

School-age Children - Children ages 5-17 (U.S. Department of Education, 2008).

Test Bias - When the same test measures different attributes for members of one group than for another. Two main forms of test bias include culture and language (Frisby, 2008; Valencia & Suzuki, 2001).

Assumptions of the Study

It is assumed that all existing literature related to conducting cognitive assessments with CLD students was available and most current. It would be impossible to locate the entirety of literature dedicated to this specific issue.

Limitations of the Study

A major limitation of the literature review is that very little published research exists examining how ELLs perform on both a culturally and linguistically loaded intelligence test and on the UNIT. Additionally, the information gathered from a specific piece of research pertaining to ELLs typically cannot be generalized, as they are as heterogeneous as any population.
Chapter II: Literature Review

This chapter will include the following topics pertaining to CLD students and cognitive assessments: 1) the historical evolution of intelligence tests; 2) the profound change in demographics and how this change clashes with traditional measures of intelligence; 3) landmark cases and professional ethical codes and principles that have shaped the way in which school psychologists conduct cognitive assessments with CLD students; 4) culturally sensitive frameworks for school psychologists to follow when conducting cognitive assessments; 5) barriers preventing best practice when assessing intellectual functioning; and 6) nonverbal measures of intelligence. Finally, the chapter will examine the UNIT and its relation to assessment of CLD students.

History of Intelligence Testing

Psychological testing has remained a critical tool that school psychologists use to assist in educational, behavioral and vocational decision-making (Hu & Oakland, 1991). Although school psychologists heavily rely upon intellectual assessment, intelligence testing predates the field of school psychology. In fact, the inception of intellectual theory emerged from Hippocrates, Plato and Aristotle (Merrell, Ervin, & Gimpel, 2006; Sattler, 2008). Although intellectual theory can be traced to 400 B.C., more "modern" intelligence testing did not begin until around the mid-1800s (Sattler, 2008).

Many psychologists fallaciously credit Alfred Binet and Theodore Simon with developing the first modern intelligence test. However, Kaufman (2000) notes the source of intelligence testing began with Jean Esquirol during the mid-1800s. To solidify his point, Kaufman (2000) reports Esquirol was the first to differentiate between those who possessed severe cognitive deficits and severe psychological disturbances. This distinction is significant
because, prior to this division, individuals with mental retardation, autism, language impairments and other disabilities were arbitrarily labeled as having severe sub-average intelligence and intellectual deficits. The conceptualization of cognitive assessment underwent a series of facelifts and transitions for nearly a century, eventually leading to the conception of the first "real" intelligence battery, developed by Alfred Binet, regarded by many as the father of intelligence testing (Kaufman & Lichtenberger, 2006). Binet's original intelligence test underwent three revisions, but the fourth version was undeniably his greatest creation. After the Binet established itself as the premier intellectual assessment, Lewis Terman of Stanford University translated and revised Binet's French intelligence test to English in 1916 and named the battery the Stanford-Binet Intelligence Scale (Kaufman, 2000; Thorndike, 1997). The inception of the Stanford-Binet Intelligence Scale ignited the era of intelligence testing.

Soon after the Stanford-Binet was translated to English, the United States entered its first World War. The United States Army recognized the utility of intellectual assessments and began testing potential soldiers with not only verbal tests (Army Alpha) but also nonverbal measures (Army Beta; Kaufman & Lichtenberger, 2006; Thorndike, 1997). The army utilized both verbal and nonverbal measures because not all potential recruits were literate and the army needed alternative measures to remove those said to possess inferior intelligence. Although the military believed a nonverbal measure was simply a means to compensate for a soldier's lack of literacy, David Wechsler believed it was much more (Kaufman & Lichtenberger, 2006).

In 1939, David Wechsler replicated much of the Army's nonverbal test (Army Beta) and fused these subtests with ones similar to the Stanford-Binet's verbal measures to create the Wechsler-Bellevue Intelligence Scale, Form I (Boake, 2002). Prior to Wechsler, intelligence tests were used to discriminate between those who were believed to have intellectual deficits and those of
superior intelligence, neglecting those with average intelligence. However, Wechsler envisioned a battery that would provide clinical psychologists with information that stemmed beyond classifying intellectual deficiencies. The purpose of Wechsler's novel and innovative intelligence test was to provide clinical psychologists with a tool to pinpoint specific mental faculties, determine areas of strength and weakness, and provide interventions based upon test results (Boake, 2002; Sattler, 2008). Wechsler's Form I underwent several revisions, and with every revision the standardization and psychometric properties steadily improved.

The Wechsler batteries continued to evolve and have now become the face of intelligence testing. In fact, through survey research, Camara, Nathan and Puente (2000) found that, of the top 20 standardized tests most frequently used amongst clinical psychologists, the WAIS-R was the most heavily relied upon standardized test. Additionally, Camara, Nathan and Puente (2000) discovered the WISC-R was the third most commonly used standardized test and that the Wechsler Memory Scale was the ninth most commonly used. In a similar study, Stinnett, Havey, and Oehler-Stinnett (1994) randomly selected and surveyed 123 members of the National Association of School Psychologists (NASP). The respondents reported the WAIS-R, the WISC-R and the WISC-III were the most commonly used intelligence tests amongst practicing school psychologists. In addition, respondents were asked to rank the importance of 13 commonly used intellectual instruments. The respondents ranked the previously listed Wechsler batteries as the most important batteries to be used (Stinnett, Havey, & Oehler-Stinnett, 1994).

There is substantial data suggesting the Wechsler series continues to dominate the market of intellectual assessments (Camara, Nathan, & Puente, 2000; Groth-Marnat, 2003; Kaufman &
Lichtenberger, 2004; Stinnett, Havey, & Oehler-Stinnett, 1994). Although the Wechsler Intelligence Scale for Children - Fourth Edition (WISC-IV) is the most recognizable and implemented intelligence test amongst school psychologists, this test may be highly inappropriate when used with ELL students (Tomes, 2010). Current intelligence batteries, like the WISC-IV, are almost always designed within an Anglo-Saxon framework, embedding culturally and linguistically loaded test items and directions throughout the battery that can greatly impact test performance for LEP students (Tomes, 2010).

For decades, school psychologists have heavily relied upon intellectual assessments, such as the Stanford-Binet or Wechsler Intelligence batteries, to diagnose, plan for intervention and guide educational and vocational decision-making. However, changing demographics, especially the growth of those who speak a language other than English, have greatly transformed the way in which cognitive assessments are conducted. Newell et al. (2010) discuss the transition of school-age children, evolving from a heavily European population to one that is extremely diverse in socio-economic status, language, ethnicity and culture. The demographic shift in school populations produced significant difficulties, and as a result, educators were not equipped with the necessary tools to align the cultural and linguistic factors of students to educational and academic programming; thus, many students, primarily African American and Hispanic children, were viewed as atypical or different and were wrongly placed in special or alternative classrooms (Newell et al., 2010).

There is a plethora of data to demonstrate that the demographics of the United States have shifted and are projected to continue to shift from a White majority to that of people of color. As the population expands, the growing diversity of students raises significant concerns as to how cognitive assessments should be conducted. Traditional, European intelligence tests have become archaic
with the increase of CLD students and have failed to evolve at a rate commensurate with today's population.

Change in Demographics

The change in demographics has continued to shape the country during the previous three decades. Aud, Fox and KewalRamani (2010) report that in 1980, 80 percent of the United States population was classified as White and that by 2008, that figure had dwindled to 66 percent. In addition, the Black population remained roughly 12 percent while the Asian/Pacific Islander population increased from 2 to 4 percent during this timeframe. Interestingly, the Hispanic population increased from 6 percent of the total population in 1980 to account for 15 percent of the population in 2008. Furthermore, the Hispanic population is projected to account for 21 percent of the nation's population by 2025 while the White population is expected to decrease to 57 percent (Aud, Fox, & KewalRamani, 2010).

Not only has the nation experienced a shift in its racial and ethnic composition, the total number of languages spoken has dramatically increased. Nearly 20 percent of the nation's population speaks a language other than English at home (U.S. Census Bureau, 2009). Moreover, of that 20 percent, nearly 10.8 million of those who spoke a language other than English at home were school-age children (U.S. Department of Education, 2008). Further, in 2006, nearly 72 percent of school-age children who spoke a language other than English at home spoke Spanish. It is evident that predominantly Spanish speakers have become a great presence in the United States and have greatly revamped the face of American education.

School districts' racial and ethnic compositions have greatly shifted. For example, during the 2007-2008 school year there were over 5 million LEP students, an increase of 53 percent compared to the 1997-1998 school year (U.S. Department of Education, 2010).
Additionally, nearly 45 percent of teachers reported having at least one LEP student in the classroom (U.S. Department of Education, 2003). Kindler (2002) reports that within the nation's school districts, nearly 10 percent of the student population is categorized as LEP. This suggests that nearly 1 out of 10 students in the classroom is struggling to comprehend and verbalize academic knowledge using the English language. Furthermore, two-thirds of LEP students are elementary-age students (Kindler, 2002). This data suggests that most students who speak a language other than English speak Spanish. The data also reports that the majority of these students are just beginning their educational journeys through school.

When comparing the growth rates of White to Hispanic school children, the White population of school-age children had decreased from 61 to 56 percent by the 2007-2008 school year, while the Hispanic population rose from 17 to 21 percent (Aud, Fox, & KewalRamani, 2010). Although Hispanics are a minority when compared to the total number of students throughout the nation, they are the majority in many school districts. For example, Hispanic school-age children were the majority in 8 of the nation's largest 20 school districts during the 2007-2008 school year (Aud, Fox, & KewalRamani, 2010). Most educators think of the Hispanic population as a minority, but as the data delineates, this population is the majority in nearly 40 percent of the nation's largest 20 school districts. Despite this data, only three states require teachers to demonstrate competence in ELL instruction (Aud, Fox, & KewalRamani, 2010).

The nation's cultural and linguistic makeup has dramatically shifted from a Caucasian-dominated population to people of color, especially Hispanics. The expansion of CLD students, coupled with the explosion of the number of intelligence tests administered within schools, has engendered a number of legal and ethical concerns regarding the education of LEP students. A
significant concern that remains for schools, if not the primary concern, has been the inappropriate assessment of CLD children and the effects thereof.

Law and Ethics

Valencia and Suzuki (2001) highlight a number of landmark cases in which students of color were inappropriately administered formal, standardized measures of cognitive ability. These very intelligence tests were normed primarily on middle-class, white children, which in turn brought about biases such as cultural and linguistic loadings within tests (Childs, 1990). The first case, Hobson v. Hansen (1968), was a 1967 case that investigated whether group-administered intelligence tests were appropriate to use with minority students. In this case, Black children from the District of Columbia School District were administered group intelligence tests that impacted the educational "track" to which they would be placed. The dispute was that these group-administered intelligence tests were normed using a high percentage of middle-class white children and thus, the content of the items was biased. Ultimately, a federal district judge agreed with the students that the tests were inappropriately selected and used. The outcome of this case was significant and groundbreaking in that it acknowledged that test bias exists and that some intelligence measures were not valid when used with CLD students (Hobson v. Hansen, 1968; Valencia & Suzuki, 2001).

An additional case exemplifying the damage intelligence tests can have if inappropriately selected and administered was Diana v. State Board of Education. Childs (1990) notes that this case involved a district administering individual intelligence tests to Mexican-American children. The problem was that the intelligence tests were read to the students in English and as a result, many of the children were thought to have significantly sub-average intelligence scores. As a result, these children were then placed in classes specifically for those
with severe cognitive deficits. When the families of these children challenged the school's decision, the children were then read the intelligence tests in Spanish and as a result, the scores increased (Childs, 1990). In fact, one Mexican-American student initially received a cognitive ability score of 30, which is categorized as a cognitive disability. When a bilingual school psychologist retested this student in Spanish, her cognitive ability score was 79, which is considered below average (Jacob & Hartshorne, 2007).

Similar to Diana v. State Board of Education, the case of Larry P. v. Riles asked the court to determine whether standardized measures of assessment were valid indicators of African American students' intelligence. This class action suit argued that far too many African American students were overrepresented and placed in classes for those with significant cognitive impairments. The courts agreed, noting that the intelligence measures contained cultural bias and therefore, the tests, when used with African American students, were deemed invalid (Jacob & Hartshorne, 2007).

Given the high volume of CLD students, the expanding phenomenon of intelligence testing and the over- and underrepresentation of minority students, ethical principles and codes have been established to ensure appropriate and valid intelligence testing. The National Council on Measurement in Education (NCME), APA, NASP, and AERA are four major governing bodies that set ethical precedents and standards for school psychologists to adhere to when working with CLD students. For example, Standard 9.02 of APA's (2002) ethical code of conduct states that school psychologists should use assessment methods, such as intelligence tests, that are sensitive to the student's primary language as well as to the psychometric properties of the assessments used. Furthermore, APA advocates that school
psychologists be cognizant of cultural and linguistic factors that may impact how a school psychologist interprets test results (APA, 2002).

NASP's Principles for Professional Ethics (NASP, 2010) Standard II.1.2 mandates that school psychologists seek out information, knowledge and training regarding diverse students' cultural and linguistic makeup, especially prior to assessing these students. Moreover, NASP (2010) advocates that school psychologists engage in appropriate and fair testing, taking into account linguistic and cultural factors when selecting, administering and interpreting tests to ensure that validity is maintained. In addition, AERA, APA, and NCME have created a joint set of standards and principles for testing CLD students. This joint set of ethical standards includes (AERA, 1999):

7.7. In testing applications where the level of linguistic or reading ability is not part of the construct of interest, the linguistic or reading demands of the test should be kept to the minimum necessary for the valid assessment of the intended construct. (p. 82)

9.1. Testing practice should be designed to reduce threats to the reliability and validity of test score inferences that may arise from language differences. (p. 97)

9.3. When testing an examinee proficient in two or more languages for which the test is available, the examinee's relative language proficiencies should be determined. The test generally should be administered in the test taker's most proficient language, unless proficiency in the less proficient language is part of the assessment. (p. 98)

11.22. When circumstances require that a test be administered in the same language to all examinees in a linguistically diverse population, the test user should investigate the validity of the score interpretations for test takers believed to have limited proficiency in the language of the test. (p. 118)
12.3. Tests selected for use in individual testing should be suitable for the characteristics and background of the test taker. (p. 131)

The previously listed codes, standards and principles suggest that school psychologists who select, administer and interpret intelligence tests have an ethical responsibility to locate information and remain abreast of the cultural and linguistic background of the child to be assessed. Additionally, the school psychologist has an ethical responsibility to seek out existing research and receive additional training when lacking the requisite knowledge and skills needed to conduct valid cognitive assessments with CLD students in accordance with best practice (AERA, 1999; APA, 2002; NASP, 2010).

Culturally Sensitive Frameworks For Cognitive Assessment With CLD Students

Rhodes, Ochoa and Ortiz (2005), as well as other experts in the field of bilingual psychology, have provided a comprehensive framework for both bilingual and monolingual school psychologists to follow when assessing the cognition of CLD students. However, it should be noted that prior to administering formal assessments, school psychologists need to be cognizant of how culture, religion, acculturation, socio-economic status and medical conditions impact a CLD student as a whole (Artiles & Ortiz, 2002; Sattler, 2008). Once these holistic variables have been accounted for, CLD students may be assessed using the Multidimensional Assessment Model for Bilingual Individuals (MAMBI). The MAMBI incorporates the CLD child's language proficiency in English and in the child's primary language, the language in which the child has received academic instruction, current grade level and what instrument to use when conducting a cognitive assessment (Vazquez-Nuttall et al., 2007).

First, the child's language abilities should be assessed in both the native or primary language (L1) and in English (L2) to determine in which language, if any, the child is proficient (AERA,
1999; Artiles & Ortiz, 2002; Rhodes, Ochoa, & Ortiz, 2005). Rhodes, Ochoa and Ortiz (2005) strongly advocate that formal (e.g., standardized language proficiency tests) and informal measures (observations, interviews, questionnaires, language samples, storytelling and teacher rating scales) of language ability be conducted to ensure the student's entire repertoire of language skills is tapped. Formal measures of language acquisition distinguish two types of language development. According to Cummins (1984), two types of language exist. The first is classified as basic interpersonal communication skills (BICS) and the second is referred to as cognitive academic language proficiency (CALP). BICS is often referred to as social or interpersonal language observed amongst students while CALP is considered to be academic language (Cummins, 1984). It is imperative to make this distinction between the two types of language, as many teachers mistake ELL students who are proficient in BICS but not in CALP for fluent English speakers.

Once informal and formal language assessments have been conducted, an ELL student may fall within one of nine language profiles. Typically, ELL students are most likely to exhibit the following language profiles: minimal language skills in L1 and minimal language skills in L2; emergent language skills in L1 and minimal language skills in L2; fluent language skills in L1 and minimal language skills in L2; minimal language skills in L1 and emergent language skills in L2; emergent language skills in L1 and emergent language skills in L2; fluent language skills in L1 and emergent language skills in L2; and fluent language skills in both L1 and L2. As one can see, language skills between ELL students can be extremely diverse (Rhodes, Ochoa, & Ortiz, 2005). The second procedure in assessing cognition is to determine which types of intelligence tests are most appropriate for the student.
Rhodes, Ochoa and Ortiz (2005) note that once the CLD student's language skills have been documented, the school psychologist must ascertain whether the student should be assessed using nonverbal measures, verbal measures in L1, verbal measures in L2 or perhaps a combination of the three. In addition, the school psychologist must account for the student's grade level and modes of academic instruction (i.e., current bilingual instruction in addition to English language services; previous bilingual instruction and current English language services; or no previous bilingual instruction, while receiving or not receiving English language services). If the student does not possess CALP in L1 or L2, the school psychologist should administer a nonverbal battery to eliminate linguistic bias. Further, depending upon the grade and mode of instruction (e.g., L1 only, L2 only, or L1 and L2), the school psychologist may choose to test in the student's primary language. For example, if the student has been or currently is receiving academic instruction in a bilingual program and is fluent in his or her primary language, the school psychologist should administer an intelligence test in the child's primary language. Lastly, if the child is currently in 5th-7th grade, has received or currently receives bilingual academic instruction, and is fluent in his or her primary language and in English, the school psychologist should administer an intelligence test in both the student's primary language and in English (Rhodes, Ochoa, & Ortiz, 2005).
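To make the branching described above easier to follow, the sketch below renders this decision logic in a simplified pseudocode form. It is an illustrative reduction, not part of the MAMBI itself; the function name, parameters, and return labels are hypothetical, and the actual model weighs additional factors (e.g., the full set of language profiles and instructional histories described by Rhodes, Ochoa, and Ortiz, 2005).

```python
# Minimal sketch of the assessment-mode decision logic described above.
# This is NOT the MAMBI instrument; it only mirrors the examples given
# in the text, under assumed (hypothetical) names and encodings.

def select_assessment_mode(calp_l1: bool, calp_l2: bool,
                           bilingual_instruction: bool,
                           grade: int) -> str:
    """Return a broad assessment mode for a CLD student."""
    # No CALP in either language: remove linguistic demand entirely.
    if not (calp_l1 or calp_l2):
        return "nonverbal battery"
    # Fluent in both languages with bilingual instruction in grades 5-7:
    # assess in both the primary language and English.
    if calp_l1 and calp_l2 and bilingual_instruction and 5 <= grade <= 7:
        return "verbal measures in L1 and L2"
    # Fluent in the primary language and schooled bilingually:
    # assess in the primary language.
    if calp_l1 and bilingual_instruction:
        return "verbal measures in L1"
    # Otherwise fall back to English and/or nonverbal testing,
    # guided by the full MAMBI matrix.
    return "verbal measures in L2 and/or nonverbal battery"


# Example: a 6th grader fluent in L1 and L2, schooled in a bilingual program.
print(select_assessment_mode(True, True, True, 6))  # verbal measures in L1 and L2
```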
A second blueprint that shows promise for better assessing ELLs is the use of the Cattell-Horn-Carroll (CHC) cross-battery approach. Vazquez-Nuttall et al. (2007) conceptualize the cross-battery approach as pulling select subtests from multiple intelligence tests and aligning these subtests within a matrix. The cross-battery matrix is typically a 3 x 3 matrix that accounts for cognitive processes within the child as well as linguistic and cultural influences that may impact test performance. Within the two domains of language and culture, select subtests are placed in three categories, ranging from low to moderate to high. The researchers note that although the cross-battery approach is not intended for diagnosis, patterns or trends can be observed based upon the levels of linguistic and cultural influence within subtests. In addition, the 3 x 3 cross-battery matrix can assist the school psychologist in determining whether a CLD child's cognitive performance is due to linguistic and cultural factors or whether processing deficits exist. Unlike traditional intelligence tests, this innovative selective testing process can isolate variables that may decrease overall cognitive performance due to cultural and linguistic bias (Vazquez-Nuttall et al., 2007).

Assessing cognitive ability amongst ELL students is a complex task. Prior to assessing cognitive ability, school psychologists must take into account numerous variables that may impact a CLD student, such as cultural, linguistic, religious, socio-economic and medical factors. Once these factors have been accounted for, the school psychologist may choose from the MAMBI or the CHC cross-battery approach. Although researchers (Artiles & Ortiz, 2002; Rhodes, Ochoa, & Ortiz, 2005; Vazquez-Nuttall et al., 2007) have provided multiple frameworks to assess the cognitive ability of CLD children, a multitude of barriers prohibits school psychologists from conducting valid assessments.

Current Barriers When Assessing ELL Students

Research demonstrates there continues to be ongoing debate amongst school psychologists as to how intelligence testing should be conducted with ELL students, as well as a variety of obstacles that prevent school psychologists from engaging in best practice (Bainter & Tollefson, 2003; Kranzler, Flores, & Coady, 2010; McCloskey & Athanasiou, 2000; O'Bryon & Rogers, 2010; Ochoa et al., 2005; Valencia & Suzuki, 2001). Barriers such as the use of interpreters, competence regarding second language acquisition, multicultural training in
assessment, available cognitive batteries, and lack of agreement as to what practices constitute valid cognitive assessment are some of the profound obstacles when assessing CLD students.

One barrier that exists amongst school psychologists is the lack of agreement as to what measures should be taken to conduct assessments with ELL students. In order to determine what methods school psychologists were using when assessing ELL children, Bainter and Tollefson (2003) surveyed 500 NASP members to determine the agreement, or lack thereof, regarding what constitutes acceptable assessment practices. The survey recipients were asked to rate the acceptability of eight categories of assessment practices for ELL students on an ordered scale (never acceptable, rarely acceptable, sometimes acceptable, usually acceptable and always acceptable). Bainter and Tollefson (2003) found that of the sample surveyed, school psychologists rated utilizing a bilingual school psychologist to administer an intelligence test in English and in the child's native language, and testing in English when the child is perceived to be fluent in English, as usually or always acceptable. In addition, respondents reported that using a nonverbal test without any oral directions and without an interpreter, using a nonverbal test with oral instructions with an interpreter, and using foreign-normed translated tests were sometimes acceptable or usually acceptable. Furthermore, respondents rated using a nonverbal test with oral directions without an interpreter and using a test in English when the child is not fluent in English as never acceptable or rarely acceptable (Bainter & Tollefson, 2003). The survey results demonstrate that there is little agreement amongst school psychologists as to what best practices should be used when assessing the intelligence of an ELL student.

A possible explanation as to why there is no complete agreement as to what practices should or should not be taken when assessing ELL students may be found in Ochoa, Rivera and Ford's (1997) survey research regarding school psychologists' competency when
assessing ELL students. After surveying over 1,500 NASP-affiliated school psychologists, their data suggests a significant lack of competency in cross-cultural assessment and in how to conduct evaluations with ELL students. In regards to graduate school training, over half of the respondents reported having "somewhat or very little" competency in the area of cross-cultural assessment, while only 10.6 percent of respondents reported being "above average or extremely well" competent in addressing cross-cultural issues. Ochoa, Rivera and Ford (1997) found that 69 percent of respondents reported gaining no or very little competency during their graduate training on how to conduct evaluations with ELL students, while less than 4 percent reported having above average or extremely well competency in conducting evaluations with ELL students. In addition, McCloskey and Athanasiou (2000) found that only one-fourth of school psychologists surveyed reported being moderately to completely comfortable assessing ELL students, while nearly three-fourths of respondents were significantly less comfortable providing services to this population.

This data suggests that school psychologists vary in degree of self-perceived competence in cross-cultural assessment issues and self-confidence when assessing ELL students. A significant percentage of school psychologists reported having both low competence and confidence when conducting assessments with ELL students. It should be noted that a significant portion of school psychologists surveyed reported receiving little or inadequate graduate training in regards to assessing CLD students. In addition to overall confidence and competence, monolingual school psychologists appear to differ from bilingual school psychologists regarding critical aspects of assessment such as second language acquisition and acculturation.

A second barrier inhibiting the ability to conduct valid cognitive assessments is the lack of knowledge pertaining to second language acquisition. O'Bryon and Rogers (2010) surveyed
nearly 300 bilingual school psychologists and found that survey respondents reported having "above average" knowledge in regards to second language acquisition. O'Bryon and Rogers (2010) also found that respondents reported being "somewhat comfortable" assessing for language proficiency. Ochoa, Rivera and Ford (1997) found that of those who have conducted bilingual assessments, nearly 59 percent of school psychologists reported their graduate training provided no to very little competency, while 21 percent reported their graduate training prepared them to be fairly well to extremely well competent in regards to second language acquisition. This data suggests that bilingual school psychologists who received training in bilingual assessment were more competent and more comfortable assessing for language proficiency than their monolingual counterparts.

Another complicating variable within assessing language proficiency is determining which instruments to use and who is responsible for conducting language proficiency assessments. O'Bryon and Rogers (2010) found that bilingual school psychologists rely on a variety of educators and professionals when conducting language proficiency assessments. For example, O'Bryon and Rogers (2010) found that schools most often (31 percent) use ELL specialists to conduct language proficiency assessments. In addition, the researchers found that nearly 20 percent of schools use two or more professionals, while only 15 percent of the bilingual school psychologists performed language proficiency assessments themselves. This data suggests that ELL specialists, who may be untrained in how to administer language proficiency tests, are most likely to conduct language assessments, while bilingual school psychologists are unlikely candidates to conduct language assessments. As such, this data suggests there is significant variability within schools as to who is responsible for conducting language proficiency assessments.
Variability can also be observed in regards to whether educators and other professionals choose to implement informal, formal or both modalities to assess language proficiency. Research suggests that nearly 57 percent of bilingual school psychologists use formal batteries, such as the Woodcock-Munoz Language Survey (WMLS), the Bilingual Verbal Ability Test (BVAT) and the Bateria III Woodcock-Munoz (O'Bryon & Rogers, 2010). Moreover, 83 percent of bilingual school psychologists engage in informal methods, such as parent interviews, student interviews, teacher interviews and observations, to assess for language proficiency (O'Bryon & Rogers, 2010). Although school psychologists use formal measures to conduct language assessments, Rhodes, Ochoa and Ortiz (2005) caution that these batteries have several limitations, such as insufficient psychometric properties, a lack of comparable norms relative to the child's primary language, norms created from monolingual speakers, and the tendency of formal measures not to tap every aspect of language. Despite these flaws, Rhodes, Ochoa and Ortiz (2005) note that formal language assessments should be used in conjunction with informal measures. Again, the previously noted research demonstrates that some bilingual school psychologists use informal, formal, or both modalities when conducting language assessments and vary in how they select specific tools to evaluate language.

Many school psychologists have turned to interpreters to aid in cognitive assessment because of the linguistic complexity of today's students. Several researchers (Rhodes, Ochoa, & Ortiz, 2005; Sattler, 2008; Vazquez-Nuttall et al., 2007) caution and note many areas of concern when using an interpreter. First, many interpreters do not have ample time to establish rapport with the child prior to testing, possibly leading to inaccurate test results (Sattler, 2008). Second, a lack of corresponding words between both languages or differences in dialect between the student and translator may exist, or the interpreter may alter the child's response to a task (Sattler, 2008;
Third, the school psychologist may not have received adequate training in working with an interpreter, which may compromise the validity of test results. Fourth, there are concerns about the lack of accepted practices for using interpreters, as well as the lack of intelligence tests that have been normed using interpreters (Sattler, 2008; Vazquez-Nuttall et al., 2007). Despite these potentially damaging implications, Ochoa et al. (2005) found that 78 percent of school psychologists used interpreters when assessing ELL students and that, of these school psychologists, only 52 percent were trained to use interpreters. In addition, Ochoa, Gonzalez, Galarza, and Guillemard (as cited in Rhodes, Ochoa, & Ortiz, 2005, p. 91) report that of school psychologists who use interpreters, only 37 percent received formal training in their use. Lastly, O'Bryon and Rogers (2010) report that bilingual school psychologists use interpreters nearly 15 percent of the time when assessing ELL students. Of these bilingual school psychologists, nearly 33 percent used untrained interpreters, nearly 29 percent used friends or family members to interpret, and 21 percent used an interpreter without themselves receiving adequate training. It is clear that there are numerous shortcomings in using interpreters and that a significant percentage of school psychologists are using interpreters inappropriately during ELL assessments. An additional barrier to conducting cognitive assessments with CLD students is determining which assessment tool to use, a decision that further complicates "best practices" in providing services to ELL students.

Cognitive Assessment Instruments

School psychologists have a wide selection of instruments to choose from when assessing the cognitive abilities of ELL students. O'Bryon and Rogers (2010) report that, of the bilingual school psychologists surveyed, 59 percent used norm-referenced assessments that were representative of the child's demographics.
Additionally, 63 percent of bilingual school psychologists were found to use norm-referenced assessments with standardized norms. McCloskey and Athanasiou (2000) also explored standardized test usage among school psychologists who assessed ELL students and found that 57 percent of respondents reported using the WISC-III. In addition to verbal measures, 79 percent of bilingual school psychologists utilized nonverbal assessments (O'Bryon & Rogers, 2010). McCloskey and Athanasiou (2000) found that the TONI-3 was used by 43 percent of the school psychologists surveyed. These data reflect that a significant number of school psychologists, both bilingual and monolingual, are using nonverbal measures of intelligence.

When assessing a CLD student, many school psychologists turn to nonverbal measures of intelligence. Some of the most commonly used nonverbal batteries include the KABC-II, CTONI, TONI-3, Naglieri Nonverbal Ability Test (NNAT), Leiter-R, Raven's, Wechsler Nonverbal Scale of Ability (WNV), and the UNIT (Garcia-Vazquez, Crespi, & Riccio, 2010; Sattler, 2008). Some of these measures, such as the NNAT and the Raven's, are limited in scope, in that they measure only nonverbal intelligence and use only unidimensional tasks (i.e., matrices; Sattler, 2008). Although a number of nonverbal batteries purport to measure general intelligence, the UNIT is a highly regarded and often used battery among school psychologists (Fives & Flanagan, 2002).

The Universal Nonverbal Intelligence Test

The UNIT is individually administered, requires no verbal language, and was designed specifically to reduce cultural and linguistic bias. Because the UNIT does not use language, pantomime and nonverbal gestures are used to communicate. The battery taps two facets of intelligence: the first is memory and the second is reasoning. The UNIT is versatile in that it can be used as a screener, using two subtests; a standard battery, utilizing four subtests; or an extended battery, gathering additional information using six subtests (Bracken & McCallum, 1998; Fives & Flanagan, 2002).
The UNIT exemplifies multiple strengths relative to traditional intelligence batteries and other nonverbal batteries. In relation to other nonverbal batteries, the UNIT is comprehensive in that it utilizes multiple tasks, as opposed to singular tasks such as matrices, and produces an overall intelligence quotient (IQ). Relative to traditional batteries, the UNIT requires no receptive or expressive language skills, avoiding linguistic biases that may exist within traditional intelligence tests. Also, the UNIT does not place an emphasis on timed performance the way subtests within the performance indices of traditional assessments do (Fives & Flanagan, 2002).

Further, the UNIT demonstrates sound psychometric properties, such as reliability and validity. In terms of internal consistency reliability, the UNIT's overall measure of intelligence (FSIQ) was shown to be .91 for the Abbreviated Battery, .93 for the Standard Battery, and .93 for the Extended Battery. Further, the average index reliability coefficient was .89 for the Standard Battery, with coefficients ranging from .86 to .91. In terms of test-retest reliability, the UNIT meets or exceeds the .90 coefficient threshold for ages 8 and older. In addition, the UNIT was found to remain stable across extreme levels of intellectual functioning (IQ = 70; IQ = 130) (Bracken & McCallum, 1998; Fives & Flanagan, 2002).
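To make these reliability coefficients concrete, a coefficient can be converted into a standard error of measurement (SEM), which indicates how far an observed score is likely to fall from an examinee's true score. The following is an illustrative calculation, not a figure reported in the examiner's manual; it assumes the conventional IQ score metric (M = 100, SD = 15) and uses the Standard Battery internal consistency coefficient of .93 cited above.

\[
\mathrm{SEM} = SD\,\sqrt{1 - r_{xx}} = 15\,\sqrt{1 - .93} \approx 3.97
\]

Under these assumptions, roughly two-thirds of observed Standard Battery FSIQ scores would fall within about 4 points of the examinee's true score, and a 95 percent confidence band would span approximately plus or minus 8 points.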
Beyond being one of the most precise and accurate instruments of nonverbal cognitive assessment, the UNIT is accompanied by an examiner's manual containing multiple studies that offer ample validity data (Bracken & McCallum, 1998; Fives & Flanagan, 2002). The test developers undertook a series of procedures to ensure the UNIT would be as valid as possible. For example, when constructing and piloting the battery, tasks remained in the battery only if performance was affected by reasoning abilities and not by external factors, such as the ability to comprehend administration directions. Further, the test developers placed a heavy emphasis on content validity, eliminating items and tasks that required previously learned academic information or particular cultural experiences. The examiner's manual reports a series of correlational studies involving other cognitive and academic achievement batteries.

Bracken and McCallum (1998) explored the UNIT's convergent validity by comparing the battery to the Wechsler Intelligence Scale for Children - Third Edition (WISC-III); the Tests of Cognitive Ability of the Woodcock-Johnson Psycho-Educational Battery-Revised (WJ-R); the Bateria Woodcock-Munoz (Bateria-R); and nonverbal batteries such as the Standard Progressive Matrices (Raven's SPM), the Matrix Analogies Test (MAT), and the TONI-2. Relative to the WISC-III, the UNIT produced very similar overall (FSIQ) scores. For example, within a sample of students with documented learning disabilities, the UNIT Standard Battery produced an overall FSIQ of 91.69, highly comparable to the WISC-III's FSIQ of 92.44. Further, to determine how culture may influence scores on the WISC-III and the UNIT, Bracken and McCallum (1998) tested a sample of Native Americans using both batteries. Not surprisingly, when comparing the WISC-III's Verbal IQ (VIQ), Performance IQ (PIQ), and FSIQ, the VIQ was the lowest of the scores. The UNIT, as predicted, produced an FSIQ score (99.76) that was comparable to the WISC-III's PIQ score of 103.06 (Bracken & McCallum, 1998).

In a similar study, Bracken and McCallum (1998) compared the UNIT to the WJ-R cognitive battery. The correlation of the UNIT Standard Battery FSIQ with the WJ-R Broad Cognitive Ability score was .83. In addition, the overall FSIQ score of the UNIT (102.59) was on par with the WJ-R Broad Cognitive Ability score (105.31). However, when the test developers compared the UNIT to the Bateria-R (the Spanish version of the WJ-R), scores were not comparable.
The study used two samples of students whose primary language was Spanish: the first sample consisted of students receiving English language services (high English proficiency), while the second was receiving bilingual educational classes (LEP). The overall Broad Cognitive Ability score was 75.44 for the bilingual sample and 62.81 for the sample receiving English language services. On the UNIT Standard Battery, by contrast, FSIQ scores were 92.30 and 95.54 for the bilingual and English-instruction samples, respectively. The developers of the UNIT observed a high degree of variability within the Bateria-R but very little variability within the UNIT. This finding suggests that the Bateria-R may not fully capture the true ability of these students (Bracken & McCallum, 1998).

Because the UNIT is a nonverbal instrument that captures intelligence, Bracken and McCallum (1998) also compared the UNIT to the MAT and the Raven's SPM. The MAT and Raven's SPM are both nonverbal instruments but differ in that they measure nonverbal intelligence using only matrices, while the UNIT uses multiple dimensions of intelligence to produce an overall general intelligence score. Within the study, individuals from Ecuador were administered all three nonverbal measures of intelligence. All three instruments produced similar overall scores, ranging from 98.07 to 101.89. The advantage of the UNIT, as opposed to the MAT or the Raven's SPM, is that it yields comprehensive information regarding intelligence, as it produces an overall intelligence quotient (Bracken & McCallum, 1998). In addition to demonstrating strong concurrent validity with other highly regarded intelligence batteries, the UNIT demonstrates adequate predictive validity in relation to academic achievement.
When a sample of gifted students was administered both the UNIT and the WJ-R Achievement test, results suggested that the UNIT is a strong predictor of the Broad Mathematics, Broad Knowledge, and Skills clusters. Relative to the WISC-III, the UNIT was shown to be the more accurate predictor of WJ-R Achievement performance, as the UNIT Standard Battery FSIQ corresponded more closely to WJ-R Achievement scores than did the WISC-III (Bracken & McCallum, 1998).

An additional study examined the predictive utility of the UNIT and the Bateria-R relative to reading ability, as measured by the Woodcock Language Proficiency Battery - Revised (WLPB-R). The study used two samples: the first contained students receiving bilingual educational services, while the second contained students receiving English language instruction. The results indicated that both the UNIT and the Bateria-R are more accurate predictors of reading achievement for students who are LEP than for those with higher levels of English proficiency (Bracken & McCallum, 1998).

In another study, Bracken and McCallum (1998) compared the UNIT and the TONI-2 to determine their predictive validity for reading comprehension and math performance, as measured by the PIAT-R. The results demonstrate that the UNIT is a stronger predictor than the TONI-2 of reading and math achievement as measured by the PIAT-R. For example, the predictive correlation coefficients between the UNIT Standard Battery FSIQ and the reading comprehension and math subtests of the PIAT-R were .53 and .40, respectively. The TONI-2, by contrast, produced predictive correlation coefficients of only .18 and .27 for the same subtests (Bracken & McCallum, 1998).
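One way to interpret these predictive coefficients is through the coefficient of determination, r squared, which expresses the proportion of achievement variance accounted for by the intelligence score. The following is a worked illustration using the reading comprehension coefficients cited above, not an additional result from the manual.

\[
r^2_{\mathrm{UNIT}} = (.53)^2 \approx .28 \qquad\qquad r^2_{\mathrm{TONI\text{-}2}} = (.18)^2 \approx .03
\]

By this arithmetic, the UNIT Standard Battery FSIQ accounted for roughly 28 percent of the variance in PIAT-R reading comprehension, whereas the TONI-2 accounted for only about 3 percent, underscoring the practical size of the difference between the two batteries.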
The examiner's manual provides a wealth of information demonstrating sound psychometric properties across multiple aspects of reliability and validity. The test developers made a conscious effort to remove tasks on which scores were influenced primarily by administrative complications, previously acquired information, or cultural experiences rather than by the ability to reason through novel tasks. Moreover, not only does the UNIT adequately include racial and ethnic minorities within its standardization sample, but the test was also piloted extensively with specialized populations, such as students who speak a language other than English, students receiving bilingual educational services, and students receiving primarily English language instruction (Bracken & McCallum, 1998).

In sum, the utility of intelligence testing has evolved from separating those once considered mentally retarded from those with superior intelligence to providing psychoeducational information used to determine strengths, weaknesses, and educational placements (Boake, 2002). With the explosion of intelligence testing, American schools have become the largest consumers of intelligence tests (Valencia & Suzuki, 2001). Equally explosive has been the growth in the number of school-age students who are CLD (Kindler, 2002; U.S. Department of Education, 2008; U.S. Census Bureau, 2009; Aud, Fox, & KewalRamani, 2010). With heavy emphasis placed upon cognitive assessment, coupled with the diverse nature of today's student body, school psychologists face a number of barriers when assessing ELL students (Bainter & Tollefson, 2003; Kranzler, Flores, & Coady, 2010; McCloskey & Athanasiou, 2000; O'Bryon & Rogers, 2010; Ochoa et al., 2005; Valencia & Suzuki, 2001). Although assessment frameworks have been provided, there remains a disconnect between research and application for a multitude of reasons. Because intelligence testing will remain a valuable tool within schools, school psychologists should be familiar with the range of nonverbal intelligence tests available when assessing the intelligence of an ELL student (Garcia-Vazquez, Crespi, & Riccio, 2010).
Although many nonverbal batteries exist, some purport to measure only nonverbal intelligence, while others tap general intelligence. Within the arena of nonverbal instruments that tap general intelligence, the UNIT remains a reliable and valid battery that is psychometrically supported for use with CLD children (Bracken & McCallum, 1998).
Chapter III: Summary, Critical Analysis, and Recommendations

This chapter will include a summary of the literature review presented in Chapter Two: the evolution of intelligence testing, the changing demographics of CLD students, landmark cases and ethical standards, possible frameworks for culturally sensitive assessment, barriers hindering best practice in cognitive assessment, available nonverbal and norm-referenced cognitive batteries, and the use of the UNIT. Additionally, a critical analysis of the literature review will be provided, and recommendations will be addressed to school psychologists in training, practitioners, and those who train school psychologists.

Summary

Early measures of intelligence were employed to differentiate those with severe cognitive impairments from those who were intellectually superior (Kaufman, 2000). The function intelligence tests serve has evolved rapidly: from differentiating those with inferior intelligence from those with superior intelligence, to aiding in screening procedures for military personnel, to allowing clinical psychologists to locate specific mental processes with precision, and, most recently, to serving as a critical tool for school psychologists (Boake, 2002; Kaufman & Lichtenberger, 2006; Sattler, 2008). School psychologists utilize intelligence tests to gather information that can inform diagnoses, guide intervention planning, and determine access to special education services. Because intelligence testing has become so prominent in today's schools, school psychologists are currently the greatest users of intelligence tests (Valencia & Suzuki, 2001). Ample research (Camara, Nathan, & Puente, 2000; Groth-Marnat, 2003; Kaufman & Lichtenberger, 2006; Stinnett, Havey, & Oehler-Stinnett, 1994) has documented that the most frequently used intelligence tests continue to be those of the Wechsler series.
However, administering an intelligence test that contains cultural and linguistic bias may engender a series of negative implications for CLD students.

The racial, ethnic, and linguistic composition of the United States has shifted monumentally, from a decreasing White population to a starkly increasing CLD population (Aud, Fox, & KewalRamani, 2010). More specific to the student body, the total number of LEP students has spiked nearly 53 percent, and nearly 1 out of 2 teachers reported serving an LEP student in the classroom (U.S. Department of Health and Human Services, 2003; NCELA, 2010). In many schools, the CLD population the school psychologist serves is not the minority but rather the majority. Aud, Fox, and KewalRamani (2010) report that CLD students are the majority in nearly 40 percent of the nation's 20 largest school districts.

Given the sharp increase of the CLD population and the popularity of intelligence testing, a number of historic landmark cases and ethical standards have been brought about to ensure culturally sensitive practices. Legal cases such as Hobson v. Hansen (Harvard Law Review, 1968), Diana v. State Board of Education (Childs, 1990), and Larry P. v. Riles (Jacob & Hartshorne, 2007) all involved the misuse of intelligence testing with CLD students. Such influential cases have prompted the professional bodies that govern practicing school psychologists to produce a series of ethical standards and principles. The National Association of School Psychologists (NASP, 2010) has continued to create and revise ethical principles for practicing school psychologists to adhere to when working with CLD students. Similarly, the American Educational Research Association (AERA), the American Psychological Association (APA), and the National Council on Measurement in Education (NCME) have created a joint set of principles for school psychologists to adhere to when conducting cognitive assessments with CLD students (Standards for Educational and Psychological Testing, 1999).
Although a series of ethical principles and standards has long been established, research has documented that a significant number of school psychologists face barriers that inhibit best practice when conducting cognitive assessments (McCloskey & Athanasiou, 2000; Valencia & Suzuki, 2001; Bainter & Tollefson, 2003; Ochoa et al., 2005; Kranzler, Flores, & Coady, 2010; O'Bryon & Rogers, 2010). Bainter and Tollefson (2003) found that there is no clear agreement among school psychologists as to what signifies best practice when administering cognitive assessments to CLD students. Research by Ochoa, Rivera, and Ford (1997) and McCloskey and Athanasiou (2000) has revealed that a significant portion of school psychologists appear to lack confidence and competence when assessing CLD students. Further troubling is that many school psychologists lack knowledge regarding second language acquisition, fail to receive adequate graduate training to assess CLD students, and use a number of unsupported practices when conducting language proficiency assessments (Ochoa, Rivera, & Ford, 1997; Rhodes, Ochoa, & Ortiz, 2005; Vazquez-Nuttall et al., 2007; O'Bryon & Rogers, 2010).

Despite these barriers, school psychologists can choose from a variety of cognitive assessment batteries and utilize culturally sensitive frameworks when conducting cognitive assessments with CLD students. School psychologists have the freedom to select from a host of nonverbal assessment batteries, including the KABC-II, CTONI, TONI-3, NNAT, Leiter-R, Raven's, WNV, and the UNIT (Garcia-Vazquez, Crespi, & Riccio, 2010; Sattler, 2008). In addition to nonverbal measures of intelligence, a cross-battery approach can be used when assessing CLD students. Using a 3x3 matrix, selected subtests are placed within cells ranging from low to moderate to high cultural and linguistic loadings (Vazquez-Nuttall et al., 2007).
This innovative approach allows school psychologists to determine whether a child's performance is due to cultural and linguistic factors or is a valid indicator of the child's cognitive processes. In addition to the cross-battery approach, Rhodes, Ochoa, and Ortiz (2005) have created the MAMBI, a culturally sensitive framework for school psychologists to follow when assessing CLD students. The MAMBI framework is comprehensive in that it evaluates a child's language proficiency in both languages, using formal and informal measures, and accounts for the child's grade, the instructional language(s) in which the child was educated, and the child's CALP profile. Once this information is gathered, the school psychologist is directed to assess the child in L1, in L2, or with a nonverbal measure of intelligence (Rhodes, Ochoa, & Ortiz, 2005).

Lastly, the UNIT is a promising nonverbal measure of intelligence that school psychologists may use when conducting cognitive assessments with CLD students. Bracken and McCallum (1998) have demonstrated that the UNIT is a reliable and valid measure of intelligence. The battery was piloted using special populations, such as students receiving English instruction only, bilingual students, and students who spoke only a language other than English (Bracken & McCallum, 1998). In addition, the UNIT is favored over many nonverbal measures because it captures general intelligence as opposed to nonverbal intelligence alone (Bracken & McCallum, 1998).

Administering intelligence tests has long been a staple of school psychology and will remain a critical assessment tool. Patterns and trends demonstrate that the nation's schools are increasingly diverse in regards to race, ethnicity, culture, and language. This diverse student population has created a number of variables that challenge school psychologists to adhere to best practices when conducting assessments with CLD students.
Despite these numerous challenges, alternative frameworks can guide school psychologists toward culturally valid cognitive assessments. One valid tool school psychologists may utilize when conducting cognitive assessments with CLD students is the UNIT.

Critical Analysis

The research highlighted in this literature review indicates that cognitive assessment, especially with CLD students, has progressed greatly. Garcia-Vazquez, Crespi, and Riccio (2010) have demonstrated that school psychologists have a wide array of nonverbal cognitive assessments to draw from. However, although a select number of nonverbal batteries appear to be valid indicators of cognition, such batteries do not encompass as many facets of intelligence as a Wechsler-series battery would. Like nonverbal batteries, cognitive assessments administered in other languages have expanded. Instruments such as the WISC-IV Spanish (Wechsler, 2005) are a promising sign that the testing arena is moving beyond Eurocentric, English-based assessments. Researchers (Rhodes, Ochoa, & Ortiz, 2005) have documented that culturally and linguistically loaded tests are not appropriate for many CLD students, yet batteries standardized on bilingual students remain scarce.

Additionally, a wealth of research exists regarding the pervasive barriers school psychologists face when assessing CLD students' cognition. McCloskey and Athanasiou (2000) have demonstrated that a significant portion of school psychologists perceive themselves as lacking the competence and requisite skills to assess CLD students. Further research needs to examine what is currently being done, and what can be done, to alleviate this dilemma. Finally, culturally sensitive frameworks such as the cross-battery approach and the MAMBI have been provided, but little if any research has applied these frameworks, replicated the results, and shown them to be valid. In fact, Kranzler, Flores, and Coady (2010) have shown that some of the most novel approaches and frameworks for assessing LEP students have been invalid.
There is an abundance of literature describing, in great detail, which practices are unethical, ineffective, and invalid when assessing CLD students; however, the body of research establishing assessment practices for CLD students that are as valid as those available for their Caucasian peers remains scant.

Recommendations

The following recommendations are suggested for further research regarding culturally sensitive cognitive assessment practices with CLD students:

1. Because a significant number of practicing school psychologists perceive themselves as low in confidence and competence when working with CLD students, an increase in training workshops and professional development opportunities is needed to equip practitioners with the requisite skills to provide culturally sensitive assessments.

2. Controversy continues to surround second language acquisition. Specifically, many school psychologists are unfamiliar with second language acquisition theory and lack the knowledge of how to assess it. It is therefore recommended that school psychology training programs incorporate the theory and assessment of second language acquisition.
3. A number of school psychologists have reported that their graduate training programs did not provide adequate instruction in bilingual assessment or ample experience working with CLD students. Because of this, further research is needed to determine what opportunities and experiences training programs provide for work with CLD students, as well as the breadth and depth of curriculum targeting culturally sensitive evaluations.

4. Although many school psychology training programs provide at least one class on multiculturalism, this alone does not adequately prepare school psychologists-in-training to work with CLD clients. Training programs need not only to infuse culturally sensitive practices within current curriculum, but also to provide field-based, extended practicum opportunities where students can directly assess the needs of CLD students and receive feedback from trained supervisors.

5. Culturally sensitive assessments standardized on demographics matching those of the child the school psychologist intends to assess are extremely rare or possess weak psychometric properties. Research is needed to develop and identify valid, culturally sensitive instruments to assist school psychologists who assess the language, academic, and cognitive competencies of CLD students.

6. The demographics of school-age students are extremely diverse. However, the racial and ethnic makeup of practicing school psychologists does not mirror this diversity. Further research is needed to determine what barriers prevent CLD individuals from becoming students of school psychology or instructors in school psychology training programs. Relatedly, there is a need to determine best practices for recruiting and retaining students of color in training programs and in academia in particular.
References

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). Standards for educational and psychological testing. Washington, DC: Author.

American Psychological Association. (2002). Ethical principles of psychologists and code of conduct. Washington, DC: Author.

Artiles, A. J., & Ortiz, A. A. (2002). English language learners with special education needs. Retrieved from http://www.misd.net/bilingual/ellsandspedcal.pdf

Artiles, A. J., Rueda, R., Salazar, J. J., & Higareda, I. (2005). Within-group diversity in minority disproportionate representation: English language learners in urban school districts. Council for Exceptional Children, 71, 282-300.

Athanasiou, M. S. (2000). Current nonverbal assessment instruments: A comparison of psychometric integrity and test fairness. Journal of Psychoeducational Assessment, 18, 211-229. doi: 10.1177/07342829001800302

Aud, Fox, & KewalRamani. (2010). Status and trends in the education of racial and ethnic groups. Washington, DC: U.S. Government Printing Office.

Bainter, T. R., & Tollefson, N. (2003). Intellectual assessment of language minority students: What do school psychologists believe are acceptable practices? Psychology in the Schools, 40, 599-603. doi: 10.1002/pits.10131

Boake, C. (2002). From the Binet-Simon to the Wechsler-Bellevue: Tracing the history of intelligence testing. Journal of Clinical and Experimental Neuropsychology, 24, 383-405.

Bracken, B., & McCallum, R. S. (1998). Universal Nonverbal Intelligence Test: Examiner's manual. Itasca, IL: Riverside.
Camara, W. J., Nathan, J. S., & Puente, A. E. (2000). Psychological test usage: Implications in professional psychology. Professional Psychology: Research and Practice, 31, 141-154. doi: 10.1037//0735-7028.31.2.141

Childs, R. (1990). Legal issues in testing. Retrieved from EBSCOhost.

Colorado Department of Education. (1999). Special education for culturally and linguistically diverse (CLD) students: Meeting the challenges, realizing the opportunities. Retrieved from http://www.cde.state.co.us/cdesped/download/pdf/cld_all.pdf

Coutinho, M. J., & Oswald, D. P. (2000). Disproportionate representation in special education: A synthesis and recommendations. Journal of Child and Family Studies, 9, 135-156.

Cummins, J. (1984). Bilingualism and special education: Issues in assessment and pedagogy. San Diego, CA: College-Hill Press.

Curtis, M. J., Lopez, A. D., Castillo, J. M., Batsche, G. M., Minch, D., & Smith, J. C. (2008). The status of school psychology: Demographic characteristics, employment conditions, professional practices, and continuing professional development. Communique, 36, 27-29.

Fives, C. J., & Flanagan, R. (2002). A review of the Universal Nonverbal Intelligence Test (UNIT): An advance for evaluating youngsters with diverse needs. School Psychology International, 23, 425-448.

Frisby, C. L. (2008). Academic achievement testing for culturally diverse groups. In L. A. Suzuki & J. G. Ponterotto (Eds.), Handbook of multicultural assessment: Clinical, psychological, and educational applications (pp. 520-541). San Francisco, CA: John Wiley & Sons, Inc.
Garcia-Vazquez, E., Crespi, T. D., & Riccio, C. A. (2010). Handbook of education, training, and supervision of school psychologists in school and community, Volume I: Foundations of professional practice. New York, NY: Routledge.

Groth-Marnat, G. (2003). Handbook of psychological assessment (4th ed.). Hoboken, NJ: John Wiley & Sons, Inc.

Guiberson, M. (2009). Hispanic representation in special education: Patterns and implications. Preventing School Failure, 53, 167-176.

Hobson v. Hansen. (1968). Judicial supervision of the color-blind school board. Harvard Law Review, 81, 1511-1527.

Hu, S., & Oakland, T. (1991). Global and regional perspectives on testing children and youth: An empirical study. International Journal of Psychology, 26, 329-344.

Jacob, S., & Hartshorne, T. S. (2007). Ethics and law for school psychologists (5th ed.). Hoboken, NJ: John Wiley & Sons, Inc.

Kaufman, A. S. (2000). Intelligence tests and school psychology: Predicting the future by studying the past. Psychology in the Schools, 37, 7-16.

Kaufman, A. S., & Lichtenberger, E. O. (2006). Assessing adolescent and adult intelligence (3rd ed.). Hoboken, NJ: John Wiley & Sons, Inc.

Kindler, A. (2002). Survey of the states' limited English proficient students and available educational programs and services: 2000-2001 summary report. Washington, DC: Author.

Kranzler, J. H. (1997). Educational and policy issues related to the use and interpretation of intelligence tests in the schools. School Psychology Review, 26, 150-162.
Kranzler, J. H., Flores, C. G., & Coady, M. (2010). Examination of the cross-battery approach for cognitive assessment of children and youth from diverse linguistic and cultural backgrounds. School Psychology Review, 39, 431-446.

McCloskey, D., & Athanasiou, M. (2000). Assessment and intervention practices with second-language learners among school psychologists. Psychology in the Schools, 37, 209-225.

Merrell, K. W., Ervin, R. A., & Gimpel, G. A. (2006). School psychology for the 21st century. New York, NY: The Guilford Press.

National Association of School Psychologists. (2010). Principles for professional ethics. Bethesda, MD: Author.

National Clearinghouse for English Language Acquisition. (2010). The growing number of English learner students [Data set]. Retrieved from http://www.ncela.gwu.edu/files/uploads/9/growingLEP_0708.pdf

Newell, M. L., Nastasi, B. K., Hatzichristou, C., Jones, J. J., Schanding, G. T., & Yetter, G. (2010). Evidence on multicultural training in school psychology: Recommendations for future directions. School Psychology Quarterly, 25, 249-278. doi: 10.1037/a0021542

No Child Left Behind Act of 2001, Pub. L. No. 107-110, 115 Stat. 1425 (2002).

O'Bryon, E. C., & Rogers, M. R. (2010). Bilingual school psychologists' assessment practices with English language learners. Psychology in the Schools, 47, 1018-1034. doi: 10.1002/pits.20521

Ochoa, S. H., Riccio, C., Jimenez, S., Garcia de Alba, R., & Sines, M. (2005). Psychological assessment of English language learners and/or bilingual students: An investigation of school psychologists' current practices. Journal of Psychoeducational Assessment, 22, 185-208.
Ochoa, S. H., Rivera, B., & Ford, L. (1997). An investigation of school psychology training pertaining to bilingual psycho-educational assessment of primarily Hispanic students: Twenty-five years after Diana v. California. Journal of School Psychology, 35, 329-349.

Rhodes, R. L., Ochoa, S. H., & Ortiz, S. O. (2005). Assessing culturally and linguistically diverse students: A practical guide. New York, NY: The Guilford Press.

Salend, S. J., & Duhaney, M. G. (2005). Understanding and addressing the disproportionate representation of students of color in special education. Intervention in School and Clinic, 40, 213-221.

Sattler, J. M. (2008). Assessment of children: Cognitive foundations (5th ed.). La Mesa, CA: Jerome M. Sattler, Publisher, Inc.

Schon, J., Shaftel, J., & Markham, P. (2008). Contemporary issues in the assessment of culturally and linguistically diverse learners. Journal of Applied School Psychology, 24, 163-189.

Sepulveda, J. (2010). Striving for excellence in Hispanic education. National Civic Review, 99(2), 15-20.

Stinnett, T. A., Havey, M. J., & Oehler-Stinnett, J. (1994). Current test usage by practicing school psychologists: A national survey. Journal of Psychoeducational Assessment, 12, 331-350.

Suzuki, L. A., Prevost, L., & Short, E. L. (2008). Multicultural issues and the assessment of aptitude. In L. A. Suzuki & J. G. Ponterotto (Eds.), Handbook of multicultural assessment: Clinical, psychological, and educational applications (pp. 490-519). San Francisco, CA: John Wiley & Sons, Inc.
Thorndike, R. M. (1997). Measurement and evaluation in psychology and education (6th ed.). Upper Saddle River, NJ: Prentice-Hall Inc.

Tomes, Y. I. (2010). Culture and psychoeducational assessment: Cognition and achievement. In E. Garcia-Vazquez, T. D. Crespi, & C. A. Riccio (Eds.), Handbook of education, training, and supervision of school psychologists in school and community, Volume 1: Foundations of professional practice (pp. 167-184). New York, NY: Routledge.

U.S. Census Bureau. (2009). American community survey [Data file]. Retrieved from http://factfinder.census.gov/servlet/DTTable?_bm=y&-geo_id=01000US&-ds_name=ACS_2009_5YR_G00_&-mt_name=ACS_2009_5YR_G2000_B02001

U.S. Department of Education, & U.S. Department of Health and Human Services. (2003). National symposium on learning disabilities in English language learners. Washington, DC: Author.

U.S. Department of Education, National Center for Education Statistics. (2008). Language minority school-age children (NCES 2008-031). Washington, DC: Author.

U.S. Department of Education, National Center for Education Statistics. (2009). Characteristics of public, private, and Bureau of Indian Education elementary and secondary schools in the United States (NCES 2009-321). Washington, DC: Author.

Valencia, R. R., & Suzuki, L. A. (2001). Intelligence testing and minority students: Foundations, performance factors, and assessment issues. Thousand Oaks, CA: Sage Publications.
Vazquez-Nuttall, E., Li, C., Dynda, A. M., Ortiz, S. O., Armengol, C. G., Walton, J. W., & Phoenix, K. (2007). Cognitive assessment of culturally and linguistically diverse students. In G. B. Esquivel, E. C. Lopez, & S. Nahari (Eds.), Handbook of multicultural school psychology: An interdisciplinary perspective (pp. 265-288). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.

Wechsler, D. (2005). Wechsler Intelligence Scale for Children - Fourth Edition Spanish: Examiner's manual. San Antonio, TX: Harcourt Assessment.

Zehler, A. M., Fleischman, H. L., Hopstock, P. J., Stephenson, T. G., Pendzick, M. L., & Sapru, S. (2003). Descriptive study of services to LEP students and LEP students with disabilities. Retrieved from http://www.ncela.gwu.edu/files/rcd/BE021199/special_ed4.pdf
