National Certification in Unified English Braille: Validation Report

By Edward C. Bell, Ph.D.

Edward C. Bell is the Director of the Professional Development and Research Institute on Blindness, Louisiana Tech University.

Abstract

Unified English Braille (UEB) became the standard for literary braille in the United States in January 2016. In the process of this transition, myriad stakeholders have been and continue to be engaged in all aspects of producing, procuring, preparing professionals in, and teaching children and adults the UEB code. The National Blindness Professional Certification Board is one such organization, whose mission is to develop certification standards to measure proficiency in UEB for teachers of the code. The National Certification in Unified English Braille (NCUEB) was created for that purpose. This manuscript (a) describes the steps that have been taken to test the efficacy of the NCUEB exam, (b) presents data generated by 83 individuals who took the exam between January and July 2015 across eleven different testing venues, and (c) provides a statistical report on the internal consistency, generalizability, content validity, and criterion referencing for this examination. Results suggest that the NCUEB is valid and appropriate for its stated purpose.

Keywords

Unified English Braille, UEB, Proficiency, Certification Standards, National Certification, NCUEB

Introduction

The declining literacy rate of blind and visually impaired persons in the United States is a well-documented problem which is also associated with higher rates of unemployment (Amato, 2002; Frieman, 2004; Ryles, 1996; Toussaint & Tiger, 2010). Fewer students classified as blind/visually impaired are being taught braille as a primary reading medium, in part because more teachers are choosing to use large print, audible materials, and other technology in place of braille for students with some residual vision (Amato, 2002; Bell, Ewell, & Mino, 2013; Frieman, 2004; Ryles, 1996). This trend holds equally true for adults who are blind (Bell & Mino, 2013; Mullen, 1990).

The shift away from braille as a primary reading medium is significant. Toussaint and Tiger (2010) stated, “According to the National Braille Press only 12% of legally blind individuals can read braille, in contrast to 50% of blind individuals who could do so in the 1960s” (p. 181). In their 2012 Annual Report, the American Printing House for the Blind (APH, 2012) reported that less than 9% of K-12 students receiving quota funds used braille as their primary reading medium. Teachers proficient in braille are desperately needed to reverse this trend.

Approximately 20% of legally blind adults in rehabilitation centers are currently taught braille (Ponchillia & Durant, 1995), and only 15% of those individuals who are learning braille receive instruction on a daily basis (Pester, 1993). Ponchillia and Durant (1995) surveyed 130 braille instructors and found that 50% of instructors taught braille to fewer than 1 in 5 clients; only 19% taught braille to all their clients. Half of the instructors taught only uncontracted braille, and 55% of those who taught contracted braille offered it to fewer than 1 in 5 clients. These figures speak volumes about the lack of emphasis placed on braille literacy for children and adults, despite the data that show its importance to literacy and employment.

In late 2012, the Braille Authority of North America, the official governing body for braille in the United States, adopted the Unified English Braille (UEB) code. As of January 4, 2016, UEB is the recognized standard code for literary braille in the United States, replacing what had been referred to as English Braille, American Edition (EBAE). This requires retraining for all current braille readers and all individuals whose job it is to teach, produce, procure, grade, and/or support braille instruction.

The adoption of UEB represents the most significant set of changes ever made to braille. Many professionals and consumers are extremely anxious about their ability to read and write the new code. Previously, there have been demonstrated problems with the erosion and/or lack of maintenance of professionals’ braille skills. Mullen (1990) noted that a leading cause of the shortage of qualified teachers stemmed from inconsistent preparation across various programs, including a lack of standardized tests for assessing teacher competency and a lack of standards by which instructors were held accountable. Mullen concluded that, in order to remedy training program variability, supplemental training programs were needed to provide advanced training, retraining, and feedback to in-service teachers. Many teachers providing services to individuals with visual impairments have been trained in an unrelated area of special education or in general elementary education and have become certified with as few as three blindness-specific courses (Mullen, 1990). Still others enroll in programs because certification is a required component of their continued employment, yet they are not able to dedicate the time and effort necessary to effectively learn the braille code (Amato, 2009). The same trend remained in evidence in 2013 (Bell, Ewell, & Mino, 2013).

Braille proficiency standards vary widely from state to state, as Frieman (2004) has reported. At that time, 19 states required teacher candidates to graduate from an appropriate program in order to qualify for positions teaching blind or visually impaired youth. Seven states required candidates to have a generic degree in special education with no mention of a course or competency in braille. Twenty-four states required candidates to complete courses in order to earn an endorsement, but how many courses, what those courses covered, and whether candidates had to pass a braille competency exam varied by state. Frieman (2004) concluded that:

Today principals have no guarantee that a candidate with formal credentials from a state will be fluent in Braille. Administrators need to ensure that every candidate hired to work with children who are visually impaired has the skills to teach Braille (Action for Administrators section, para. 2).

Currently, there are three national certification processes related to braille competency. The National Library Service for the Blind and Physically Handicapped, through the National Federation of the Blind, offers courses leading to certification for transcribers of braille. The National Braille Association provides certification specific to the formatting of braille textbooks. The National Certification in Unified English Braille (NCUEB), administered by the National Blindness Professional Certification Board (NBPCB), demonstrates that teachers and others are competent in the braille code.

While it is true that many individual states and regional organizations also administer braille certification tests, the standards for braille proficiency are unfortunately quite varied and diverse, with some states holding high requirements for demonstration of braille proficiency while others have virtually none. Although January 2016 had been established as the “tipping point” by which UEB should be fully implemented, the course of this conversion will clearly be more of a drunken meandering than a precise delineation to the sole use of UEB. There must be a concrete method of determining who has developed the competencies needed to teach or produce correct UEB.

One nation-wide standard has the benefit of ensuring unequivocally that a professional can demonstrate a minimum knowledge and skill in the use of the UEB code. This is especially true given that most professionals in the field were trained prior to the adoption of UEB; certification is therefore even more important for demonstrating professional qualifications in reading and writing the braille code. Such a standard assures consumers and their families that those teaching or producing braille have met a minimum threshold regardless of specific locale throughout the country. Establishing this benchmark through the NCUEB exam is the express purpose of the National Blindness Professional Certification Board (F. Schroeder, personal communication, February 15, 2015). This study analyzed the procedures and outcomes of the validation process for this exam and their implications.

The NBPCB reported in 2015 that 257 individuals were at that time certified in literary braille through their organization, representing 64% of those who sat for the exam (M. Morais, personal communication, October 6, 2015). This means that at least 36% of individuals who voluntarily sought braille certification were unable to successfully complete all sections of the exam on standard literary braille (EBAE), the very code that had been used in the United States since 1932. With every person who teaches braille in this country needing to learn the changes brought by UEB, this statistic will worsen dramatically if a coordinated training and standards-based evaluation solution is not implemented.

The National Certification in Literary Braille (NCLB) was based on the National Literary Braille Competency Test (NLBCT). Waugh (2008) demonstrated that the NLBCT had acceptable content validity and internal consistency and was a valid measure for teachers of the braille code. The NBPCB took seriously its obligation to continue monitoring the data from exam administrations to ensure that the test remained stable across administrations. Analysis of the data obtained during administrations of the NLBCT demonstrated that the reliability and validity of the examination remained as high as or higher than during the National Federation of the Blind pilot test (Bell, 2010). At the time of that writing, complete data were available on 149 applicants who took the initial examination or retook an examination. It was important to assess overall content validity by examining the correlations among the four sections of the test (braillewriter, slate and stylus, proofreading, and multiple choice). For these 149 examinations, the four sections of the NCLB were more highly intercorrelated than in the pilot test, ranging from r = .53 to r = .71, and all correlations were significant at p < .01 (Bell, 2010).

These administrations were spread out over four different versions (or forms) of the examination: the original Forms A, B, and C, which were used in the pilot test, and Form D, which was created in 2009 on the basis of the blueprint. When the individual forms were examined more closely, it was found that each form remained independently intercorrelated. Specifically, the correlations for the four sections of Form A (n = 47) were between r = .50 and r = .66 and were significant at p < .01; for Form B (n = 44), the correlations ranged from r = .51 to r = .73 and were significant at p < .01; for Form C (n = 40), the correlations ranged from r = .39 to r = .79 and were significant at p < .05; and for Form D (n = 17), the correlations ranged from r = .50 to r = .89 and were significant at p < .05 (Bell, 2010). These data will continue to be tracked throughout this project to ensure that the NCUEB exam remains the primary national standard that is reliable and valid for teachers and instructors.

The NCUEB was created by (1) converting existing forms of the exam over to UEB, (2) utilizing the blueprint matrix to compare the conversion to established protocol for validation, (3) reviewing and revising items with the Subject Matter Experts (SMEs), and (4) making modifications where necessary.

The NCUEB consists of three sections: (1) Braille Transcription using a Perkins braillewriter; (2) Proofreading, in which embedded errors must be accurately identified; and (3) Multiple Choice, which tests the applicant’s knowledge of braille rules. The NBPCB dropped the slate and stylus section of the certification exam because it duplicated the braillewriter section as a means of determining whether test takers could demonstrate their knowledge of the braille code. This should in no way be construed as diminishing the importance of teaching both children and adults to use a slate. Although a large number of the existing certification tests currently on the market have been developed for use by sighted teachers of the braille code, and hence are very often inaccessible to professionals who are themselves blind, the NCUEB has maintained its commitment to ensuring that all materials are completely accessible to both sighted and blind test takers.

In January 2015, the NCUEB was made ready for pilot testing, and a research protocol was adopted to enlist participants, provide training, assess readiness, and administer the NCUEB exam. Between January and July 2015, close to 100 individuals agreed to participate in this validation research. Those individuals were spread across eleven different testing venues representing more than a dozen states. What follows are the findings of that research and the factors evaluated in assessing the reliability and validity of the NCUEB.

Method

Participants

After all data were compiled, usable data were retained on 83 individuals with an average age of 42.82 years (SD = 13.60), ranging from 18 to 72 years. These individuals were 66 females (79.5%) and 17 males (20.4%). They included five African Americans (6%), three Asian/Pacific Islanders (3.6%), seven Hispanics/Latinos (8.54%), one Native American (1%), 65 Whites/Caucasians (79.3%), and five who did not report a racial identification. The sample consisted of 52 blind persons (62.6%) and 31 individuals who reported being fully sighted (37.3%).

Instrumentation

The validation study of the NCUEB exam involved two sets of instrumentation: (1) the NCUEB exam itself and (2) the participant preparation survey. The NCUEB exam consists of three sections: (a) braille writing, using a braillewriter, for which participants were given 90 minutes to transcribe print (or uncontracted braille) into Unified English Braille; (b) proofreading, for which participants were given 90 minutes to read a four-page, single-sided UEB passage containing embedded errors, along with a print (or uncontracted braille) corrected reference passage, and were required to circle all errors; and (c) a multiple choice section, for which participants were given 45 minutes to answer questions about braille rules, proper word formation, and formatting. There are currently three separate forms of the NCUEB exam; participants were given one of the three forms depending on the testing venue where they participated. The participant preparation survey was provided to individuals via an email link to a Survey Monkey form that asked questions about each individual’s demographic background; professional capacity as it relates to the learning and teaching of braille; training and certification; and the amount and type of preparation they had in studying UEB generally and for this test specifically.

Procedure 

All individuals who participated in one of the Unified English Braille Overview Workshops sponsored by the NBPCB were contacted and asked to participate in this study. Additionally, individuals from two university preparation programs who were learning UEB were contacted, along with other individuals who had previous contact with the NBPCB. Individuals were sent an invitation letter asking for their participation in this research. In order to participate, those individuals had to (a) agree to study UEB during spring 2015; (b) commit to at least one of the identified testing venues; (c) track the hours and types of studying they were doing; and (d) complete the study preparation survey. For their efforts, study participants were provided with study materials valued at $50, links and information for additional resources, and one administration of the official NCUEB examination, valued at $250.

Results

Participant Diversity

Individuals who participated in this initial validation study of the NCUEB attended one of eleven different testing venues spread across the United States. Participants resided in more than 46 different cities within 15 different states. When participants were asked about their professional capacity as it relates to braille, they responded as follows: seven were blind consumers learning braille (10%); 18 were braille instructors for blind adults (25%); 16 were persons who produce, transcribe, or procure braille (22%); 24 were teachers of blind or visually impaired students (33%); four were university faculty/braille trainers (5%); and four were university students currently learning braille (5%). When participants were asked how they learned UEB, they responded as follows: 20 were consumers of braille who were primarily self-taught (26%); 46 were professionals who were primarily self-taught, including through workshops (59%); three were students in a rehabilitation center (4%); four were students in a university braille class (5%); and five were taught in some other training format (6%).

Participants were asked what resources they used in learning the UEB code. Many individuals used more than one method, including:

  • 13 used The McDuffy Reader: A Braille Primer for Adults;
  • 12 used Ashcroft’s Programed Instruction: Unified English Braille;
  • 57 participated in one or more UEB overview workshops presented by the NBPCB;
  • 70 used the National Certification in Unified English Braille Test preparation materials;
  • 35 used the International Council on English Braille’s The Rules of Unified English Braille;
  • 49 used the documents produced by the Braille Authority of North America;
  • eight used books and materials produced by the National Library Service for the Blind and Physically Handicapped;
  • 33 took the transition braille course from the Hadley School for the Blind;
  • 25 used Duxbury translation software in the learning process; and
  • 42 used self-produced UEB materials for reading and writing.

Although three individuals reported not using any of these study resources, the vast majority of participants (75%) reported using four or more of the sources listed here.

As noted, individuals were asked to keep a log of the total minutes they spent studying UEB in the month leading up to the NCUEB exam. In particular, we were interested in how much time was specifically spent reading, writing, and understanding the rules that govern UEB. When asked about the number of hours spent reading UEB during the past month, 76 individuals reported spending an average of 11.5 hours (SD = 10.44), with some reportedly spending no time reading and at least one reporting 40 hours. With respect to writing UEB, 77 individuals reported an average of 9.34 hours (SD = 9.02), ranging from no writing to 40 hours. With regard to studying UEB rules, 75 individuals reported spending an average of 7.27 hours (SD = 7.2), ranging from none to 32 hours. In total preparation time during the month leading up to the exam, 74 individuals spent an average of 28.43 hours (SD = 22.06), ranging from a minimum of 2 hours to a maximum of 105.83 hours. Interestingly, there was no correlation between the total hours spent studying and the number of errors made (r = .04, p = .72). This may be related in part to individuals’ strength in EBAE leading up to the exam, as well as to the social desirability of self-reported study time.

Analysis of Internal Stability

On all three sections of the NCUEB exam, an individual’s performance is measured in errors: the number of errors made in producing braille (braillewriter), the number of embedded errors missed versus correctly detected in a reading passage (proofreading), and the number of rules/concepts answered incorrectly from content knowledge (multiple choice). Consequently, the determination of pass versus fail is the overall number of errors made in each section as compared to a set criterion, which is the maximum number of allowable errors an examinee can make and still be considered “competent” or “proficient” in braille (see Waugh, 2008). The exact “cut score” used for each section and form of the test is proprietary to the NBPCB and is beyond the scope of this article. What is germane here is that the remaining data in this report refer to the total number of errors a person made on each section, which equates to their “score.”

For the 83 individuals who completed the braillewriter section, the average number of errors made was 8.2 (SD = 6.3), ranging from zero to 28 errors (Median = 7; Mode = 3). Based on analysis of the distribution of scores obtained, this distribution was normally distributed (W = .885, p < .01). Based on this distribution, the confidence limits around the mean (Upper = 9.8) suggest that the criterion level set by the Subject Matter Experts (SMEs) to determine the pass/fail decision rule is appropriate for 95% of likely candidates. Similarly, for the 83 individuals who completed the proofreading section, the average number of errors made in detection was 8.87 (SD = 5.87), ranging from zero to 27 errors (Median = 8; Mode = 10). A check of this distribution likewise indicated normality (W = .91, p < .01). Again, in determining whether the criterion (or cut score) set by the SME committee was appropriate for this section, the data demonstrated that the confidence limits around the mean (Upper = 10.16) remained within the criterion set for likely test candidates. Finally, for the 82 individuals who completed the multiple choice section of the exam, the average number of errors made was 2.9 (SD = 2.56), ranging from zero to 10 errors (Median = 2; Mode = 1). The test for normal distribution demonstrated that this distribution also remained normal (W = .88, p < .01). It should be noted that 95% of this distribution made nine or fewer errors; one individual made 25 errors, a statistical outlier that was consequently removed from this analysis. One further point is that the tight cluster of low scores (Mean = 2.9, Median = 2, Mode = 1) results in an upper confidence limit around the mean of 3.4, which means that the SME committee set a higher criterion score for this section than is statistically necessary. The net result is that the multiple choice section of the exam is much easier to pass than the other two sections, but still appropriate for the population of likely test takers.
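
For readers who wish to reproduce these checks, the following is a minimal sketch of the two computations used throughout this section: a normality test (the reported W statistic matches Shapiro-Wilk notation, which is an inference on our part) and 95% confidence limits around the mean. The error counts in the array are hypothetical placeholders, since the actual applicant scores are proprietary.

```python
# Minimal sketch of the distribution checks described above.
# The error counts below are hypothetical placeholders, not study data.
import numpy as np
from scipy import stats

errors = np.array([3, 7, 3, 12, 8, 0, 15, 5, 9, 3, 28, 6, 7, 11, 2])

# Normality check; the W in the text appears to be the Shapiro-Wilk statistic.
w_stat, p_value = stats.shapiro(errors)
print(f"W = {w_stat:.3f}, p = {p_value:.3f}")

# 95% confidence limits around the mean, using the t distribution.
mean = errors.mean()
sem = stats.sem(errors)
lower, upper = stats.t.interval(0.95, len(errors) - 1, loc=mean, scale=sem)
print(f"Mean = {mean:.2f}, 95% CI = ({lower:.2f}, {upper:.2f})")
```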

Correlation of Test Sections

The three sections of the NCUEB Exam (Braillewriter, Proofreading, and Multiple Choice) each test different aspects of braille proficiency. The braille writing section assesses a person’s ability to produce properly formatted UEB, recalling from memory the code itself, the rules for paragraphing, translation, and accuracy in transcribing into braille. Proofreading consists of a passage in contracted braille with errors embedded in it; the test taker must identify those embedded errors without erroneously flagging correct braille as errors. This requires knowledge of the code, comparison to a reference passage, and accuracy in identifying correct use of braille and its rules. Multiple choice requires direct recall of specific rules, identification of correct braille formation, and understanding of specific rules without a given context. Each of these skill areas is unique, and together these complementary skill sets constitute proficiency in writing, reading, and understanding UEB. Because these are three separate but highly related skill sets, it is important to demonstrate the relationship among the three sections of the exam (i.e., whether performance on one section relates to performance on the other sections).

The following table demonstrates that there is a positive and highly significant relationship between how participants performed on one section of the exam relative to the others. Table 1 shows that the strongest relationship (r = .55) was between scores on the braillewriter and multiple choice sections of the exam and that this relationship was significant beyond the p < .01 level. The next strongest relationship was between braille writing and proofreading (r = .48), followed by proofreading and multiple choice (r = .45); in each of these cases the relationship was positive and significant beyond the p < .01 level.

Table 1: Intercorrelation of Test Sections

Variable          Braillewriter   Proofreading   Multiple Choice
Braillewriter     _
Proofreading      r = .48***      _
Multiple Choice   r = .55***      r = .45***     _

Note: r is the Pearson product-moment correlation; one asterisk (*) indicates significance at the p = .05 level, two asterisks (**) indicate significance at the p = .01 level, and three asterisks (***) indicate significance beyond the p < .01 level.
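
As an illustration of how the correlations in Table 1 would be computed, the sketch below derives pairwise Pearson correlations among the three section scores. The arrays are synthetic stand-ins generated to be positively correlated; none of the values are study data.

```python
# Pairwise Pearson correlations among the three exam sections,
# computed on synthetic, positively correlated placeholder scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
writer = rng.poisson(8, size=83)                  # placeholder braillewriter errors
proof = writer + rng.poisson(4, size=83)          # placeholder proofreading errors
mchoice = writer // 3 + rng.poisson(1, size=83)   # placeholder multiple choice errors

pairs = [("Braillewriter~Proofreading", writer, proof),
         ("Braillewriter~Multiple Choice", writer, mchoice),
         ("Proofreading~Multiple Choice", proof, mchoice)]
for name, a, b in pairs:
    r, p = stats.pearsonr(a, b)
    print(f"{name}: r = {r:.2f}, p = {p:.4f}")
```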

Equivalency of Test Forms

The preceding statistics are based on data gathered from the scores of the 83 individuals who took the NCUEB. The test is currently available in three operational forms, and all participants in the validation study took one of the three forms (A, B, or C). Consequently, it is important to evaluate how each form (or version) of the test functions irrespective of the individual test taker. For the braillewriter section, the average score for the 32 individuals who took Form A of the exam was slightly below the overall mean (Mean = 7.1; SD = 5.1; Range = 0-20), while the 23 individuals who took Form B tended to make more errors on average (Mean = 10.88; SD = 8.6; Range = 0-28). Those who took Form C scored almost identically to those who took Form A (Mean = 7.25; SD = 4.81; Range = 0-17). For the proofreading section, those taking Form A scored below the overall average (Mean = 6.8; SD = 5.2; Range = 0-24), whereas those taking Form B scored closer to the average (Mean = 7.3; SD = 4.2; Range = 2-19). Those taking Form C had the highest average number of errors (Mean = 12.6; SD = 6.1; Range = 5-24). For the multiple choice section of the examination, those taking Form A made the fewest errors on average (Mean = 2.3; SD = 2.1; Range = 0-9), whereas those who took Form B made a very similar number of errors (Mean = 3.2; SD = 2.5; Range = 0-10). Those who took Form C were almost identical (Mean = 3.3; SD = 2.9; Range = 0-10).
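
The form-by-form comparison above amounts to descriptive statistics grouped by test form. The sketch below shows one way to compute them; the form assignments and error counts are hypothetical placeholders.

```python
# Per-form descriptive statistics, mirroring the Mean/SD/Range comparisons
# reported above. All values here are hypothetical placeholders.
import pandas as pd

df = pd.DataFrame({
    "form": ["A", "A", "A", "B", "B", "C", "C", "C"],   # placeholder assignments
    "writer_errors": [7, 3, 11, 12, 9, 8, 6, 7],        # placeholder error counts
})

summary = df.groupby("form")["writer_errors"].agg(["count", "mean", "std", "min", "max"])
print(summary)
```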

Errors Made

Beyond the total number of errors made, it was important to examine what those errors were and to classify the types of errors being made. To do this, five categories were created to classify the types of errors made by participants: (1) Errors in Formatting; (2) Errors in Composition Sign Use; (3) Errors in EBAE Signs or Usage; (4) General Errors and Omissions; and (5) Errors Specific to UEB Knowledge and Usage. Analysis of the errors made demonstrated that:

  • For category 1 (Errors in Formatting), the errors in formatting were the same in EBAE and UEB, predominantly margins, indenting, and blank lines between paragraphs.
  • For category 2 (Errors in Composition Sign Use), the errors were mostly in mixed numbers and letters. Errors in composition signs or rules that were included in EBAE and did not change were categorized here; in other words, if it would have been considered an error in EBAE, it was counted here, whereas an error on a sign or rule that was not previously in the braille code was attributed as a UEB error.
  • Category 3 (Errors in EBAE Signs or Usage) captured mistakes in code knowledge or usage, specifically in the aspects of braille that had not changed from EBAE. Errors in symbols and rules were counted in this category only if they would also have been errors on the NCLB. There was no overall pattern in this category.
  • Category 4 (General Errors and Omissions) consisted of omitting words completely, making an error on a sign that was otherwise used correctly throughout the section, or ignoring instructions such as those for hyphenating words. Errors and omissions also occurred when participants accidentally left out entire words, sentences, or paragraphs.
  • Category 5 (Errors Specific to UEB) consisted of incorrect capitalization, incorrect use of typeforms, and composition sign rules that are new to UEB. These were errors that specifically demonstrated misapplication of new UEB rules, symbols, and punctuation.

Across all participants, the total number of errors in each category was captured. Many participants made errors in some but not all categories, and in some cases no errors were made at all. Summing the errors made it possible to compute the percentage of all errors attributable to each category. Overall, errors in formatting accounted for 4.2% of all errors made, while composition sign errors accounted for 5.3%. EBAE misapplication errors composed 17.8% of all errors, while errors and omissions not attributable to lack of code knowledge were 48%. Errors that could specifically be attributed to gaps in UEB knowledge were 24.6% of all errors made. As can be seen from these data, one quarter of errors can be attributed to gaps in UEB knowledge or production; however, almost half of all errors made resulted from simple mistakes people made by rushing, skipping words, or making similar omissions.
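
The category percentages above are simple proportions of per-category error counts over the grand total of errors. The sketch below illustrates the computation; the raw counts are hypothetical values chosen only to approximate the reported percentages.

```python
# Percentage of all errors falling in each category. The raw counts below
# are hypothetical; only the computation itself mirrors the analysis.
category_counts = {
    "formatting": 42,
    "composition signs": 53,
    "EBAE usage": 178,
    "errors and omissions": 480,
    "UEB knowledge": 246,
}
grand_total = sum(category_counts.values())
for category, count in category_counts.items():
    print(f"{category}: {100 * count / grand_total:.1f}%")
```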

Generalizability for Population

One factor that is critical to establish in the analysis of the NCUEB is the extent to which it generalizes across demographic factors (i.e., the extent to which it does or does not show demographic biases). In the analysis of the sample demographics and the number of errors made by participants, the data established that the average total number of errors made by women (Mean = 19.97, SD = 12.1) was nearly identical to that made by men (Mean = 18.8, SD = 11.9), and that this difference was not statistically significant (F(1, 79) = 0, p = .97). Similarly, there was no relationship between the age of participants and the total number of errors made on the exam (r = .002, p = .98). When looking at performance by racial classification, White or Caucasian Americans made the most errors on average (Mean = 22.2, SD = 12) and African Americans made the fewest (Mean = 16, SD = 4.7), with Hispanic, Asian, and Native Americans falling in between; however, these differences were not statistically significant (F(5, 78) = 0.22, p = .98). In considering access to the exam forms (visual versus tactual), the data revealed virtually no difference between the errors made by those who reported being blind (Mean = 19.48, SD = 12.3) and those made by participants who reported being sighted (Mean = 20.0, SD = 11.28); this difference was statistically non-significant (F(1, 79) = 0.03, p = .85).
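
The demographic checks above combine one-way ANOVAs (the F statistics) with a Pearson correlation for age. The sketch below reproduces both procedures on hypothetical error totals rather than the study data.

```python
# One-way ANOVA comparing total errors between two groups, plus the
# age-by-errors correlation. All values are hypothetical placeholders.
from scipy import stats

women_errors = [20, 18, 25, 12, 22, 31, 9]   # placeholder totals
men_errors = [19, 17, 24, 13, 21, 30, 8]     # placeholder totals

f_stat, p_value = stats.f_oneway(women_errors, men_errors)
print(f"Gender: F = {f_stat:.2f}, p = {p_value:.2f}")

ages = [25, 40, 55, 33, 61, 47, 29]          # placeholder ages
errors = [18, 22, 19, 25, 17, 21, 23]        # placeholder totals
r, p = stats.pearsonr(ages, errors)
print(f"Age: r = {r:.3f}, p = {p:.2f}")
```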

Concurrent Validity

The data have demonstrated strong internal consistency, normal distributions, and positive intercorrelations. It was also important to evaluate the extent to which performance on the NCUEB exam was related to braille knowledge in general. Bogart (2009) has suggested that those who are more proficient in EBAE would adapt more quickly to the UEB changes and would thus do better on proficiency exams. In fact, 74 of the participants (93%) reported that they were proficient in EBAE prior to taking this exam. Of these individuals, 52 reported that they had previously taken the National Certification in Literary Braille (NCLB). When these 52 individuals were examined on their current performance, it was discovered that 45 (85%) passed the NCUEB exam. Interestingly, there was no statistical difference in pass/fail rates for those reading braille tactually versus those reading visually.

Relationship to Preparation

There was a positive relationship between the number of methods used in preparing for the exam and performance. In general, using a greater number of different preparation methods was significantly correlated with making fewer errors on the exam (r = .32, p < .01). However, not all study methods were equally effective. When all errors for braillewriter, proofreading, and multiple choice were added together, participants made an average of 19.89 errors (SD = 12.15), ranging from a minimum of 2 errors to a maximum of 57 errors. On average, the fewest errors were made by those who used Duxbury translation software as part of their preparation (14.9 errors), next by those who took the Hadley preparation course (17.1 errors), followed by those who participated in the NCUEB Overview workshops (17.4 errors), and finally by those using the ICEB manual (17.4 errors).

As suggested earlier, those individuals who had already demonstrated proficiency in EBAE should adjust more quickly to the UEB changes. When asked whether they had previously taken a braille proficiency test, 53 individuals (63%) reported having previously passed the NCLB. The remaining 31 individuals (37%) had either not taken or not passed the NCLB. In analyzing the performance of those who had versus had not passed the NCLB exam, the data demonstrated that having previously demonstrated proficiency in EBAE was significantly related to the likelihood of passing the NCUEB exam (χ² = 17.48, df = 1, p < .01). Table 2 summarizes these results:

Table 2: Pass/Fail Rates for NCUEB

Individuals   Pass NCUEB   Fail NCUEB
Yes, NCLB     44           8
No NCLB       12           18

Note: The table shows the number of individuals who previously passed the NCLB and their performance on the NCUEB exam, compared with those who had either not taken or not passed the NCLB.
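
The reported chi-square can be recovered from the counts in Table 2. The sketch below uses scipy's contingency-table test; passing correction=False (no Yates continuity correction) appears to reproduce the reported value of 17.48, though that is an inference from the counts rather than a documented analysis choice.

```python
# Chi-square test of independence on the Table 2 counts.
from scipy.stats import chi2_contingency

# Rows: previously passed NCLB (yes, no); columns: NCUEB outcome (pass, fail)
table = [[44, 8],
         [12, 18]]

# correction=False omits the Yates continuity correction; the uncorrected
# statistic appears to match the value reported in the text.
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.5f}")
```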

Discussion

The preceding data represent the culmination of work done in 2014 and 2015 in developing, testing, and evaluating the NCUEB. These data are based on more than 80 individuals who took the examination at one of eleven testing locations across the nation and participated in a survey about their background and study preparation. The data establish that the NCUEB exam is internally consistent, with all three sections showing normal distributions, confidence limits falling well within the criteria (cut scores) set by the Subject Matter Experts, and significant correlations among the three sections (braillewriter, proofreading, and multiple choice). Similarly, the data demonstrated that the three exam forms studied were equivalent.

Because this population comprised teachers of blind/visually impaired students, university personnel preparation faculty, braille instructors for adults, and consumers themselves, the demographics were wide and varied. Nevertheless, the data indicated that the NCUEB exam performed equivalently across differences in age, gender, race, and vision. Finally, the data demonstrated that studying for the exam was associated with fewer errors overall and that previous knowledge of and proficiency in EBAE were both associated with better performance on the NCUEB exam.

Conclusions

Validation is a critically important concept when discussing examinations, especially examinations that are high-stakes or consequential in nature. The degree to which exam results may be used in hiring, firing, promotion, or demotion decisions has critical consequences for individuals in the field of work for the blind. It is therefore essential that an examination of an individual’s proficiency in the braille code be valid in its measurement. Although the concept of validity has sweeping implications, the best that can be done in any field-based research such as this is to ensure that the internal consistency, generalizability, content validity, and criterion referencing are acceptable for the needs of the target population. After being subjected to the aforementioned statistical analyses, the NCUEB has demonstrated the rigor and accuracy to serve as a national standard for measuring proficiency in the Unified English Braille code.

Unifying the braille code has been a process more than 20 years in the making. While this transition is far from finished, it is clear that UEB is here to stay. Agencies, organizations, teachers, consumers, and producers are all working in parallel to make this transition smooth and efficient. One aspect of this transition is ensuring the competency of the professionals whose job it is to teach the code to children and adults. University personnel preparation programs are working on this effort, all at different speeds and with differing foci. Rehabilitation agencies are utilizing refresher workshops, correspondence courses, and practice to help their staff upgrade their skills to the UEB code. Additionally, consumers are using newer technologies, textbooks, and reading materials to make the transition to UEB.

All of these methods will work, to a greater or lesser degree, to help America make the transition to Unified English Braille. The problem lies in the phrase “to a greater or lesser degree.” Inevitably, these diverse and disparate methods of instruction may yield wildly different levels of proficiency in UEB. As this study has demonstrated, different methods of preparation result in different levels of proficiency. How, then, can we ever know that our professionals are competent in UEB at the same levels and to the same standard? The only way is by having a standard of excellence and a method of measuring that level that can be applied equally nationwide. There will be state-by-state efforts to develop competency tests; other organizations that work with universities will develop tests that cater to university students. But the only way to ensure that all professionals attain the same high level of braille proficiency is by holding them to the same high standard of proficiency. The National Certification in Unified English Braille has now been demonstrated to be valid for serving as that standard.

Implications for Practitioners

Practitioners, consumers, agencies, and organizations whose job it is to produce, procure, teach, or prepare individuals in the Unified English Braille code can help this effort in many ways. Here are a few suggestions:

  • Become familiar with the salient findings of this research and share them with relevant stakeholder organizations;
  • Become familiar with the information about the NCUEB by visiting www.nbpcb.org/ncueb;
  • Encourage employers to adopt the NCUEB as the standard certification for all individuals who are hired to teach braille to children or adults;
  • Encourage university preparation programs to adopt the NCUEB as the gold standard for their students prior to program completion;
  • Inquire of Teachers of Blind/Visually Impaired Students what certifications they hold, and encourage their attainment of the NCUEB;
  • Promote braille proficiency as a concept and as a standard whenever the topic of literacy is raised.

There are dozens of stakeholder organizations, all of which play a vital role in ensuring that UEB becomes widely adopted, accepted, and fully implemented. The National Blindness Professional Certification Board is an equal stakeholder, and one dedicated to the pursuit of professional standards for proficiency in Unified English Braille.

References

Amato, S. (2002). Standards for competence in braille literacy skills in teacher preparation programs. Journal of Visual Impairment & Blindness, 96(3), 143-154. Retrieved from http://www.afb.org/afbpress/pubjvib.asp?DocID=JVIB960303  

Amato, S. (2009). Challenges and solutions in teaching braille in an online-education model. Journal of Visual Impairment & Blindness, 103(2), 78-80. Retrieved from http://www.afb.org/afbpress/pubjvib.asp?DocID=jvib030203

American Printing House for the Blind. (2012). Annual report 2012: Distribution of eligible students based on the federal quota census of January 3, 2011 (Fiscal year 2012). Retrieved from http://www.aph.org/federal-quota/distribution-2012/

Bell, E. (2010). U.S. national certification in literary braille: History and current administration. Journal of Visual Impairment & Blindness, 104(8), 489-498. Retrieved from http://www.afb.org/afbpress/pubjvib.asp?DocID=jvib040805  

Bell, E. C., Ewell, J. V., & Mino, N. M. (2013). National reading media assessment: Complete report. Journal of Blindness Innovation and Research, 3(2). Retrieved from https://nfb.org/images/nfb/publications/jbir/jbir13/jbir030201abs.html. doi:10.5241/2F3-37

Bell, E. C., & Mino, N. M. (2013). Blind and visually impaired adult rehabilitation and employment survey: Final results. Journal of Blindness Innovation and Research, 3(1). Retrieved from https://nfb.org/images/nfb/publications/jbir/jbir13/jbir030101abs.html. doi:10.5241/2F1-35

Bogart, D. (2009). Unifying the English Braille codes. Journal of Visual Impairment & Blindness, 103(10), 581-583. Retrieved from http://www.afb.org/afbpress/pubjvib.asp?DocID=jvib031002

Frieman, B. B. (2004). State braille standards for teachers of students who are blind or visually impaired: A national survey. Braille Monitor, 47(1). Retrieved from https://nfb.org/Images/nfb/Publications/bm/bm04/bm0401/bm040105.htm

Mullen, E. (1990). Decreased braille literacy: A symptom of a system in need of reassessment. RE:view, 22(3), 164-169.

Pester, E. (1993). Braille instruction for individuals who are blind adventitiously: Scheduling, expectations, and reading interests. RE:view, 25(2), 83-87.

Ponchillia, P. E., & Durant, P. A. (1995). Teaching behaviors and attitudes of braille instructors in adult rehabilitation centers. Journal of Visual Impairment & Blindness, 89(5), 432-439. Retrieved from http://www.afb.org/afbpress/pubjvib.asp?DocID=jvib890507

Ryles, R. (1996). The impact of braille reading skills on employment, income, education, and reading habits. Journal of Visual Impairment & Blindness, 90(3), 219-226. Retrieved from http://www.afb.org/afbpress/pubjvib.asp?DocID=jvib900309

Toussaint, K. A., & Tiger, J. H. (2010). Teaching early braille literacy skills within a stimulus equivalence paradigm to children with degenerative visual impairments. Journal of Applied Behavior Analysis, 43(2), 181-194. doi: 10.1901/jaba.2010.43-181

Waugh, G. W. (2008). NFB NLBCT braille test: Pilot test results. Report prepared for the National Federation of the Blind (Report No. FR-08-99). Retrieved from National Blindness Professional Certification Board website: http://www.nbpcb.org/pages/NCLBstatisticalreport.php.

The Journal of Blindness Innovation and Research is copyright (c) 2016 to the National Federation of the Blind.