Dear Editor,

My name is David Boulton; I have been a learning theorist and educational philosopher for almost 20 years. What I wish to share with you is from the heart and for the children who are suffering from reading problems. I have no commercial intentions in writing you. My purpose is simply to stimulate your thinking, and perhaps your suggestions. As you know, the stakes are enormous. I think you will find what I have to say is fresh; please take a few minutes and hear me out.

I have placed in this document three short pieces of work: an abstract; an analogy that illustrates how we (maybe not you, but the public in general) have missed a significant point in our thinking about what's causing our reading problems; and a short article which explains my proposal. In addition to the information contained in this email, my web site, http://www.implicity.com/reading/, contains additional research and a more elaborate unfolding of my thinking. It seems to me that our most basic information technology - the 'code' we read with - needs the scrutiny of information scientists such as yourself.

Thank you - David Boulton

Abstract

The greater the number of ambiguous letter-sounds (and letter-sound combinations) coexistent in a word, the greater the number of iterations of ambiguity reduction required before the word can be virtually heard or spoken. The greater the number of ambiguity-reducing iterations (disambiguations) involved, the longer the reader's attention must stretch to process them. The longer the span of attention required, the greater the vulnerability to miscues in decoding, causing drop-outs from the decoding-stream-flow-rate necessary to sustain the flow of reading. The single most significant underlying cause of this ambiguity-overwhelm > stutter > drop-out sequence is the archaic 'technology' we read with. A 1,000-year-old lack of leadership in managing the relationship between the Latin alphabet and the English spoken language has resulted in a deeply entrenched, convoluted and highly ambiguous 'code'. Every attempt to change the alphabet or reform spelling - to render their relationship more simply phonetic - has failed. Phonics and phonemic awareness pedagogies are both attempts to compensate for, not directly address, the ambiguities created by the idiomatic correspondence of these two systems (the code).
With
modern font technology it is relatively easy to add another dimension of
functionality to the concept of a character or letter. Specifically, it is
possible to print (paper or screen) letters with shape, size, intensity and
spacing variations that, while retaining unambiguous letter recognition
features, convey additional information or cues about how the letter sounds in
the particular word in which it is encountered. What I
am proposing is that
a small number of alphabet-general letterface variations, acting as phonetic
cues, can dramatically reduce the disambiguation-overhead involved in learning
to read. My intent is
to catalyze if I can, and drive if I must, the development of a new
learning-to-read system based on developing this concept and subsequently integrating it
with the best of what remains relevant from phonemic awareness, phonics and
whole language pedagogies and practices.

An Analogy

Imagine
that a fictional product called AlphaPhon is the world's leading
English language GUI (Graphemic User Interface). AlphaPhon is the entry-level product of
a company (also fictitious) named USASoft. All is not well with USASoft. Market
research has revealed that 92 million older AlphaPhon customers, due to their
poor use of the product, are suffering major financial losses. 42
million adult
Americans can't read; 50 million can
recognize so few printed words they are limited to a 4th or 5th grade reading
level. According to Literacy Volunteers of America, 237 billion dollars a year in unrealized earnings
is forfeited by persons who lack basic reading skills. Perhaps
even more alarming, user-test reports indicate that 60% of the company's new
customers are less than proficient with AlphaPhon even after 12 to 13 years of
day-in and day-out attempts to learn it: USASoft is at serious risk of losing
its future customer base.

69% of 4th graders read below the proficiency level.

The first casualty is self-esteem: they soon grow ashamed... about half of youths with a history of substance abuse have reading problems.

Disturbed
by these reports, USASoft undertakes a massive research campaign to discover why
their customers are having such difficulty learning to use AlphaPhon.
Billions of dollars and thousands of research studies later, a
scientific consensus emerges: the customers who have difficulty learning
AlphaPhon exhibit a common 'core deficit' in something the researchers call
'alphaphonemic awareness'.

Phonemic Awareness: It's the hottest topic in education.

Moreover, they also lack 'alphaphonic' code knowledge and skills.

Letter
knowledge, which
provides the basis for forming connections between
the letters in spellings and the sounds in pronunciations, has been
identified as a strong predictor of reading success.
Based on
this new understanding, USASoft issues orders to all of its distributors to
initiate a nationwide training program designed to train the minds of its users
in the alphaphonemic awareness and alphaphonic skills required to use
AlphaPhon.

If it worked, the analogy started to sound absurd as USASoft began to act as though AlphaPhon's problems were exclusively in the minds of its users - as if it were inconceivable that anything could be wrong with AlphaPhon, or that, if something were wrong, AlphaPhon could be in any way changed or improved. What's disturbing, of course, is that this is exactly how we have come to think about our reading problems and the role our reading technologies play in creating them. How could USASoft be so blind and negligent about the usability implications of such a human-engineered human-interface product? How could we?

A Short Article: Training Wheels for Literacy

69% of 4th graders read below the proficiency level.
42 million adult Americans can't read; 50 million can recognize so few printed words they
are limited to a 4th or 5th grade reading level. According to Literacy
Volunteers of America, 237 billion
dollars a year in unrealized earnings is forfeited by persons who lack basic
reading skills.

There is no natural, biological-evolutionary precedent for reading. Spoken language - yes - the ability to discriminate among sounds and associate distinct sounds with distinct meanings has been evolving for millions of years. But nothing about our natural evolutionary development has prepared us to read - to focus our eyes into small static spaces and translate and assemble strings of visual symbols into virtually heard sequences that simulate the sounds of spoken words. Human beings invented reading (and writing), and, it should be added, those who did were far less familiar with how our brains work and how children develop than we are today.

Reading is a technology skill that requires the use of two archaic technologies or systems (the 3,000+ year old alphabet and the 1,000+ year old system of English spelling) that were developed by adults for adults and were never designed (or since in any way optimized) for use by young developing minds. Moreover, they were never designed to work together; like the proverbial square peg and round hole, we have been 'force fitting' them for over a thousand years. The fact is that most people who struggle to read are suffering from a kind of interface incompatibility with our reading technologies that is the fault of the technologies, not them!

HISTORICAL ROOTS

Ancient Greek and Latin were almost completely phonetically
written...

Just as in learning to read, I said, we were
satisfied when we knew the letters of the
alphabet...

The
major cause of today's reading problems began taking root nearly a thousand
years ago as the Latin alphabet and the English language collided. The Latin
alphabet was nearly phonetic; it had one letter for each sound spoken in the
Latin language. But in trying to represent English, the Latin alphabet came
up short by over a dozen letters.
There were simply more sounds spoken in English than there were letters
to represent them in the Latin alphabet. With
religion, politics and academia so entrenched in written Latin, any thought
of changing it was virtually inconceivable. Consequently, instead of adding letters
to the alphabet, a series of rules developed whereby some letters (but not all)
would no longer have just one sound but could have other sounds depending on
which of the other letters (in what sequence) they preceded or followed (most
but not all of the time). Sound pretty convoluted? The consequence of this
'hack' has haunted us ever since: the phonetic principle was broken and the
relationship between written letters and spoken sounds became complex and
confusing. What
happened significantly altered the course of human history and its effects are
still felt today by over 700 million people. The result of making up for the
shortage of letters was an ambiguous alphabetic �code� that strained the process
of learning to read - a process that had for over fourteen hundred years been
based on the phonetic simplicity of one letter for one sound. No longer as
quickly self-evident, reading now involved the need to determine which of a
letter's possible sounds it was supposed to actually sound like in the
particular word in which it was appearing. The stress involved in such decoding
has remained deep in the 'overhead' of our reading process ever
since. To make
matters worse, further complications followed as the words, spellings and accents
of Greek philosophers, French clerks, Dutch typesetters and others were added
to the system. Now, in addition to
idiomatic codes for missing letters, spellings became incoherent as the various
spelling conventions of non-English influences were imposed on the language.
With the combination of Luther's Reformation, Gutenberg's printing press and
the King James translation of the Bible, the cement began to harden. All these diverse and complicating
stresses heaped upon a writing system that was already inadequate resulted in a
seriously flawed and dysfunctional system.

TODAY

The
underlying cause of our reading difficulties is that we have rigidly held to an
inherited, technologically archaic, symbol system (the Alphabet) that was
developed in an entirely different 'age of the world', for adults not children,
and that was never designed to represent the 44+ sounds of the English spoken
language. As a
result, the number and duration of the mental processing iterations necessary to
resolve the ambiguity of letter-sound correspondences all too frequently exceeds
the attention span of beginning readers. The consequences are 'reading stutters'
and 'drop-outs' in reading flow.
The core problem is AMBIGUITY-OVERWHELM
and it is an artifact of the 'technology' involved. For some
reason - its 'sacredness' or simply its institutional inertia - we have been
unable to update the technology to reflect what we know about human neurological
processing and to make it friendly to the self-esteem and developing mental
processes of our young people.

The first casualty is self-esteem: they soon grow ashamed... about half of youths with a history of substance abuse have reading problems.

There
aren't many parallels to this. Under what other circumstances do people spend
years trying to learn something that continually makes them feel bad about
themselves? Most children and adults have very limited patience for
repeatedly trying to do something that results in self-esteem-lowering
feelings. Yet, we must compel people to learn to read. They can't function
in our modern world if they can't. However, the way things stand, the technology
is causing real and significant damage to people's lives (and costing us
billions of dollars). Up to
this point, absent a new alphabet or a way of spelling phonetically with the one
we have, our only course of action was to facilitate the development of explicit
skills and attention span increases such that developing readers might be better
able to process the ambiguities we can't spare them from experiencing. This has
been the role of explicit phonemic awareness exercises and explicit, systematic
phonics both of which are attempts to compensate for, not directly address, the
ambiguity created by the archaic alphabet and spelling
system. But what
if we could, without changing the alphabet or the way English is spelled,
present our letters (on paper or screen) with cues embedded or accompanying them
that could significantly reduce the letter-sound ambiguity involved in
reading? This
kind of thinking was impossible until very recently, until computers and modern
font technology. Though the moveable type of the printing press was a
breakthrough innovation in its day, it restricted us to thinking about printing
through a paradigm that was based on what was and was not possible in a
mechanism that used real physical objects to print letters with. Whereas
moveable type made it relatively easy to set up any number of alternative
typefaces, once within a typeface it was impractical to offer letterfaces or
optional variations on the way each letter might appear. However,
today, with modern font technology, it is possible and relatively easy to add
another dimension to the idea of a character or letter. Specifically, it is
possible to print (paper or screen) letters with shape, size, intensity and
spacing variations that, while retaining unambiguous letter-recognition
features, allow the presentation of the letter to convey additional information
or cues about how it sounds in the particular word in which it is encountered.
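To make this concrete, here is a minimal sketch, in Python, of how such per-letter presentation variations could be produced today. It uses inline CSS on generated HTML as a stand-in for true font-level letterface variants; every name and value in it is an illustrative assumption of mine, not part of any existing product.

    def render_word(word, letter_styles):
        """Emit an HTML fragment presenting each letter with its own
        intensity (opacity), size, spacing and weight, while the letter
        shape itself stays unambiguous."""
        spans = []
        for letter, style in zip(word, letter_styles):
            css = (f"opacity:{style.get('opacity', 1.0)};"
                   f"font-size:{style.get('size', 1.0)}em;"
                   f"letter-spacing:{style.get('spacing', 0.0)}em;"
                   f"font-weight:{style.get('weight', 'normal')};")
            spans.append(f'<span style="{css}">{letter}</span>')
        return "".join(spans)

    # Example: in "make", the final silent 'e' is shown small and faint.
    print(render_word("make", [{}, {}, {}, {"opacity": 0.3, "size": 0.7}]))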
The
P-CUES Concept: Mind your Ps and Qs - Phonic
Cues - P-Cues

The intention is to prompt the reader with unambiguous CUES that reduce the number and complexity of the instances of ambiguity encountered during the immediate decoding-stream-flow of the reading process. Imagine that, while learning to read, developing readers were able to immediately recognize cues 'built in' to each letter that informed them how the letter sounds:

alphabet-or-not: use variations in intensity to indicate that the letter is to sound like its letter name

silent-to-loud: use variations in the size and intensity of letters to indicate the relative amplitude of the letter's pronunciation, from silent to loud

distinct-or-blended: use the space between letters to suggest distinction or degree of blend
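As one illustration of how these three cue dimensions might compose into a single letter presentation, continuing the Python sketch above (the parameter ranges below are assumptions chosen for the sketch, not tested values):

    def cue_style(says_name, amplitude, blend):
        """Combine the three cue dimensions into one per-letter style
        usable by render_word above.
        says_name: True if the letter sounds like its letter name
        amplitude: 0.0 (silent) to 1.0 (loud)
        blend:     0.0 (distinct) to 1.0 (fully blended with its neighbor)"""
        return {
            "weight": "bold" if says_name else "normal",  # alphabet-or-not
            "opacity": 0.2 + 0.8 * amplitude,             # silent-to-loud (intensity)
            "size": 0.7 + 0.3 * amplitude,                # silent-to-loud (size)
            "spacing": 0.25 * (1.0 - blend),              # distinct-or-blended
        }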
Though
these cues may appear visually annoying to the advanced reader (though much less
so than Twain's example
of 'simplified' spelling), consider, if you can, how mentally annoying it is
to learn to read without such cues.

The Software and Font Technology Involved

Conceptually,
the technology involved is relatively straightforward. The first component is
the "carrier" or shell that extends the font family to have the added capacity
to store the alternate presentations for each character in a font. The second
component is the "P-Cue presentation dictionary" which, like a spell checker in
a standard word processor, scans the words in documents and looks them up in its
database. When a word match is found, the P-Cue dictionary reads the character
presentation variations (P-Cues) for the letters in the word and substitutes the
matching P-Cued letters into the publication.
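A minimal sketch of that two-component pipeline, continuing the Python sketch above; the word entries, cue names and data shapes here are hypothetical placeholders, not a worked-out cue inventory:

    # Hypothetical P-Cue presentation dictionary: one cue name per letter
    # of each known word; unknown words fall back to plain presentation.
    PCUE_DICTIONARY = {
        "make": ["plain", "name", "plain", "silent"],  # 'a' says its name; final 'e' silent
        "ship": ["blend", "blend", "plain", "plain"],  # 's' and 'h' blend into one sound
    }

    # The "carrier": maps each cue name to a stored presentation variant,
    # expressed here as style parameters for render_word above.
    CARRIER = {
        "plain":  {},
        "name":   {"weight": "bold"},
        "silent": {"opacity": 0.3, "size": 0.7},
        "blend":  {"spacing": 0.0},
    }

    def apply_pcues(text):
        """Scan a document's words, spell-checker style, and substitute
        P-Cued letterfaces wherever a word match is found."""
        rendered = []
        for word in text.split():
            cues = PCUE_DICTIONARY.get(word.lower(), ["plain"] * len(word))
            rendered.append(render_word(word, [CARRIER[c] for c in cues]))
        return " ".join(rendered)

    print(apply_pcues("make ship sail"))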
IN CLOSING

The
examples I have put forth are placeholders. There is significant work ahead to map
the territory of letter-sound ambiguities and to determine which metaphors and
variations of letter presentation will best serve different types of developing
readers. With that said, I believe it is possible to develop a system of
variations that will cue developing readers in ways that reduce the 'overhead'
involved in reading by many times the 'overhead' involved in processing the
cues. Based on my preliminary and
informal experiments with children, I am confident that, once fully developed as
an overall system, this approach will dramatically simplify and speed up the
process of learning to read. What I
am proposing bridges the phonics and whole language ideologies. Instead of having to create 'dumbed
down' reading materials or having to design reading materials around the awkward
pedagogical requirements of cryptic decoding, the P-Cue model reduces the
ambiguity involved in decoding and allows developing readers to access more
meaningful materials faster (extending the ceiling on 'decodable text' to a more
meaningful and enjoyable level). Finally, it does this without changing the
alphabet or English spelling. This is
not meant as an alternative to learning other rules of decoding, as it won't
eliminate all the ambiguities. Rather, what I am proposing will provide
developing readers the means to quickly filter out a significant portion of what
would otherwise be ambiguities, leaving them with a less dissipated attention
span to apply whatever rules remain appropriate (arguably new rules based on an
integrated approach to using this technology with phonemic awareness and phonic
instructional pedagogies). I call
this 'Training Wheels for Literacy' because this system is not intended to
replace our colossal inventory of written materials, but rather to provide
developing readers with an 'on-ramp' and the 'training wheels' that enable them
to develop better phonemic awareness, phonic skills and greater attention span
by making it easier for them to keep themselves from 'falling' out of
reading. By enabling them to extend their reading flow, they will learn to
associate the P-Cues with the phonemic distinctions available in written word
structures and ultimately take the 'wheels off' - stretching into the next
step of becoming an empowered reader.

In summary, this learning-to-read barrier - its pain, shame and life-disabling
consequences, our arguments about methodologies and the money we spend on
efforts intended to compensate for it - stems not from some deficit or lack
of natural capacities in our brains, but rather from the change-resistant
technology of our 3,000-year-old alphabet and its poor interaction with the
(nearly as change-resistant) 1,000-year-old technology of English spelling. For
the sake of the children, in the spirit of plain good science, let's acknowledge
the fact and do something about it.

Learning to read is a process of acquiring an 'inner interface' between our biologically native all-at-onceness processing and our enculturated mind's one-at-a-time thought processes. Indeed, reading plays a significant role in creating the latter. Taking up this challenge could create a breakthrough in literacy, reduce damage to self-esteem, reduce the waste of billions of dollars and, perhaps, beyond all of that, change the ecology and efficiency of the "inner interface" that regulates our learning and who we are.

I know that, as they are, these pieces are not appropriate for journals, but I do think these ideas need discussion in the right circles. Do you know of someone in the community who might collaborate with me in seeing these issues come to light in the right journals?

Thanks again,
David

Implicity
From the heart to the mind for the spirit