In order to truly read quickly, we need a language that’s seen and not heard.

There are no staff in the Ximen Intelligent Library of Taipei. You pick the books you want, and an automated system of software and sensors keeps track of what you’re taking. This way, it can stay open late into the night — a time when librarians may want to sleep, but many readers are just leaving work and getting ready to read.

While the Intelligent Library is a novel idea, the problem it addresses is not so new. With so many responsibilities, gadgets and distractions, people around the world seem to have less time to read. Attention spans are shortening as well, becoming too short to get through even a single book.

Many people don’t seem to bother with words and sentences at all, sending each other long strings of emoji instead. They say 😮, 😢 and 🤔 because they can’t be bothered to actually write things out.

And then there are the people afflicted by tsundoku: the condition of having a long queue of books you want to read, but no time to actually read them.


The problem is so widespread, we now have services like Blinkist that actually condense books into a ‘lite’ edition, giving you the essential ideas in a much shorter format.

While that’s pretty useful if you want to know what a book is about, it’s not the same as actually reading the whole thing through. If you want to do that with limited time — well, that’s what ‘speed reading’ is for.

Speed reading has become a popular topic, with apps like Spreedly helping you get through text faster. They do this by flashing words on a screen one at a time, saving you the effort of moving your eyes around.
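The flashing-words trick is known as “rapid serial visual presentation”, and the basic idea is simple enough to sketch in a few lines. This is my own minimal illustration, not the actual app’s code; the function name and the 250-words-per-minute default are assumptions of mine.

```python
import time

def flash_words(text, wpm=250, display=print):
    """Show one word at a time, paced at roughly `wpm` words per minute.

    `display` is the function used to show each word; it defaults to
    printing to the terminal, but could be any screen-drawing routine.
    """
    delay = 60.0 / wpm            # seconds each word stays on screen
    for word in text.split():
        display(word)
        time.sleep(delay)

# Flash a sentence at a brisk 600 words per minute:
flash_words("Speed reading saves you the effort of moving your eyes", wpm=600)
```

Because the words always appear in the same spot, your eyes never have to travel across the page — the pacing, not your eye movement, sets the reading speed.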

Of course, not everyone wants to read books on apps. That’s why speed-readers generally focus on less hi-tech methods — such as avoiding reading out loud in your head.


One does not merely learn to read. One must first learn to speak. And before that, one must learn how to listen, and how to understand what people are saying.

Except for reading itself, all of this has to do with voices.

When I first learnt to ‘read’, I basically memorised my favourite story — first linking the paragraphs to pictures in the book, and then, later, linking the words I’d memorised to the text on the page. Most of the ‘reading’ I experienced was my parents reading stories to me out loud — and so, when I began to actually read, I used to read out loud too. (I also used to hold books upside-down, because that’s the angle I saw them from when my parents sat opposite me.)

It was only later that I learnt to read ‘in my head’, with the words staying silent and not turning into sounds at all.

Actually, they still did make sounds — in my head.

This happens to everyone. Letters don’t turn directly into meaning. They’re first converted to sounds, and it’s those sounds that are understood as meaning. That explains why, when you want to understand a difficult paragraph, it helps to read it aloud — even if it’s just whispering to yourself.

It’s also something I do when I’m too sleepy to continue reading but too awake to put my book down.


This saying-it-out-loud-but-inside-your-head is called “subvocalisation”. And it in fact goes beyond your head. When you subvocalise, your throat muscles also move slightly — as if they were speaking, but so gently that they don’t actually make a sound. That’s useful because scientists can measure when you’re subvocalising, and make cool things like a robot you can speak to without opening your mouth.

But for speed-readers, subvocalisation is a bad thing. It takes time to play back the sound in your head and then begin processing it. So they try to bypass subvocalisation completely. They want to see a word and instantly know its meaning, the same way you would with a hand gesture.

So they use various techniques to train the subvocalisation away. They may play constant music, or hum a tune, or pop chewing-gum into their mouths to keep them occupied. Over time, their subvocalisation reduces further and further — and their reading speed increases.

All well and good?

Not so fast.

In one study, speed-readers were given samples of text to read. They got through them quite fast — and picked up the general gist of what was written — but failed pretty spectacularly at recalling the small details. Studies consistently show that as people read faster, the amount of information they actually absorb and understand goes down.

Of course, fast reading is still useful for things you don’t want to read, such as the long pile of unread emails in your inbox.


At one level, language is all about sounds. When babies first start learning to talk, they focus on the musical aspects of sound — the tone, pitch, and rhythm — rather than the actual meaning. And it’s not just babies. Dogs often use the tones of humans’ voices to make out how they feel, even if they don’t understand the actual words. (Humans, on their part, do the same to dogs).

The relation between sound and meaning never really goes away. In Japan, schoolchildren are taught ‘kuku’ simply as a pleasant song to sing. Only later do they learn its actual meaning: the multiplication table. Even as adults, they still remember 6×7=42, not because they calculate it, but because it “sounds right”.

Even speed-readers need sounds to turn words into meaning. While they don’t feel their subvocalising, studies have shown that they actually do it — just much more lightly and subtly than everyone else.

Subvocalisation doesn’t just give the brain time to think. It is the way the brain thinks.

But does it have to be that way?


When I read a book, there seem to be some things I don’t subvocalise. Complicated character names, for instance.

If I see such a name in print, I’ll recognise it immediately. But if you ask me to tell you the character’s name, I won’t be able to. I may not even know the spelling, let alone the pronunciation.

Taht’s bauesce I’m radineg the wrod as a wolhe, not just looking at the individual letters — and I never need to make a sound for that character’s name.

So if we can do it with names, why can’t we do it with anything else?


Most people don’t think of deafness as a big problem. Not as bad as being blind, for instance.

But if you were born deaf, you wouldn’t be quite so casual about it. It’s not just a matter of not hearing: you wouldn’t be able to speak, either. How can you learn a language if you never hear anyone speak it?

For a long time, deaf people were often languageless. Trapped in their own world, unable to communicate with anyone else. Like that recurring nightmare where you try to speak and no voice comes out of your mouth: except that here, you wouldn’t even comprehend what a ‘voice’ is.

Some deaf people are taught to lip-read, and to produce vibrations with their mouths and throats to do what others call “speaking”. But what comes more naturally to them is a “sign language” — many of which have arisen spontaneously, on their own, around the world.

A true sign-language is not just simple mime acting and gestures. It’s a language in its own right, with syntax, grammar, and all the rest, as good as any voice-based language. The only difference is that it’s spoken with hands and gestures instead of lips and sounds, and the way it travels from person to person is not sound-waves but waves of light.


I tried to think of a written language. I don’t mean English or Russian: I mean a language that’s only written, never spoken. Like mathematical notation, or a programming-language.

But even those aren’t purely written. I always catch myself saying “x squared” or “y cubed” in my head — or even made-up shortcut-words like “z foured”. Programming, at first glance, seems even closer to spoken language. Most of the keywords, like if and drawRectangle, are basically co-opted English words.

Programming also has structure, though: brackets, quotes, and the way blocks of code are arranged. These elements give scripts new meaning, the same way punctuation does for paragraphs, but more elaborately. And those meanings, unlike punctuation, have nothing to do with sound.
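Here’s a tiny illustration of that point, in Python (my own example). The very same three numbers, grouped differently by brackets, mean two different things — and there is no natural way to pronounce the difference out loud.

```python
# The same three numbers, grouped differently by brackets:
flat   = [1, 2, 3]        # one list of three numbers
nested = [[1, 2], [3]]    # a pair, followed by a singleton

assert flat != nested                      # the two are genuinely different
assert sum(flat) == 6                      # 1 + 2 + 3
assert [len(g) for g in nested] == [2, 1]  # the grouping itself carries information
```

You might read both lines aloud as “one, two, three” — but the brackets, which have no sound of their own, are doing real semantic work.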

These elements are used in “code poetry”, an art form that expresses things using programming elements, in a way ordinary poems can never do.

But in the end, there’s still sound. You still subvocalise some of the stuff, at the back of your head. None of the languages we have now seem to work: for true speed-reading, we need a language that’s only written, never spoken.

And how do we create such a language?

Where do we start?


You know those people I mentioned, whose messages are filled with emoji? The 😀’s and 😜’s and 😍’s and 😮’s which are instantly understood but can never be read out loud?

The thousands of people sending new-emoji requests to the Unicode Consortium, so they can be added to the next generation of apps and smartphones? The ones so enthusiastic that they translated a whole novel, Herman Melville’s Moby-Dick, into emoji format?

I guess they might just have a point.


Have something to say? At Snipette, we encourage questions, comments, corrections and clarifications — even if they are something that can be easily Googled! You can also connect on social media, or sign up for our email updates here.

Sources and references for this article can be found here.