A 5P3C141 M3554G3


7H15 M3554G3 53RV35 7O PR0V3 H0W 0UR M1ND5 C4N D0 4M4Z1NG 7H1NG5! 1MPR3551V3 7H1NG5! 1N 7H3 B3G1NN1NG 17 WA5 H4RD BU7 N0W, 0N 7H15 LIN3 Y0UR M1ND 1S R34D1NG 17 4U70M471C4LLY W17H0U7 3V3N 7H1NK1NG 4B0U7 17, B3 PROUD! 0NLY C34R741N P30PL3 C4N R3AD 7H15. R3 P057 1F U C4N R35D 7H15

Replies (11)


By SteveRobson
08th Sep 2011 08:27

Reminds me of when I used to type 54377 017 and 43770 on my calculator and turn it upside down!

By Rus Slater
08th Sep 2011 12:42

The funny thing is that if I type

"54377 017 and 43770 on my calculator and turn it upside down" I actually get ...

LIO LLEHS AND OLLEH

....because you have written them left to right instead of right to left

Which goes to show that though our minds are capable of making the change, without habituating it we tend to revert to what we are used to [presuming that Steve writes in normal English! ;-)]

Rus

Thanks for this, Garry and Steve; it will fit perfectly into the programme I'm just writing on Managing Change!

By SteveRobson
08th Sep 2011 13:19

You are correct, Rus... my calculator is on my PC these days, so I just did a headstand on my desk.

Could a Magic Eye pic be used in your course? If Greenwich market is anything to go by people love staring at these things!

Regards

573V3 (you might need to stand upside down)

By SteveRobson
08th Sep 2011 16:17

Tehre has been a sudty at Hravred Uinestvry, taht has porevn taht a preosn can raed any wrod as lnog as the frist and lsat ltetres are the smae.

By Martin Couzins
09th Sep 2011 12:54

Garry, this is brilliant. Thought my eyes would start spinning in their sockets, but then I got into it. Incredible.

By Carrol
12th Sep 2011 12:22

Are there really people who CANNOT read this...? Hard to believe...

By alexknibbs
12th Sep 2011 13:01

I tohguht taht it was cdceonutd at Cbrgdamie Uinv!

Fnuny tohugh, it deos wrok!

Aelx ;-)

By HazelStimpson
12th Sep 2011 16:38

I really do have work to do, but could not resist reading this posting. I have one question - what are "ceartain" people?

Yes, I too must remember this for a future training session.  Keep learning interesting and fun everyone!

Hazel.

By Sue Porter
13th Sep 2011 16:04

I love it!! 

5U3

By Rus Slater
19th Sep 2011 06:29

Rus (dman, I cna't do it to my own nmae!)

By John Brown
12th Nov 2011 14:53

I will now try to explain why we can read the above.
In his 1996 book, David Caplan explained how experiments in Cognitive Psychology have shown that the brain contains eight dictionaries, which can be classified into a taxonomy branching between speech and text, then between input and output, then between whole-word and part-word (phonemes for speech, graphemes for text). This work was the basis of the shallow/deep distinction between variants of dyslexia, and it found its way into Eysenck's undergraduate texts on Cognitive Psychology.

In the case of text input, the data originally comes from the hypercolumns in the striate cortex at the back of the brain, where short lines or “edges” at different angles are extracted from the image coming from the eye.

The whole-word-text-input dictionary matches all the edges in a single word against the vocabulary that it contains, and finds the closest matching known word. It does this with a lot of neurons working in parallel, so word recognition is very fast. It seems likely that the white spaces between letters also form edges that are recognised. (Think of the optical illusion with the vase and the two faces in profile, where foreground and background can easily be interchanged.)

Since we have to recognise different fonts, or handwriting styles in which some letters may be pushed down below the main line of the word, or even be suppressed completely, recognition is always a statistical process that can be characterised by a weighting indicating how good the match is.
The part-word-text-input dictionary works more slowly, and matches at one time only a single letter, or possibly two or three letters that our brains have learnt often occur together. So treating the whole of a word is a serial process. (Artificial Intelligence programmers can mimic this sort of word look-up with a trie data structure.)

This serial matching seems to go on in parallel with the operation of the whole-word-text-input dictionary, but if the latter returns a high weighting, the part-word results are discarded before they are complete. Otherwise, reading becomes slower as the results from the two dictionaries have to be compared and a consensus reached by some kind of voting strategy.
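The trie look-up mentioned in passing above can be sketched in a few lines of Python (a toy illustration only; the vocabulary here is invented):

```python
# Minimal trie: each node is a dict from letter to child node;
# the special key "$" marks the end of a complete word.
def build_trie(words):
    root = {}
    for word in words:
        node = root
        for letter in word:
            node = node.setdefault(letter, {})
        node["$"] = True  # word boundary
    return root

def lookup(root, word):
    """Serially match one letter at a time, as the part-word route does."""
    node = root
    for letter in word:
        if letter not in node:
            return False  # the match fails as soon as one letter is wrong
        node = node[letter]
    return "$" in node

trie = build_trie(["bed", "bee", "been", "bet", "best"])
print(lookup(trie, "been"))  # True
print(lookup(trie, "bezn"))  # False: "z" is not admissible after "be"
```

The serial character of the look-up is visible in the loop: each letter narrows the set of possible words before the next one is examined.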

The funny text message above is a bit of a cheat. Most of the letter substitutions maintain a similar set of edges although these have been shuffled around a bit.
5 --> S
7 --> T
1 --> I
4 --> A

In this last case, on both sides of the substitution there is a horizontal bar and a diagonal at the left; on the right there is a vertical edge in ‘4’, but a similarly-sized and positioned diagonal in ‘A’. Screw up your eyes a bit, and both sides look very similar.

The hardest substitution to make is
3 --> E
but even here the curves at the top and bottom of ‘3’ are fairly similar to the bars at the top and bottom of ‘E’, and ‘3’ even contains what is close to a vertical edge, although in ‘E’ the matching vertical edge is shifted to the left. And since ‘E’ is the commonest letter in English, that match already has a high prior probability of being selected. We can fancify this up with Bayesian statistics, but another way of looking at it is that a trained Neural Net will automatically tolerate a wider variation in the formation of ‘E’ than for other letters, since its training set will present ‘E’ more frequently. For a subject who reads a lot of handwriting, the shape of that ‘E’ will vary a lot depending on its surrounding letters. So later processing levels will tend to prefer ‘E’ to less frequent false matches (and no words in our dictionary contain digits).
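As an aside, the digit-for-letter substitutions listed above (plus 0 → O, which the message also uses) are trivial to reverse mechanically, which shows just how regular the cheat is; a rough Python sketch:

```python
# Reverse the digit-for-letter swaps discussed above, plus 0 -> O,
# which the message also relies on.
LEET = str.maketrans("571430", "STIAEO")

def decode(text):
    return text.translate(LEET)

print(decode("7H15 M3554G3"))  # THIS MESSAGE
```

Our brains, of course, do this without ever being handed the substitution table.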

Experiments in AI with artificial Neural Nets show that they will quite automatically learn to extract differing metrics from a presented instance. In this case, the number of edges is the same (if the striate cortex presents alternative edge interpretations of what are slight curves), giving a high weighting, but their relative positioning forms a second metric where the match is not so good. That suggests that the whole-word dictionary will produce a reasonably good match even where ‘3’ is substituted for ‘E’.

The part-word-text-input dictionary is more accurate and less tolerant since as you descend through the trie that encodes the whole vocabulary, only a small number of potential letters are admissible at any point. For example, “be” cannot be followed by any of the following {i, j, k, o, p, u, x, y, z} in a person with a normal English vocabulary (unless they like football or pop-music, in the case of ‘x’ or ‘y’). If a letter is incorrectly identified, then the chance that the next letter just happens to be correct becomes very small indeed.
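The point about admissible letters can be illustrated with a toy sketch (the vocabulary here is invented, not a claim about English): collect every letter that follows a given prefix in some vocabulary word.

```python
def admissible_next(vocabulary, prefix):
    """Letters that can legally follow `prefix` in this vocabulary."""
    letters = set()
    for word in vocabulary:
        if word.startswith(prefix) and len(word) > len(prefix):
            letters.add(word[len(prefix)])
    return sorted(letters)

vocab = ["bed", "beg", "belief", "bend", "best", "better"]
print(admissible_next(vocab, "be"))  # ['d', 'g', 'l', 'n', 's', 't']
```

Any letter outside that set immediately rules out every word in the vocabulary, which is why a single misidentified letter derails the part-word route so badly.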

So part-word recognition is more reliable than whole-word recognition, over the whole word. Another mechanism that comes into play is sub-vocalisation, where we move our lips and tongues to match the letters as we work through a word. Since we have a part-word-speech-input dictionary, this gives a second method of checking. Although we might confuse a “h” with a “b” based on its edges, the way we sound the two letters is completely different. It therefore probably jars with you that I did not write “an ‘h’”. I bet you didn’t remember sub-vocalising, in which case it was a subconscious process.

Finally, Cognitive Psychology teaches us that there is a lot of “top-down processing”, where we interpret things according to what we expect. “Priming” is an example of this, and it can be implemented here by a dictionary of frequent consecutive words, or “collocations”. In this example, “without even” would tend to make us more likely to recognize “even” once we had seen “without”. In fact, the only words I can think of that come frequently after “without” are {even, a, the, risk, damage, danger, loss}. At a higher linguistic level, where we allow for at least some degree of parsing, if “without even” is followed by a verb, it must end in “-ing”. Clearly we are parsing as we read, since that is an essential step in understanding the semantic meaning of a sentence.
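A crude stand-in for that collocation priming is a bigram count: given the previous word, rank candidates for the next one. A toy sketch on an invented scrap of text:

```python
from collections import Counter, defaultdict

def bigram_model(corpus):
    """Count which word follows which: a crude model of collocation priming."""
    follows = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

model = bigram_model(
    "without even thinking about it without a doubt without even trying"
)
print(model["without"].most_common(2))  # [('even', 2), ('a', 1)]
```

Having seen “without”, the model (like a primed reader) expects “even” more strongly than any other continuation.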

I like the multi-level model of language that Jackendoff presents in his 2003 book. I haven’t yet read his 2010 book and I really should make time for that.

Sorry if I have bored you. But the funny text seemed so rich in unexplained phenomena that I haven’t been able to stop thinking about it.
