Tuesday 21 October 2008

Symbian Essay Competition 2008

To my utter surprise, I have won the one competition I felt I had little chance of succeeding in: the Symbian Essay Contest, whose theme this year was "the next wave of smartphone innovation: issues and opportunities with smartphone technologies". If you'd like to read the (abridged) answers by each of the 10 winners, including mine, take a sojourn to the Symbian site.

And if you're interested, here is my answer in full -- just click on the subheadings to show the text:

The smartphone of the future: A powerhouse or a mere terminal?



To foresee the future one must know the past


History always repeats itself, round and round it cycles, like a bike, to maintain its balance. This is no different for the evolution of computing generally, and smartphones specifically. In their early days, the functions of both computers and phones were accessed remotely: the actual machine in front of you was but one of many terminals, a transmitter of information from and to a "central executive". Eventually we began to forget our socialist roots, the "all for one and one for all" of the IT musketeers, and in the brashest spirit of capitalism we marked the advent of the personal computer (PC). While this let the average person have at their fingertips much of the power previously exclusive to the IT technician's caste, this step forward was also a step back: PCs were not initially designed as a lively hub of transport or a loud marketplace, but as a lonely, existential room for one where communication and collaboration were, if not impossible, then hellish.

Eventually, the Internet opened up portholes in our tiny cubicles, helping us to see each other once more, but the walls remained solid, and electronic communication was as non-communist as ever. We sent each other letters and packages, passed photographs under each other's doors, sometimes we coughed and sneezed, infecting one another with our viruses and ailments, but this was as personal as things got since our doors were always locked, our lives tucked away, our keys held firmly fast to our chains. Despite the virtually endless potential and possibilities of the Web, the philosophy of physical ownership that brought technology to the masses ultimately forced us to rely constantly on the limited brainpower contained in the physical frame of the device at hand.

The modern smartphone has had a similar trajectory, where an initial focus on the transmission of information shifted to a greater and greater emphasis on local capabilities. Here too, the motto of "bigger is better", instead of "simpler is better", has brought about much waste of our scarce resources: energy, materials and time, both for producer and consumer. The dream of the PC was necessary, but its scars upon the smartphone industry plague us to this day with its elder problems in communicating and sharing information easily and instinctively.

With the smartphone now in its teenage stage, its development is at a crossroads: it can grow out of its parents' lap to become a unique, mature product in its own right, or it can remain more of the same. If it can leave the PC concept behind to embrace what we truly need as human beings, the SC, or social computer, it could become the hero that will help us return to our primeval state of sharing and socialization, if only in our IT world. This essay will build the skeleton of this mythical device, and help its shaky bones keep a balance between being a powerhouse and a terminal. Let us unveil the full potential of smartphones and thus make them more attractive for those who have not taken the leap. That includes me.



It's the features, stupid


Before smartphones can change the world, they must be bought, and as in the primeval days of the first PCs, the greatest problem in adoption is that they are simply too expensive. STOP. Yet expense is relative: smartphones are expensive for most not so much because their cost puts them out of reach of consumers, but because their usefulness does not yet justify their cost. Most buyers only ever use the tip of the iceberg, in terms of the features available, leaving the rest of the functionality hidden under the dark sea of user interface.

But this problem can't be solved by cramming more features in. In fact, the greatest problem facing smartphones today is the persistent focus on the quantity of features, rather than their quality and usability on a small platform. Let's be direct and frank: feature bloat approaches the proportions of Moby Dick: telephones with dual cameras, double button-pads, infinite arrays of menus, settings, games and ring-tones, the latest, full-fledged version of Microsoft Word. And Excel. And PowerPoint. And widgets, gadgets and crapgets all tossed around the screen like toys in a sandbox.

To be successful, we must learn to follow a middle way between too many features and too few. Since "a little" and "a lot" mean different things to different people, the best way to achieve a balance is to endow the smartphone with modularity and customization, particularly in terms of software. The basic capabilities of the everyday phone provide a solid base to build on. On top of this foundation of calling, messaging, photography and web browsing we must erect the other features. Modularity empowers users with choice, a commodity that is still in short supply in the smartphone market, and one that the consumer appreciates as much as "capability" or "potential".

It is easiest to achieve this on an open platform like Linux, rather than a purely proprietary one such as, say... Windows. For a developer, direct access to the code makes it easier to use all the system's nuts and bolts and integrate an application with the rest of the OS. This transparency also allows programmers to find the most effective ways of injecting and drawing information from each program, helping interoperability and resulting in programs that can operate together seamlessly. For instance, if we want to build a photograph manager, we could add a module into the bundled photo-taking software to add photos to a particular folder and upload them to a specific web album on the fly, as we take each shot. Or if we want to insert a map into our emails, we could construct a bridge between the map and the mailing applications. All this in turn will make the platform more attractive to consumers and help designers custom-fit the software to the phone in question, adding or subtracting features as the hardware requires, rendering the platform more versatile and opening up markets in a wider range of phones.

Don't misunderstand me. There's nothing wrong with a smartphone being able to serve many functions (in fact, it is meant to), but these must be simple and clear to start with. Its core functions must be easily and directly accessible, with extras built on top and around them, not stuck messily inside like the bones of the builders of the Great Wall of China. This modern, entropic tendency towards chaos and confusion makes smartphones simply too difficult to use for the 90% of people who just can't be bothered to learn to wade through the mess. You should not have to exert significant mental effort to take advantage of every single function: it's supposed to be a tool, not a mathematical problem! Instead, usage of the phone must be an extension of our own mind, as routine as daydreaming and as quick as our sight.

To help us reach "user interface Nirvana" we should keep a record of frequently used applications and make them easier to access than the ones we rarely use. However, our needs vary from one environment to another, and what is useful at one point in time can often get in the way of our experience at another. Hence, the record we keep must be dynamic and flexible, such that application accessibility varies with the context or modus operandi of the device. This environmental state can be chosen by the user or deduced by the smartphone from information about the time, location (GPS), previous and planned events, and even acceleration. The OS could even work autonomously depending on the occasion, an idea best illustrated with an example:

Oh dear, is it already 5pm? Ring a quick alarm to let you know that your husband is due to arrive and that your lover is due to leave, then launch the map application, zooming in on the house, indicating sensible escape routes ("No, not the balcony, Charles, we're on the 6th floor") or if that's too late, then use some of its gathered information to suggest a good excuse ("Oh dear, now you've ruined my surprise, and only days before your birthday. Rex, this is the party decorator").

Now seriously, context can offer far more than simple convenience: it can help in matters of life and death, as demonstrated by smartphone programs like Life360. This app helps you keep track of the health of your close ones, alerting you about natural (and unnatural) disasters relevant to their location and yours. A panic button alerts others around you of your whereabouts and that you need help, a feature that can be activated automatically if the app deduces from accelerometer data that you have suffered a traffic accident. Another program, cab4me, can call a taxi to your location regardless of where you are. CompareEverywhere lets you photograph bar codes to find out the price of, say, a DVD at the shop you are at compared to nearby stores, and cites reviews to help you decide whether the laptop you're pondering is actually worth purchasing.
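To make the idea of a dynamic, context-dependent application record concrete, here is a toy sketch. The contexts ("commute", "office") and app names are purely illustrative, and a real launcher would of course infer context from GPS, calendar and accelerometer data rather than take it as a string:

```python
from collections import defaultdict

class AppRanker:
    """Toy context-aware launcher: ranks apps by how often they
    were used in the current context."""

    def __init__(self):
        # counts[context][app] -> number of recorded launches
        self.counts = defaultdict(lambda: defaultdict(int))

    def record_launch(self, app, context):
        self.counts[context][app] += 1

    def ranked(self, context):
        # Most-used apps in this context come first; apps never
        # used in this context simply don't appear.
        usage = self.counts[context]
        return sorted(usage, key=usage.get, reverse=True)

ranker = AppRanker()
for app in ["maps", "email", "maps", "music"]:
    ranker.record_launch(app, "commute")
ranker.record_launch("email", "office")

print(ranker.ranked("commute"))  # maps first: launched twice on the commute
```

The same usage history thus yields a different home screen on the commute than at the office, which is the essence of the proposal above.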

The Web too will benefit from the smartphone's capabilities, making our surfing experience more streamlined and custom-fit to our personalities, helping us bypass "junk" content. The context information integrated in the phone itself can aid us when doing web searches, bringing results that are more relevant to our present location, the places we frequent, the services we use and the things we like and dislike. Here the smartphone has a distinct advantage over full-fledged computers, since it can always be with us, accumulating information about our personal and social habits.



The flesh in the machine


The endless pit of features in modern smartphones goes hand in hand with greater and greater hardware requirements. This, aside from frightening the wits out of the wallets of most people, threatens to devour our paltry batteries in a matter of hours, rather than days, making our phones intelligent anchors: smart, but not really mobile. The hardware part of the solution comes in the shape of newer chip architectures, which rise up as faithful Spartans to meet the power challenge, providing more and more processing power per electron. These new chip architectures herald the coming of ever more powerful phones, whether based on classical processor-on-motherboard designs, like the Silverthorne architecture of the Intel Atom, or comprising an all-inclusive system-on-a-chip, such as the NVIDIA Tegra. Yet we must not overload their capabilities, given the trend of modern software to bloat faster than hardware can support it, needing more and more of those tired electrons as it complicates simple tasks. Remember that the Spartans did fail in the end and that Windows Vista "capable computers" are capable of little more than booting the system. We must learn from past mistakes and avoid the scenario where our smartphones, whose stuffed electronic minds slog on ever slower even now, lose the worth of their name.

On a more optimistic note, the rise of touchscreens gives each new generation of smartphones fewer reasons to possess any physical buttons, since virtually all their functions can be emulated as easily in software. This also results in more robust devices that can be taken anywhere, since there are fewer holes for dust and sand and rain to pass through. What is the need for a physical keypad if the dial conjures itself up on your screen when you start a call? Why do you need a keyboard if you can just as easily type on the display? Another advantage of touchscreen keyboards is that more information is available about where a finger lands relative to each key, allowing for far easier auto-correction than with physical keyboards, where the coarser grid of pressed keys provides the sole input. The one obstacle in the way of devices embracing this technology has been the common complaint that touchscreens, unlike keys, don't provide you with any feedback when you press them. Thankfully, new developments in haptics by Nokia will soon bring us the Haptikos touchscreen, which uses sensor pads under the screen to give you the same tactile response as a pressed button. So you see, now there's no excuse not to get touchy-feely with smartphones.
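The auto-correction point can be sketched in a few lines: because a touch is a coordinate rather than a hard key press, the keyboard can rank several candidate keys by distance instead of committing to one. The key layout below is a made-up toy, not any real keyboard's geometry:

```python
import math

# Hypothetical key centres of a toy on-screen keyboard, in pixels.
KEY_CENTRES = {
    "q": (10, 10), "w": (30, 10), "e": (50, 10),
    "a": (20, 30), "s": (40, 30), "d": (60, 30),
}

def likely_keys(touch, n=2):
    """Return the n keys nearest to a touch point. A soft keyboard
    can feed these ranked candidates into its auto-correction,
    something a physical key grid cannot offer."""
    x, y = touch
    def dist(key):
        kx, ky = KEY_CENTRES[key]
        return math.hypot(kx - x, ky - y)
    return sorted(KEY_CENTRES, key=dist)[:n]

print(likely_keys((32, 12)))  # a touch near 'w' also suggests 'e'
```

A dictionary-based corrector could then weight each candidate word by these per-key distances rather than by a single guessed letter.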

Multi-touch screens multiply the fun, expanding the number of tasks that can be done with a flick of the wrist. Aside from already popular gestures like two-finger scrolling and pinch zoom, there is much opportunity to add more complex gestures tied to particular applications. For example, in an Internet browser, we might rotate our index and thumb around each other to reload the page, or draw clockwise and counter-clockwise spirals to move forward and back. Alternatively, while using the camera app, these very same gestures might change the zoom of the lens and scroll through different photography modes. Gestures are an important step in making communication between humans and computers more language-like and intuitive, easing the use of smartphones and helping us spend more time taking advantage of our smartphones' capabilities and less time trying to.

On the visual side, Organic Light Emitting Diodes (OLED), the next likely screen technology, will enhance the experience of using a smartphone in several ways. Firstly, the reduction in power usage of an OLED screen compared to current LCDs means that larger screens will be less disadvantaged with respect to battery life, facilitating the transition to devices fully usable by touchscreen. Secondly, the improved picture quality is to be reckoned with, since it will make watching media on the device an enjoyable experience rather than a last resort. Finally, a less advertised advantage is that OLEDs don't require a backlight, which means that screens can do without a bezel. Imagine screens that literally occupy the entirety of the face of your (now far thinner) smartphone. Thus, OLEDs are a match made in gadget heaven for touchscreen-based smartphones.

The likely successor or competitor of OLEDs is electronic paper (e.g. eInk), which, like a chameleon, changes the pigmentation in its electronic skin by modulating its reflectance, rather than emitting light of its own. The advantage of this is that the display needs no energy except when changing the picture, enhancing battery life even beyond that possible with OLED. Electronic paper is also far more readable in well-lit environments and lets you easily read books from the screen, a feat not impossible, but rather torturous in practice, on current smartphones.

Flexible displays will change the name of the game in terms of smartphone design, since their coupling with flexible hardware architectures can effectively enlarge a smartphone without changing its form factor. Imagine a touchscreen phone twice or thrice as wide as an average smartphone that can be folded along its vertical axis to be placed comfortably next to your ear or in your pocket. Fold it the other way to convert it into a dedicated photo-camera. Such tricks can be used to place smartphones in closer competition with devices like Ultra Mobile PCs (UMPCs) and netbooks, with the added advantage of being far more versatile. The increase in the unfolded display size up to 9 inches would enable far more comfortable touch typing and Internet usability.



All roads lead to smartphone


Just as smartphones converge onto becoming computers in their own right, so do other devices converge into the smartphone, such that more and more separate gadgets are fused into one. Nobody wants to spend the money on, and carry, a bucketload of gadgets if a multi-featured device can do the job as well if not better than each specialized one. Also, since we virtually never use more than one item at once, it's far less wasteful to use the same materials, hardware and energy for all of these tasks than to fulfill each one separately. The more gadgets we build, the more valuable resources we effectively pour into our already overfilled landfills; as in Japan, where such dumping sites contain more precious metals than the world consumes in a year.

We've seen cell phones take baby steps at becoming cameras, first tumbling, but progressively more and more successfully, to the point that they're now about to become the main photography devices for most of us hobbyists. This makes perfect sense, since the resolution of our cameras already exceeds the capacity of our vision, such that further increases in resolution, rather than improving the quality of our photos, merely allow us to enlarge them further and further. Let's face it, the pixel race is pointless: who has the album or wall space for 20-megapixel photographs?

Smartphones are also dipping their greedy fingers into the portable media player market. A vital advantage of smartphones over typical players is their inherent, painless programmability. If you'd like your iPod or Zune to play, say, open source Ogg Vorbis files, you'll have to strap yourself in for a torturous weekend: hacking your device, uninstalling its firmware and installing your own, with no guarantee that this will work on your particular device and at the risk of wrecking your gadget. On a smartphone, all you have to do is go online, find an application that fulfills your needs, and install it.

With the advent of recent hardware developments, not even emerging markets are safe from the smartphone menace. As the mobile Internet experience improves, netbooks may find themselves redundant. Similarly, with screens that are larger and easier to read, electronic book readers may become absorbed into the smartphone. And as GPS becomes a standard feature, smartphones will become the map of choice both in your car and on foot, delineating routes and channelling traffic information to help you find your way, replacing Personal Navigation Devices (PNDs) and Sat Navs.

In the not too distant future, smartphones will take the place of your wallet, your public transport card and perhaps even your keys and means of ID. The pervasive possession of mobile phones and the increasing ownership of smartphones create the chance to do what the plastic of our credit cards could never achieve: to liberate us from our dependency on physical currency. Thanks to Near Field Communication (NFC) technology, in the near future you will wave your phone in front of the till to buy your newspaper or give your phone to the waiter when the bill is due. In the airport, the passport checkpoint will scan your phone instead of your booklet. These steps, as well as being convenient, will lead to better international security, as hardware encryption techniques become so sophisticated that illegal decryption and falsification become impracticably slow and uneconomic. This, in conjunction with biometric identity tests, will decrease the chances of our dark alleys resounding with a "your phone or your life", since stealing the phone won't allow one to draw any money from it, aside from perhaps selling it.

All these developments are leading to a future where devices are more convenient, economical, eco-friendly, secure and actually have a positive impact on quality of life. All this in turn makes smartphones more viable for the average consumer, particularly in developing markets where individuals cannot afford to buy more than one device for their needs. Wider adoption will also lead to a greater number of developers being available to build new applications, leading to more innovation and a growing advantage over more expensive and specialized devices in the market.



A walk in the clouds


Let us turn our attention to the issue of evaporating the smartphone's capabilities into a computing cloud. One of the reasons why the realization of cloud computing is unavoidable is that most people do not require their smartphones to be full-fledged word processors, permanent mail clients or agendas. Many such capabilities are already provided by the Internet, in the shape of web applications, such as lightweight online document processors and calendars. Instead of creating programs from scratch, a better strategy is to provide easy integration of these services with the phone, allowing them to run offline and store part of their information locally, while keeping the great bulk of it online. The smartphone thus becomes a vessel for the capabilities of the new Internet, rather than remaining anchored to and entangled in the old solid Web and rusty personal computing.

In a similar vein, a better way to save resources and speed up processing on programs local to the phone itself is to outsource the most processor intensive tasks. For instance, given a fast enough Internet connection, instead of using local resources to create complex graphics it would be feasible to use a nearby server to compute the most intensive operations and send the result back to the smartphone in the shape of images. This would let you use Photoshop without having to overload your own smartphone with complex matrix computations. Alternatively, imagine you need to quickly tell something to your boss, who is at this moment in an important company meeting and can't speak. It would be neat to simply speak your message into the phone and for it to be converted into a text message on the fly by a dedicated computer and sent on.

The importance of cloud computing is even greater for a possible market incursion into developing countries. Since these nations tend to have a far less advanced mobile infrastructure, this initiative will be aided by recent developments in the implementation of mesh networking. Projects like the OLPC (One Laptop Per Child) intend to make the Internet available to everyone by turning each device into a transmitter, as well as a receiver, of information. Using each active device in this way creates a solid interconnected infrastructure, effectively spreading the reach of the Internet and allowing technologies like VoIP (Voice over IP) to serve as a basis for mobile connectivity. Collaboration between such projects and smartphone companies brings not merely economic benefits, but also demonstrates the social value of smartphones and their importance in helping people and communities.

In developed nations, the pervasive Internet connectivity afforded by the emergent technology of WiMAX will make high speed web access on the smartphone a reality. This development is vital not only for the endless possibilities of cloud computing to ripen, but also to render smartphones more popular and fashionable, therefore making them a "necessity" rather than a want or luxury. Then we can let Metcalfe's law do our work for us. This law states that the value of a network is proportional to the square of the number of users. The trick then, is to reach critical mass and speed in the market when the value of the platform is sufficient to attract the average user and create a domino effect, breaking the market barrier between early adoption and mass ownership.
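Metcalfe's law is worth seeing in numbers. A minimal sketch, with an arbitrary per-connection constant, shows why crossing the critical-mass threshold matters so much: doubling the user base quadruples, not doubles, the network's value.

```python
def metcalfe_value(users, k=1.0):
    """Metcalfe's law: a network's value grows with the square of
    its number of users (k is an arbitrary scaling constant)."""
    return k * users ** 2

# Doubling the user base quadruples the network's value:
print(metcalfe_value(2000000) / metcalfe_value(1000000))  # 4.0
```

This quadratic growth is the "domino effect" in question: each new adopter raises the platform's value for every existing user, which in turn attracts the next adopter.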

Last, but certainly not least, we must remember that the original promise of mobile phones was social, one that is yet to be fulfilled. The spread of social networks has been a step in the right direction, but will never achieve its full potential if we only ever use them while sitting in our rooms, alone, chatting across a blank terminal, exchanging, at most, emoticons ;-) For social networking to be truly social, it must be as flexible as social interactions, being available on the go. This is where smartphones come onto the stage. But we also need our smartphones and their apps to interact with our social networks, using information not only about us, but also about our close ones, to help guide our decisions: What music should I get for my cousin? Will the girl I fancy like the film I invited her to see? Where can I take my grandfather for his birthday lunch that he's never been to? This last is something we can only achieve when our lives are built on cloud castles.



The meaning of life? Not 42


So, does the smartphone need to be a powerhouse or a mere terminal? I think it must be a little bit of both. In our age, web services are starting to replace the capabilities of installed programs. It's only logical that the transition to cloud computing should happen first in an inherently mobile device like the smartphone, paving a stairway to heaven for the rest of the computing industry. But at the same time, our smartphone must be powerful enough to survive on its own two feet and to harness the power of web services.

While at first sight most of this essay talks of developments aimed at the device proper, each prepares the ground for a steady transition to a distributed framework. Most applications we are likely to download or buy for our phones in the future will be based on the cloud, and the contextual information our phone is likely to gather will surely be interpreted in relation to our nearby electronic environment. Indeed, the main aim of hardware improvements is to make our smartphone a better receptacle for the web. And device convergence onto the smartphone is fuelled by the ease of both downloading and uploading content from a mobile device onto our home base anywhere, anytime.

In the end, the smartphone is set to become not a simple computer terminal, but a real enactive window into the world. That's a voyage we're lucky to witness!

Thursday 16 October 2008

The other half of our Moon

My iambs always seem to feed on sorrow and memories past, but are meant well.

"These fleshy fruits about my beak,
have twisted, turned around in time,
to petal, sepal, arid spine,
and lastly listless, lifeless seed.

Within - no flesh, I fear, cut out
by blunt, by slavic, stumpy hand.
My florid tongue was but your land,
depleted, languished in my snout.

Two orbits bare, two shrivelled stars,
inside slain seas surge charcoaled isles:
Is quit the quiver of my eyes,
and Amor's arrow's but a scar.

Despite, each eve, waves plan their flight
from these their coves, to stony shores,
to lap the wounds and salt the sores.
Could keep them captive not tonight.

They tumble to Electra's tear
enwrapped in chain, swaggers a rent.
A "whoosh", Psyche erupts the dent,
her argent cord - impaling spear.

Bronze heart, subsister, knew not rust
until past pores your pasty dew
infested it, its warmth withdrew.
You taught it love can come with lust."

Wednesday 15 October 2008

Nothing is whole or part, but thinking makes it so

And last but not least, this essay I wrote for the Max Perutz Science Writing Award about my first fMRI project and its philosophical implications. The marriage made in Nirvana between Buddhist mysticism and science, a la Fritjof Capra...

Physics and Buddhism. These seemingly opposite ways of knowledge share one common denominator. The Buddhist doctrine of Pratītyasamutpāda ("dependent origination") says that all within our universe is interconnected and interdependent, every apparent 'thing' depends on everything else, and ultimately on the universe as a whole. Similarly, Niels Bohr, a founding father of quantum physics, argued that "isolated material particles are abstractions, their properties being definable and observable only through their interaction with other systems."

Naturally, if everything is interconnected, nothing is divided. Therefore separate objects cannot truly exist, it's the mind that creates them. Buddha realised this over two millennia before psychologists did, stating that "with Vijñāna as condition, Nāmarūpa arises". Vijñāna is "divided knowing", cognition; Nāmarūpa is "name and form" seen as one. Indeed, what is an object to our minds? It is but its form or features, and its name, a tag binding these together and separating them from the rest of the world.

Cognitive neuroscientists such as myself are interested in the neural correlates of mental objects, their "name" and "form", since these help us understand how we structure our visual world, granting us insight into the nature of visual consciousness, the essence of experience. But you may ask, how can you argue that we construct the objects within our visual world if they appear so constant? Let me illustrate our mental malleability with an example:

As you contemplate this page, the very same image enters your eye, but what objects do you see? Now 'tis a paragraph, then a line, a word, a letter even, and if you focus clearly, the traces making up these letters become the objects of your awareness. And as you go back up in this hierarchy, what were once objects become parts thereof, and so on. Thus, a visual object has no objective reality, pardon the pun, but is a subjective matter dependent on occasion, task and mood.



My research is centred on finding, through visual short term memory tests combined with functional Magnetic Resonance Imaging (fMRI), where within the brain we represent visual features and the cage encapsulating them into a single object. fMRI shows that they are kept somewhere in the parietal cortex at the top and back of our head, vital for organising our attention and perception of space. Indeed, it's this attention that is thought to bind features and assign them a tag, constructing objects we can perceive and play with in our mind.

To find where these two components of an object are held separate, I take advantage of an analogy of the above example: I show you a scene made up of coloured discs and ask you to remember them as groups of distinct coloured triangles or as one complex whole. The number of objects depends on how I ask you to remember the discs, since a set of features can only belong to one object at a time. Hence, by comparing brain activity related to memory for the very same discs either as an aggregated whole or a handful of parts, I can find the brain locus where features are glued into single items. Similarly, by changing the number of discs you must remember, I can alter the number of features independently from the number of objects, and locate where these features are kept.



But to what use can this knowledge be put?

Emerging brain imaging methods let us explore the topography and properties of visual maps in detail, allowing us to predict what a person sees from their brain activity and bringing us ever closer to reconstructing and viewing the content of our inner display. But to truly succeed at reading and comprehending perception we need to be able to image the spectator, the object making homunculus within.

This information can then help us better understand neural conditions arising from parietal brain damage, such as simultanagnosia, whose sufferers cannot perceive more than one object at a time, and often report illusory conjunctions by grouping disparate features into a single object.

The questions on our table are ancient, but the framework of cognitive neuroscience slowly unravels an opening in the thick unknown, promising to illuminate our ignorance and enlighten us. By learning how we structure our external world, we discover how our internal cosmos is built.

So what's the moral? Misquoting Hamlet, "Nothing is whole or part, but thinking makes it so"

Tuesday 14 October 2008

Out of sight, out of mind

In these silent times, I've naught but some more essays to share with you. This one I submitted to the Daily Telegraph Science Writer contest:

Our eyes, windows to our soul, are not one-way streets. Our mental life, irrigated by our perception, depends on the images illuminating it. This simple metaphor for our vision, a matter more complex than our blunt portholes, sheds light on the mental condition of autism. We recognize people with autism by the trouble they show in socializing, their language deficiencies and their insistence on sameness and repetition. But a less well-known fact is that they see the world rather differently from the rest of us, a fact that can allow us to understand and aid them.

Sight starts in the eyes, as does the crux of the matter according to psychologists Kate Plaisted and Greg Davis. They argue that the key to understanding autism lies in the magnocellular (MC) system of cells in the retina. MC cells respond to coarse, global features and to brief changes in the visual periphery, making them vital in guiding attention to salient aspects of our surroundings. Drs Plaisted and Davis have shown that these cells are less sensitive in autistic individuals, which can explain two outcomes related to the properties of these cells. Firstly, children with autism easily concentrate on the fine aspects of scenes, helping them to quickly spot slight changes and making them impervious to visual illusions, but they have trouble grasping the gross context of a scene. Secondly, they find it hard to move their attention from one thing to the next, which explains their tendency toward reiteration.

Also, this MC deficit may cause a domino effect on social brain functions like face perception and imitation. We start watching faces in our infancy, an ability that depends on our MC system, which directs our attention to the archetypal, gross T-shaped form of the human face. This focus seeds the growth of our adult abilities, and disturbing it mars our natural tendency to respond to social hints, resulting in severe social difficulties. Mark Johnson argues this MC system is also critical in adulthood, helping us comprehend facial expressions, which cause global visual changes. Facial gestures are the physical twins of emotion, that very abstract concept that lets us make sense not only of our own, but also of others' mental world.

The MC system also feeds into the so-called dorsal visual stream in the brain, which underlies such processes as the perception of coherent motion, also impaired in autism. This in turn impairs our perception of other people's actions and our ability to imitate them. Indeed, in this dorsal stream, 'mirror neurons' respond both to actions we see others do and to those we make ourselves. This allows us to cross the frontier between self and other, imitating not only acts but also mental states, letting us understand others and share our happiness or pain with them. Hence, according to Marco Iacoboni and Mirella Dapretto, it is a deficit in this system that leads to problems in imitation and empathy in autism.



Lastly, autistic people don't perceive objects in the way we do. Sarah Grice showed that they don't show the same characteristic electrical brain activity when seeing illusory objects, like the Kanizsa square (see above image), as normal individuals do. Instead, they respond like 6-month-old infants who can't yet integrate the display into a square, making their visual world fragmented and stopping them from seeing the forest for the trees. Crucially, since our interactions depend on the big picture - exuberant dancing and loud singing may make us the spirit of a party but will land us in detention if we try it at school during an exam - it's not surprising that an inability to judge context can be socially crippling.

So what if autism is all about vision, or lack thereof? Well, this understanding will hone our own foresight and help tackle the root problems in autism, letting us intervene sooner by using MC sensitivity as an early diagnostic tool. Our knowledge can also focus our intervention schemes on the visual problems and their developmental consequences, particularly since brain plasticity and flexibility are greatest in infancy. For instance, we could engage infants at risk in educational games that require quick shifts of attention and the binding of features together. Or, perhaps, teach them to focus on and recognize faces and their expressions, even train their mirror neurons through imitative play. By learning how they see the world and how we can help them see ours, we can make life easier for these, our children, the apples of our eyes.