The news of the week is IBM's announcement that C2 – their joint project with Lawrence Berkeley National Laboratory, which aims to build a real-time simulation of the human neocortex on top of LLNL's Dawn Blue Gene/P supercomputer – has succeeded in producing simulations at the unprecedented scale of 10⁹ neurons and 10¹³ synapses. While this is not yet enough to fulfill the project's ultimate goal, it is more than enough to simulate a cat's cortex – a first for the field of computational neuroscience.
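To get a feel for why numbers like these call for a supercomputer, here's a back-of-envelope estimate in Python. The per-synapse and per-neuron byte counts are illustrative assumptions on my part (real simulators store weights, conduction delays, and plasticity state, and C2's actual memory layout may well differ), but the order of magnitude is the point:

```python
# Back-of-envelope memory estimate for a cortical simulation at C2's scale.
# The bytes-per-synapse and bytes-per-neuron figures are illustrative
# assumptions, not C2's actual data layout.
neurons = 10**9
synapses = 10**13

bytes_per_synapse = 16    # weight + delay + plasticity state (assumed)
bytes_per_neuron = 1_000  # membrane state, parameters, spike buffers (assumed)

total_bytes = synapses * bytes_per_synapse + neurons * bytes_per_neuron
print(f"~{total_bytes / 2**40:.0f} TiB of state")  # ~146 TiB
```

Roughly 150 TiB just to hold the network's state – memory on a scale you only find spread across the nodes of a machine like Dawn.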
I think a couple of words on C2's purpose are in order. News like this is always met with well-deserved excitement, but also with some unreasonable expectations, and with a bit of healthy – if sometimes misdirected – skepticism.
First off, there is no risk of C2 "waking up" or anything of the sort – no more than a weather simulation risks producing a real storm inside the server room. Decades of "frankenrobot" movies have seemingly led people to believe that any slightly more advanced computing experiment could become conscious and develop a psychotic personality, but that simply isn't how simulations (or computers, for that matter) work. So no, the likelihood of a robot apocalypse isn't much greater now than it was last week, which – for better or worse – is pretty low.
On the other side of the spectrum, some have questioned – given how different brains and computers are – what the point of simulating brain structures in a computer is anyway. How could this ever work, if we don't even fully understand what we are trying to simulate?
Actually, under the assumption of Turing equivalence, brains and computers have essentially the same computational power – in principle, it's possible not only to simulate a brain inside a computer, but also to simulate a computer inside a brain. Granted, whether the brain is or isn't a Turing machine remains an open question – but at this point, even experimental evidence that the brain isn't a Turing machine would be a scientific discovery worth the money and effort put into C2's research.
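To make "Turing equivalence" a bit more concrete: a Turing machine is nothing but a rule table plus a tape, and a few lines of Python suffice to simulate one. The toy machine below is my own illustrative example (it just increments a unary number); the claim behind the argument above is that any physical system able to carry out this kind of loop, given enough memory, can in principle compute whatever any other computer can.

```python
# A minimal Turing machine simulator: (state, symbol) -> (write, move, next_state).
# This toy machine appends a 1 to a unary number (i.e., increments it), then halts.
rules = {
    ("scan", "1"): ("1", +1, "scan"),  # skip over existing 1s
    ("scan", "_"): ("1", +1, "halt"),  # write one more 1 at the end
}

def run(tape, state="scan", head=0):
    tape = dict(enumerate(tape))       # sparse tape; absent cells read as blank "_"
    while state != "halt":
        symbol = tape.get(head, "_")
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape))

print(run("111"))  # "1111": three becomes four
```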
Besides, one way scientists check what they think they know about a subject is by building simulations. If a simulation's results match the experimental data available for the subject, that is evidence that the theoretical model it implements is on the right track. Moreover, simulations let scientists economically perform virtually any experiment they can imagine, improving their ability to gain new insights: experimental conditions can be controlled at a level unattainable in the real world, to the point of turning time back and forth, fiddling with whatever parameters you like along the way. Later, simpler real-world experiments can be devised to validate any new results, reducing the need for expensive, difficult, and sometimes ethically complicated procedures.
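As a concrete (if drastically simplified) illustration of what this kind of experimental control looks like, here's a toy leaky integrate-and-fire neuron in Python – a much simpler cousin of the spiking-neuron models used in large cortical simulations, with every constant chosen for illustration rather than taken from C2:

```python
# Toy leaky integrate-and-fire neuron: dV/dt = (-(V - V_rest) + R*I) / tau.
# All constants below are illustrative, not taken from the C2 model.
def simulate(current_nA, t_max_ms=100.0, dt_ms=0.1):
    v_rest, v_thresh, v_reset = -70.0, -55.0, -75.0  # mV
    tau, resistance = 10.0, 10.0                     # ms, MOhm
    v, spikes = v_rest, []
    for step in range(int(t_max_ms / dt_ms)):
        v += dt_ms * (-(v - v_rest) + resistance * current_nA) / tau
        if v >= v_thresh:              # threshold crossed: emit a spike...
            spikes.append(step * dt_ms)
            v = v_reset                # ...and reset the membrane potential
    return spikes

# "Experiments" are just reruns with different inputs.
for i in (1.0, 2.0, 3.0):
    print(f"{i} nA -> {len(simulate(i))} spikes in 100 ms")
```

Every "experiment" here is just a rerun with different inputs – cheap, exactly repeatable, and with every variable under your control, which is precisely the appeal described above.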