These fluffy snowflakes, known as aggregates, form when snow crystals collide with other snow crystals. Many of these flakes also show some riming, or an icy coating. A new high-speed, three-camera system developed at the University of Utah made these pictures as the snowflakes fell.
Credit: Tim Garrett, University of Utah
This webpage is about one mile long (depending on your browser resolution). It has one figure for every person on Earth, color-coded by region.
It is a stunning way to convey the scale of the 7+ billion people on Earth. I’ve zoomed in and out and my mind is just sort of blown. I don’t know who you are, person #5,779,280,035, but you look great.
People often think that other people are staring at them even when they aren’t, vision scientists have found.
In a new article in Current Biology, researchers at The Vision Centre reveal that, when in doubt, the human brain is more likely to tell its owner that they’re under the gaze of another person.
“Gaze perception – the ability to tell what a person is looking at – is a social cue that people often take for granted,” says Professor Colin Clifford of The Vision Centre and The University of Sydney.
“Judging if others are looking at us may come naturally, but it’s actually not that simple – our brains have to do a lot of work behind the scenes.”
To tell if they’re under someone’s gaze, people look at the position of the other person’s eyes and the direction of their heads, Prof. Clifford explains. These visual cues are then sent to the brain where there are specific areas that compute this information.
However, the brain doesn’t just passively receive information from the eyes, Prof. Clifford says. The new study shows that when people have limited visual cues, such as in dark conditions or when the other person is wearing sunglasses, the brain takes over with what it ‘knows’.
In their study, the Vision Centre researchers created images of faces and asked people to observe where the faces were looking.
“We made it difficult for the observers to see where the eyes were pointed so they would have to rely on their prior knowledge to judge the faces’ direction of gaze,” Prof. Clifford explains. “It turns out that we’re hard-wired to believe that others are staring at us, especially when we’re uncertain.
“So gaze perception doesn’t only involve visual cues – our brains generate assumptions from our experiences and match them with what we see at a particular moment.”
There are several speculations as to why humans have this bias, Prof. Clifford says. “Direct gaze can signal dominance or a threat, and if you perceive something as a threat, you would not want to miss it. So assuming that the other person is looking at you may simply be a safer strategy.”
“Also, direct gaze is often a social cue that the other person wants to communicate with us, so it’s a signal for an upcoming interaction.”
There is also evidence that babies have a preference for direct gaze, which suggests that this bias is innate, Prof. Clifford says. “It’s important that we find out whether it’s innate or learned – and how this might affect people with certain mental conditions.
“Research has shown, for example, that people who have autism are less able to tell whether someone is looking at them. People with social anxiety, on the other hand, have a higher tendency to think that they are under the stare of others.
“So if it is a learned behaviour, we could help them practice this task – one possibility is letting them observe a lot of faces with different eyes and head directions, and giving them feedback on whether their observations are accurate.”
You’ll Probably Never Upload Your Mind Into A Computer
Many futurists predict that one day we’ll upload our minds into computers, where we’ll romp around in virtual reality environments. That’s possible — but there are still a number of thorny issues to consider. Here are eight reasons why your brain may never be digitized.
Indeed, this isn’t just idle speculation. Many important thinkers have expressed their support of the possibility, including the renowned futurist Ray Kurzweil (author of How to Create a Mind), roboticist Hans Moravec, cognitive scientist Marvin Minsky, neuroscientist David Eagleman, and many others.
Skeptics, of course, relish the opportunity to debunk uploads. The claim that we’ll be able to transfer our conscious thoughts to a computer, after all, is a rather extraordinary one.
But many of the standard counter-arguments tend to fall short. Typical complaints cite insufficient processing power, inadequate storage space, or the fear that the supercomputers will be slow, unstable and prone to catastrophic failures — concerns that certainly don’t appear intractable given the onslaught of Moore’s Law and the potential for megascale computation. Another popular objection is that the mind cannot exist without a body. But an uploaded mind could be endowed with a simulated body and placed in a simulated world.
To be fair, however, there are a number of genuine scientific, philosophical, ethical, and even security concerns that could significantly limit or even prevent consciousness uploads from ever happening. Here are eight of the most serious.
1. Brain functions are not computable
Proponents of mind uploading tend to argue that the brain is a Turing Machine — the idea that organic minds are nothing more than classical information-processors. It’s an assumption derived from the strong physical Church-Turing thesis, and one that now drives much of cognitive science.
But not everyone believes the brain/computer analogy works. Speaking recently at the annual meeting of the American Association for the Advancement of Science in Boston, neuroscientist Miguel Nicolelis said that, “The brain is not computable and no engineering can reproduce it.” He referred to the idea of uploads as “bunk,” saying that it’ll never happen and that “[t]here are a lot of people selling the idea that you can mimic the brain with a computer.” Nicolelis argues that human consciousness can’t be replicated in silicon because most of its important features are the result of unpredictable, nonlinear interactions among billions of cells.
“You can’t predict whether the stock market will go up or down because you can’t compute it,” he said. “You could have all the computer chips ever in the world and you won’t create a consciousness.”
2. We’ll never solve the hard problem of consciousness
The computability of the brain aside, we may never be able to explain how and why we have qualia, or what’s called phenomenal experience.
According to David Chalmers — the philosopher of mind who came up with the term “hard problem” — we’ll likely solve the easy problems of human cognition, like how we focus our attention, recall a memory, discriminate between stimuli, and process information. But explaining how incoming sensations get translated into subjective feelings — like the experience of color, taste, or the pleasurable sound of music — is proving to be much more difficult. Moreover, we’re still not entirely sure why we even have consciousness, and why we’re not just “philosophical zombies” — hypothetical beings who act and respond as if they’re conscious, but have no internal mental states.
In his paper, “Facing Up to the Problem of Consciousness,” Chalmers writes:
How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.
If any problem qualifies as the problem of consciousness, argues Chalmers, it is this one.
3. We’ll never solve the binding problem
And even if we do figure out how the brain generates subjective experience, classical digital computers may never be able to support unitary phenomenal minds. This is what’s referred to as the binding problem — our inability to understand how a mind segregates and combines the elements of experience as seamlessly as it does. Needless to say, we don’t even know whether a Turing Machine can support these functions.
More specifically, we still need to figure out how our brains segregate elements in complex patterns, a process that allows us to distinguish them as discrete objects. The binding problem also describes the issue of how objects, like those in the background or in our peripheral experience — or even something as abstract as emotions — can still be combined into a unitary and coherent experience. As the cognitive neuroscientist Antti Revonsuo has said, “Binding is thus seen as a problem of finding the mechanisms which map the ‘objective’ physical entities in the external world into corresponding internal neural entities in the brain.”
Once the idea of consciousness-related binding is formulated, it becomes immediately clear that it is closely associated with two central problems in consciousness research. The first concerns the unity of phenomenal consciousness. The contents of phenomenal consciousness are unified into one coherent whole, containing a unified “me” in the center of one unified perceptual world, full of coherent objects. How should we describe and explain such experiential unity? The second problem of relevance here concerns the neural correlates of consciousness. If we are looking for an explanation to the unity of consciousness by postulating underlying neural mechanisms, these neural mechanisms surely qualify for being direct neural correlates of unified phenomenal states.
No one knows how our organic brains perform this trick — at least not yet — or if digital computers will ever be capable of phenomenal binding.
4. Panpsychism is true
Though still controversial, there’s also the potential for panpsychism to be in effect. This is the notion that consciousness is a fundamental and irreducible feature of the cosmos. It might sound a bit New Agey, but it’s an idea that’s steadily gaining currency (especially given our inability to solve the Hard Problem).
Panpsychists speculate that all parts of matter involve mind. Neuroscientist Stuart Hameroff has suggested that consciousness is related to a fundamental component of physical reality — components that are akin to phenomena like mass, spin or charge. According to this view, the basis of consciousness can be found in an additional fundamental force of nature not unlike gravity or electromagnetism. This would be something like an elementary sentience or awareness. As Hameroff notes, “these components just are.” Likewise, David Chalmers has proposed a double-aspect theory in which information has both physical and experiential aspects. Panpsychism has also attracted the attention of quantum physicists (who speculate about potential quantum aspects of consciousness given our presence in an Everett Universe), and physicalists like Galen Strawson (who argues that the mental/experiential is physical).
Why this presents a problem to mind uploading is that consciousness may not be substrate neutral — a central tenet of the Church-Turing Hypothesis — but may in fact depend on specific physical/material configurations. It’s quite possible that there’s no digital or algorithmic equivalent to consciousness. Having consciousness arise in a classical Von Neumann architecture, therefore, may be as impossible as splitting an atom in a virtual environment by using ones and zeros.
5. Mind-body dualism is true
Perhaps even more controversial is the suggestion that consciousness lies somewhere outside the brain, perhaps as some ethereal soul or spirit. It’s an idea that’s primarily associated with Rene Descartes, the 17th century philosopher who speculated that the mind is a nonphysical substance (as opposed to physicalist interpretations of mind and consciousness). Consequently, some proponents of dualism (or even vitalism) suggest that consciousness lies outside knowable science.
Needless to say, if our minds are located somewhere outside our bodies — like in a vat somewhere, or oddly enough, in a simulation (à la The Matrix) — our chances of uploading ourselves are slim to none.
6. It would be unethical to develop
Philosophical and scientific concerns aside, there may also be some moral reasons to forego the project. If we’re going to develop upload technologies, we’re going to have to conduct some rather invasive experiments, both on animals and humans. The potential for abuse is significant.
Uploading schemas typically describe the scanning and mapping of an individual’s brain, or serial sectioning. While a test subject, like a mouse or monkey, could be placed under a general anesthetic, it will eventually have to be re-animated in digital substrate. Once this happens, we’ll likely have no conception of its internal, subjective experience. Its brain could be completely mangled, resulting in terrible psychological or physical anguish. It’s reasonable to assume that our early uploading efforts will be far from perfect, and potentially cruel.
And when it comes time for the first human to be uploaded, there could be serious ethical and legal issues to consider — especially considering that we’re talking about the re-location of a living, rights-bearing human being.
7. We can never be sure it works
Which leads to the next point, that of post-upload skepticism. A person can never really be sure they created a sentient copy of themselves. This is the continuity of consciousness problem — the uncertainty we’ll have that, rather than moving our minds, we simply copied ourselves.
Because we can’t measure consciousness — either qualitatively or quantitatively — uploading will require a tremendous leap of faith — a leap that could lead to complete oblivion (e.g. a philosophical zombie), or something completely unexpected. And relying on the advice of uploaded beings won’t help either (“Come on in, the water’s fine…”).
In other words, the quality of conscious experience in digital substrate could be far removed from that experienced by an analog consciousness.
8. Uploaded minds would be vulnerable to hacking and abuse
Once our minds are uploaded, they’ll be physically and inextricably connected to the larger computational superstructure. By consequence, uploaded brains will be perpetually vulnerable to malicious attacks and other unwanted intrusions.
To avoid this, each uploaded person will have to set up a personal firewall to prevent themselves from being re-programmed, spied upon, damaged, exploited, deleted, or copied against their will. These threats could come from other uploads, rogue AI, malicious scripts, or even the authorities in power (e.g. as a means to instill order and control).
Indeed, as we know all too well today, even the tightest security measures can’t prevent the most sophisticated attacks; an uploaded mind can never be sure it’s safe.
Papua New Guinea’s Manam Volcano released a thin, faint plume on June 16, 2010, as clouds clustered at the volcano’s summit. The Advanced Land Imager (ALI) on NASA’s Earth Observing-1 (EO-1) satellite took this picture the same day. Rivulets of brown rock interrupt the carpet of green vegetation on the volcano’s slopes. Opaque white clouds partially obscure the satellite’s view of Manam. The clouds may result from water vapor from the volcano, but may also have formed independent of volcanic activity. The volcanic plume appears as a thin, blue-gray veil extending toward the northwest over the Bismarck Sea.
Located 13 kilometers (8 miles) off the coast of mainland Papua New Guinea, Manam forms an island 10 kilometers (6 miles) wide. It is a stratovolcano. The volcano has two summit craters, and although both are active, most historical eruptions have arisen from the southern crater.
NASA Earth Observatory image created by Jesse Allen, using EO-1 ALI data provided courtesy of the NASA EO-1 team. Caption by Michon Scott. Instrument: EO-1 – ALI
Photograph by NASA / Jesse Allen