Commenting on the results of the poll, chief executive of the Children's Society Bob Reitemeier said: "As adults we have to take responsibility for the current level of marketing to children. To accuse children of being materialistic in such a culture is a cop-out. Unless we question our own behaviour as a society we risk creating a generation who are left unfulfilled through chasing unattainable lifestyles."
I don't mean computer memory. That stuff's half-price at Costco these days. No, I'm talking about human memory, stored by the gray matter inside our heads. According to recent research, we're remembering fewer and fewer basic facts these days.
This summer, neuroscientist Ian Robertson polled 3,000 people and found that the younger ones were less able than their elders to recall standard personal info. When Robertson asked his subjects to tell him a relative's birth date, 87 percent of respondents over age 50 could recite it, while fewer than 40 percent of those under 30 could do so. And when he asked them their own phone number, fully one-third of the youngsters drew a blank. They had to whip out their handsets to look it up.
That reflexive gesture — reaching into your pocket for the answer — tells the story in a nutshell. Mobile phones can store 500 numbers in their memory, so why would you bother trying to cram the same info into your own memory? Younger Americans today are the first generation to grow up with go-everywhere gadgets and services that exist specifically to remember things so that we don't have to: BlackBerrys, phones, thumb drives, Gmail.
I've long noticed this phenomenon in my own life. I can't remember a single friend's email address. Hell, sometimes I have to search my inbox to remember an associate's last name. Friends of mine space out on lunch dates unless Outlook pings them. And when it comes to cultural trivia — celebrity names, song lyrics — I've almost given up making an effort to remember anything, because I can instantly retrieve the information online.
In what sense might something as intrinsically human as the imagination be biological? How could the products of the imagination – a novel, a painting, a sonata, a theory – be thought of as the result of biological matter? After all, such artefacts are what culture is made of. So why invoke biology? In this essay, I will argue that the content of the imagination is of course determined more by culture than biology. But the capacity to imagine owes more to biology than culture.
For more than a century, researchers have been trying to work out the raw ingredients that account for personality, the sweetness and neuroses that make Anna Anna, the sluggishness and sensitivity that make Andrew Andrew. They have largely ignored the first-person explanation — the life story that people themselves tell about who they are, and why.
Stories are stories, after all. The attractive stranger at the airport bar hears one version, the parole officer another, and the P.T.A. board gets something entirely different. Moreover, the tone, the lessons, even the facts in a life story can all shift in the changing light of a person’s mood, its major notes turning minor, its depths appearing shallow.
Yet in the past decade or so a handful of psychologists have argued that the quicksilver elements of personal narrative belong in any three-dimensional picture of personality. And a burst of new findings is now helping them make the case. Generous, civic-minded adults from diverse backgrounds tell life stories with very similar and telling features, studies find; so, likewise, do people who have overcome mental distress through psychotherapy.
The ability to behave differently in different social settings and with different social partners is a built-in survival mechanism. What works in one setting won't necessarily work in another. There is no better example than the child of immigrants who learns to speak one language at home and a different language outside the home. Her parents will always speak English with a foreign accent, but she will drop the accent when she is away from them. Her accent outside the home will be the same as that of the other children in her neighbourhood. Children seem to know instinctively that patterns of behaviour acquired at home must be cautiously tested, and perhaps modified or abandoned, when they start to have a life outside the home. The child quickly learns that crying brings one response from Mummy, but quite a different one from the other children at the daycare centre. The influence of peers doesn't begin in the teenage years: it can be seen as early as age three.
After all, we do not exercise focused attention all the time. We are aware in fits and starts. We focus on something ... then there is a gap of no particular focus ... then we focus on something else. This gap may be very brief, but it is there and can be detected by meditation. In fact, the process of meditation consists of homing in on this gap and extending it, until awareness, in the sense of focused attention on particular thoughts, drops away, and what is left is only a generalized being. (One might say that meditation involves losing awareness or Mind in order to make room for consciousness or pure being.)
Let us say, then, that this generalized being is consciousness, as distinct from the focused attention that is awareness or Mind.
Okay. Then we might say that "the more fundamental principle that is neutral between" Mind (awareness) and matter is consciousness.*
Now, if the ground of being is consciousness in the above sense, then everything that exists is a part of consciousness, an aspect of consciousness, a manifestation of consciousness, or however you want to say it.
I believe that Dartmouth College philosopher Adina Roskies is right when she suggests "knowing that one part of our biological system for identifying persons is automatically entrained and subject to error should make us more cognizant of its operation and more skeptical of its output as we engage in the countless moral decisions we make each day." If Farah and Heberlein have correctly identified an innate personhood network in our brains, they will have helped free us from its mandates, just as other natural scientists freed us from our misconceptions about the sources of disease and rain. We are not just slaves to our brains' personhood networks -- we can use our rationality to figure out which entities count as persons and which do not. We will most likely conclude that personhood is a continuum, not an all-or-nothing property. Just where to draw moral lines along that continuum will be a long, hard-fought debate...
In cats, the protozoan reproduces sexually, while it reproduces asexually in other animals. The germ seems to especially like infesting the brain—"parasites hijacking the mind," Vyas said. Although the disease it causes in humans is rarely dangerous, it is the reason that pregnant women are sometimes told to avoid cat litter boxes (toxoplasmosis is risky for infants and others with compromised immune systems). Some scientists have suspected it might be linked to mental disorders such as schizophrenia and even neuroticism.
In 2000, scientists revealed T. gondii could modify the brains of rats to make them attracted to cat urine instead of afraid of it. Researchers suspect the germ does so to make it easier for it to jump into cats to begin the sexual part of its life cycle.
The critical component to being assured that any transfer does not end with the death of you is continuity – your consciousness has to continually exist as it is moved (not just copied) into the AI computer. So here is my proposal for how to do this.
For background it is helpful to understand that our brains are, in a way, actually two brains: the right and left hemispheres. Each hemisphere is capable of being fully awake and self-aware without the other (although each contains only a piece of our neurological function). In fact, you can pharmacologically “turn off” one hemisphere or the other, and the subject will remain awake and aware, merely losing the specific functions that exist only in the silenced hemisphere.
Let others rhapsodize about the elegant design and astounding complexity of the human brain—the most complicated, most sophisticated entity in the known universe, as they say. David Linden, a professor of neuroscience at Johns Hopkins University, doesn't see it that way. To him, the brain is a "cobbled-together mess." Impressive in function, sure. But in its design the brain is "quirky, inefficient and bizarre ... a weird agglomeration of ad hoc solutions that have accumulated throughout millions of years of evolutionary history," he argues in his new book, "The Accidental Mind," from Harvard University Press. More than another salvo in the battle over whether biological structures are the products of supernatural design or biological evolution (though Linden has no doubt it's the latter), research on our brain's primitive foundation is cracking such puzzles as why we cannot tickle ourselves, why we are driven to spin narratives even in our dreams and why reptilian traits persist in our gray matter.
Just as the mouse brain is a lizard brain "with some extra stuff thrown on top," Linden writes, the human brain is essentially a mouse brain with extra toppings. That's how we wound up with two vision systems. In amphibians, signals from the eye are processed in a region called the midbrain, which, for instance, guides a frog's tongue to insects in midair and enables us to duck as an errant fastball bears down on us. Our kludgy brain retains this primitive visual structure even though most signals from the eye are processed in the visual cortex, a newer addition. If the latter is damaged, patients typically say they cannot see a thing. Yet if asked to reach for an object, many of them can grab it on the first try. And if asked to judge the emotional expression on a face, they get it right more often than chance would predict—especially if that expression is anger.
In the decade of Darfur and Iraq, and shortly after the century of Stalin, Hitler, and Mao, the claim that violence has been diminishing may seem somewhere between hallucinatory and obscene. Yet recent studies that seek to quantify the historical ebb and flow of violence point to exactly that conclusion.
Some of the evidence has been under our nose all along. Conventional history has long shown that, in many ways, we have been getting kinder and gentler. Cruelty as entertainment, human sacrifice to indulge superstition, slavery as a labor-saving device, conquest as the mission statement of government, genocide as a means of acquiring real estate, torture and mutilation as routine punishment, the death penalty for misdemeanors and differences of opinion, assassination as the mechanism of political succession, rape as the spoils of war, pogroms as outlets for frustration, homicide as the major form of conflict resolution—all were unexceptionable features of life for most of human history. But, today, they are rare to nonexistent in the West, far less common elsewhere than they used to be, concealed when they do occur, and widely condemned when they are brought to light.
At one time, these facts were widely appreciated. They were the source of notions like progress, civilization, and man's rise from savagery and barbarism. Recently, however, those ideas have come to sound corny, even dangerous. They seem to demonize people in other times and places, license colonial conquest and other foreign adventures, and conceal the crimes of our own societies.
Research in Brazil has produced fresh evidence that primates may have something approaching human "culture". A scientist has observed capuchin monkeys banging stones together, apparently as a signalling device to ward off potential predators. The researcher says the animals appear to be learning this skill from each other - and even teaching incomers to the group how it should be done.
The scientists "bred" the robots by creating 100 pairs and using parts of each one's program to create a new one. Each new program also had a small chance of spontaneously changing in one part (how strongly it reacted to the red light, for example). After several rounds of this mating, the new programs were plugged back into robots, which then groped around again for food. And once again the scientists selected the fastest ones. They repeated this cycle 500 times in 20 different replicate lines. When they were done, they plugged the program into real robots and let them loose in a real arena with real food and poison (well, as real as food and poison get for experimental robots). The real robots behaved just like the simulated ones, demonstrating that the simulation had gotten the physics of the real robots right.
The results were impressive, although perhaps not surprising to people who are familiar with experimental evolution with bacteria. From their randomly wired networks, the robots evolved within a few dozen generations until they were scoring about 160 points a trial. That held in all twenty lines. Each program consists of 240 bits, which means that it could take any of 2 to the 240th power configurations. Out of that unimaginable range of possibilities, the robots in each line found a fast solution.
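The breeding scheme described above is a standard genetic algorithm: score each program, pair up the fitter ones, combine parts of their code, and occasionally flip a bit. A minimal sketch in Python, where the population size, selection rule, and fitness function are invented for illustration (the article does not give the researchers' actual parameters, and the real fitness came from simulated foraging):

```python
import random

GENOME_BITS = 240             # program size mentioned in the article
POP_SIZE = 200                # hypothetical population size
GENERATIONS = 100
MUTATION_RATE = 1.0 / GENOME_BITS

def fitness(genome):
    # Stand-in score: the real robots were scored on food found vs. poison hit.
    return sum(genome)

def crossover(a, b):
    # Combine parts of two parent programs at a random cut point.
    cut = random.randrange(1, GENOME_BITS)
    return a[:cut] + b[cut:]

def mutate(genome):
    # Each bit has a small chance of spontaneously flipping.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

def evolve():
    pop = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Keep the fitter half as parents, then breed a fresh population.
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP_SIZE // 2]
        pop = [mutate(crossover(random.choice(parents), random.choice(parents)))
               for _ in range(POP_SIZE)]
    return max(fitness(g) for g in pop)
```

Even with this toy fitness function, selection reliably climbs out of the random starting point within a few dozen generations, which is the pattern the experiment reports.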
For decades, environmental educators, conservationists, and others have worked, often heroically, to bring more children to nature—usually with inadequate support from policymakers. A number of trends, including the recent unexpected national media attention to Last Child and “nature-deficit disorder,” have now brought the concerns of these veteran advocates before a broader audience. While some may argue that the word “movement” is hyperbole, we do seem to have reached a tipping point. State and regional campaigns, sometimes called Leave No Child Inside, have begun to form in Cincinnati, Cleveland, Chicago, the San Francisco Bay Area, St. Louis, Connecticut, Florida, Colorado, Texas, and elsewhere. A host of related initiatives—among them the simple-living, walkable-cities, nature-education, and land-trust movements—have begun to find common cause, and collective strength, through this issue. The activity has attracted a diverse assortment of people who might otherwise never work together.
But there are disadvantages too. A herd animal who wakes up one morning to find the rest of the mob have folded their tents and vamoosed is a sorry sight. He wanders listlessly, clutches his heart in despair, then runs around in circles looking for any collective whatever he can join. Upon finding one he gratefully embraces everybody, and by nightfall calls them his new best friends.
Fate however made some of us differently, and the difference may be in our genes. Awaking at dawn to find the herd has departed we breathe a sigh of relief. The fact is (speaking personally) I never saw a herd I liked. Individuals yes—lots of them. Herds never. To men of my sort a room filled with a hundred people is a cause for dubiety. A room with a hundred like-minded people is a cause for alarm. A room filled with a hundred people “of one mind” is deeply implausible in itself and almost certainly a sign of intimidation.
About 40 years ago, the late psychologist Stanley Milgram tapped into the commonsense notion that "it's a small world." Milgram asked 60 people to send a folder to a certain individual whom none of them knew. Participants were given a little information about the target person and asked to mail the folder to a friend or acquaintance who, in their view, was more likely to know the stranger than they were. Each recipient of the folder was asked to do the same, until the material reached its destination.
Only one-quarter of the chains were completed. In those cases, though, the folder passed through an average of six intermediaries. Milgram's project inspired the phrase "six degrees of separation" and led to, for example, people calculating movie actors' working relationships to actor Kevin Bacon.
The small-world phenomenon got a big boost in 1998. Steven Strogatz of Cornell University and Duncan Watts of New York University used mathematical simulations to show that all sorts of large networks can be traversed in a small number of steps. Strogatz and Watts demonstrated how this effect applies to the more than 4,300 elements of the electric-power grid in the western United States and to the collaborative relationships of more than 225,000 professional actors... Small-world networks have a distinctive structure: There's a cluster of nodes, each connected to its immediate neighbors, with a few that connect to distant nodes. This structure enhances the power and efficiency of these systems, Strogatz and Watts argued. More and more neuroscientists agree.
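The structure just described, a ring of locally connected nodes plus a handful of random long-range shortcuts, can be sketched in plain Python. The graph size and rewiring probability below are illustrative choices, not figures from Strogatz and Watts's paper; the point is only that a few shortcuts collapse the average path length:

```python
import random
from collections import deque

def ring_lattice(n, k):
    # Each node links to its k nearest neighbours on each side of a ring.
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    return adj

def rewire(adj, p):
    # With probability p, replace a local edge with a random shortcut.
    n = len(adj)
    for i in list(adj):
        for j in list(adj[i]):
            if j > i and random.random() < p:
                new = random.randrange(n)
                if new != i and new not in adj[i]:
                    adj[i].discard(j); adj[j].discard(i)
                    adj[i].add(new); adj[new].add(i)
    return adj

def avg_path_length(adj):
    # Mean shortest-path length over all reachable pairs (BFS per node).
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

random.seed(1)
regular = avg_path_length(ring_lattice(500, 3))          # purely local links
small_world = avg_path_length(rewire(ring_lattice(500, 3), 0.05))
```

Rewiring only about five percent of the edges typically cuts the average number of steps between nodes severalfold, which is the "small number of steps" property the simulations demonstrated.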
This is because Pataphysics has reached such a perfection of the game, and because it accords little importance to everything, that it finally has little of anything. In themselves, all solemn nullity, all figures of nullity, come to fail and petrify before the gorgonal eye of Ubu. In it all things become artificial, venomous, and lead to schizophrenia, by the angels of pink stucco whose limbs rejoin in a curved mirror.
Wikipedia: It is a parody of the theory and methods of modern science and is often expressed in nonsensical language.
A Small Part of the Brain, and Its Profound Effects
The insula itself is a sort of receiving zone that reads the physiological state of the entire body and then generates subjective feelings that can bring about actions, like eating, that keep the body in a state of internal balance. Information from the insula is relayed to other brain structures that appear to be involved in decision making, especially the anterior cingulate and prefrontal cortices.
The insula was long ignored for two reasons, researchers said. First, because it is folded and tucked deep within the brain, scientists could not probe it with shallow electrodes. It took the invention of brain imaging techniques, such as functional magnetic resonance imaging, or fMRI, to watch it in action.
Second, the insula was “assigned to the brain’s netherworld,” said John Allman, a neuroscientist at the California Institute of Technology. It was mistakenly defined as a primitive part of the brain involved only in functions like eating and sex. Ambitious scientists studied higher, more rational parts of the brain, he said.
The insula emerged from darkness a decade ago when Antonio Damasio, a neuroscientist now at the University of Southern California, developed the so-called somatic marker hypothesis, the idea that rational thinking cannot be separated from feelings and emotions. The insula, he said, plays a starring role.
The doctrine of the unchanging human brain has had profound ramifications. For one thing, it lowered expectations about the value of rehabilitation for adults who had suffered brain damage from a stroke or about the possibility of fixing the pathological wiring that underlies psychiatric diseases. And it implied that other brain-based fixities, such as the happiness set point that, according to a growing body of research, a person returns to after the deepest tragedy or the greatest joy, are nearly unalterable.
The first discoveries of neuroplasticity came from studies of how changes in the messages the brain receives through the senses can alter its structure and function. When no transmissions arrive from the eyes in someone who has been blind from a young age, for instance, the visual cortex can learn to hear or feel or even support verbal memory. When signals from the skin or muscles bombard the motor cortex or the somatosensory cortex (which processes touch), the brain expands the area that is wired to move, say, the fingers. In this sense, the very structure of our brain--the relative size of different regions, the strength of connections between them, even their functions--reflects the lives we have led. Like sand on a beach, the brain bears the footprints of the decisions we have made, the skills we have learned, the actions we have taken.
So the research has implications for a variety of illnesses, from Alzheimer's disease to anxiety disorders.
Unraveling the differences between kinds of memories, Helmstetter believes, depends on understanding the chemical changes that happen in the brain at the molecular level.
Helmstetter's work has already shown how memories are stored in certain neurons. Now he wants to know more about the molecular players that make the brain's whole network of constantly changing memory connections possible. His extramural funding has come from sources such as the National Science Foundation and the National Institute of Mental Health.
The team used high-resolution brain scans to identify patterns of activity before translating them into meaningful thoughts, revealing what a person planned to do in the near future. It is the first time scientists have succeeded in reading intentions in this way.
Take the famous cognitive-dissonance experiments. When an experimenter got people to endure electric shocks in a sham experiment on learning, those who were given a good rationale ("It will help scientists understand learning") rated the shocks as more painful than the ones given a feeble rationale ("We're curious.") Presumably, it's because the second group would have felt foolish to have suffered for no good reason. Yet when these people were asked why they agreed to be shocked, they offered bogus reasons of their own in all sincerity, like "I used to mess around with radios and got used to electric shocks."
It's not only decisions in sketchy circumstances that get rationalized but also the texture of our immediate experience. We all feel we are conscious of a rich and detailed world in front of our eyes. Yet outside the dead center of our gaze, vision is amazingly coarse. Just try holding your hand a few inches from your line of sight and counting your fingers. And if someone removed and reinserted an object every time you blinked (which experimenters can simulate by flashing two pictures in rapid sequence), you would be hard pressed to notice the change. Ordinarily, our eyes flit from place to place, alighting on whichever object needs our attention on a need-to-know basis. This fools us into thinking that wall-to-wall detail was there all along--an example of how we overestimate the scope and power of our own consciousness.
I said "cautiously" optimistic because, so far, for every person who shows a substantial lasting increase in happiness, 2 people show a decrease. Discarding the set point idea for a more malleable happiness baseline means that we will uncover vulnerability as well as hope.
I am also optimistic that we will uncover diverse ways that people can find sustainable happiness. But we'll need to dig beneath the surface and resist "one size fits all" formulas.
Psychology explores humans at their best: Though not denying humanity’s flaws, the new tack of positive psychologists recommends focusing on people’s strengths and virtues as a point of departure. Rather than analyze the psychopathology underlying alcoholism, for example, positive psychologists might study the resilience of those who have managed a successful recovery—for example, through Alcoholics Anonymous. Instead of viewing religion as a delusion and a crutch, as did Freud, they might identify the mechanisms through which a spiritual practice like meditation enhances mental and physical health. Their lab experiments might seek to define not the conditions that induce depraved behavior, but those that foster generosity, courage, creativity, and laughter.
It's all relative: How does the mind tell the time when it is too brief for us to register? Researchers think they have discovered the brain’s stopwatch and, along with it, a clue to conditions like dyslexia.
The brain has to constantly gauge time intervals so tiny that it must do it subconsciously, but how the brain measures such very short time spans has long eluded scientists. For example, when listening to someone speak, the mind continuously recalculates when one word ends and another starts. And when we walk, our coordination relies on the brain’s ability to precisely time the movement of our feet.
Some researchers have argued that the brain has an “internal clock” of regularly pulsing cells to measure split-second intervals. Dean Buonomano at the University of California Los Angeles, US, explains that if this were the case we would have a highly objective view of time intervals lasting less than a second.
But he points out that it is easy to get confused when comparing events that last only a few milliseconds: “You have no idea how long milliseconds are.”
This video and the writings of the author were my fascination and inspiration last week. It's not really about being autistic; it's about how we interact with our environment, and the conscious dialogue each of us carries out with the world in our own language...