Thursday, August 28, 2008
The study examined brain scans of babies as they listened to various words: "Brain activity increased in the babies' temporal and left frontal areas whenever the repetitious words [baba or nana] were played. Words with non-adjacent repetitions ('bamuba' or 'napena') elicited no distinctive responses from the brain."
The experiments showed that "The brain areas that are responsible for language in an adult do not 'learn' how to process language during development, but rather, they are specialized — at least in part — to process language from the start."
My own $0.02 here is that this is just one aspect of the mind's penchant for pattern recognition. As we evolved, our minds were selected for their ability to recognize patterns in whatever we perceive. This study is a wonderful example of why - the newborn brain can recognize a pattern of repeated syllables (baba) but isn't yet sophisticated enough to recognize the repetition when there is noise in the signal (bamuba).
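In code terms, the distinction the newborns picked up on is dead simple - a check for adjacent repetition. Here's a toy sketch (the syllable splits and the function are mine, purely to illustrate the pattern, not anything from the study):

```python
# Toy version of the contrast in the study: "baba" repeats a syllable
# in adjacent positions, while "bamuba" buries the repeat in noise.

def has_adjacent_repeat(syllables):
    """Return True if any syllable is immediately followed by itself."""
    return any(a == b for a, b in zip(syllables, syllables[1:]))

print(has_adjacent_repeat(["ba", "ba"]))        # True  ("baba")
print(has_adjacent_repeat(["na", "na"]))        # True  ("nana")
print(has_adjacent_repeat(["ba", "mu", "ba"]))  # False ("bamuba")
print(has_adjacent_repeat(["na", "pe", "na"]))  # False ("napena")
```

The point of the toy is how little machinery the adjacent case needs compared to tracking a repeat across intervening noise - which fits the finding that newborns catch the first but not the second.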
The infant brain is flooded with new inputs as it adapts to life outside the womb. With no context for any of it - with no existing mental model of the world - all of the sights, sounds, smells, etc. are essentially random noise. The baby has no frame of reference to distinguish one perception from another: at first, the mother and the father aren't any more noteworthy to a newborn than the janitor or photographer. The baby's brain takes its best shot at building symbols for all the things it sees, but as various symbols fail to recur, they wither and fade out. It is only the repetition of seeing the mother over and over again that reinforces that symbol in the brain and allows the baby's mind to say "this is not a random image I keep seeing".
Similarly, the baby has no instinctive way to know that a conversation between its parents is any more relevant a sound than a door closing or a window opening. But the repetition of "parents talking", especially in concert with the repetition of seeing the parents over and over again, lets the baby's brain start to recognize that the sounds are not random. From there, it's a short step to assigning meaning to some of those sounds - how natural that the first words a baby learns feature both the repetition of syllables ('mama') and an association with a highly non-random image (the mother's face).
I wonder whether, if you repeated the study but used repeated sound effects (a drum roll, maybe?) instead of repeated syllables (nursery words), the same areas in the brain would fire. I don't know that there's anything magical about the stimuli being words - after all, baby animals learn to assign meaning to the sounds of their parents (screeches, roars, hisses, etc.) without speaking Italian.
Monday, August 4, 2008
I'm sure academics would tear into this with gusto - in which case, please use the comments section of this blog :) - but the idea reminds me (as many things do lately) of Clay Shirky's talk on TED about distributed organization versus hierarchy.
Shirky made the point that when communication costs were prohibitive, you'd tackle a problem by founding an organization, raising resources, incurring overhead, and directing the people involved. He also suggests that as soon as you create an organization, its primary goal becomes self-preservation, and whatever goal you were trying to meet becomes its secondary (or worse) objective.
In one of his articles, Shirky talks about how blogging has enabled the mass-amateurization of writing. In other words, if you wanted to be a writer, you used to have to get a job at a newspaper/magazine or convince a publisher to publish your novel. Nowadays, any hack with a keyboard and an internet connection can put something out there for the whole world to see (cf. this page...)
When I ponder Shirky's two points in relation to Easy's comment, I have to wonder: what is it about a university campus that makes it so special? There's no denying that it's nice to be in the same general area as a bunch of other people who share your interests. But there are a whole lot more people out there in the world who share your interests than can possibly fit onto a single campus - why exclude them from the group of people with whom you study subject X?
Basically, a university is one of those old-world institutions that was created to solve a problem of communication and coordination: how do we preserve, pass on, and add to the body of knowledge in subject X (economics, biology, politics, etc.)? The answer in the past was necessarily: rent a building, pay some experts to hang around in it, and charge admission to the rest of the world...
Now, though, you might wonder: do we still need universities? Certainly the many courses that are already offered in a "Distance Education" format suggest that we don't. But just as certainly, there are many courses with experimental components that can't be replaced by reading a page on your laptop (...not looking forward to having surgery done by the guy who got his certification online...) And at a bare minimum, a university is a probably-trustworthy source for identifying experts; on the internet, it can be hard to verify someone's credentials (Wikipedia, for example, has suffered from this...)
So I think the answer is: we will always need experts, and we still need physical resources for certain types of learning (physical sciences, medicine, etc.) But what we maybe don't need is (to paraphrase the Shirky quote in my last article) to be "genuflecting to the idea of a university degree."
How can we enable the 90% of the world who may be interested in subject X (but don't attend a university for whatever reason) to contribute to X? Rather than settle for a 1-in-a-billion Einstein to break through the walls of academia and contribute -- how can we make it easier for the other 999,999,999 amateurs to participate?
Here's one way to start: every professor in the world could publish to the web a set of "open questions", with forum responses enabled. To paraphrase yet another quote: "with enough eyes, every problem is shallow." How long would it be before people start chipping in answers to the open questions of subject X?
PS: this can only work if the system guarantees that people who participate in answering a question are forever associated with the answer. The thing would fail instantly if a prof could delete any post responding to his questions, and so delete the winning answer in order to publish it as his own...
Wednesday, July 30, 2008
The gist of it is that we're transitioning from a thoughtful, deep-thinking society (i.e. one that reads books) to one which demands instant gratification and mere soundbites of surface-level knowledge gleaned from skimming our favorite websites as fast as possible. We're somehow losing an important ability to concentrate on and/or appreciate good writing.
My own view is that the internet lets you learn as quickly as you're able; to focus on the things that matter to you and skip the noise. But I'll come back to my own thoughts in a minute.
The article generated some activity on the website for Edge magazine, where extremely smart people discuss thorny issues. I was very happy to see my new hero Clay Shirky there, tearing into Carr's article.
Shirky makes a wonderful point: "The threat isn’t that people will stop reading War and Peace. That day is long since past. The threat is that people will stop genuflecting to *the idea* of reading War and Peace."
George Dyson adds, "We will certainly lose some treasured ways of thinking but the next generation will replace them with something new...Perhaps books will end up back where they started, locked away in monasteries (or the depths of Google) and read by a select few."
My own take is this: I have learned an awful lot, very quickly, from the internet. In the last year alone, I stumbled onto and became very interested in (and enlightened by) the world of Complex Adaptive Systems and some of the thinking adjacent to it (such as Edge.com and TED.com). Without the internet, I'd never have heard about any of this because I wasn't lucky enough to get into it at University. Without the internet, even if I *were* aware of the field, I wouldn't have any way to dig into it because I have a full-time job and a family, which together don't leave me with endless days to spend in the library, hunting for relevant texts and then reading them in one sitting.
The problem is not that the internet makes information available to us too conveniently - that's a feature, not a bug! The problem is that we're missing good tools to organize, retain, and leverage the flood of useful info. More and better information is available to each of us than ever before. It's no surprise that we each eventually hit a limit of what our brains can handle - the need to enlist tools to ride the storm isn't something to be ashamed of, it's something to welcome.
To come full circle, I suppose my bottom line question is this: If Carr thinks that our use of the internet is making us stupid -- why did he put his article there?
Friday, July 18, 2008
The lecturer, Clay Shirky, discusses distributed organizations and how they are different/better/worse than traditional top-down or hierarchical organizations.
One of the advantages Shirky points out with a distributed organization is that many of the costs of a traditional command & control structure are avoided (such as paying for an office to house your workers and hiring a supervisor to monitor them). Because the cost-per-worker goes way down, the number of workers who can participate goes way up. This means that there's much less risk in letting amateurs & hobbyists participate; if they aren't very productive, well, you haven't lost anything because you didn't pay them in the first place. The distributed organization gains access to a much larger pool of resources: no corporation could afford the cost to hire every amateur out there in a traditional sense, and yet these people have the potential to contribute something of real value to the project.
One of the drawbacks Shirky identifies with a distributed organization is that while you gain access to many more participants, you surrender control over their efforts. If somebody is volunteering their time to work on a project, nobody can really boss them around because they aren't beholden to a boss!
The challenge, I suppose, is figuring out how to take existing problems or efforts such as "let's make a product and sell it for lots of money" or "let's feed all of the starving people in the world" and re-frame them in a distributed manner.
Or put another way: how do you incent masses of people to participate in your project?
The answer (as Shirky points out) is that you can't: it is too costly to provide incentives for all of the individuals who may want to work on your project.
So, for example, Wikipedia and Flickr work because people are self-motivated to publish facts and photos in which they are interested. What has to happen so that large numbers of middle class North Americans become self-motivated to send a meal-a-day to starving people in Third World countries? Problems which are based in the physical realities of manufacturing & logistics seem opposed to distributed solutions.
So it seems to me that a distributed organization of workers will only arise when the individuals have their own incentives to participate. And the unfortunate truth is that most people are *not* self-motivated to work on your project (such as eliminating starvation or developing a product that you can then turn around and sell for profit.)
Do you have any ideas or examples of real-world, distributed problem-solving? Put 'em in the comments section! (Here's one to get you started!)
Tuesday, May 27, 2008
The ant colony optimization algorithm, introduced by Marco Dorigo in his 1992 PhD thesis, is a probabilistic technique for solving computational problems which can be reduced to finding good paths through graphs. It is inspired by the behaviour of ants finding paths from the colony to food.
In the real world, ants (initially) wander randomly, and upon finding food return to their colony while laying down pheromone trails. If other ants find such a path, they are likely not to keep traveling at random, but to instead follow the trail, returning and reinforcing it if they eventually find food.
Over time, however, the pheromone trail starts to evaporate, thus reducing its attractive strength. The more time it takes for an ant to travel down the path and back again, the more time the pheromones have to evaporate. A short path, by comparison, gets marched over faster, and thus the pheromone density remains high, as it is laid on the path as fast as it can evaporate. Pheromone evaporation also has the advantage of avoiding convergence to a locally optimal solution. If there were no evaporation at all, the paths chosen by the first ants would tend to be excessively attractive to the following ones, and the exploration of the solution space would be constrained.
Thus, when one ant finds a good (i.e. short) path from the colony to a food source, other ants are more likely to follow that path, and positive feedback eventually leads to all the ants following a single path. The idea of the ant colony algorithm is to mimic this behavior with "simulated ants" walking around the graph representing the problem to solve.
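To make the mechanism concrete, here's a minimal sketch of the classic two-bridge setup: a colony choosing between a short and a long path to food. The evaporation rate, deposit amounts, and colony size are arbitrary assumptions of mine, not values from the literature:

```python
import random

random.seed(0)  # deterministic demo

# Two candidate paths from nest to food; the lengths are made-up.
PATHS = {"short": 1.0, "long": 2.0}
pheromone = {"short": 1.0, "long": 1.0}  # start with equal trails
EVAPORATION = 0.1   # fraction of pheromone lost each round
N_ANTS = 20

def choose_path():
    """Pick a path with probability proportional to its pheromone level."""
    total = sum(pheromone.values())
    return "short" if random.uniform(0, total) < pheromone["short"] else "long"

for _ in range(100):
    choices = [choose_path() for _ in range(N_ANTS)]
    # evaporation weakens every trail...
    for path in pheromone:
        pheromone[path] *= (1 - EVAPORATION)
    # ...while returning ants deposit pheromone; a shorter round trip
    # means a higher deposit rate, so the short path out-accumulates.
    for choice in choices:
        pheromone[choice] += 1.0 / PATHS[choice]

print(pheromone)  # the short path's trail ends up far stronger
```

The two opposing forces described above are both here: evaporation shrinks every trail each round, while deposits (stronger on the short path) replenish them - and the positive-feedback loop does the rest.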
Ant colony optimization algorithms have been used to produce near-optimal solutions to the traveling salesman problem. They have an advantage over simulated annealing and genetic algorithm approaches when the graph may change dynamically; the ant colony algorithm can be run continuously and adapt to changes in real time. This is of interest in network routing and urban transportation systems.
Monday, May 26, 2008
Quotes from this video which are most pertinent to stuff I've talked about before:
"You'll have little cells competing, then at some point, some of them decide to group together and cooperate & become specialized - and you get multicellular creatures. You see a similar pattern when individuals group together, forming very simple societies and out-competing the individuals."
"I see evolution as...this interplay of competition driving cooperation, driving specialization which then brings the competition to the next level."
In other words, competitive pressure drives organizational structures towards ever-greater complexity as previously unrelated entities are forced to either stand together or fall apart.
This is very similar to what Nonzero calls the "logic of human destiny": the reason why sentient life, societies, and world wars were inevitable from the time the first single-celled organisms appeared. I think Robert Wright (author of Nonzero, no relation to Will Wright that I know of) also refers to this as the "arms race" of cooperation: stubborn individualists are eventually dominated by coalitions of their enemies, whether we're talking about cells, species, or societies.
Monday, May 12, 2008
The scenario is that players are grouped into 2 teams, and then each player has 2 options:
- Produce 1 point for each ally and remove a point from the other team.
- Produce 1 point for each ally without affecting the other team.
The study concludes that people seem to prefer intra-group cooperation over inter-group competition. (Players tended to use option #2 instead of hurting the other team.)
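A minimal payoff sketch of that two-team scenario makes the comparison concrete. The team size and scoring details below are my own assumptions for illustration, not the study's actual parameters:

```python
# Option 1 = help allies AND remove a point from the other team.
# Option 2 = help allies only.

def play_round(team_a_choices, team_b_choices, team_size=4):
    """Return (team A score, team B score) for one round.
    Every player produces 1 point per ally; each option-1 player
    also removes one point from the other team's total."""
    a_score = sum(team_size - 1 for _ in team_a_choices)  # allies helped
    b_score = sum(team_size - 1 for _ in team_b_choices)
    a_score -= sum(1 for c in team_b_choices if c == 1)   # hits from B
    b_score -= sum(1 for c in team_a_choices if c == 1)   # hits from A
    return a_score, b_score

# Everyone choosing intra-group cooperation (option 2)...
print(play_round([2] * 4, [2] * 4))  # (12, 12)
# ...leaves both teams better off than mutual aggression (option 1).
print(play_round([1] * 4, [1] * 4))  # (8, 8)
```

Under these (made-up) numbers, the aggression option is pure waste: it produces no extra points for your side, it only destroys the other team's - which fits the players' observed preference for option #2.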
I recently updated my program "The Rise of Cooperation", which you can download here (or from the bar on the right side of your screen). It tries to look at a similar theme - whether we can say which of cooperation or competition is the dominant strategy.
If cooperation tends to be a better strategy than competition, then it makes sense that natural selection would have shaped us into beings that prefer cooperation as the study above suggests. If we had an innate preference for an inferior strategy, then natural selection would likely have phased our species out long before certain unnamed hacks could blog about it on the web...
Thursday, March 27, 2008
Differentiation is the number of distinctions or separate elements (i.e., factors, variables) into which an event is analyzed. Integration refers to the connections or relationships among these elements.
Persons who are high in cognitive complexity are able to analyze (i.e., differentiate) a situation into many constituent elements, and then explore connections and potential relationships among the elements; they are multidimensional in their thinking. Complexity theory assumes that the more an event can be differentiated and the parts considered in novel relationships, the more refined the response and successful the solution. While less complex people can be taught a complex set of detailed distinctions for a specific context, high complexity people are very flexible in creating new distinctions in new situations.
Monday, March 24, 2008
For the sake of readability (and controversy), I'm going to label a society that is democratic with a strong rule of law "advanced", while a less-democratic society with a perceived weak government is "primitive".
In the advanced societies, the average person cooperated with strangers for the greater good. When a freeloader was revealed, the average person was willing to give up a small amount of their own resources to ensure that the freeloader was punished.
In the primitive societies, the average person was willing to freeload; if punished, the freeloader would "revenge punish" whoever had chosen to punish them.
After repeated plays, the net accumulation of resources was much higher in the advanced societies, as freeloading was reined in.
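The dynamic is easy to sketch as a repeated public-goods game with one freeloader. All the numbers below (group size, pot multiplier, cost of punishing, size of the fine) are assumptions of mine, just to show why the two societies diverge:

```python
def society_payoff(freeloader_reforms, revenge_punishes, rounds=20, n=4,
                   multiplier=1.6, punish_cost=1.0, fine=3.0):
    """Total group wealth over repeated public-goods rounds with one
    freeloader in a group of n players."""
    wealth = 0.0
    freeloading = True
    for _ in range(rounds):
        contributors = n if not freeloading else n - 1
        # each contributed unit grows by the multiplier; count the surplus
        wealth += contributors * (multiplier - 1)
        if freeloading:
            # a punisher pays punish_cost to levy a fine on the freeloader;
            # from the group's point of view both amounts are destroyed
            wealth -= punish_cost + fine
            if freeloader_reforms:
                freeloading = False   # punishment deters; cooperation sets in
            if revenge_punishes:
                wealth -= punish_cost + fine  # the freeloader strikes back

    return wealth

advanced = society_payoff(freeloader_reforms=True, revenge_punishes=False)
primitive = society_payoff(freeloader_reforms=False, revenge_punishes=True)
print(advanced > primitive)  # prints True
```

In the "advanced" run, punishment is a one-time cost that buys rounds of full cooperation; in the "primitive" run, punish-and-revenge burns resources every round and the group ends up poorer - the same divergence the study reports.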