How can they both be right? I think they are operating at different levels. Yes, individual agents make their particular planning decisions. In aggregate, these decisions drive monetary variables like interest rates, exchange rates, liquidity demand, etc. However, these variables then feed back into the next round of planning decisions. Moreover, at least some of these plans take into account the effect of the agent’s actions on the monetary variables. So you get classic chaotic/complex behavior with temporarily stable attractors, perturbations, and the establishment of new regimes. There may even be aspects of synchronized chaos. I think the monetary variables are the key emergent phenomena here. They are like “meta prices” that provide a shared signal across just about every modern economic endeavor.
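For the systems-minded, here is a minimal toy sketch (Python; every number is made up, and it is not a model of any real economy) of the feedback structure I’m describing: each agent’s plan responds to last period’s aggregate rate, the rate is re-aggregated from the plans, and a slow drift in how sensitive plans are to the rate moves the system from a stable attractor into a new oscillating regime.

```python
# Toy feedback loop: plans respond to the emergent "meta price" (a rate),
# and the rate is rebuilt from the aggregate of plans. Purely illustrative.
import random

def simulate(periods=300, n_agents=100, seed=0):
    rng = random.Random(seed)
    rate = 0.05  # the emergent monetary variable
    history = []
    for t in range(periods):
        sensitivity = 1.5 + 2.0 * t / periods  # slow exogenous drift
        # Each agent's plan is a nonlinear response to last period's rate,
        # plus idiosyncratic noise (private circumstances).
        plans = [sensitivity * rate * (1 - rate) + rng.gauss(0, 0.001)
                 for _ in range(n_agents)]
        rate = max(0.0, min(1.0, sum(plans) / n_agents))  # feedback step
        history.append(rate)
    return history

series = simulate()
print([round(r, 3) for r in series[:4]])   # settles onto a stable attractor
print([round(r, 3) for r in series[-4:]])  # later: oscillation, a new regime
```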
Food for thought. I’m going to keep this in mind when processing future articles on the economy and see if it helps my thinking.
If you’ve listened to this the whole way through (which you should), I’m curious as to how it will affect your habits, if at all. And why?
In just the past few months, a growing number of cities have taken to ticketing and sometimes handcuffing teenagers found on the streets during school hours.
In Los Angeles, the fine for truancy is $250; in Dallas, it can be as much as $500 — crushing amounts for people living near the poverty level. According to the Los Angeles Bus Riders Union, an advocacy group, 12,000 students were ticketed for truancy in 2008.
Why does the Bus Riders Union care? Because it estimates that 80 percent of the “truants,” especially those who are black or Latino, are merely late for school, thanks to the way that over-filled buses whiz by them without stopping. I met people in Los Angeles who told me they keep their children home if there’s the slightest chance of their being late. It’s an ingenious anti-truancy policy that discourages parents from sending their youngsters to school.
The column was based on a report by the National Law Center on Homelessness and Poverty, which finds that the number of ordinances passed and tickets issued for crimes related to poverty has grown since 2006.
Hey, no one likes poverty, right? Let’s pass a law!
Thus, I was not surprised to read this article (hat tip to Tyler Cowen at Marginal Revolution) on modern farming by an honest-to-goodness family farmer. It is full of good examples of the tradeoffs I suspected were lurking. For instance, by using herbicides, farmers reduce the need to till, which is a major source of soil erosion. Hog crates and turkey cages may seem inhumane, but they prevent sows from killing piglets and turkeys from drowning. Crop rotations that decrease the need for synthetic fertilizer increase the amount of water needed to produce the desired crop.
Read the whole thing. It reinforced my confidence in the general rule of trying to avoid legislating solutions. Send pricing signals by allocating resource rights and taxing negative externalities. Then let the market do its optimization.
[The incident] is an extraordinary example of what happens when you get… a dozen people with an average IQ of 160… working in a field in which they collectively have 250 years of experience… employing a ton of leverage.
It’s hard to overstate the significance of a [government-led] rescue of a private [corporation]. If a [company], however large, was too big to fail, then what large [company] would ever be allowed to collapse? The government risked becoming the margin of safety. No serious consequences had come about in the end from the… near-meltdown.
Was the incident:
a) The savings and loan scandal
b) The collapse of Enron
c) The sub-prime mortgage meltdown
d) none of the above
First correct answer gets to invest in an exciting new bridge project I’m involved with in New York!
While I agree that misplaced incentives were a fundamental problem, the question of how to change this is rather deeper and more complex than I think many people realize.
Our economy is, of course, an evolutionary system. Successful businesses grow in size and their practices are imitated by others; unsuccessful businesses vanish. This process has led to many good business practices, even in the financial sector.
However, evolution does not always yield the best outcomes, in biology or in economics. Our recent crisis illustrates two key limitations of evolutionary systems, limitations which allow bad ideas to evolve over good ones.
The first problem has to do with time lags. Suppose Financial Company A comes up with an idea that will yield huge sums of money for five years and then drive the company to bankruptcy. They implement the idea, obfuscating the downside, and soon the company is rolling in cash. Investors line up to give them money, magazines laud them, and other companies begin imitating them.
Not so Company B. Company B believes in long-term thinking, and can see this idea for the sham it is. They pursue a quiet, sound strategy, even when their investors begin pulling money out to invest in A.
We would like to think that in the end, Company B will be left standing and reap the benefits of their foresight. But there is a fundamental problem of time-scales here: by the time A folds, B may already be out of business, due to lack of interest from investors. In theoretical terms, there is a fundamental problem when the evolutionary process proceeds faster than the unfolding of negative consequences. In these situations, good ideas never have a chance to be rewarded, evolutionarily speaking.
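To make the time-scale problem concrete, here is a toy simulation (all numbers invented for illustration): A books outsized returns for five years and then blows up, B compounds steadily, investors chase recent performance, and any firm starved below a minimum viable size folds.

```python
# Illustrative only: the evolutionary clock (capital flows) runs faster
# than the consequences (A's eventual blowup).
def simulate(years=10):
    capital = {"A": 100.0, "B": 100.0}
    returns = {"A": lambda yr: 0.30 if yr < 5 else -1.0,  # sham: gains, then wipeout
               "B": lambda yr: 0.05}                       # sound, steady strategy
    for yr in range(years):
        for firm in capital:
            if capital[firm] > 0:
                capital[firm] *= 1 + returns[firm](yr)
        # Investors chase recent performance: while A looks like a winner,
        # 20% of B's capital defects to A each year.
        if capital["A"] > 0 and yr < 5:
            flow = 0.20 * capital["B"]
            capital["B"] -= flow
            capital["A"] += flow
        # A firm starved below a minimum viable size goes under.
        for firm in capital:
            if 0 < capital[firm] < 50.0:
                capital[firm] = 0.0
        print(yr, {k: round(v, 1) for k, v in capital.items()})

simulate()
```

In this run B is starved out in year 3, two years before A’s flaw ever surfaces, so the sound strategy never survives long enough to be rewarded.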
One might argue that investors, not to mention government regulators and ratings agencies, should have foreseen the flaw in A’s plan. But this highlights a second limitation of the evolutionary process: it favors complexity. Simple bad ideas can be detected by intelligent agents, but complex ones have a chance to really stick. If Company A’s idea was so complicated that no one aside from a few physicists could figure it out, investors and regulators could easily be fooled.
It’s not clear to me how to patch these flaws in the evolutionary system. Increased transparency and oversight will help, but unless we can somehow cap the complexity of financial instruments (difficult) or slow down the evolutionary process (impossible), I’m not sure how we’ll avoid similar crashes in the future.
First up is a provocative post by the ever-interesting Scott Sumner. Rafe in particular should read it because Sumner starts from one of Rafe’s favorite premises: that “laws” of nature are purely cognitive constructs. We should measure them by their usefulness and not ascribe to them any independent existence. So Newton’s laws of motion are useful in certain contexts. Einstein’s are useful in others. But neither is ground truth. Moreover, we will never find ground truth. Just successively more accurate models.
Sumner uses this bit of philosophy to justify abolishing inflation, not, “…the phenomenon of inflation, but rather the concept of inflation.” More specifically, price inflation. He explains why this concept is ill-defined and not only unnecessary, but confusing, for understanding the macroeconomy. He asserts that we should expunge it from our models. It doesn’t really exist anyway, so if models do better without it, we won’t miss it in the least.
RafeFurst: I strongly support a soda tax! RT @mobilediner: check it out: a Soda Tax? https://amplify.com/u/dvl
coelhobruno: @RafeFurst what about diet soda? Would it be exempt?
RafeFurst: @coelhobruno no diet soda would not b exempt from tax. Tax should be inversely proportional to total nutritional content. Spinach = no tax
Lauren Baldwin: I do as well … and while they are at it they should tax fake fruit juice too.
Kevin Dick: I think this would be an interesting experiment. I predict a tax does not cause any measurable decrease in BMI.
Kim Scheinberg: New York has had this under consideration for a year. Perhaps surprisingly, I’m against it. In theory, people will drink less soda. In reality, it will just be another tax on people who can afford it the least.
Leaving aside the “rights” issues and just focusing on effectiveness, I guess we can look towards cigarette taxes and gasoline taxes and see what the lessons are. What do these forebears suggest?
As an FYI, there is supposedly a new total nutritional score (zero to 100) that is to be mandated on all food in the U.S. by the FDA. Can anyone corroborate this and its current status? Presumably this would be the number to base a tax on.
The District, New York and Los Angeles are on track for fewer killings this year than in any other year in at least four decades. Boston, San Francisco, Minneapolis and other cities are also seeing notable reductions in homicides.
Full article is here, in which more sensible police approaches are given credit for the decline.
While it’s probably true that police deserve a lot of credit, it helps to remember that violence is a virus and it spreads from person to person. The more violence people see around them, the more violence that breeds. And the converse is also true. The exogenous factors are hard to suss out, but my suspicion is that the general rise in wealth and well-being in the world is the main factor. This is consistent with the counterintuitive but real fact that violence has been in decline for centuries and we currently live in the most peaceful time ever.
Kevin points out:
I often mention the availability bias when quoting statistics on crime. If you ask people, they’ll say crime has gotten worse–IMO because media has become consistently better at shoving stories of violence into our brains. But the statistics say otherwise.
This is especially poignant when you talk about child abduction by strangers. People think this is a much worse problem (and why you don’t see kids playing in their neighborhoods). But I believe the statistics show the incidence is not any different than it was when we were growing up.
Which brings us to the role of media in propagating myths and creating self-fulfilling prophecies.
I’m curious, what are the statistics on abductions and predatory crime towards children now versus 30 years ago?
Shadows live in a simple world. They glide effortlessly across any sort of surface, oblivious to the higher dimension of space in which 3-D bodies move, collide and sometimes block the paths of rays of light.
Shadows have no idea how important that third dimension is, and how objects in it endow those very shadows with their quasi-physical existence. Indeed, the laws of shadow physics all depend on the third dimension’s presence. And just as the clueless inhabitants of the shadow world require an extra dimension to explain how they exist and interact, reality for humans may also depend on an invisible dimension or dimensions unknown.
This analogy by Tom Siegfried is the single best didactic tool I have encountered to explain the concept that there are (likely) many dimensions to the universe beyond the three (plus time) that we experience as humans.
His article just so happens to be about the unification of the supercold and the superhot (sort of like saying “nothing” and “everything” are really the same thing), and the experiment which may be the first to validate string theory’s real-world predictive power.
What’s really great about it, in addition to the message itself, is that it uses visual language to make its point. I love what Cellucidate is doing (check out the video on their home page). If more cancer biologists used a system like this to validate and communicate ideas, we’d be a lot farther along in understanding and treating cancer effectively (see my previous post).
One preface I think will help is to understand that genome, karyotype and chromosome refer roughly to the same thing. Here are several schematics, presented without explanation, that together illustrate how genes relate to genome/karyotype/chromosome structure, and how that in turn relates to the so-called genetic network (loosely equivalent to the “proteome”). Of course “gene” is an outdated and inaccurate concept, so don’t get too hung up looking for genes here; just understand that they are sub-structural elements of the genome.
From MSU website
From eapbiofield.wikispaces.com

From Heng’s paper titled “The genome-centric concept: resynthesis of evolutionary theory”
Now onto the paper. I’ll point out that I’ve eliminated the scholarly references in the original text simply for clarity, but I don’t want readers to think that the authors have not properly credited the research that goes into the statements/claims made below. If you’d like to read the original paper, email Henry Heng, whose address is on the abstract above. Also note that all emphasis in the quotes below is mine.
Somatic Evolution
…cancer progression is an evolutionary process where genome system replacement (rather than a common pathway) is the driving force.
It has become clear that a correct theoretical framework for cancer research is now urgently needed and the concept of somatic evolution represents just such a framework.
The increased NCCA frequencies reflect increased survival advantage while increased CCAs reflect a growth advantage. [NCCAs and CCAs are chromosomal aberrations, like gene mutations but at the genome level]
This last quote is reminiscent of the RNA autocatalysis experiments reported on earlier this year which showed divergent evolution towards two co-existing phenotypes, one that more quickly gobbled up available resources and another that was more efficient at using resources to reproduce quicker. Perhaps there is a basic principle at work in both systems (autocatalytic RNA populations and somatic cell populations).
Instability / Heterogeneity / Diversity
Clearly, as there is no defined cancer genome (the vast majority of cancer cases display different karyotypes representing different genome systems), there is no defined cancer epigenome either.
…the most common feature in tumors is a high level of genome variation…
Understanding the importance of heterogeneity is the key to understanding the general evolutionary mechanism of cancer.
…the true challenge is to understand the system behavior (stability or instability)…
When closely examining the contribution of various genetic factors, it is clear that many of the genetic loci or events are only significantly linked to tumorigenicity when they contribute to system instability (which is closely linked to genome level heterogeneity).
…it is relatively easy to establish a causative relationship between system heterogeneity and cancer evolution, as heterogeneity is the necessary pre-condition needed for cancer evolution to occur….
…instability imparts heterogeneity, which is acted on by natural selection.
The predictability of cancer can be accomplished by measuring the system heterogeneity that is shared by most patients rather than characterize each of the individual factors that contributes to cancer.
Virtual Stability / Chaotic Synchronization
Heterogeneity provides a greater chance of success that a system can adapt to the environment and survive.
…heterogeneity ‘‘noise’’ represents a key feature of bio-systems providing needed complexity and robustness.
…epigenetic alteration is an initial response when the genome system is under stress.
It turns out, lower levels of ‘‘randomness’’ are essential for higher levels of regulation when facing a drastically changed environment.
In a human-centric version of a perfect world, within the multiple levels of homeostasis, environmental stress should be counteracted by epigenetic regulation; disturbances of metabolic status should be recovered; the errors of DNA replication should be repaired; altered cells should be eliminated by cell death mechanisms; abnormal clones should be constrained by the tissue architecture; and the formed cancer cells should be cleared up by the immune-system. In a cancer defined perfect world, in contrast, the breakdown of homeostasis is the key to success. Unfortunately, continually evolving systems are the way of life and cannot be totally prevented. In a sense, cancer is the price we pay for evolution as an interaction between system heterogeneity and homeostasis….
Facilitated Variation / Baldwin Effect
When changes are selected by the evolutionary process, these changes can be fixed either at a specific gene level or at the genome level (achieving the transition from epigenetic to genetic changes).
This is corroborated by Spencer, et al and Brock, et al, the latter of whom says, “‘pre-selection’ of non-genetic variants would markedly increase the probability of producing a random genetic mutation that may provide the basis for the survival capability of the original non-genetically variant outlier population.”
Path Dependence
…cancer cases are genetic and environmentally contingent. The pattern of specific gene mutations can only be used within a specific population with a similar genome, mutational composition as well as a similar environment.
…the stochastic events referred to here are not completely random but rather are less predictable due to differences in the initial conditions reflected by the multiple levels of genetic and epigenetic alteration.
From a system point of view, significant karyotypic changes represent a ‘‘point of no return’’ in system evolution, even though certain gene mutations and most likely epigenetic changes can influence karyotypic changes.
Upon establishment of a new genome through karyotypic evolution, it is impossible to revert back to a previous state through epigenetic alteration.
As long as the genome does not significantly change, epigenetic reprogramming could work to bring the system to its original status.
Multiple Levels
…the multiple levels of homeostasis are more important than genetic factors in constraining cancer, as alterations of system homeostasis rather than individual genetic alterations are responsible for the majority of cancers. Accordingly, the robustness of a network, the reversible features of epigenetic regulation, tissue architecture, and the immune-system will play a more important role than individual genetic alterations.
…genome level alteration within tumors is a universal feature.


Note that although physically the epigenetic level sits “above” the genome, functionally it’s really below, as indicated in this last figure. Of course, it helps to remind ourselves that “level” is a convenient but not quite accurate concept, and they are not always clearly distinct and non-overlapping, as in this case.
Current Methodological Weaknesses
It should be noted that these weaknesses stem from an inherent paradigmatic conflict that exists in science as it’s practiced today. These weaknesses will not be addressed until complex systems thinking pervades science in general.
…methodologies of DNA/RNA isolation and sequencing from mixed cell populations artificially average the molecular profile.
…current methods used to trace genetic loci heterogeneity are not accurate, as the admixture of DNA from different cells will wash away the true high level of heterogeneity and only display the heterogeneity of dominant clonal populations.
There is a need to change our way of thinking by focusing more on monitoring the level of heterogeneity rather than attempting to identify specific patterns in this highly dynamic process.
…the benefit of cancer intervention depends on the phase (stable or unstable) of evolution the somatic cells are in.
The strategies of attempting to reduce heterogeneity to study the mechanisms of cancer represent a flawed approach. Without heterogeneity, there would be no cancer. That is the reason why many principles discovered using simplified homogenous experimental systems do not apply in the real world of heterogeneity.
…cancer progression is fundamentally different from developmental processes…. The terminology ‘‘cancer development’’ implies an incorrect concept and needs to be changed.
…we recommend focusing on correlation studies rather than search for a specific ‘‘causal relationship’’.
…the understanding of the overall contribution of epigenetic regulation should not focus solely on tumor suppressor genes, but rather focus on system dynamics and evolve-ability.
A true genome project would focus on the way genomic structure and topology form a genetic network and should also include epigenetic features of the genetic network.
- People of my generation (40 and older) who have capital they want to invest in innovation but only know the VC for-profit-only value model and don’t have any true view into or understanding of social entrepreneurship business models;
- People coming out of college today (27 and younger) who are actually creating untold value for the world without taking on investors because they (a) don’t know how to attract them, and (b) have heard too many horror stories
Jay and I fall into category 1 and Michael falls into category 2. All three of us agree that the gap above exists — due in part to rapidly declining startup costs — and represents a very real (and lucrative) investment opportunity if it can be closed properly. This opportunity is partly what the so-called “black swan fund” is tapping into as well, but I’m talking here of a distinct effort, which we want your feedback and participation on.
Creating a Workable Micro-Investment Model
Michael, Jay and I represent the three basic classes of people in this entrepreneurial ecology. Michael is an entrepreneur, I am a micro-investor, and Jay is a person who sees and creates deal flow. We are trying to come up with a model that works, especially for Michael and myself — if it works for the two of us, Jay’s life becomes much easier and he makes more money. To these ends, I’ve outlined here the important elements of a micro-investment from my perspective (that of the investor). Hopefully Michael and others will chime in and say what’s important from the entrepreneur’s perspective (and what they need to motivate people on their team).
- I’d like to invest $1K to $5K in a number of different nascent “projects”.
- Monetary ROI potential is a necessary pre-condition for this activity, but it’s not my main motivation.
- The main “return” on investment for me is a combination of:
- catalyzing social good (40%)
- being “in the mix” on a social/business level (30%)
- feeling personally useful and productive in my life (20%)
- getting external ego validation (10%)
- The thing I want to invest in is a tribe in the Seth Godin sense. Tribes by definition coalesce around a leader (aka, the entrepreneur).
- I specifically don’t care about the legal structure or legal standing of the tribe. It can be a whole company, a project within a company, a de-facto, fluid, amorphous partnership of individuals, or a crowdsource. If successful, it will evolve into the correct formal/legal structure and I will trust the entrepreneur to honor the spirit of my investment in compensating me (see “moral integrity” bit below).
- I have to like and trust the entrepreneur first and foremost (and then I take it on faith that the tribe is reflective of that person’s values and energy).
- I will only invest in projects whose leaders I like personally and who have high moral integrity.
- I know I will be disappointed on the moral integrity front at some point by someone, and that’s okay. There will be zero tolerance for moral lapses: no more investment for such people, and they will be ostracized from my circle of influence.
- I am looking to get about 10% of the equity of the project for my initial investment of $1K to $5K.
- I expect the first look for follow-on funding if the project gets traction, and I understand the price will go up since the risk is lower.
- I expect the tribe to know if the project gets traction within about 3 months; if not, it needs to be ruthlessly abandoned.
- I will not shed any tears and will praise the entrepreneur/tribe for making that tough decision rather than dinging them for “failure”. True failure — tragedy in fact — results from missing the real opportunities due to wasting time and money clinging to a bad or mediocre one.
- Since the cost for me to get in is about a tenth of the typical angel investment, I feel no qualms about doing little to no diligence or business model validation — in fact I feel liberated.
- Some entrepreneurs will ask for my strategic help (connections and advice) more than others. I will (mostly) give it on an as-requested basis and not worry if a leader is not making the most of the relationship.
- Those that do leverage my strategic help are definitely more likely to get funded by me in the future, unless of course they make me a ton of money without it :-)
Three Paradoxes
Here are some paradoxical-seeming truths that I have come to believe through my past experiences, and which the above model of investing relies upon and leverages:
#1 I am happier and more motivated to strategically help projects that I put only a small amount of money into (or no money!) than ones I put a big amount into. With the latter, the leader needed to convince me they have a brilliant, solid business plan and know exactly how they are going to go from concept stage to being wildly successful over the course of 3 to 5 years. This, of course, is utter bullshit: no success story follows the original business plan. But having that CEO come back to me (after convincing me to plunk down $100K) and admit they have no idea how they will eventually be successful, that does not inspire confidence. With a micro-investment, I’m happy to throw it on the wall and see if it sticks. If not, let’s all move on — and let’s keep the kitchen open to make more pasta while we still have some dough, the water is boiling and the staff is happy.
#2 Making money has little to do with why I want to invest like this, but if the project does not have making profits as its #1 goal, I’m not interested. Why? Because I don’t think it will be successful in achieving the non-monetary goals either in that case. When I evaluate a project for potential investment, I will concentrate most of my decision on the money-making potential. But I will need to convince myself that the project is set up so that non-monetary goals are structurally assured if the money flows.
#3 By “overly” and “naively” trusting the entrepreneur with my money, I know that I will be paid back manifold financially in the long run. Because by giving this trust so freely (once I am convinced of moral integrity) I am invoking powerful social influence factors that will make the entrepreneur feel like treating me more than fairly whenever they have any discretion in making decisions that affect me. And by not boxing them in with rigid contractual obligations and manufactured incentive schemes, I am increasing the number of discretionary decision points the entrepreneur has.
If you are interested…
If you have actual experience as an entrepreneur or angel investor, I want to hear from you most of all. Please comment below on what parts of the above resonate with you and what parts do not.
If you have put serious thought into becoming an entrepreneur or making an angel investment but have never done so, I want to hear from you as well, especially the reasons why you haven’t (there are no wrong reasons).
If you don’t fit any of these categories, please don’t respond; your opinion is not relevant. I will update everyone on what ultimately transpires.
I do believe that if we all followed the Golden Rule as the basis for how we treat one another the world would be a better place. But I also think there is a more fundamental rule, call it the Diamond Rule, which is even better:
Treat others as you believe they would want you to treat them, if they knew everything that you did.
The difference is subtle, and may not practically speaking yield different action that often. But when it does, the difference can be significant.
Lest you think the concept of Homo Evolutis — a species that can control its own evolutionary path by radically extending healthy human lifespan and ultimately merging with its technology — is a fringe concept shared by sci-fi dreamers who don’t have a handle on reality, check out the list of people in charge of Singularity University (link above), the Board members of the Lifeboat Foundation, and throw in Stephen Hawking for good measure, who says, “Humans Have Entered a New Phase of Evolution“. These people not only have a handle on reality, they have the combined power, resources and influence to shape reality.
For those who are still skeptical of the premise of Homo Evolutis, I present the strongest piece of evidence yet: it’s been featured on The Oprah Show. QED?
The article, “A New Phylogenetic Diversity Measure Generalizing the Shannon Index and Its Application to Phyllostomid Bats,” by Ben Allen, Mark Kon, and Yaneer Bar-Yam, can be found on the American Naturalist website or, more accessibly, on my professional site.
So what is it about? Glad you asked!
Protecting biodiversity has become a central theme of conservation work over the past few decades. There has been something of a shift in focus from saving particular iconic endangered species, to preserving, as much as possible, the wealth and variety of life on the planet.
However, while biodiversity may seem like an intuitive concept, there is some disagreement about what it means in a formal sense and, in particular, how one might measure it. Given two ecological communities, or the same ecological community at two points in time, is there a way we can say which community is more diverse, or whether diversity has increased or decreased?
Certainly, a good starting point is to focus on species. As the writers of the Biblical flood narrative were in some sense aware, species are the basic unit of ecological reproduction. Thus the number of species (what biologists call the “species richness”) is a good measure of the variety of life in a community.
But aren’t genes the real unit of heredity, and hence diversity? Is the number of species more important than the variety of genes among those species? Should a forest containing many very closely related tree species be deemed more diverse than another whose species, though fewer, have unique genetic characteristics that make them valuable?
And while we’re complicating matters, what about the number of organisms per species? Is a community that is dominated by one species (with numerous others in low proportion) less diverse than one containing an even mixture?
There is no obvious way to combine all this information into a single measure for use in monitoring and comparing ecological communities. Some previously proposed measures have undesirable properties; for example, they may increase, counterintuitively, when a rare species is eliminated.
In this paper we propose a new measure based on one of my favorite ideas in all of science: entropy. You may have heard of entropy from physics, where it measures the “disorderliness” of a physical system. But it is really a far more general concept, used also in mathematics, statistics, and in particular the theory of automated communication (information theory). At heart, entropy is a measure of unpredictability. The more entropy in a system, the less able you will be to accurately predict its future behavior.
The connection to diversity is not so much of a stretch: in a highly diverse community, you will be less able to predict what kinds of life you will come across next. Diversity creates unpredictability.
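For the curious, here is a minimal sketch of the classical starting point, the Shannon index; the measure in our paper generalizes this to fold in phylogenetic (gene-level) structure, which this toy version does not attempt.

```python
# Shannon diversity index: the entropy of the species frequency distribution.
import math

def shannon_index(counts):
    """Shannon diversity: -sum(p_i * ln p_i) over species frequencies."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# An even community is less predictable (more diverse) than one dominated
# by a single species, even though both contain four species.
print(shannon_index([25, 25, 25, 25]))  # ~1.386 (= ln 4)
print(shannon_index([97, 1, 1, 1]))     # ~0.168
```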
To be fair, we weren’t the first to propose a connection between diversity and entropy. This connection is already well-known to conservation biologists. But we showed a new and mathematically elegant way of extending the entropy concept to include both species-level and gene-level diversity. It remains to be seen whether biologists will take up use of our measure, but whatever happens I am happy to have contributed to the conversation.
You can donate here (I gave them $500), but make sure to write “For the Fisher Madagascar Project” in the “Comments” field. Otherwise, you’ll be paying for the building lights. Go ahead and leave the “Allocation” field at the default, “Campaign for a New Academy”. Update: Forgot to mention that if you donate $2,000 they’ll name a new species after you or whomever you designate.
It’s hard to do justice to what I saw last night in a blog post, but here goes…
First, you may be wondering what an ant taxonomist is doing saving the Madagascar rainforest. Well, it turns out that ant species are incredibly specialized to their local environment. (They are the prototypical superorganism after all.) So the density of ant species should be a major component of any good proxy for overall ecological diversity. Thus Brian and his team needed to visit the remaining rainforest to catalog the ants (and other insects). To accomplish this task, he’s become a combination of MacGyver and Steve Wozniak: part super handyman and part super technologist.
The deforestation of Madagascar occurred over thousands of years as colonists from Asia pursued unsustainable rice farming techniques. So the only rainforests left are the ones that are hard for humans to get to and work on. He’s had to figure out everything from how to cross rivers in an SUV (lashing plastic containers to the bottom) to how to collect specimens from forest canopies in mountains (going up in a mini-dirigible).
Then he’s had to figure out how to catalog all the specimens and sort out the thousands of species. He’s helped develop composite imaging techniques that give you a full view of specimens (check out AntWeb for some unbelievable pictures). He’s had to convince Google to change the Google Earth interface so you can see layers of information at the same location (making it possible for the rest of us to see multiple photos taken at the same spot, BTW). He’s had to improve DNA sequencing and comparison techniques. He seems to have adopted the Internet/Open Source model for many of his innovations, so they have a lot of positive knock-on effects.
However, I think the coolest thing about Brian is his commitment to helping Madagascarans help themselves. He gets grants for the science and expeditions behind the species cataloging. But that doesn’t solve the preservation problem. So he’s helping create a local community of preservationists. He’s helped them create their own Madagascar Biodiversity Center. He’s bringing local scientists to the US to train and then return to increase the pace of work. He’s working with the government to finalize their countrywide preservation plan. For that, he needs our help.
(BTW, did you know that ants sleep and queens even dream? And each species of leafcutter ants has a corresponding unique fungus species that they “farm” as a “crop”? Queens carry away a sample of the fungus as well as a dozen or so supporting microbe species in specialized pouches as a “starter kit” for new colonies. Wild and wacky stuff.)
The first point to note is that the potential counterparty and I are good candidates for a bet. We both are acting as reasonable Bayesians (neither of us has extreme, ideologically-driven views). We both appear to have a decent grasp of the domain. We both have publicly stated our beliefs.
I believe that the underlying warming signal is +.05 deg C per decade and he thinks it’s +.15 deg C per decade. We have agreed that an over-under bet on a decadal trend of +.10 deg C is fair and that 2000 to 2020 should be the measurement period. So far so good. But now things get sticky.
We have to specify how to determine who wins the bet. Now, there’s no big red thermometer sticking out of the North Pole that shows the temperature for the entire Earth. So we have to choose a global temperature “product”. There are several alternatives: GISSTEMP, HadCRUT, RSS, and UAH. The first two use land-based thermometers and the second two use satellite-based microwave sensors. Neither approach is perfect.
However, I refuse to use any of the land-based products as a reference for a bet on AGW. They tend to reflect land-use changes as much as climate changes. Two separate papers have concluded that about 40% to 50% of the warming signal from these sources could be attributable to increased economic development around previously rural stations. The actual “boots on the ground” issues with station siting are well documented here. Pave a road or install an air conditioner near a thermometer and voila, instant warming.
Satellite products are no paragon of accuracy either. They actually measure microwave radiation and use a model to infer the temperature. Several refinements over the years have corrected for things like orbital drift, which makes one wonder what other issues are lurking. Moreover, the record contains readings from several different satellites, each with multiple sensors (which degrade over time, BTW). The research groups behind the products use overlapping samples to calibrate new data streams, but that’s not a foolproof process by any means.
In the end, it comes down to coverage. Satellites give us much denser, more homogeneous, and more consistent coverage of temperature readings. I think they are therefore much less likely to be systematically biased in detecting trends.
Unfortunately, our quest doesn’t end there. Satellites actually generate temperature data for several different altitudes and latitudes. Which of these best reflect AGW climatic processes? To find out, I consulted with Ross McKitrick and Roy Spencer, two well-known scientists with relevant expertise. You may remember Ross from this post. He proposes linking AGW-targeted interventions to the tropical troposphere temperature (T3) because all the climate models we currently have produce T3 warming as a unique signature attributable to CO2 increases. Unsurprisingly, Ross asserted that if the counterparty was unwilling to use T3, he didn’t really believe in significant AGW.
I take a slightly different philosophical posture from Ross. The scenarios where T3 is a signature of significant AGW are logically a subset of all scenarios where significant AGW occurs. Sure it could be the same set, but it could be smaller. Therefore, it would not be in the potential counterparty’s strategic interest to limit the scenarios in which he wins. So I suggested we go with Roy’s suggestion of using the global lower troposphere series as most representative of the climate we care about.
There is one more wrinkle. I had originally suggested we use the three year averages around 2000 and 2020 as the basis for the bet. Given my academic background, I am a bit embarrassed by this. Both Ross and Roy pointed out that this bet would be more about big climate events around 2020 than general trends. A big El Nino and I lose. A big volcanic eruption and I win. Not exactly the prediction we intend to measure.
They both suggested we calculate the linear trend from 2000 to 2020 using least squares regression. A decadal trend greater than .10 and I lose. Smaller than .10 and I win. This was the obvious approach in retrospect and I have suggested it to the potential counterparty.
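For concreteness, here is a sketch of how the settlement computation would go, using made-up monthly anomalies (the real bet would of course use the agreed satellite series):

```python
# Least-squares trend over a hypothetical 2000-2020 monthly anomaly series.
import numpy as np

rng = np.random.default_rng(42)
months = np.arange(240)              # 240 monthly readings, 2000 through 2019
years = 2000 + months / 12.0
# Hypothetical anomalies: a built-in 0.10 deg C/decade trend plus weather noise.
anomalies = 0.010 * (years - 2000) + rng.normal(0, 0.15, months.size)

slope_per_year, intercept = np.polyfit(years, anomalies, 1)
print(f"decadal trend: {10 * slope_per_year:+.3f} deg C")
# Settlement: a decadal trend above +0.10 and I lose; below it and I win.
```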
More news as events warrant.
I posit that the hotel I stay in (let’s call it the Imperial Palace), though low end by Vegas standards, is nicer than the normal living quarters of 75% of the world’s population. Given that I am in the room to sleep and shower, do I really need a nicer place? Is it karmically bad to stay in a nicer place that I’m not really using?
We came up with the following idea. On my next trip I will stay at the IP again but donate half the difference between its cost and the cost at Caesar’s Palace to a charity for the homeless. So now my frugality has a point.
The accommodation scenario is easy. It gets more complex when I decide between Subway meatball subs and the meatball appetizer at Rao’s…
Thoughts?
In an effort to make Centmail a reality, a formal protocol and API have already been developed. While I am somewhat worried that a large-scale adoption of the protocol will incentivize significant non-profit and charitable fraud, the economic burden due to spam should be greatly reduced. It’s a cool idea by good people and I urge you to check it out.
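To give a flavor of how a postage-stamp scheme can be made verifiable, here is a purely hypothetical sketch; to be clear, this is my own toy construction, not the actual Centmail protocol or API.

```python
# Hypothetical stamp scheme (NOT the real Centmail spec): a sender attaches
# a token proving a one-cent donation, and the receiver checks it.
import hashlib, hmac

CHARITY_SECRET = b"demo-secret"  # in reality, a key held by the stamp issuer

def issue_stamp(sender, message_id):
    """Issuer signs (sender, message_id) after the 1-cent donation clears."""
    payload = f"{sender}|{message_id}".encode()
    return hmac.new(CHARITY_SECRET, payload, hashlib.sha256).hexdigest()

def verify_stamp(sender, message_id, stamp):
    """Receiver recomputes the signature; a valid stamp lowers spam score."""
    expected = issue_stamp(sender, message_id)
    return hmac.compare_digest(expected, stamp)

stamp = issue_stamp("alice@example.com", "msg-001")
print(verify_stamp("alice@example.com", "msg-001", stamp))   # True
print(verify_stamp("spammer@example.com", "msg-001", stamp)) # False
```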
Thanks to Marissa Chien who found it and pointed me to it. She also suggests that people who are having trouble with their mortgage should seek advice from HUD. Information is power and many people (I’ve learned) are irrationally scared of approaching their lender and negotiating. More and more lenders are willing to cut deals to avoid foreclosures.
I’m usually not skeptical in this way, and I’m loath to focus on the negative when it comes to philanthropy, but I can’t get these thoughts out of my head and I’d like some perspective from those who are better informed about the alleged U.S. hunger crisis. In the meantime, here’s my food for thought:
- Generally speaking when I get a Facebook cause request it’s from a friend (or a Friend), but this one came from Causes itself: “Please join the Kellogg Company and Causes as we take small steps towards creating BIG change.”
- When you go to the Causes page, it features a giant banner ad for Kellogg, with Kellogg as the well-branded sponsor.
- Then when you go to the website, the first thing that catches your eye is an image of a family who is supposedly suffering from hunger.
- On the Feeding America website I tried to educate myself on hunger facts but all I could seem to find was poverty statistics and stats related to food-related programs (like how many people used food stamps).
- I understand poverty is a big problem, but unlike in other parts of the world, starving in America is nearly impossible to do. A friend of mine who works tirelessly to provide meals to the homeless admits that the food is just a hook to get folks into a graduated self-sufficiency program.
- Being malnourished in the U.S., on the other hand, is becoming increasingly easy to do, especially if you eat Kellogg products which have very few nutrients relative to whole foods, particularly veggies and fruit. Malnutrition in the U.S. manifests itself differently than in poor countries though: obesity, diabetes, metabolic syndrome, heart disease, cancer, et al.
- I went to Charity Navigator to look into Feeding America and was surprised to find it gets great marks. I was even more surprised to find that its revenues are $650 million per year(!) And since they have such an incredibly low overhead rate and spend nearly 97% of all money raised directly on programs to feed the hungry, I’m flabbergasted. Hunger must be a huge and totally unappreciated problem in the U.S. if it can’t be solved with the billions spent trying.
- Looking at the breakdown of the $650M in revenue from their annual report, $560M of it is from “Donated goods and services”. Presumably that’s good and efficient. However I can’t help but wonder how much of that is food grown with subsidies from the government, which Kellogg then writes off against its taxes as an in-kind donation. Does anyone know whether this is the case?
I expect to be taken to task on this, but isn’t it really just a PR move by big businesses who’d rather give away product than feed people farther from home at greater expense (or better yet help them become self-sustaining)?
Other ideas include spraying sulfur dioxide 65,000 feet up through a fleet of Zeppelins and firing 840 billion ceramic frisbees into orbit to block the sun’s rays. But this article also suggests that a rich “Greenfinger” could unilaterally do some Geo-Engineering without world consent and with disastrous consequences.
Check out this even more interesting way of generating solar power at Cool Earth Solar. It is massively scalable and can compete on price with traditional sources.
It turns out you can sell as many CDSs on the same mortgages as you want. So Amherst sold more CDSs than the face value of the mortgages. Way more as it turns out. Then they simply bought up the loans and paid them off. No default, so the CDSs don’t pay out. They pocket the difference between the value of the CDSs they sold and the face value of the mortgages they had to buy up. About $70M according to this report.
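The arithmetic, with illustrative numbers chosen only to match the ~$70M scale reported:

```python
# Purely illustrative figures, not the actual deal's terms.
face_value = 30e6      # face value of the underlying mortgages
cds_notional = 125e6   # protection sold: several times the face value
premium_rate = 0.80    # default looks certain, so buyers pay ~80 cents/dollar

premiums_collected = cds_notional * premium_rate   # $100M comes in
cost_to_retire_loans = face_value                  # ~$30M to pay off the loans
profit = premiums_collected - cost_to_retire_loans
print(f"profit: ${profit / 1e6:.0f}M")             # $70M, the scale reported
```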
Of course, the Wall Street wizards are calling foul. This is silly because if each of them had only bought enough CDSs to cover his own exposure, they all would have been fine. But they got greedy and thought they’d try to take advantage of Amherst by buying several times as much coverage as they needed. My wrestling coach had a piece of advice for what to do when your opponent makes a monumental mistake. “NEVER give a sucker an even break.”
Mission critical at Quest is a translation of the underlying form of games into a powerful pedagogical model for its 6-12th graders. Games work as rule-based learning systems, creating worlds in which players actively participate, use strategic thinking to make choices, solve complex problems, seek content knowledge, receive constant feedback, and consider the point of view of others. As is the case with many of the games played by young people today, Quest is designed to enable students to “take on” the identities and behaviors of explorers, mathematicians, historians, writers, and evolutionary biologists as they work through a dynamic, challenge-based curriculum with content-rich questing to learn at its core. It’s important to note that Quest is not a school whose curriculum is made up of the play of commercial videogames, but rather a school that uses the underlying design principles of games to create highly immersive, game-like learning experiences. Games and other forms of digital media serve another useful purpose at Quest: they serve to model the complexity and promise of “systems.” Understanding and accounting for this complexity is a fundamental literacy of the 21st century.
Elsewhere they go into a bit more detail about how games are used to teach different subject areas:
At Quest students learn standards‐based content within classes that we call domains. These domains organize disciplinary knowledge in certain ways—around big ideas that require expertise in two or more traditional subjects, like math and science, or ELA and social studies. One of our domains—The Way Things Work—is an integrated math and science class organized around ideas from design and engineering: taking systems apart and putting them back together again. Another domain—Codeworlds—is an integrated ELA, math, and computer programming class organized around the big idea of symbolic systems, language, syntax, and grammar. A third domain—Being, Space and Place—an integrated ELA and social studies class—is organized around the big idea of the individual and their relationship to community and networks of knowledge, across time and space. Wellness is the last of our integrated domains, a class that combines the study of health, socio‐emotional issues, nutrition, movement, organizational strategies, and communication skills.
OMG!OMG!OMG!OMG!
One of my favorite aspects of this school is that they have a separate staff of game designers working together with their teachers. As a former teacher I can tell you that designing good, creative lessons is a relatively different skill-set from actually implementing these lessons in front of a class and following up with your students, and that doing both well requires more time than is physically possible without traveling at relativistic speeds. So having designers who are there at the school and understand the teachers’ needs, and who have the time to make great lessons, is a really really good idea.
hat tip: Annie Duke’s mom
Alexander (a pseudonym) is an Air Force interrogator with a criminal investigation background. He was brought in as part of a team trained to employ “new school” interrogation techniques in Iraq, post-Abu-Ghraib. These techniques focus on building rapport with prisoners and gradually winning their trust, instead of trying to establish dominance and control over them. The book is about his unit’s successful search for Abu Musab Al Zarqawi, the head of al-Qaeda in Iraq.
There isn’t a lot of door-kicking, badass-combat action. It’s a psychological workplace thriller. But the fact that I lived through the context of how important the mission was makes it rather heart pounding. It’s fascinating how well the new school techniques work on supposedly hardened al-Qaeda operatives and how resistant the old school practitioners are to using them. The story also provides some insight into how primate politics can infect even the most clear and critical missions. In fact, a crucial advance in the search comes from Alexander bucking the political order at great risk to his career.
There’s some nice humor too. “Randy” is the ex-Special-Forces commander of the interrogation unit. He has a reputation as something of a badass. So a “Randy-ism” will occasionally and anonymously appear on the whiteboard. My favorites:
“Jesus can walk on water, but Randy can swim through land.”
“When Randy wants vegetables, he eats a vegetarian.”
“Little boys check under their beds at night for the bogeyman. The bogeyman checks under his bed for Randy.”
Next time I have to interrogate a prisoner, I’ll have some idea of what to do.
I happen to be one of 53 lucky graduate students to be selected for this year’s Young Scholars Summer Program, meaning I get paid to live in Vienna and do research. Can’t really complain about that. Tomorrow I get to hear mini-presentations on everyone’s research proposals, which should be very interesting. My own project will be on the long term, gradual evolution of cooperation in spatially structured populations, using a mathematical framework known as adaptive dynamics.
I’m expecting to learn a lot here, and I’ll share as much as I can with you readers. Looking forward to it!
Prediction markets occasionally exist concerning a single event of an individual company, but this represents only a fraction of the market opportunities that can be made available. I propose creating a large number of prediction markets, perhaps 500-1000, covering all types of companies: those near a liquidity event, like Facebook and Twitter; those that recently received Series A or B venture financing; and even those that are running on seed or angel funding, or with no outside investment at all. Each company could have a series of markets to assess its current value and prospects for success. While verifiable information concerning private companies is often hard to come by, there are a variety of metrics that could be used as the basis for markets:
– If and when a company goes public, as well as its closing IPO valuation
– If and when a company is sold, as well as its sale price (if disclosed).
– Number of registered or active users.
– Website traffic (this is most applicable for certain types of companies, like social networking).
I like to think there are many more applicable measures of success that can be used and would love to hear any suggestions from our readers. Most likely each sector has some specific metrics that are applicable to evaluating success or failure. With enough volume, a market for valuing private companies would emerge. This might not be as far off as you think.
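For readers wondering how such thin markets could be run at all, one standard mechanism is Hanson’s logarithmic market scoring rule (LMSR), where an automated market maker always quotes a price. A minimal sketch (the contract named in it is hypothetical):

```python
# LMSR market maker: prices derived from the cost function double as the
# crowd's probability estimate for each outcome.
import math

class LMSRMarket:
    def __init__(self, outcomes, b=100.0):
        self.b = b  # liquidity parameter: higher b = deeper market
        self.shares = {o: 0.0 for o in outcomes}

    def _cost(self, shares):
        return self.b * math.log(sum(math.exp(q / self.b)
                                     for q in shares.values()))

    def price(self, outcome):
        """Current implied probability of the outcome."""
        denom = sum(math.exp(q / self.b) for q in self.shares.values())
        return math.exp(self.shares[outcome] / self.b) / denom

    def buy(self, outcome, amount):
        """Returns the trader's cost for `amount` shares of `outcome`."""
        before = self._cost(self.shares)
        self.shares[outcome] += amount
        return self._cost(self.shares) - before

market = LMSRMarket(["acquired", "not acquired"])
print(market.price("acquired"))       # 0.5 before any trades
print(market.buy("acquired", 50.0))   # cost of a bullish position (~28.1)
print(market.price("acquired"))       # probability moves up (~0.62)
```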
The Commodity Futures Modernization Act allows for real money prediction markets that operate as an Exempt Board of Trade (EBOT). The American Civics Exchange (ACE) is one of the first companies to take advantage of EBOT status. While today they have only a few markets, which concern tax changes and other political actions, they anticipate expanding to include contracts that allow individuals and businesses to hedge against a legitimate risk. Examples could include FDA drug approval, the outcome of major class action lawsuits, and even health care reform. Unfortunately, trading in these markets is restricted to high net worth individuals, and contracts relating to the success of private companies are likely prohibited. Still, this exchange represents a step in the right direction.
Recently, Google, Yahoo, and Microsoft co-wrote a letter to the Commodity Futures Trading Commission (CFTC) asking for “small stakes” real money prediction markets. They believe that real money prediction markets have the “potential to provide significant public benefit.” My guess is they believe there is significant money to be made as well, but that is ok with me. Momentum is building, and an actual real money stock market for private companies may not be far away.
While I would like to personally bet on the success and failure of many companies, I’m also curious how accurate the markets will be in predicting success. I think that the younger (not necessarily in absolute time) a company is, the less predictable its prospects become. Kevin Dick believes small groups like angel investors and venture capitalists can’t pick winners at the seed stage. But what about large groups and the “wisdom of crowds”? Perhaps aggregating information via prediction markets can yield better signals about a company’s prospects for success. Assuming it can, this data will be of interest to a number of parties including potential acquirers, partners, analysts, and even customers. You don’t really want to use a company’s products if you think they are going bust, do you? Today, if you have that opinion, there isn’t much for you to do other than decline to buy its products. But soon you might be able to bet on it and turn a profit.
Yesterday, he put up a post titled CO2 Warming Looks Real. He’s not an expert. Like me, he has an economics background and did some detailed research. Yet from the title and body of the post, I thought he must have reached a very different conclusion than I did. So I thought I’d try to engage him to find out where we differ. The results were interesting.
Obviously, and as Robin knows, the best way to elicit a person’s true beliefs is to observe where he puts his money. So I offered to negotiate a bet of up to $1,000. As you can see from reading the comments, he eventually offered an even-odds bet that the temperature would rise 0.1 deg C in 20 years. Yes, you read that correctly: 0.1 deg C.
So he’s as much of a skeptic as I am! I think a doubling of CO2 concentration would lead to about 1 deg C of warming. I think it’s going to take about 200 years (from 1900-2100) to do that. So I believe that we should see warming of about .05 deg C per decade. In 20 years, that’s about… 0.1 deg C. (Yes, I know I’m linearly approximating a logarithmic function, but the amount of precision in all the input values is low enough that I don’t think it matters).
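Spelled out (with my assumed sensitivity of 1 deg C per CO2 doubling, which is my estimate and not the IPCC’s):

```latex
\Delta T \approx S \log_2\!\frac{C_{2100}}{C_{1900}}
         \approx 1\,^{\circ}\mathrm{C} \text{ over 200 years}
\;\Rightarrow\; \frac{1\,^{\circ}\mathrm{C}}{20 \text{ decades}}
         = 0.05\,^{\circ}\mathrm{C}\text{/decade}
\;\Rightarrow\; 2 \text{ decades} \times 0.05 = 0.1\,^{\circ}\mathrm{C}.
```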
Now, as I pointed out to Robin, the official IPCC estimates are much higher. If you go to the latest Summary for Policy Makers and look at Table SPM.3, you see the rate of warming in the lowest impact scenario B1 is 2x to 5x higher. So Robin’s price reveals that he is quite skeptical of the IPCC estimates.
Moreover, if you believe that the over-under on temperature in 2100 is .45 deg C higher than today, why would you think that justifies any significant mitigation effort? To the extent that there are any costs, we could certainly pay them at a fraction of the economic cost of a carbon tax or cap-and-trade. Moreover, you’re saying that GHG emissions account for only a small part of the variance in future temperature. So it could get a lot cooler as well as a lot warmer. Seems like that implies we should save our money for adapting to whichever way the thermometer swings.
Personally, I’m pleased to know that Robin Hanson did not reach different conclusions from me and that, by my definition, he is a fellow member of the “well-informed skeptic” club.
As Krisztina Holly discovered on her recent visit, “there are no directors. No CEOs or presidents.” And
Because their community is close-knit and their most valuable currency is reputation, experimental physicists around the world know who contributes. Conversely, the few who have been too proprietary with their ideas have been ostracized. It’s like a crowd-sourced performance review.
Interestingly, because the collaboration is so extensive, Holly points out that it’s unlikely that any one person will win the Nobel Prize. My favorite part of the LHC though is the (clearly) crowdsourced website.
As an aside, I was curious what the prediction markets think the likelihood is of finding the Higgs boson (the last unobserved particle predicted by the Standard Model). Surprisingly, given how important this proposition is to the future of science, there is very little action on it. Intrade.com has two bets: will it be discovered before the end of 2009 and will it be discovered before the end of 2010. Based on the historical price of these — the latter was a 4-1 favorite as recently as last fall — it seems as though the low odds currently (10-1 and 4-1 against, respectively) have mostly to do with the timing of the discovery and not whether the discovery will happen. Personally, I’m willing to bet against the Higgs boson’s existence if given 4-1 odds, so anyone looking for some action on this, let me know.
]]>Still, I can’t help but feel that the focus on Creationism’s pseudoscientific claims has obscured what is really a debate about beliefs and values, not science. Moreover, the discourse on blogs often reflects a view that religion (of all forms) is inherently opposed to evolution, and that no intelligent person could possibly believe in both.
My partner addressed some of these issues in a final paper for her recently completed Master of Theological Studies degree from Harvard Divinity School. This paper begins with an anecdote describing the surprising and complex position toward evolution taken by middle school students she taught, and goes on to detail the history of Catholic responses to Darwin’s theory. Some of the ideas from her paper are articulated below in the hopes of adding nuance to the evolution blogversation. Neither she nor I intend to present either her students’ view or the Catholic view as models for how to reconcile (or not) evolution and faith, but only as voices that complexify the picture of this conversation as a fight between Bible-thumping evangelists on the one hand and atheist scientists on the other.
….
In 2004 I was a new, white teacher at a middle school in Dorchester, Massachusetts. The students in my seventh and eighth grade comparative religions classes were young women of color, and of primarily African and Caribbean descent. During the kind of cheerfully chaotic class that takes place the day before a lengthy school holiday, one of my most inquisitive students surprised me with a question about evolution. She wanted to know who was right: the scientists, or the people we were studying in religion class? Before I could collect my thoughts, a flood of powerful and emotional rejoinders issued from several of her classmates. My limited familiarity with the traditions with which they were affiliated gave me only a vague sense that I might expect a critical stance on the subject. What I had entirely failed to consider were the ways in which the cultural and racial identities of my students intersected with both their religious worldviews and their feelings about evolution. I soon learned that the ways in which many of these young women processed the tense public controversy over evolution and Biblically based accounts of creation were deeply tied to their sense of identity as people of color in a white-dominated culture. Many began quoting their pastors: “You are made in the image of God. You are beautiful, loved, and wanted in this world, and can’t nobody take that away from you.” Others followed with similar, powerful words that affirmed the essential pride and self worth that was instilled in any child of God. Still other students spoke of the historical influence of this notion of essential human worth on abolitionist and civil rights movements. They knew their history. The concept that all humans have been made with great love and purpose by God has historically operated as a powerful political and psychological resource to combat the behaviors of a racist society. Many students made it clear that they saw the promotion of evolutionary theory as a direct attack on the foundations of their personal identity, faith, and value as human beings.
Though I felt moved and honored to hear the powerful and nuanced ways in which my students articulated the liberative power behind the Biblical account of creation, I was pained to hear how rigidly and antagonistically they conceptualized the “evolution side” of this conversation. I found their monolithic image of “evolutionists” to be a clear example of the troubling state of an important conversation. These young women’s understanding of the relationship between evolution and faith led them to conceive of the scientific claim as only another attack on their sense of self by a hostile, dominant majority 1. Such an understanding represents a disconcerting deficit of education on both sides of the conversation, effectively eclipsing a range of voices from adding texture to what has become yet another American clash of extremes.
Most contemporary public discussion and media attention on the subject has been shaped by the polarizing rhetoric of certain anti-evolution Protestant American Christians. Those who oppose these efforts by criticizing creationist “science” have largely fallen into the trap of engaging in only reactionary responses. Many on both sides have portrayed this conversation as a conflict of “religion versus science,” ignoring the fact that the Creationist movement has historically been a uniquely American Protestant phenomenon 2, which has only in the past couple decades begun to spread to other countries 3.
Ironically, included in the crowd of voices with which my students were unfamiliar were those from the Catholic Church, the tradition upon which their school was founded. Indeed, when I related my classroom conversation to some of my fellow teachers, most of whom were raised within the Church, they expressed great surprise, recalling that the science classes within their own Catholic education had included extensive coverage of the scientific theory of evolution. (In fact, a survey of American Catholic school textbooks published between 1940 and 1960 found them to be in closer agreement with evolutionary theory than American public school textbooks from the same period 4.) Intrigued, I looked further into the historical response of the Catholic Church to Darwin and found a complex story that most certainly disrupts the largely monolithic representation of Christian responses to this scientific theory. It is also a story that is infrequently told, and appears to be largely unfamiliar to much of the American public 5.
For almost a full century following the publication of “On the Origin of Species,” the Church hierarchy avoided any official stance on, or condemnation of, evolution. The first official Vatican pronouncement, Pope Pius XII’s 1950 encyclical, Humani Generis (Human Origins), acknowledged evolution as a possible scientific explanation for the origin of humanity 6. More recent pronouncements have strongly embraced evolution as a scientific theory, as evidenced in Pope John Paul II’s 1996 address to the Pontifical Academy of Sciences, the current Pope Benedict XVI’s text “Communion and Stewardship: Human Persons Created in the Image of God,” and the 2009 Vatican conference on “Biological Evolution: Facts and Theories.” These documents express the view that evolution can be regarded as both a random process in the scientific sense, as well as a fulfillment of God’s plan 7. Thus, while the Church accepts evolution as a scientific theory, it rejects any claims that this theory has no place for God. Significantly, these documents seek to distance the Catholic perspective from that of intelligent design Creationists 8, and intelligent design speakers were pointedly barred from the recent Vatican conference.
Popular thought about evolution amongst American Catholic laity, scholars, and journalists has been largely supportive of evolutionary theory as well. A literature review of the popular Catholic press indicates that responses to the 1925 Scopes and 2005 Dover trials (the former testing a Tennessee prohibition against teaching evolution in public schools, and the latter arguing against the inclusion of intelligent design within public school science curricula) demonstrate a desire to problematize any claims of an inevitable clash between science and religion brought about by the teaching of evolution 9. For example, one 1925 editorial in a Catholic newsletter argued “[Creationist lawyer] William Jennings Bryan is reported as having said that if evolution is true, then Christianity can’t be true. In this matter as well as many others Mr. Bryan is wrong.” 10
….
The full text of the paper, which delves more deeply into how science becomes appropriated as a “cultural resource” and gives recommendations for teaching, can be made available on request. Again, our purpose is not to espouse or promote the Catholic view, but merely to present it as evidence that Creationists do not represent all, or even most, religious (or even Christian) views on evolution, and to highlight some of the hidden cultural factors underlying this debate.
Footnotes:
1 While I affirm that it is crucial for my students to develop a more nuanced understanding of the scientific community, it is important to note that, as young women of color, their anger and anxiety is certainly not unfounded. Science as practiced by the dominant culture has repeatedly produced scientific data and discourse in ways that either exploit or promote racist ideology. The American eugenics movement and the Tuskegee Syphilis Study are but two examples of this phenomenon.
2 Eugenie C. Scott, Evolution vs. Creationism (Westport, CT: Greenwood Press, 2004), 85-134.
3 Simon Coleman and Leslie Carlin, Cultures of Creationism (Burlington, VT: Ashgate, 2004), ix.
4 Gerald Skoog, “The Coverage of Human Evolution in High School Biology Textbooks in the 20th Century and Current State Science Standards,” Science and Education 14 (2005): 412-413.
5 Ronald L. Numbers and John Stenhouse, eds., Disseminating Darwinism (Cambridge: Cambridge University Press, 1999), 2. This text is one of many that notes the lack of both scholarly research and public knowledge on the Catholic position toward evolution.
6 Humani Generis, Chapter 36.
7 “Communion and Stewardship,” Chapter 69.
8 Ibid.
9 Christopher M. Hammer, Reconciling Faith, Reason, and Freedom: Catholicism and Evolution from Scopes to Dover (MA Thesis, University of Virginia, 2008), 3.
10 F. Gordon O’Neil, “The Week,” Monitor and Intermountain Catholic, June 6, 1925, 1.
]]>More likely, you’ve heard of private investors taking advantage of the banks’ unwillingness (or inability) to deal with all the bad loans on their books. Like the group of investors in Act 2 of this This American Life episode, you buy a house that’s in foreclosure at a significant discount to its true market value and then “you get the homeowner into either a mortgage they can afford, or they’re able to rent it, or you pay them a bit to move somewhere else.”
Well, what if there was a way to combine these two activities so that you are doing good for someone else while doing well for yourself financially? There are many variants of how this could work, but here’s the basic concept:
- Identify a number of people who are about to be foreclosed on but who could afford a reduced mortgage (or rent), and who want to stay in their homes.
- Buy these homes at a deep discount either from the bank directly or at auction. E.g. Owner owes $300K, monthly payments are $2K, house is now worth $200K, you get it for $150K.
- Offer the original home owner one of two options:
- Get a new loan to buy it back from you at $175K
- Rent it from you for $1K until such time as they can afford to buy it back at a guaranteed $25K below the appraised value at that time.
The key thing to keep in mind is that you are half doing this to make a profit and half doing it philanthropically to keep the borrower from losing their house. So whatever potential profit the bank is leaving on the table by not being flexible, you keep half and the borrower gets the rest of the value. Note that the “rest of the value” to the borrower is actually higher than what either you or the bank could capture because they probably owe more than the house is currently worth. In the example above, instead of owing $300K, if you flip the house back to the borrower at $175K, they’ve just saved $125K. You’ve made a very quick $25K just by being in the right place at the right time with cash (and a willingness to hold the property and be a landlord).
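A back-of-the-envelope sketch of who captures what, using the example numbers above (the even split of the bank’s foregone value is the simplification here):

```python
# Example numbers from the post; all dollars.
amount_owed  = 300_000   # original mortgage balance
market_value = 200_000   # what the house is worth today
purchase     = 150_000   # your discounted purchase price
buyback      = 175_000   # price at which you flip it back to the owner

investor_profit = buyback - purchase        # 25,000: your quick return
owner_discount  = market_value - buyback    # 25,000: owner buys below market
owner_debt_cut  = amount_owed - buyback     # 125,000: debt burden removed
print(investor_profit, owner_discount, owner_debt_cut)
```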
I can envision a web site where people who are about to lose their homes can register, giving all the details of their property and sale history along with the history of the relationship with their lender, what their current and prospective financial picture looks like, and how much they would be willing/able to spend monthly on a new mortgage or rent. The story is vetted and the applications that are suspect or don’t meet the criteria are rejected. The candidates that pass are given a “buyback or rentback” offer conditional on you being able to get the property at the target price or lower.
Please comment below if you’ve heard of anyone doing this.
If you haven’t and you are a home owner who would be a candidate for this type of deal, please also speak up. You might just find your Robin Hood, or maybe a group will emerge to create a fund for this.
If you are a potential investor in such a fund, let us know.
hat tip: Laura Rose, Marissa Chien
]]>First, let’s look at the issue of spending as a percentage of income. As Eric Rescorla pointed out to me in a personal communication, one could contend that state and local spending should remain constant as a percentage of income. Budgeting a fixed share of our resources to state and local projects seems prima facie reasonable. According to this site, per capita income in California was $42,696 in 2008 and $28,374 in 1998. Using our handy-dandy inflation calculator to convert both of those to 2009 dollars, we get $42,287 for 2008 and $37,119 for 1998. That’s real growth of 14% over 10 years (yes, a very slightly different 10 years than the budget numbers I have, but close enough). So real per capita spending grew at 2.7x the rate of real per capita income.
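Here’s that conversion as a short Python sketch; the real-dollar figures are the ones quoted above (I’m taking the CPI adjustment as given rather than recomputing it):

```python
# Per capita California income, converted to 2009 dollars per the post.
real_income = {1998: 37_119, 2008: 42_287}

income_growth = real_income[2008] / real_income[1998] - 1
spending_growth = 0.38  # real per capita spending growth from the companion post

print(f"real income growth: {income_growth:.0%}")                      # 14%
print(f"spending grew {spending_growth / income_growth:.1f}x faster")  # 2.7x
```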
Next, Terence posted in the comments that he had probably tracked down the two largest sources of spending increases: incarceration and education. His numbers look pretty convincing. Now, I think we can probably agree that incarceration is not a good. The United States has the highest incarceration rate in the world. California’s is actually higher than the national average. I don’t think Californians are 5x as anti-social as the British, who have about 1/5th the incarceration rate (but still the highest in the EU). So I’m pretty sure this is not an efficient use of our money.
That leaves us with education. Is increased education spending a good? From first principles, it seems like it should be. Given the large positive role it has played in my life, I hope it would be. The evidence is that additional years of education produce a positive return. However, this is somewhat different from the question of whether more public spending leads to better results. Alas, on this point, the evidence is alarmingly weak.
According to the official state statistics, the California graduation rate in 2007-08 was 79.7% compared to 83.3% in 1997-98. Going down, not up. But perhaps it takes time for increased spending to affect the graduation rate. In that case, we might want to look at intermediate metrics.
The standard benchmark here is the National Assessment of Educational Progress. These are standardized tests of academic knowledge given in different subjects at different ages. Reading test data goes back to 1971. Math test data goes back to 1978. These appear to be the most consistent metrics we have available. You can visually inspect the California results here. Unfortunately, they only gave certain tests in certain years so the time periods don’t quite match up with the budget data.
What you will see is that from 1996 to 2007, math scores for Grade 4 rose from 209 to 230 (10.0%) and for Grade 8 rose from 263 to 270 (2.7%). From 1998 to 2007, reading scores for Grade 4 rose from 202 to 209 (3.5%) and for Grade 8 dropped from 252 to 251 (-0.4%). This looks like a rather slight improvement for the amount of money spent.
Of course, visual inspection is not very rigorous. You want statistical analysis. Well, it turns out that there is a nice Brookings publication on the topic for the whole United States. The introduction is available via Google and worth reading because it walks through the research history. The summary is that there has been a lot of academic back and forth on the relationship between education resources and results. My take is that many decades ago, there was a weak yet significant relationship. But it has mostly evaporated as the level of expenditures rose above some threshold level. Perhaps the most interesting bit is about a “natural experiment” where 15 Austin, TX schools had a substantial increase in the resources available to them. Student performance increased at only 2 of the 15 schools.
I found a more specific 1999 paper on spending and achievement in California. The full text is behind a paywall, but the abstract says, “The evidence indicates that, despite claims to the contrary by many advocates of public education, higher education spending does not raise student achievement. Education spending is also shown to be highest in those counties exhibiting highest monopoly power as measured by the Herfindahl index.” Basically, I think this means that education spending is driven by how powerful the local public school system is.
All in all, not good news. We’re spending more than we’re making. The increase is split primarily between a “bad” and a “good”, but even the “good” spending doesn’t seem very effective. I just don’t feel that I’m getting my money’s worth.
]]>One became super efficient at gobbling up its food, doing so at a rate that was about a hundred times faster than the other. The other was slower at acquiring food, but produced about three times more progeny per generation.
The answer is…
RNA molecules which self-catalyze and evolve in the lab. This gets us one step closer to being able to show how life could have emerged on Earth without any exogenous materials (e.g. asteroids crashing into Earth carrying DNA).
I’ve never understood why such “exogenous origins” theories are popular since they seem like a pseudo-creationist punt rather than a truly scientific view. Sure, DNA could have come from outer space. But Stuart Kauffman showed years ago how — conceptually speaking — autocatalysis and cross-catalytic reactions could yield self-replicating molecules. Now all that’s left is setting up the right initial conditions to show the existence proof. No aliens — or god(s) — necessary.
]]>2. Spend time with friends, or even try and make a new one.
3. Help another person. Donate a small amount of money, that has nominal value to you, but significant value to someone else. (Kiva, Vittana Foundation.)
4. Quit Smoking. It might be even worse for you today.
]]>One became super efficient at gobbling up its food, doing so at a rate that was about a hundred times faster than the other. The other was slower at acquiring food, but produced about three times more progeny per generation.
Answer in tomorrow’s post.
]]>
In an excellent column in today’s Washington Post, Courtland Milloy explores the use of the war metaphor, and how it can be better used, if need be.
In an effort to recast substance abuse as more of a public health problem than a crime, the nation’s newly appointed drug czar has called for an end to talk of a “war on drugs.”
“Regardless of how you try to explain to people it’s a ‘war on drugs’ or a ‘war on a product,’ people see a war as a war on them,” Gil Kerlikowske, director of the White House Office of National Drug Control Policy, told the Wall Street Journal last week.
Wow. War scares me too. But is it really over? Will we stop jailing non-violent offenders? Can we now focus on treatment?
Via the Huffington Post, Jack Cole, Executive Director of Law Enforcement Against Prohibition (LEAP), doesn’t think the war is over:
A rose by any other name. This is not a war on drugs, it is a war on people; a war on our children, our parents, ourselves. Rebranding won’t change things. A new policy is needed to change things: ending drug prohibition.
Rebranding, a classic marketing trick. Sometimes a new name is all it takes to turn an unsuccessful product into a superstar. Somehow, I do not think this gambit will work here. And, whatever we call it, resources are still being disproportionately devoted to this “war.” According to Milloy, the Obama administration is spending more than double the National Institute on Drug Abuse (NIDA) annual research budget to “enhance Mexican law enforcement and judicial capacity.”
So the war is not really over. But, we can focus more of our resources on treatment, and attack the problem where it really lies, the brain.
Milloy concludes:
But a battle rages nonetheless. And he’ll [Kerlikowske] need to rally the troops. For the foe is cunning, capturing the brain. In a war, that would be the strategic high ground, and it must be retaken if we are to win.
]]>
In honor of California’s special election on budget measures, I thought I’d shed a little light on the fundamental problem. Contrary to what politicians are saying, the cause of the budget problem is not falling revenues in a recession. Rather, the cause is a dramatic increase in spending over the last 10 years.
If you live in California, you’ve probably read or seen numerous news stories about all the sacrifices in services we have to make to balance the budget. When I saw them, I became curious about how one would go about making these difficult tradeoffs. From a public policy perspective, this is an interesting problem.
I started looking at some historical data to get an idea of the choices we made on the margin in the past. The thinking is that the newest spending categories probably have the least benefit. But my analysis went off the tracks very quickly when I discovered just how much spending has ballooned.
In about five minutes of Googling, the problem was obvious. First, I found historical state and local per capita spending here. You have to look at state and local expenditures together in California. There is a lot of transfer of funds back and forth, as evidenced by the budget measures aimed at raiding local coffers for state needs. This data is from the US Census Bureau’s annual survey of State and Local Government Finances (which goes back to 1992). Now, it’s in nominal dollars, so I went to the Bureau of Labor Statistics Consumer Price Index calculator to convert everything to 2009 dollars.
Here’s the resulting graph of state spending controlled for both population and inflation from 1992 to 2009:
[Graph: real per capita California state and local spending, 1992–2009]
As you can see, spending was flat from about 1992 to 1999. But in only 10 years, real per capita state and local spending has increased a whopping 38%! According to the official numbers for the 2009-2010 budget, we expect $86.3B in revenues and $111.1B in expenditures, for a current year gap of $24.8B (there’s another $13.7B carried over from last year, but we wouldn’t have had that either without the spending increase). If we had just managed to keep our real per capita spending at 1999 levels, we would only need to spend $80.4B today. That means that we would have a $5.9B surplus! So it’s definitely spending growth that is the issue.
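The counterfactual is simple enough to check; the $80.4B figure (1999 real per capita spending scaled to today’s population and prices) is taken from the calculation above, not rederived here:

```python
revenues = 86.3              # $B, official 2009-10 estimate
expenditures = 111.1         # $B
spend_at_1999_levels = 80.4  # $B, real per capita spending held at 1999 levels

print(round(expenditures - revenues, 1))          # 24.8 -> current-year gap
print(round(revenues - spend_at_1999_levels, 1))  # 5.9  -> surplus instead
```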
Now, I don’t know about you, but I can remember 1999 pretty clearly and I don’t think I’m getting 38% better services. Do you? This sorry state of affairs viscerally disgusts me.
]]>If, like Aubrey de Grey, you believe that immortality is achievable, or you are just intrigued by the possibility, you should check out this news story on The Methuselah Foundation.
Lest you dismiss the idea out of hand, note that in 2006 Alan Russell gave a TED talk which showed a man with a decent portion of his finger being regenerated. Just three years later, Juan Enriquez gave an update in which doctors have successfully regrown tracheas, ears, bladders and even a full heart! If you can have a transplant for any organ in your body (regrown from your own cells), how far off is the first thousand-year-old person?
While I’m sure all of this progress will go on with or without my support, I decided at the end of last year to do a sort of “Pascal’s Hedge”* and became a 300 Club Member of the Methuselah Foundation. You can thank me on your 300th birthday ;-)
* See Pascal’s Wager
]]>A frantic grant-writing effort that has consumed biomedical research scientists this spring came to an end last week, resulting in a huge pile of new applications—more than 10 times larger than expected—to be reviewed by the National Institutes of Health (NIH). After this enthusiastic response, there will be many disappointed applicants: The rejection rate could run as high as 97%.
There is an increased time cost, spent not just applying for money but also reviewing applications to allocate the funds. Additionally, many qualified researchers chose not to apply for funding after hearing how competitive the grants were. Is this a good thing? Does this competition result in better science? Could there be a better way to allocate scientific funding?
]]>Jill Price is touted as the “woman who can’t forget,” but if you read the whole article you note that this is far from the truth. In reality she can only remember things well that are relevant to her personal life history. Stephen Wiltshire is even more impressive, able to recreate entire cityscapes from just one viewing.
The reality is that we humans are limited by the size of our brains as to how much total information we can store. And if we were to store every piece of sensory information that came our way every second of the day, we’d fill up pretty quickly. Thus, we are forced to heavily compress (i.e. encode) the raw information into chunks which can later be decoded when we are asked to recall. Fortunately, the world is very structured and it actually helps us immensely in this encoding/decoding process. We are all virtuosos at this process, but it’s so unconscious and natural that we don’t recognize the incredible feat that our brains are accomplishing all the time.
What goes on in these savant cases is that their brains have developed with a preternatural fixation and skill in either a particular domain (such as architecture), or in a general encoding/decoding scheme which allows for superior memory across a wide — but not unlimited — range of domains, usually based on the visual system.
In the case of Stephen Wiltshire it appears to be the domain-specific type. If you browse his art gallery and history, almost all of his feats of memory are specifically of architectural forms. Not surprisingly, architecture has quite a bit of regularity to it which helps the encoding/decoding process. For instance, if you see one column in detail and know how many columns there are on the building, you can recreate all of them in detail. One would expect Wiltshire to perform less eidetically on landscapes and nature, and perhaps no better than average on abstract visual scenes.
I had something more to say on this topic, but now I forget….
hat tip: Mom
The biggest piece of evidence is the existence of credible discount brokers like Redfin.com, which has been operating for a number of years now and has yet to make any impact on the 6% overall. In the past Redfin used to have very low fixed commission fees (if I remember correctly) and now they have gone to “50% savings”, which when you look into it is really only 25% since they are just talking about one side of the transaction. But you’d think that if they are listing both buyers and sellers they could just act like market makers and reduce both halves.
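To spell out that math (assuming the standard 6% commission split evenly between the buyer’s and seller’s agents, which is the usual arrangement):

```python
total_commission = 0.06   # standard combined commission
one_side = total_commission / 2

savings = 0.5 * one_side  # "50% savings" applies to one side only
print(f"{savings / total_commission:.0%} of the total")  # 25% of the total
```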
Now I understand that there is status-quo-preserving regulation and there is collusion and all sorts of conflict of interest in this game. But if the “big fat tip” is so large, why hasn’t the open market corrected this yet? There should be tons of competition in the discount broker space if there’s money to be had there. It should be an exploding industry, especially now with all the extra inventory available. Perhaps there finally is an opportunity, and time will tell.
From a customer’s standpoint, I have to admit that it’s not about the raw information, it’s about understanding the intricacies of buying and selling a fairly complex asset (which 99% of the population is not up for doing). And if an agent can convince me that for their extra 3% they can get me much more than that in the sale price, it doesn’t sound like such a bad deal. Unless I’m an experienced buyer/seller of real estate, even if I understand all the numbers and regulations from a theoretical standpoint, I’m not going to be a good negotiator in practice.
My guess is that the true value of the broker (both sides combined) is about 4%. The other 2% is the corruption in the system that we all know about. What do you all think?
]]>Suppose you meet a Wise being (W) who tells you it has put $1,000 in box A, and either $1 million or nothing in box B. This being tells you to either take the contents of box B only, or to take the contents of both A and B. Suppose further that the being had put the $1 million in box B only if a prediction algorithm designed by the being had said that you would take only B. If the algorithm had predicted you would take both boxes, then the being put nothing in box B. Presume that due to determinism, there exists a perfectly accurate prediction algorithm. Assuming W uses that algorithm, what choice should you make?
Ultimately one is led to understand that the paradox is a manifestation of different interpretations of the problem definition. (Aren’t all paradoxes, though?) If you interpret the setup one way, then you should choose just B and you will net $1M. If another way, then you should choose both and net either $1,000 or $1,001,000 depending on W’s unknowable prediction. As the authors conclude:
Newcomb’s paradox takes two incompatible interpretations of a question, with two different answers, and makes it seem as though they are the same interpretation. The lesson of Newcomb’s paradox is just the ancient verity that one must carefully define all one’s terms.
The authors suggest combining Bayesian nets with game theory is what yields this resolution. And at first I thought they missed the obvious further conclusion from Bayes, which is that you should clearly choose just B. Here was my reasoning. The key clue is in this piece of information: “people seem to divide almost evenly on the problem”. I.e. your Bayesian priors should now be set to 50% on either interpretation. Now, we know that the expected value (EV) of the “just B” scenario is $1M, but we don’t really know what the EV is for the “both boxes” scenario in which “your choice occurs after W has already made its prediction”:
…if W predicted you would take A along with B, then taking both gives you $1,000 rather than nothing. If instead W predicted you would take only B, then taking both boxes yields $1,001,000….
Since in this scenario you are choosing after W’s prediction, is there any way you can “predict” what W’s choice might be? No, of course not, it’s a variant of the Liar’s Paradox where if you predict one thing, the answer is the other. Thus, if we are using a probabilistic approach (as the authors have laid out for us), we must conclude there is no information to be gleaned on W’s prediction and we are forced to assign 50% likelihood of either choice. Hence, the EV of the “both boxes” interpretation is $501,000.
Putting both meta-Bayesian analyses together, we can conclude that since the “just B” interpretation yields $1M and the “both boxes” interpretation yields an EV of a little over half that, it’s a no-brainer to choose just B. Which means your EV is exactly $500,000. But wait! We just concluded that the EV for “both boxes” is $501,000, which is clearly better!!!
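For those following along at home, here is the expected-value bookkeeping as a sketch, using the 50/50 priors argued for above. Note that it reproduces the circularity; it does not resolve it:

```python
p = 0.5  # prior on either of W's two possible predictions

# "Both boxes" interpretation: W has already predicted, 50/50 either way.
ev_take_both = p * 1_000 + p * 1_001_000   # 501,000
ev_take_b    = p * 0 + p * 1_000_000       # 500,000 under this interpretation

# "Just B" interpretation: the perfect predictor rewards one-boxing.
ev_one_box   = 1_000_000

print(ev_take_both, ev_take_b, ev_one_box)
```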
Newcomb’s paradox will probably crack my list of Top 10 Paradoxes of All-Time (unless I figure out how to solve it after it does).
]]>From an economics point of view, I think that such restrictions, however well-meaning their original intent, tend to merely protect incumbents from competition. Nevertheless, I obviously don’t want the company to get into any trouble. Therefore, I have edited or redacted any potentially problematic revelations from these posts:
- What I’m Working On: Supercharging Innovation
- Executive Compensation
- Revolutionizing Angel Funding
- You Can’t Pick Winners at the Seed Stage
If you have any questions, post a comment here and I will contact you privately.
]]>Ciarán Brewster: Peer-reviewed studies that have looked at the possible relationship between vaccines and autism: https://bit.ly/BOIcw
Rafe Furst: Peer-reviewed autism studies can produce bad science too https://tinyurl.com/d9dpde
CB: If peer review is such a bad system, I’d love to know what alternative you suggest. Please enlighten me.
RF: Peer review=echo chamber https://is.gd/wnlb we can do better https://is.gd/wnl8 https://is.gd/wnl9 https://is.gd/wnla
CB: Hanson doesn’t address how we’re supposed to validate results, especially in a system where researchers are driven by money.
RF: you could use something like ideas futures or truthmarkets https://tinyurl.com/dchxlo
CB: It seems that truthmarkets make the big assumption that we have a way to evaluate the likely validity of a claim
CB: What’s to stop a group of scientists, driven by money, propagating a lie, since the patron has no access to relevant info?
RF: markets r open 2 every1 so there is incentive 4 any1 with more truthful info 2 make a buck. Suspect fraud? Redo expmt & bet on it
CB: Redo expmt & bet on it. But this still doesn’t solve how we determine the validity of the suspected or new claim. Who decides?
RF: the market decides; it’s consensus just as with all “truths” but with their money on the line people are more accurate/honest
CB: So it’s just money driven peer-review then. You seem to imply that scientists are dishonest unless there’s money on the line.
RF: 1. Thousands of peers betting beats 5 with inherent COI 2. Humans r better predictors & assesors of truth when own money on line
CB: “1000’s of peers betting.” I’m sorry, but what universe are you living? People will do and say anything if price is high enough.
RF: “People will do & say anything if price is high enough”. And other people will sell, driving price down & make a profit off them
RF: 2 understand info markets https://is.gd/wGhu now see real money version https://is.gd/Kej markets r proven > human peers
CB: Could you please give me some examples of testable scientific hypotheses using the model of info markets?
RF: markets can’t set up experiments, test hypotheses, or get results; they can provide incentive for humans to do it very well
CB: If info markets are the ultimate arbiter, how do they decide what is valid? How do they have knowledge of quality science?
Starting with Yosemite in the late 19th and early 20th centuries, the pattern of forcing indigenous civilizations from their ancestral land in order to create wildlife reserves and national parks has been repeated across the country and the world.
The conflict is… compelling the conservation movement to grapple with the effects of its own century-long blunder, and with its origins as an American movement driven largely by nature romantics and aristocratic men determined to protect their hunting grounds. Not only has it dispossessed millions of people who might very well have been excellent stewards of the land, but it has engendered a worldwide hostility toward the whole idea of wildland conservation – damaging the cause in many countries whose crucial wildland is most in need of protection.
The article describes how indigenous peoples threatened with displacement across the globe have begun to band together to force a change in the conservationist mindset that humanity and nature are antithetical.
Reporting like this is why the Boston Globe needs to stay in business.
]]>
While “postdecriminalization usage rates have remained roughly the same or even decreased slightly when compared with other EU states, drug-related pathologies — such as sexually transmitted diseases and deaths due to drug usage — have decreased dramatically.”
The U.S. should take note of this and take it one step further by pushing the power to make drug laws down to the state level, as with gambling. Some states will legalize drugs altogether, taxing sales and driving prices down, putting the drug lords in Mexico and elsewhere out of business. Other states will be more strict to start and can amend their laws later based on results in the liberal states. Obama should give mass pardons for small drug crimes and empty the prisons of otherwise productive members of society.
]]>
In a single-round game with a payoff matrix similar to that proposed by Gash there is a clear Nash equilibrium, representing the optimal strategy both parties will adopt. In this case, both villages choose to support the Taliban. But supporting the Taliban or Coalition is not a single-round game; it is a continuous game with a significant but unknown number of rounds. Not only may villages switch allegiance at any time, but if the Taliban is defeated or cleared from the area, the game may abruptly end.
In his seminal work, “The Evolution of Cooperation,” Robert Axelrod explores how cooperation surprisingly trumps competition in a similarly styled prisoner’s dilemma game. Based on an iterated prisoner’s dilemma tournament, Axelrod found that strategies which always defected (or, in the case of Gash’s example, supported the Taliban) performed the worst. The best strategies were mixed, and tended to copy their opponent’s previous actions, leading to cooperative alliances.
Extending this theory of cooperation to the actions of Afghan villages, we can infer that over time they are likely to discover that cooperation and supporting the coalition is the best strategy. While Gash correctly concludes that changing the cost/benefit value (incentive) for supporting the coalition may speed up the process, it is not necessary to achieve the optimal cooperative solution.
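To make Axelrod’s result concrete, here is a minimal simulation sketch. The payoff matrix is the standard tournament one (temptation 5, reward 3, punishment 1, sucker 0), not Gash’s actual values, and the two strategies are the ones discussed above:

```python
# 'C' = cooperate (support the coalition), 'D' = defect (support the Taliban).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def play(strat_a, strat_b, rounds=100):
    score_a = score_b = 0
    hist_a, hist_b = [], []
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)  # each sees opponent's history
        pa, pb = PAYOFF[(a, b)]
        score_a += pa; score_b += pb
        hist_a.append(a); hist_b.append(b)
    return score_a, score_b

always_defect = lambda opp: 'D'
tit_for_tat   = lambda opp: opp[-1] if opp else 'C'  # copy opponent's last move

print(play(tit_for_tat, tit_for_tat))    # (300, 300): mutual cooperation
print(play(always_defect, tit_for_tat))  # (104, 99): defection gains little
```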
]]>- CO2 causes a direct temperature increase
- Positive feedbacks amplify the direct temperature increase several fold
- The effects on humans of the total temperature increase are significantly bad
- The cost of reducing CO2 emissions is less than the bad effects we can avoid
Nearly all scientifically literate skeptics agree with (1). Most typically argue against points (2) and (4). Indur M. Goklany has a nice series of posts over at Watts Up With That that looks at (3).
Even assuming that the IPCC estimates are right, AGW still doesn’t look very scary:
- Currently, AGW is the 13th ranked cause of death, accounting for only .3% of all deaths. For comparison, unsafe water is at 3.1%, malaria is at 2.0%, and urban air pollution is at 1.4%.
- Forecasts for 2085 put deaths attributable to the AGW impacts of hunger, malaria, and flooding at 237,000 in the business as usual scenario and 92,000 with the most aggressive carbon reductions. The cost is a reduction of per capita GDP in 2090 from $57.9K to $37.9K, i.e. the average person will be 35% poorer (source here; see the quick check after this list).
- The forecast population at risk of water stress actually decreases under the business as usual scenario.
- The predicted conversion of habitat to cropland also decreases under the business as usual scenario.
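As a quick check on the “35% poorer” figure from the second bullet, using only the two GDP numbers quoted there:

```python
gdp_business_as_usual = 57.9  # $K per capita in 2090
gdp_aggressive_cuts   = 37.9  # $K per capita in 2090

loss = 1 - gdp_aggressive_cuts / gdp_business_as_usual
print(f"{loss:.0%}")  # 35%
```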
The bang for the buck just doesn’t seem to be there.
]]>Changing the mindset could change the outcome.
]]>
Provenge stimulates T-cell proliferation, and these cells lyse the cancer cells. Although the NY Times reports it is “proof of concept” that cancer vaccines can and do work, it doesn’t seem to be particularly far in the research space from mainstream work, with the concerns pointed out by Rafe here and Daniel here. Perhaps, since it is the first such vaccine, it will lead to work that will change the 4 months to 4 decades.
Dendreon’s Small Molecule Program is another step away. They have identified an ion channel that is present in 100% of prostate cancers, 71% of breast cancers, 93% of colon cancers, and 80% of lung cancers, and is barely or not at all detectable in the corresponding normal human tissues. Dendreon has synthesized small molecule agonists that induce cell death by activating the ion channel and interfering with cell growth, migration and division.
So many other treatments leave cancer cells alive and able to mutate. These agonists, however, are devastating. Without the equivalent of a functioning stomach and reproductive system it is unlikely these cancer cells could mutate and evolve their way around this attack. Perhaps this success can get more funding to more innovative research projects.
]]>
In an excellent New York Times piece, which I strongly recommend reading in its entirety, Gina Kolata explores why advances have been so elusive in the drive to cure cancer. (Kolata also smartly avoids the “war” metaphor.)
I find particularly troubling one of the problems she identifies:
And for all the money poured into cancer research, there has never been enough for innovative studies, the kind that can fundamentally change the way scientists understand cancer or doctors treat it. Such studies are risky, less likely to work than ones that are more incremental. The result is that, with limited money, innovative projects often lose out to more reliably successful projects that aim to tweak treatments, perhaps extending life by only weeks.
Kolata hits upon a problem that plagues not just cancer research but scientific research in general. Funding is given to popular ideas that are likely to be successful. Less popular and longshot ideas are unlikely to receive funding, and if they are lucky enough to receive grant money, it’s usually a drop in the bucket compared to popular mainstream approaches. To understand why this bias towards “successful” studies is problematic, note what a Sloan-Kettering cancer specialist explained to Kolata about what exactly constitutes a “successful” idea or research study:
For example, a study may state that a treatment offers a “significant survival advantage” or a “highly significant survival advantage.” Too often, Dr. Saltz says, the word “significant” is mistaken to mean “substantial,” and “improved survival” is often interpreted as “cure.”
Yet in this context, “significant” means “statistically significant,” a technical way of saying there is a difference between two groups of patients that is unlikely to have occurred by chance. But the difference could mean simply surviving for a few more weeks or days. [Emphasis Mine]
Billions spent on research yield expensive drugs with horrible side effects that might give one an extra few weeks to live. Certainly, this is not always the case, but far too often it is. I find this both disappointing and sad. As Kolata explains, a different definition of success tells a different story, because the death rate for cancer has barely changed since 1950:
Data from the National Center for Health Statistics show that death rates over the past 60 years — the number of deaths adjusted for the age and size of the population — plummeted for heart disease, stroke, and influenza and pneumonia. But for cancer, they barely budged.
The numbers are “plummeting” for many of the leading causes of death, but for cancer, they don’t seem to be moving at all. All the money, all the awesome technology, all the hard work, and we have precious little to show for it. Does this bother anyone? Is it possible that we are doing something wrong? Perhaps we should try some new approaches?
There is a faint glimmer of hope. The director of the National Cancer Institute (NCI) has recently recognized that “the theories of Darwinian and somatic evolution can help us better understand and control cancer.” To support this and other interdisciplinary research, the NCI is committing $75 – 150M over five years. But $15 – 30M a year is not enough. There are far too many valuable researchers lost in the long tail of science. We need more money. We need fresh ideas and different approaches. The time for change is long overdue.
]]>Now, those who make this objection usually don’t state it that bluntly. They might say that investors need technical expertise to evaluate the feasibility of a technology, or industry expertise to evaluate the likelihood of demand materializing, or business expertise to evaluate the plausibility of the revenue model. But whatever the detailed form of the assertion, it is predicated upon angels possessing specialized knowledge that allows them to reliably predict the future success of seed-stage companies in which they invest.
It should be no surprise to readers that I find this assertion hard to defend. Given the difficulty in principle of predicting the future state of a complex system from its initial state, one should produce very strong evidence to make such a claim, and I haven’t seen any from proponents of angels’ abilities. Moreover, the general evidence on humans’ ability to predict these sorts of outcomes makes it unlikely for a person to have a significant degree of forecasting skill in this area.
First, there are simply too many random variables. Remember, startups at this stage typically don’t have a finished product, significant customers, or even a well-defined market. It’s not a stable institution by any means. Unless a lot of things go right, it will fall apart. Consider just a few of the major hurdles a seed-stage startup must clear to succeed.
- The team has to be able to work together effectively under difficult conditions for a long period of time. No insurmountable personality conflicts. No major divergences in vision. No adverse life events.
- The fundamental idea has to work in the future technology ecology. No insurmountable technical barriers. No other startups with obviously superior approaches. No shifts in the landscape that undermine the infrastructure upon which it relies.
- The first wave of employees must execute the initial plan. They must have the technical skills to follow developments in the technical ecology. They must avoid destructive interpersonal conflicts. They must have the right contacts to reach potential early adopters.
- Demand must materialize. Early adopters in the near term must be willing to take a risk on an unproven solution. Broader customers in the mid-term must get enough benefit to overcome their tendency towards inaction. A repeatable sales model must emerge.
- Expansion must occur. The company must close future rounds of funding. The professional executive team must work together effectively. Operations must scale up reasonably smoothly.
As you can see, I listed three examples of minor hurdles associated with each major hurdle. This fan-out would expand to 5-10 if I made a serious attempt at exhaustive lists. Then there are at least a dozen or so events associated with each minor hurdle, e.g., identifying and closing an individual hire. Moreover, most micro events occur repeatedly. Compound all the instances together and you have an unstable system bombarded by thousands of random events.
Enter Nassim Taleb. In Chapter 11 of The Black Swan, he summarizes a famous calculation by mathematician Michael Berry: to predict the 56th impact among a set of billiard balls on a pool table, you need to take into account the position of every single elementary particle in the universe. Now, the people in a startup have substantially more degrees of freedom than billiard balls on a pool table and, as my list above illustrates, they participate in vastly more than 56 interactions over the early life of a startup. I think it’s clear that there is too much uncertainty to make reliable predictions based on knowledge of a seed-stage startup’s current state.
“Wait!” you may be thinking, “Perhaps there are some higher level statistical patterns that angels can detect through experience.” True. Of course, I’ve pored over the academic literature and haven’t found any predictive models, let alone seen a real live angel use one to evaluate a seed-stage startup. “Not so fast!” you say, “What if they are intuitively identifying the underlying patterns?” I suppose it’s possible. But most angels don’t make enough investments to get a representative sample (1 per year on average). Moreover, none of them that I know systematically track the startups they don’t invest in to see if their decision making is biased towards false negatives. Even if there were a few angels who cleared the hundred mark and made a reasonable effort to keep track of successful companies they passed on, I’d still be leery.
You see, there’s actually been a lot of research on just how bad human brains are at identifying and applying statistical patterns. Hastie and Dawes summarize the state of knowledge quite well in Sections 3.2-3.6 of Rational Choice in an Uncertain World. In over a hundred comparisons of human judgment to simple statistical models, humans have never won. Moreover, Dawes went one better. He actually generated random linear models that beat humans in all the subject areas he tried. No statistical mojo to determine optimal weights. Just fed in a priori reasonable predictor variables and a random guess at what their weights should be.
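To make Dawes’s trick concrete, here is a toy version of a random linear model; the predictor names and deal ratings are invented placeholders, not variables from his studies:

```python
import random

# Score deals as a weighted sum of a priori reasonable predictors with
# randomly chosen weights (random, not fitted to any outcome data).
PREDICTORS = ["founder_experience", "market_size", "team_completeness",
              "early_traction", "capital_efficiency"]
weights = {p: random.random() for p in PREDICTORS}

def score(startup):
    """startup maps each predictor to a normalized 0-1 rating."""
    return sum(weights[p] * startup.get(p, 0.0) for p in PREDICTORS)

deals = {
    "deal_a": {"founder_experience": 0.8, "market_size": 0.4, "early_traction": 0.6},
    "deal_b": {"founder_experience": 0.3, "market_size": 0.9, "early_traction": 0.2},
}
ranking = sorted(deals, key=lambda name: score(deals[name]), reverse=True)
print(ranking)  # a consistent, bias-free ordering is the whole point
```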
Without some sort of hard data amenable to objective analysis, subjective human judgment just isn’t very good. And at the seed stage, there is no hard data. The evidence seems clear. You are better off making a simple list of pluses and minuses than relying on a “gut feel”.
The final line of defense I commonly encounter from people who think personal evaluations are important in making seed investments goes something like, “Angels don’t predict the success of the company, they evaluate the quality of the people. Good people will respond to uncertainty better and that’s why the personal touch yields better results.” Sorry, but again, the evidence is against it.
This statement is equivalent to saying that angels can tell how good a person will be at the job of being an entrepreneur. As it turns out, there is a mountain of evidence that unstructured interviews have little value in predicting job performance. See for example, “The Validity and Utility of Selection Methods in Personnel Psychology: Practical and Theoretical Implications of 85 Years of Research Findings“. Once you have enough data to determine how smart someone is, performance on an unstructured interview explains very little additional variance in job performance. I would argue this finding is especially true for entrepreneurs where the job tasks aren’t clearly defined. Moreover, given that there are so many other random factors involved in startup success than how good a job the founders do, I think it’s hard to justify making interviews the limiting factor in how many investments you can make.
Why then are some people so insistent that personal evaluation is important? Could we be missing something? Always a possibility, but I think the explanation here is simply the illusion of control fallacy. People think they can control random events like coin flips and dice rolls. Lest you think this is merely a laboratory curiosity, check out the abstract from this Fenton-O’Creevy et al. study of financial traders. The higher their illusion of control scores, the lower their returns.
I’m always open to new evidence that angels have forecasting skill. But given the overwhelming general evidence against the possibility, it better be specific and conclusive.
]]>
Instead of a tendency towards some kind of theoretical equilibrium, the participants’ views and the actual state of affairs enter into a process of dynamic disequilibrium which may be mutually self-reinforcing at first, moving both thinking and reality in a certain direction, but is bound to become unsustainable in the long run and engender a move in the opposite direction. The net result is that neither the participants’ views nor the actual state of affairs returns to the condition from which it started.
…
[W]e can observe three very different conditions in history: the “normal,” in which the participants’ views and the actual state of affairs tend to converge; and two far-from-equilibrium conditions, one of apparent changelessness, in which thinking and reality are very far apart and show no tendency to converge, and one of revolutionary change in which the actual situation is so novel and unexpected and changing so rapidly that the participants’ views cannot keep up with it.
We’ve been discussing the prospect of stabilizing dynamics by intervening during times of “apparent changelessness” so that we can forestall or mitigate times of “revolutionary change”. Interestingly, there’s (perhaps) no way to tell the difference between the quiet, equilibrium condition and the quiet, far-from-equilibrium condition, but empirically it seems that the former ultimately gives way to the latter — and eventually to a revolutionary period; it’s just a matter of how long it takes. As Taleb observes, the longer we go without a black swan event, the more likely its appearance.
Perhaps the difficulty with stabilizing complex adaptive systems has to do with reflexivity. As soon as we make an explicit policy decision to address a source of instability we know about, the system believes* that it has become more stable, which blinds it to the inherent inevitable falsehood of that proposition. Which in turn quickens the instability. Sort of a probabilistic, temporally diffuse liar’s paradox.
This would suggest that any intervention which increases the stability/certainty of the system’s internal representation of itself — i.e. the beliefs of the market participants about the market — actually has the opposite effect as its intent. Instead, it would be a better approach to induce uncertainty whenever the system seems to be settling into a “quiet” period. This could be accomplished either by gratuitously creating a limited amount of market volatility, or by obfuscating market-related data. Given the increasing difficulty with the latter due to technology (not to mention the fairness issues it entails), the former seems preferable. What would this look like? It could take many forms, including ones that appear in the comments here.
Ultimately though, what Soros’ arguments suggest to me is that the goal of policy-induced stability is paradoxically better achieved by inducing instability than by attempting to dampen oscillations à la Sumner.
We’ve used the ecological analogy in the past of a controlled burn policy. Now a biological analogy comes to mind. If you want to become really strong and resilient, the best way to work out is to put your body into slightly unfamiliar situations and don’t fall into a routine that your body gets used to.
To expect policy makers to do this willingly with economic policy seems a bit far-fetched. However, interestingly, the historical trajectory of more frequent “revolutionary” periods in the economy may actually have the same effect organically. That is, when individual economic agents (most notably humans) get used to uncertainty being the norm (as opposed to it being a distant or non-existent memory), perhaps the overall economic system will converge to riding that famous “edge of chaos” instead of oscillating in and out of it.
* In speaking of multi-agent systems like markets, when I say it believes something, you can either take that as shorthand for “the participants believe” or you can ascribe cultural agency to the markets as I do. For the purposes of this discussion, it doesn’t really matter.
At the European Future Technologies meeting in Prague they announced a “detailed simulation of a small region of a brain built molecule by molecule has been constructed and has recreated experimental results from real brains.”
]]>Viewed from a thousand miles, the financial system has an incalculably large incentive to fail catastrophically as frequently as it can do so without killing the goose that lays the golden eggs.
As long as there is such a thing as “too big to fail” and trillions of dollars are available for siphoning, according to what logic can this cycle be dampened? Nobody has to explicitly pursue this outcome (although there are many who will) for it to be inevitable; the system obeys its own logic above all else.
[ commenting on Alfred Hubler on Stabilizing CAS ]
What if all financial institutions accepting federal aid were forced to list on eBay, sorted by zip code, every house on their books that is abandoned or in foreclosure? Signs would be put on lawns so all the neighbors knew of the opportunity. Investors could scour the listings, do drive-bys, and bid on the mortgages. Investors have a better idea what the house near them is worth than some bank in New York. Bids could be by auction or even “Buy-It-Now,” and even the lowest offer would be binding. Healthy banks could pre-approve borrowers using reasonable application procedures.
Some properties will sell for more than the mortgage and many for less, but the banks holding Mortgage-Backed Securities will be left with a portfolio of mortgages that are performing and cash to lend again. The market will have spoken and done a true mark-to-market valuation.
Investors are currently sitting on a record pile of cash, and perhaps an investment in a house they can keep an eye on from their porch, while slowing the downward cycle of prices in their neighborhood, is more attractive than the alternatives.
]]>
So, why do we experience a “contact high?” And, if placebos work even if you don’t believe in them, do those who have never used marijuana or don’t believe in a “contact high” still experience one? The answer is surely a complicated mix of evolutionary and epigenetic factors that we don’t fully understand, but it is probably very closely related to the biological mechanisms that underlie placebo effects. Experienced users are probably experiencing effects related to conditioning and coupling: the similar environment, people, and smells will subconsciously evoke similar but relatively muted responses. Even those who have never used marijuana may “feel” somewhat similar to those around them. If everyone is laughing, smiling, and feeling happy, then you will too. Ever hang around a depressed person? Yeah, it’s depressing. It is also possible that mirror neurons play a role in the subconscious mimicking of the behavior of those around us. So if you are worried about that upcoming drug test and your “contact high,” don’t be; it’s all in your head.
]]>The Good: After Capitalism (Geoff Mulgan)
“The era of transition that we are entering will be disruptive—but it may bring a world where markets are servants, not masters.” I urge you to read this entire article, and leave your ideological biases at the door. Despite the title, this is no polemic. Here’s the punchline:
Contemporary biology and social science has confirmed just how much we are social animals—dependent on others for our happiness, our self-respect, our worth and even our life. There is no inherent contradiction between capitalism and community. But we have learned that these connections are not automatic: they have to be cultivated and rewarded, and societies that invest large proportions of their surpluses on advertising to persuade people that individual consumption is the best route to happiness end up paying a high price.
Here are some reality checks for the Left:
- “‘philanthrocapitalism,’ the idea that the rich can save the world, may not survive the crisis”
- “Propping up failing industries is… a risky policy.”
- “If another great accommodation is on its way, this one will be shaped by the triple pressures of ecology, globalisation and demographics. Forecasting in detail how these might play out is pointless and, as always, there are as many malign possibilities as benign ones, from revived militarism and autarchy to stigmatisation of minorities and accelerated ecological collapse.”
- “Another intriguing part of this story is the growth of capital in the hands of trusts and charities, which now face the dilemma of whether to use their substantial assets (£50bn in Britain) not just to deliver an annual dividend but also to reflect their values. Bill Gates found himself at the sharp end of this dilemma when critics pointed out that the vast assets of his foundation were often invested in ways that ran counter to what it was seeking to achieve through its spending.”
- “Obama should be ideally suited to offering a new vision, yet has surrounded himself with champions of the very system that now appears to be crumbling.”
And here are some for the Right:
- “40 per cent of the investment in Silicon Valley came from government”
- “Daniel Bell… [argued] that capitalism would erode the traditional norms on which it rests—willingness to work hard, to pass on legacies to children, to avoid excessive hedonism. Japan in the 1990s was a good case in point—its slacker teenagers rejecting their parents’ work ethic that had driven the economic miracle.”
- “But the new technologies—from high speed networks to new energy systems, low carbon factories to open source software and genetic medicine—have a connecting theme: each potentially remakes capitalism more clearly as a servant rather than a master, whether in the world of money, work, everyday life or the state.”
- “It’s an irony that so many of the measures taken to deal with the immediate impact of the recession, like VAT cuts and fiscal stimulus packages, point in the opposite direction to what’s needed long term. But there are already strong movements to restrain the excesses of mass consumerism….”
- “Reinforcing these trends are shifts in the balance of the economy away from products and services, towards a ‘support economy’ based on relationships and care….”
- “Knowledge too is dividing between capitalist models and cooperative alternatives…. The creative commons approach is gaining ground in culture as an alternative to traditional copyright and Wikipedia has become an unlikely symbol of post-capitalism.”
- “…there has been a long-term trend towards more people wanting work to be an end as well as a means, a source of fulfilment as well as earnings.”
- “Governments may also be drawn further into financial services…. [Denmark and Singapore] have created personal budget accounts for citizens, and it’s not hard to imagine some offering services where people can borrow money for a period of retraining, parental leave or unemployment, and then repay through the tax system over 20 or 30 years, or through a charge on homes, with much lower transaction costs than the banks.” [ sounds familiar :-) ]
- “The building societies [member-owned mortgage lenders, roughly what Americans would call “savings and loans”] that didn’t privatise have survived far better than those that did. Charities tend to survive recessions better than conventional businesses and Britain’s 55,000 or so social enterprises may bounce back faster than firms without a social mission.”
What I like best about Mulgan’s essay is the nuanced long-term historical view:
In this essay I look at what capitalism might become on the other side of the slump. I predict neither resurgence nor collapse. Instead I suggest an analogy with other systems that once seemed equally immutable. In the early decades of the 19th century the monarchies of Europe appeared to have seen off their revolutionary challengers, whose dreams were buried in the mud of Waterloo. Monarchs and emperors dominated the world and had proven extraordinarily adaptable. Just like the advocates of capitalism today, their supporters then could plausibly argue that monarchies were rooted in nature. Then it was hierarchy which was natural; today it is individual acquisitiveness. Then it was mass democracy which had been experimented with and shown to fail. Today it is socialism that is seen in the same light, as a well-intentioned experiment that failed because it was at odds with human nature.
and
In Perez’s account economic cycles begin with the emergence of new technologies and infrastructures that promise great wealth; these then fuel frenzies of speculative investment, with dramatic rises in stock and other prices. During these phases finance is in the ascendant and laissez faire policies become the norm. The booms are then followed by dramatic crashes, whether in 1797, 1847, 1893, 1929 or 2008. After these crashes, and periods of turmoil, the potential of the new technologies and infrastructures is eventually realised, but only once new institutions come into being which are better aligned with the characteristics of the new economy. Once that has happened, economies then go through surges of growth as well as social progress, like the belle époque or the postwar miracle.
hat tip: @TEDchris (Chris Anderson)
The Bad: Companies Reset Goals for Bonuses (Jonathan D. Glater)
When executives have a tough time meeting their performance goals, a growing number of companies are moving the goalposts for them….
Companies generally point to the economic downturn and argue that this year, missing the kind of performance targets used in the past does not result from poor management. It would be unfair to withhold pay from executives, in this view, because they may be doing a good job while circumstances beyond their control sabotage their efforts.
For the counter-argument, read my post on the topic.
hat tip: Jay Greenspan
The Ugly: Our Epistemological Depression (Jerry Z. Muller)
I’m not actually linking to this article, for two reasons. One is that I don’t want to promote it, since I think the arguments are specious. The other is suggested by the editor’s note at the end:
Editor’s note: Amar Bhide has asked that it be noted that some of the ideas in this essay draw upon his articles, “An accident waiting to happen,” which appeared on his website, and “Insiders and Outsiders,” which appeared on Forbes.com.
I encourage you to read Bhide’s articles linked above because they are actually quite good. From the Forbes article (emphases mine):
American industry–businesses in the real economy–long ago learned hard lessons in the virtues of focus. In the 1960s, the prevailing wisdom favored growth through diversification. Many benefits were cited. Besides synergistic cost reductions offered by sharing resources in functions such as manufacturing and marketing, executives of large diversified corporations allegedly could allocate capital more wisely than could external markets. In fact, the synergies often turned out to be illusory, and corporate executives out of touch. Super-allocators like Jack Welch and Warren Buffett were exceptions….
Predictably, taxpayers are footing much of the bill for the misadventures in diversification. Regulators, who looked the other way while bankers put the public’s deposits at risk and brought the nation’s economy to its knees, now have an opportunity to redeem themselves…. Instead, they are encouraging more diversification, hoping to bury, for instance, Merrill Lynch’s unknown liabilities into Bank of America’s impenetrable balance sheets, and–in spite of their past failures with the likes of Citicorp–welcoming the creation of more megabanks. This is rather like giving the addict in the ER more drugs. It may soothe the tremors, but it isn’t a long-term solution to the diversification debacle.
It’s important to note that “diversification” in this context actually refers to consolidation; in other words, individual companies trying to do too many things and losing the efficiency that would have been present in the marketplace if these diverse interests had remained independent.
In contrast to Bhide’s lucid argument, here’s Muller’s twisting of the same:
The diversification of investment, which was intended to reduce risk to institutional investors, ended up spreading risk more widely, as investors across the country and around the world found themselves holding mortgage-backed American securities of declining and indeterminate value. There was a belief in the financial sector that diversification of assets was a substitute for due diligence on each asset, so that if one bundled enough assets together, one didn’t have to know much about the assets themselves. The creation of securities based on a pool of diverse assets (mortgage loans, student loans, credit card receivables, etc.) meant that when markets declined radically, it became impossible to determine an accurate price for the security.
This is completely wrong. The issue wasn’t spreading risk more widely and the concomitant inability to accurately price securities. The issue was that the financial wizards forgot to check how correlated the underlying securities actually were, convinced themselves with oversimplified models that there was no correlation, ignorantly applied leverage, and hence increased risk for everyone. There was no “spreading of risk” whatsoever.
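To make the correlation point concrete, here’s a toy Bernoulli-mixture simulation (all numbers invented for illustration): two loan pools with the exact same average default rate, one where defaults are independent and one where they share a common “bad year” factor.

```python
import numpy as np

rng = np.random.default_rng(0)
n_loans, n_trials = 100, 100_000

# Independent model: every loan defaults on its own at the average rate.
p_avg = 0.9 * 0.02 + 0.1 * 0.30          # 4.8% unconditional default rate
indep = rng.binomial(n_loans, p_avg, n_trials) / n_loans

# Correlated model: same average rate, but a common factor (a "bad year,"
# 10% of the time) hits every loan in the pool at once.
bad_year = rng.random(n_trials) < 0.10
p_cond = np.where(bad_year, 0.30, 0.02)
corr = rng.binomial(n_loans, p_cond) / n_loans

for name, losses in [("independent", indep), ("correlated", corr)]:
    print(f"{name:>11}: mean default rate {losses.mean():.3f}, "
          f"P(more than 20% default) = {(losses > 0.20).mean():.4f}")
```

With these stand-in numbers, the independent pool essentially never sees more than 20% of its loans default, while the correlated pool does so about one year in ten, despite identical average default rates. Pooling plus assumed independence is exactly how you convince yourself the tail doesn’t exist.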
Muller would have been better off just plagiarizing Bhide directly, that way he’d at least get the argument right.
hat tip: Daniel Horowitz
I would compare large scale boom-bust cycles to catastrophic forest fires.
Two thoughts:
- Policy impacts only small forest fires: When small forest fires are suppressed, large forest fires become possible and, more importantly, the untorched forest changes the local climate, so the forest may grow faster and faster. At some point forest fires are potentially so devastating that policy makers have no choice but to suppress them. Eventually the amount of combustible material reaches a threshold where the fire cannot be prevented, the catastrophic forest fire takes place, and the cycle starts over.
- Self-adjusting systems suppress catastrophic boom-bust cycles – therefore catastrophic wildfires are rare in un-managed forests: self-adjusting systems avoid chaos [1,2]. However, such adaptation to the edge of chaos occurs only if the system parameters change slowly compared to the dynamical variables; i.e., if we change policy faster than the period of the boom-bust cycle, then self-adjustment will not suppress the cycles. The good news is that almost any type of self-adjustment suppresses chaos [3] (see the toy simulation after the references below).
[1] P. Melby, J. Kaidel, N. Weber, A. Hübler, Adaptation to the Edge of Chaos in the Self-Adjusting Logistic Map, Phys.Rev.Lett 84, 5991-5993 (2000): https://server10.how-why.com/publications/2000/Melby00.pdf
[2] P. Melby, N. Weber, A. Hübler, Robustness of Adaptation in Controlled Self-Adjusting Chaotic Systems, Phys. Fluctuation and Noise Lett. 2, L285-L292 (2002): https://server10.how-why.com/publications/2002/Melby02.pdf
[3] T. Wotherspoon and A. Hubler, “Adaptation to the Edge of Chaos with Random-Wavelet Feedback,” accepted by the Journal of Physical Chemistry: https://server10.how-why.com/blog/chemical.pdf
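To see the mechanism in [1] in action, here is a minimal toy version of a self-adjusting logistic map (my own simplified feedback rule, not the exact one in the papers): the state x evolves fast, while the control parameter a drifts slowly, driven by a low-pass filter of the local Lyapunov exponent.

```python
import numpy as np

# Toy self-adjusting logistic map. Chaotic motion (positive local Lyapunov
# exponent) pushes the parameter `a` down; periodic motion pushes it up;
# so `a` settles where the filtered exponent is ~0: an edge of chaos.
x, a, v = 0.5, 3.9, 0.0        # start deep in the chaotic regime
eps, tau = 2e-4, 0.02          # parameter drift must be slow vs. x dynamics

for n in range(500_000):
    lam = np.log(max(abs(a * (1 - 2 * x)), 1e-3))  # local Lyapunov estimate
    v += tau * (lam - v)                           # low-pass filter
    a = min(max(a - eps * v, 2.8), 4.0)            # slow, bounded drift
    x = a * x * (1 - x)

print(f"final a = {a:.4f}")  # parks at a periodic/chaotic boundary
```

Run long enough, a typically jitters at the edge of a periodic window (often near 3.83 or 3.57), which is the sense in which almost any slow self-adjustment finds the edge of chaos, and why the adjustment has to be slow relative to the dynamics.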
I then asked him if he had any specific thoughts on either Sumner’s proposal or synchronized chaos. Here was Hubler’s response:
In desert oases, larger, more specialized ants often dominate. Smaller, non-specialized ants have smaller populations, and competition pushes them to the boundary of the oasis. However, the small, non-specialized ant species are around for very long periods of time, whereas the larger, specialized ant species are often wiped out by changes in climate and catastrophic events. The smaller ants may initially suffer from a catastrophe too, but have a higher likelihood of survival, and even thrive temporarily after the catastrophe, since the competition from the larger ants is gone. The group around Jim Brown at UNM did work on ants, and we just started work on bacteria colonies in changing environments.
By analogy, I conclude that boom-and-bust cycles are more harmful for larger and more specialized organizations, which flourish in a certain environment and disappear if this environment changes suddenly. A forest fire helps smaller plants and increases diversity.
Yes, chaotic synchronization sounds attractive, and I have published many papers on this topic over the past 20 years. It works well with simple nonlinear oscillators, but it does not seem to work with living organisms, such as yeast cells. We do not understand why. Maybe there are adaptive processes or evolutionary advantages suppressing synchronization. For years we tried to entrain the life cycle of cancer cells and yeast cells with heat pulses, microwave pulses, and other stimuli, but the cells always escape synchronization.
By analogy, I conclude that it might be difficult to synchronize economic systems.
I’d like to hear your opinions on what this means for policy, regulation, incentive plans, etc.
]]>NCI commenced a series of workshops that began to bring aspects of the physical sciences to the problem of cancer. We discussed how physical laws governing short-range and other forces, energy flows, gradients, mechanics, and thermodynamics affect cancer, and how the theories of Darwinian and somatic evolution can better help us understand and control cancer.
Read more on my Cancer Complexity Forum post.
]]>I’m not even clear whether doing what I did in revealing the contents of my ballot textually would be illegal in Missouri or not. Or in Nevada, where I voted, for that matter. Of course, it would be a silly and slippery slope if it were illegal to reveal the contents without using a photo. For one, I could have lied and not really voted according to what I posted — as far as I know, blogging falsehoods is still legal. Also, what if I posted my choices before the actual vote, like op-ed columnists do? And what if I did so prospectively but scheduled the post to be published after the vote happened? Or if I tried to post before the vote but the system was slow and it didn’t actually get posted until after the vote?
Anyone who can clear up the legalities, both federally and also on a per state basis, please comment below.
hat tip: Ace Bailey
]]>Here’s the summary. The market for seed capital is clearly broken. Most individual angels will only do about 1 deal per year, which means their portfolios lose money 40% of the time due to insufficient diversification. Even premier angel groups like the Band of Angels say they only do about 8 deals per year. Our math says you need to do 125 to achieve good diversification. On the other side of the table, only 14% of entrepreneurs who want angel funding will find it. Those that do will spend about 6 months looking for money instead of building their businesses.
This is a sorry state of affairs for a market where the overall annual return is 25%+. Here’s a straightforward application of portfolio theory that can fix it. Have a large enough pool of money so one entity can do 125-200 deals per year. Then use an online screening process to give founders a yes or no in two weeks. Obviously, there are a ton of details beyond this, but those are what we’ve spent the last year figuring out. If you’re curious, let me know in a comment here and I will contact you privately. [Links to files REDACTED 05/08/2009: see here].
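The diversification math is easy to sanity-check with a Monte Carlo sketch. The return distribution below is a made-up stand-in (the real numbers come from our data), but any similarly skewed, hit-driven distribution shows the same shape:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical angel-deal return multiples and their probabilities
# (illustrative stand-ins, not actual data).
multiples = np.array([0.0, 1.0, 3.0, 10.0, 50.0])
probs     = np.array([0.50, 0.30, 0.15, 0.04, 0.01])

def p_losing(n_deals, n_portfolios=100_000):
    """Fraction of equal-weighted portfolios returning less than 1x."""
    draws = rng.choice(multiples, size=(n_portfolios, n_deals), p=probs)
    return (draws.mean(axis=1) < 1.0).mean()

for n in (1, 8, 125):
    print(f"{n:>3} deals: P(portfolio loses money) = {p_losing(n):.1%}")
```

The exact figures depend entirely on the assumed distribution, but the curve’s shape does not: the probability of a losing portfolio falls steeply as the deal count grows, because the rare big hits dominate returns and you need many draws to reliably catch one.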
]]>A New York Times Magazine article raises an issue I’ve been thinking a lot about lately.
If you are, as I am, a scientist concerned about global climate change, you may find yourself asking, “What kind of research could I be doing to best contribute to a solution?”
According to some, it may not be to study the climate itself. We may not know enough to predict exactly what will happen when, but we do know that drastic changes are coming whose magnitude will be determined by the actions we take now. It may not even be to study technologies such as alternative energy or policies such as cap-and-trade that can help combat global warming. While these policies and technologies are surely necessary, global warming is a problem created by human behavior, and mitigating it will require changing that behavior, including making the individual and group decisions needed to implement these policies and technologies. It may therefore be that the most important scientific questions in the fight against global warming are questions about humans, human behavior, and what we can do to change it.
The climate change puzzle presents a number of interesting questions about human behavior. The global environment is the ultimate “commons” game: We have a shared resource, and we can individually decide how much effort to put into preserving it. Only, we don’t see the fruit of our individual efforts directly; only the sum total of everyone’s efforts determines how well the resource is preserved. In the case of climate change, there are further complications: the effects of our actions now may not be seen for another fifty years, and some argue that the entire problem was fabricated by misguided scientists. Combining these factors, it is not hard to understand why many people feel little incentive to take action against global warming.
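Formally, this is a linear public-goods game. A minimal sketch (parameters invented) shows why self-interest points toward free-riding even though universal contribution is the best group outcome:

```python
# Linear public-goods game: each of n players keeps an endowment or
# contributes it to a common pool, which is multiplied and shared equally.
# With multiplier < n, each contributed dollar returns less than a dollar
# to the contributor, so free-riding is individually rational.
n, endowment, multiplier = 10, 100.0, 3.0

def payoff(my_contribution, others_total):
    pool = (my_contribution + others_total) * multiplier
    return (endowment - my_contribution) + pool / n

others = 9 * 100.0  # suppose everyone else contributes fully
print("contribute fully:", payoff(100.0, others))  # 300.0
print("free-ride:       ", payoff(0.0, others))    # 370.0
# Yet if everyone free-rides, each gets only the endowment (100.0),
# versus 300.0 each under full cooperation.
```

Climate adds the two twists noted above: the “pool” pays out decades later, and some players doubt it exists at all, both of which weaken the already-thin individual incentive further.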
The article focuses on Columbia’s Center for Research on Environmental Decisions, which performs experiments on people’s decision-making processes. One finding jumped out at me as interesting and perhaps counter-intuitive: we tend to make better decisions as groups rather than as individuals. For example, one researcher studied decisions made by farmers in Southern Uganda as they listened to rainy-season radio broadcasts. If they listened to it in groups, they would typically discuss it afterwards and come to consensus on the best planting strategy in response to the weather. They ended up more satisfied with their yields than other farmers who listened to the broadcasts individually.
Our response to climate change will obviously involve a great variety of individual and group decisions, but it may be that if we can force more of the critical decisions to be made in group settings, where participants have not made up their minds beforehand (research shows this is crucial), we may find ourselves more able to put aside the parts of our human nature that would impede progress, and make the decisions that are in all of our best interests.
]]>Any existing complexity bloggers out there who would like to engage in the same experiment, please let us know.
]]>
Click here for the full story.
It’s interesting to me that the best skeptic they could find on the subject (Richard Garwin) was thoroughly unconvincing, simply asserting that there must be a measurement problem without himself daring to go measure. You’d think it would be worth a look-see. More interesting still was the independent expert in measuring energy (Rob Duncan), who came in as a total skeptic and came out as a believer.
But my favorite part of the story is near the end when Fleischmann (co-discoverer of cold fusion) appears to be having both a literal and figurative last laugh. Man, what a bad beat he and Pons got.
Besides Garwin, who are the biggest outspoken skeptics these days? What do you think, is the effect real? Do Fleischmann and Pons deserve a Nobel?
]]>There is a trade-off in safety: you are much more likely to die in a small car. The WSJ Online reports on a recent Insurance Institute for Highway Safety (IIHS) study that shows small cars like the Honda Fit and Toyota Yaris fare very poorly in two-car frontal offset crash tests against the Honda Accord and Toyota Camry. This pits each small car against a mid-sized car from the same manufacturer, so it’s a reasonable comparison.
The higher-level statistics are a bit frightening. According to the same IIHS study, the death rate for mini cars is twice as high in multi-vehicle collisions as that for very large cars. Even in single car crashes and compared to a mid-sized car, the death rate is 17% higher.
This NHTSA study presents a model of vehicle weight and safety. For every 100 lbs you reduce the weight of a light car, you increase the death rate by 5.63%. That means the reduction in weight targeted at improved mileage accounted for 13,608 additional fatalities from 1996 to 1999 in the light car class. Across the light car, heavier car, and light truck classes, the increase was 39,197 fatalities.
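A quick back-of-the-envelope with that coefficient (assuming, as I read the model, that the 5.63% effect compounds per 100-lb increment):

```python
# NHTSA coefficient quoted above: +5.63% deaths per 100 lbs removed
# from a light car, assumed here to compound multiplicatively.
per_100lb = 1.0563

for lbs in (100, 300, 500):
    increase = per_100lb ** (lbs / 100) - 1
    print(f"remove {lbs} lbs: death rate up {increase:.1%}")
# 100 lbs -> +5.6%, 300 lbs -> +17.9%, 500 lbs -> +31.5%
```

So, per this model, shaving 500 lbs off an already-light car raises the death rate by roughly a third, which is the mechanism behind the aggregate numbers above.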
A USA Today analysis of crash data since CAFE went into effect estimates that for every mile per gallon improvement, you get an extra 7,700 fatalities (sorry, there doesn’t seem to be an original copy of the article online: James R. Healey, “Death by the Gallon,” USA Today, July 2, 1999).
According to this NHTSA data, there were 29,039 automobile fatalities for men in 2007. According to this CDC fact sheet, 322,841 men died from heart disease in 2007. So you should still devote more effort to your cardiac-related lifestyle. But I don’t know of any lifestyle interventions that will cut your heart attack risk by 50% (other than stopping smoking).
The Tesla Model S will supposedly weigh 3825lbs with the largest battery pack. That’s a little heavier than the Toyota Camry’s 3483lbs. Whew! At least there may be a potentially safe and efficient alternative.
]]>I’m not the only one. Most of the people I know who have been reasonably active through age 40 have some sort of permanent impairment from a ligament, cartilage, or tendon injury. Today, I was wondering if extracellular matrix (ECM) might be the answer.
ECM has been used to promote wound and burn healing. I figured a little Googling was worthwhile. Imagine my surprise when I found out that veterinarians have off-the-shelf ECM products to heal horse ligament and tendon injuries.
Moreover, I was flabbergasted to find a book on the subject of ECM-based scaffolds for repairing orthopedic soft tissues. Unfortunately, all the relevant journal articles are behind paywalls. But this seems to be a rapidly advancing area.
Together with stem cell advances, this could change the game for us 40-year-olds. We could be assured of fully functioning joints for a long time.
]]>Rather, I had a record number of colds this season. 4 major ones from Thanksgiving to mid-Feb. I have been training unusually hard for the last year and an increased incidence of upper respiratory infections (URIs) is a known problem for endurance athletes. A little research turned up this article where quercetin reduced the incidence of URIs in marathoners.
So I ordered a couple bottles of this and started taking 2 capsules twice daily. Note that this product also has bromelain in it, an anti-inflammatory added to quercetin when treating allergic rhinitis. Seemed like that could be part of my problem… in for a penny, in for a pound.
I noticed four potential effects.
- I stopped getting colds. Of course, this could be simply timing of the cold season.
- My cardiovascular fitness went up. Over two and a half months, my output at a given heart rate went up by 5%. My heart rate at a given output went down 7-8 bpm. Of course, this could simply be the result of not having a cold. The evidence for quercetin improving athletic performance is mostly negative.
- I appeared to have gained some muscle mass. From the combination of my weight going up and my waist measurement going down, I estimate I gained about 3lbs of muscle mass. Of course, this could be explained by the fact that I was able to start doing kettlebells seriously again because a shoulder injury has mostly healed.
- I seemed to require a little less ibuprofen to keep minor injury pain in check. Quercetin may have some anti-inflammatory effect. Of course, bromelain inhibits CYP2C9 activity in the liver which may slow down ibuprofen metabolism. So the serum level of ibuprofen may have been the same at the lower dosage due to inhibition rather than direct anti-inflammatory effect.
Your mileage may vary.
]]>Part 1: Major Medical Annuities. Federally mandated/funded (similar to SSI/Medicare), with a specific initial lifetime value that is the same for everyone. The concept is that you pick a number slightly bigger than the average expected lifetime major medical bill and set aside that pot of money for everyone individually. At some point (e.g. 65) you can choose to start drawing down from your pot as taxable income. Prior to then, the only way the fund can be used is for major medical expenses not covered by other insurance you may have. Such payments go directly to providers and are tax-exempt. When you die, any leftover amount gets transferred to the MMA accounts of your heirs (per your desired breakdown, or according to probate law in the absence of a will).
Part 2: MMA Collectives. If you deplete your MMA (for whatever reason) and have a major medical expense that is uncovered, we need to address this ahead of time and responsibly. One idea is to allow people to voluntarily form collectives or trusted circles: family and close friends who share your lifestyle, risk profile and philosophy and are willing to act as a secondary insurance policy for each other by essentially pooling their MMA pots. Collectives would be legal entities with group voting/responsibility, and once formed, individuals cannot leave or be forced out except by unanimous consent.
Part 3: Extraordinary Circumstances Fund. Let’s say you are a loner, no close family and no Collective. Or that you have a small Collective but it’s about to be bankrupt. The government sets aside an FDIC-like insurance fund as a final safety net. However, you don’t want to get into this situation; it will have a negative impact on your life expectancy, quality of care, financial health, etc. Essentially your health care future goes into a sort of receivership, and you and your Collective are financially on the hook to pay back 100% of the money over time. Your entire estate and those of your Collective members are considered collateral, so you cannot escape scot-free by dying :-) And there is strong social pressure from your Collective to make lifestyle, medical care and general financial choices to not land yourself in a precarious situation where you’d end up needing this fail-safe.
Yes, there will still be those destitute and alone who end up a drain on the ECF, but it seems like a smaller and fairer price to pay as a society than what currently exists and what has been proposed. The proposal has a high degree of personal autonomy/responsibility while bringing to bear social pressures similar to those that have been proven effective in micro-lending collectives. Rich individuals/Collectives can effectively shield money from taxes and grow their major medical funds beyond the initial lifetime value guaranteed to everyone. Thus, if I have the money and am in a situation where it’s a moral dilemma whether to go to extraordinary lengths to keep me alive (or to cryogenically freeze me), I have the option. And for the destitute person, even as a left-leaning person, I would feel like society has borne its part of the responsibility for your survival and quality of life vis-a-vis major medical risk mitigation.
]]>This reminds me of the system we had in college at my student union where the election ballots each year would allow you to specify the exact breakdown of how your (mandated) contribution would be spent on a percentage basis. Don’t care about Sunday night flicks, but want more Hispanic cultural events? You can vote with your wallet. There was a separate process for determining which options made it on the ballot, but I know from personal experience that I did feel much better about my involuntary contributions each year knowing I had some control over how the money was spent.
So, could Ariely’s concept actually work on a national level? Interestingly, he wasn’t necessarily proposing that tax revenue be allocated according to taxpayer preferences, but rather that the exercise itself, for the taxpayer, helps induce civic-minded behavior which helps us all.
Perhaps we can go one further and actually use the stated preference data in the Congressional budgeting process.
Thoughts?
]]>Unfortunately, health care is not one of these problems. The solution really isn’t obvious. So I’ve been thinking about it lately. I’ve got some preliminary ideas that I’d like to share. But be nice. I’m not saying these are the answers. They are just the best out-of-the-box thinking I’ve been able to come up with so far.
There are a couple of interventions I do consider obvious. First, streamline drug discovery. The easiest thing to do is simply dispense with Phase III trials. Most drugs that fail, fail in Phase II. But Phase III is by far the most expensive. Now, I would personally think carefully before I used a drug that hadn’t been shown safe and effective in something the equivalent of Phase III trials. And we should definitely continue to monitor drugs after their approval. But the current system is too much cost for not enough gain. In return for eliminating this cost, we would reduce the term of patent protection and eliminate loopholes that extend such protection.
Second, I would de-regulate primary care. Physician assistants and nurse practitioners are perfectly capable of dispensing routine primary care, but today they may operate only under the direct supervision of a doctor. This practice simply perpetuates physician control over primary care. Let’s get rid of it and lower primary care costs. Moreover, I would back a national law that prevented states and cities from passing local laws that restrict low-cost primary clinics such as ones you might find in a Costco or Wal-Mart. The true reason behind such laws is to protect incumbents, which drives up costs. My guess is that de-regulated primary care could provide a basic level of service at around $30/visit.
Now we get to the hard stuff. I hypothesize that insurance-paid primary care is the root of much inefficiency. Obviously, it leads to at least some overconsumption because people don’t bear the cost. But it also messes with primary care professionals’ incentives to differentiate based on quality. They compete on how efficiently and effectively they can work the insurance system rather than how efficiently and effectively they can deliver care (e.g., your bill shows you as being “treated” for hyperlipidemia and diabetes because the doctor discussed your slightly elevated cholesterol and glucose test results).
Moreover, insurance doesn’t add any value when you have reasonably small, reasonably predictable expenditures. I think what has happened is that insurance companies use it as a feature to get you to sign up for the rest of the insurance policy. They negotiate below market costs, forcing list prices up for everyone else, leading everyone to want primary care insurance. Hello collective action problem. My prescription is to tax insurance companies on any payments they make on behalf of policyholders for primary care. This will make primary care insurance premiums too high for most people. Good.
I think the above three measures should reduce day-to-day healthcare expenditures and properly align people’s incentives. Now we get to the problem of major medical expenses.
As a libertarian, I would like to reduce government involvement as much as possible. However, in this case, I think there is a fundamental problem that requires government intervention. It’s nearly impossible for doctors as individuals and society as a collective to just let people die who could easily have been saved. I don’t want to argue whether basic health care is a fundamental right. Suffice it to say that enough people act as if that were the case that going against the grain will simply fail. Therefore, I think the government must mandate and fund a very basic level of acute care.
We’ll clear up your infection with generic antibiotics, we’ll set your broken arm, we’ll treat your cancer with generic chemotherapy, we’ll sew up the cut from the bread knife, and we’ll even give you life-saving surgery after your car accident. But you need some absurdly expensive brand-name drugs? Sorry. Have a brain tumor? Sorry. Need a transplant? Sorry. I realize that drawing the line will be very hard and open to lobbying. But I don’t see any way around it.
Hopefully, most people will be able to afford their own major medical insurance with a higher level of care. The problem here comes from specifying what that level of care is. Metaphorically, I want to be able to buy the Ford Fiesta, Toyota Camry, or Mercedes S500 plan. But it’s unclear at the time you sign up which interventions would fall into which category. Obviously, a dying patient has incredible incentive to assert that any potentially helpful intervention falls under his plan. Conversely, a for-profit company has incredible incentive to assert that any potentially questionable intervention falls outside the plan. Unfortunately, the space of interventions that are both potentially helpful and potentially questionable is rather large at our current level of technology.
I have an idea here. Create a legal patient proxy. This is a company that negotiates incredibly detailed contracts with the insurance company on my behalf. They would have both lawyers and doctors. I’m not sure what these contracts would ultimately look like, but you could imagine a combination of very specific procedures and medicines that would be covered plus some sort of meta-evaluation protocol for determining whether new things would be covered. Something like a list of applicable medical journals and odds-ratio thresholds of effectiveness. The proxy could even renegotiate some of the covered interventions every year. Obviously, a proxy would aggregate many patient contracts with many insurance companies.
Here’s the cool part. There’s an infinite regress problem here, right? We’ve got a watcher, but who watches the watcher? Enter the incentive. A patient would pay the proxy both a fee and a cut of any payouts. This arrangement would ensure the proxy negotiates really hard on the patients’ behalf. Now there’s a real market for a couple of sharp lawyers and doctors to get together and set one of these babies up.
The government could support this effort by giving such proxies some sort of legal status. Perhaps even creating an administrative court for rapidly settling contract disputes between insurance companies and patient proxies.
So this is what I’ve got so far. Any thoughts?
]]>The reason I think parties will never go away is that they are borne of cooperative activities to address asymmetric power relations. In other words, they create value, and thus they emerge all too easily. The problem is that the value created gets conferred entirely to the party constituents, and additional value is leached from non-constituents. Note that I am not suggesting this is a zero-sum game where no new value gets created, but rather there is collateral damage, especially when it comes to future cooperative opportunities that are precluded. To see what I’m referring to, simply consider a Democrat and a Republican in Congress who agree that abortion should be totally illegal, and consider the chances of them actually teaming up to work on it.
Parties are self-reinforcing once they are established and take on a life and raison d’être all their own. At some point their own constituents may feel resentful and beholden, but what can they do? It’s become too tough to go it alone. The few independents who have been elected to Congress either were elected as a party member and later became independent, ran as independents in a later term, or leveraged a party affiliation in the caucuses.
But I am hopeful that the political ecology will look vastly more diverse in the future, with many viable parties, each one holding less power than the current ones, and with a vibrant collection of independents holding seats of power. And I’m confident that this will happen much quicker than most people think. To see why I am hopeful, look at what is happening to the news media ecology and how quickly the power structure is shifting due to information technology and social media.
This seems like a situation where I should put my money where my mouth is, so here’s a prediction. I believe that there will be an independent elected to either the U.S. House or Senate by 2012 who has never run as a candidate for an existing party.
]]>However, let’s assume for a moment that I’m wrong. What should we do? I don’t think we can actually decrease our energy usage very much and still support our civilization. So we have to find non-petroleum energy sources. Biofuel technology doesn’t look very good at the moment: scaling will require major land-use changes that I contend would probably be a net negative for the environment. The cost-benefit for solar does look better, especially in certain geographic areas. But it seems to me the only massively scalable solution at our current level of technology is nuclear fission.
From an engineering perspective, this looks like a pretty obvious conclusion. The problem is that standard light water reactor (LWR) technologies have some serious drawbacks: (1) they don’t scale down well so you need to build big installations, (2) you can’t easily turn them on and off quickly so they typically only supply your base load, and (3) they generate a substantial amount of radioactive waste that we still haven’t figured out what to do with long term. So even if we can overcome the ideological resistance to nuclear power, there are also some serious fundamental economic issues to overcome as well.
Enter the thorium fuel cycle, from which you can build a Liquid Fluoride Thorium Reactor (LFTR). LFTRs essentially solve all these problems, and they are simpler to manufacture because you don’t need giant pressure vessels. They can scale down to 5MW. If you’re willing to run them at only slightly lower efficiency, you can use them for load-following and peak-reserve roles. They generate about 0.1% of the long-term radioactive waste of standard LWRs. It’s also incredibly difficult to reprocess the fuel into weapons-grade material.
The coolest part is you can mass produce the components and then assemble them on site so we could actually get these babies up and running quickly. For details, see the Energy from Thorium blog and site (I recommend starting with the former and moving on to the latter’s discussion forums and document repository if you want gory details).
Run your Tesla on clean, reliable thorium power!
Macrophage (yellow) chomping on E. coli (red) [3000x magnification]

E. coli [9400x magnification]

T-bacteriophages chomping on E. coli [69000x magnification]

The lime green coat vignette shows how value gets destroyed, so you have to invert the chain of reasoning, and recognize that when you do so, there is value being created all along the chain. But what is really going on at each link in the chain is a simple monetary transaction. And all monetary transactions — whether they involve cash directly, or credit, equity, barter, etc. — are a form of cooperation. In particular, they happen due to the so-called Law of Comparative Advantage (LoCA). Simply put, this “law” is the inverse of the Prisoner’s Dilemma (PD).* In PD situations, two agents are worse off by following their self-interest and not cooperating. In LoCA situations it’s just the opposite: it’s in the agents’ self-interest to cooperate, and by doing so they end up better off than by going it alone.
If you think about it, no monetary transaction ever takes place unless it’s a LoCA situation. You walk around all day long not making purchases because you are better off having the money (and time) it would cost you. But every once in a while the situation is right (or you seek out the situation) and you’d prefer a cheeseburger to the 99 cents in your pocket. The burger joint prefers the money to the burger, so it all works out great and everyone is better off for having done the transaction.
Now, what is special about LoCA situations such that the magic of cooperation can make everyone better off? It’s simple: asymmetry. You and the burger joint have asymmetries not only in your production capacity — the burger joint can make burgers much cheaper and faster than you can — but also in your consumption profiles: the burger joint doesn’t consume any burgers, plus you each have different utility curves for money. Compare this to a person you meet outside the burger joint who is identical to you in terms of burger-making ability, hunger, cash in pocket, and love of money. You two have no reason to cooperate on getting burgers into your bellies, so you both just end up going inside, ordering for yourselves, and paying with your own hard-earned cash.
You might be thinking to yourself at this point that PDs are also asymmetric situations and they decidedly don’t lead to cooperation and value creation, so how can I claim that value is derived from asymmetry? The answer is that asymmetry leads to change in value, and the direction of change is determined by the specifics of that asymmetry; LoCA situations lead to creation of value and PD situations lead to destruction of value.** Not all asymmetrical situations fall into either the LoCA or PD category (at least I don’t think so), but these are the ones that are most instructive.
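A toy pair of payoff matrices (numbers invented) makes the inversion concrete: in the PD, self-interest drives both players to defect and value is destroyed; in a LoCA-style trade game, self-interest drives both players to cooperate and value is created.

```python
# Symmetric 2x2 games; entries are (row, column) payoffs.
pd = {  # Prisoner's Dilemma: defecting dominates, both end up worse off
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
trade = {  # LoCA-style trade: trading (weakly) dominates for both
    ("C", "C"): (3, 3), ("C", "D"): (1, 1),
    ("D", "C"): (1, 1), ("D", "D"): (1, 1),
}

def best_reply(game, their_move):
    """Row player's payoff-maximizing move given the other's move."""
    return max("CD", key=lambda mine: game[(mine, their_move)][0])

for name, game in [("PD", pd), ("trade", trade)]:
    move = best_reply(game, "C")  # same best reply vs. "D" in both games
    print(f"{name}: self-interested move = {move}, "
          f"outcome if both play it = {game[(move, move)]}")
```

In the PD, self-interest lands both players on (1, 1) when (3, 3) was available; in the trade game, it lands them on (3, 3). Same selfish reasoning, opposite change in value; which one you get depends on the structure of the situation.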
As for non-economic value, if you don’t believe the above argument you can stop reading here, because my argument from here on is more philosophical and even more by analogy. But assuming you are with me so far, check out Nobel laureate P. W. Anderson’s famous Science article “More Is Different,” in which symmetry breaking is the key to his argument. Also, check out Parrondo’s Paradox, which can be used to describe all sorts of value creation from different realms, and “[i]n its most general form… can occur where there is a nonlinear interaction of random behavior with an asymmetry.”
* which itself generalizes to the well-known economic problem of the Tragedy of the Commons.
** At least for the “prisoners.” One can see that the “police” gain value, and it’s an open question in my mind whether there is conservation of total value if you look at the system as a whole. If anyone has a proof of such a concept, or can show that total value goes down, I would love to see it.
Second, via Prometheus, Wired reports on a robot-software combination that was able to generate, test, and refine its own hypotheses to identify coding for orphan enzymes in yeast. Obviously, this is a very special-purpose kind of science. But the fact that they got a closed loop is very impressive. I also like the fact that it’s in the biological sciences. Hey, maybe some descendant of this program can solve the aforementioned cancer problem.
]]>The short version is that the effect of a bubble on the economy is determined by its effect on consumer spending. The Dot Com Bubble didn’t have much of an effect because it primarily affected institutions and already relatively wealthy consumers. However, the Fed’s attempt to shorten the resulting recession created a loose monetary policy which forced dollars into the most attractive asset class: homes. This attractiveness stemmed from relaxed lending standards and tax-free capital gains on homes, which created more buyers. But asset appreciation in this class is fundamentally limited by the ability of consumers to repay loans from income, which was not growing fast enough. As the institutions insuring mortgages reached their limits, they slowed the issuing of policies, which dried up the market for new mortgages, which dried up the ability of people to buy, which decreased prices, which sent home equity under water, which further decreased the flow of insurance policies.
Because home equity and home ownership help drive consumer spending, this burst bubble then affected the real economy. Cool. Fortuitously, Vernon Smith’s Rationality in Economics is the next book in my pile.
]]>Interesting question this morning, and something I’ve been wondering about. I’ve yet to see anyone really argue that the state of non-regulation we’ve been in for the last several years has been a good idea. I’ve heard some thoughtful conservatives talk about how their views have changed radically — coming to understand that forceful regulation is absolutely necessary.
The super-conservatives I’ve seen are talking more about taxes, avoiding the subject. I’d be very interested to see a credible argument for a hands-off approach.
So how about it: is anyone game to take up a considered argument against mandating that companies that get big enough to affect the global economy be broken up or otherwise handicapped?
]]>
Why 100%? Let’s face it: if you are one of the top three executives in a public company, there’s no reason, financially speaking, that you should need guaranteed cash compensation. Having a safety net only serves to misalign your incentives with those of your employees and your investors, most of whom are at the mercy of your decisions. If you do need a guaranteed salary to pay the bills, go get a real job! Coffee is for closers.
How long of a lockup? The point of the lockup is to discourage short-term decision making. Long-term value investors typically have a minimum 3-year horizon, and illiquid venture-backed companies typically have a 3-5 year vesting schedule on contingent compensation. In the past, you could have argued for an even longer lockup period, but with the pace of innovation today, I suggest a 5-year decay, with 20% of salary-purchased stock becoming unlocked after each year.
What about benefits? Any non-contingent compensation (healthcare, company car, vacation pay, etc.) should not be allowed; it just serves to undermine the goal here. Again, being the CEO of a public company is a privilege you should earn by your past performance, not a God-given right. Remember all those years before you made it to the top when you were accumulating significant personal wealth? Put some of that aside to bankroll you through your tenure as top dog. If you are good at your job, you will be handsomely rewarded via your equity.
What about other contingent compensation? You want extra stock or stock-option grants beyond your “salary”? That’s fine as long as it’s negotiated at arms length with the board of directors, and the grants are vested over a similar 5-year period. The key is that there should be no safety nets where you get paid while the other company stakeholders suffer.
What happens year after year? Every year, you are up for a renewal or change in salary. The board makes you an offer; if you don’t like it, you leave. Over the course of time, you will have stock that is being released according to the 5-year clock but on staggered schedules (see the sketch below). If you leave, you still get your locked-up stock, but there is no change in the schedule for unlocking it. You have to reap what you sow, but now your long-term fate is in someone else’s hands.
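A hypothetical illustration of how the staggered unlocks would play out (my own toy numbers):

```python
# Each year's salary-purchased stock unlocks 20% per year over the
# following 5 years, per the proposal above. Suppose an exec buys
# 1,000 shares with salary each year for 3 years, then leaves:
def unlocked(year_bought, now, shares):
    """Shares from one year's purchase that are unlocked at time `now`."""
    return shares * min(max(0, now - year_bought), 5) / 5

for now in range(1, 9):
    total = sum(unlocked(y, now, 1000) for y in range(3))
    print(f"year {now}: {total:,.0f} of 3,000 shares unlocked")
```

Leaving in year 3 changes nothing about the clock: the last tranche doesn’t fully unlock until year 7, so the departed exec still eats the consequences of decisions made on the way out.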
What about pre-IPO companies? What about them? Private companies wouldn’t change under this proposal. But once you file for IPO, the top three execs have to agree to the new compensation rules, including the 5-year decaying lockup for all equity and options previously granted.
Why 3? Seems like a reasonable number. Most of the decision-making power is conferred on the top three executives. I can see the number being tied to market cap, so that while the minimum might be 3, GE might have its top 10 executives paid this way.
Isn’t this plan still too soft? Personally, I think it is. One could argue that the risk-reward ratio should be turned on its head so that it’s much less risky to start a new venture* which has huge value-upside and much more risky to take the helm of a established “blue chip” ship. With this line of reasoning, simply requiring executive salary to be invested seems like too much of a freeroll to me; I’d like to see some real skin in the game and require the top execs to reach into their savings and buy lock-up stock as well. But I’m happy to try the basic plan and see how it goes.
* I’ve put my money where my mouth is on this one [REDACTED 05/08/2009: see here].
The crash has laid bare many unpleasant truths about the United States. One of the most alarming, says a former chief economist of the International Monetary Fund, is that the finance industry has effectively captured our government—a state of affairs that more typically describes emerging markets, and is at the center of many emerging-market crises. If the IMF’s staff could speak freely about the U.S., it would tell us what it tells all countries in this situation: recovery will fail unless we break the financial oligarchy that is blocking essential reform. And if we are to prevent a true depression, we’re running out of time.
Other studies have confirmed the general sense that expertise is overrated. In one experiment, clinical psychologists did no better than their secretaries in their diagnoses. In another, a white rat in a maze repeatedly beat groups of Yale undergraduates in understanding the optimal way to get food dropped in the maze. The students overanalyzed and saw patterns that didn’t exist, so they were beaten by the rodent.
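The rat-versus-undergrads result is usually explained by probability matching (my gloss, not the article’s): if food shows up on one side, say, 60% of the time, the maximizing strategy is to always pick that side, while humans tend to match the frequencies and do worse.

```python
import numpy as np

# Probability matching vs. maximizing, with illustrative parameters:
# food appears on the left 60% of the time, over 10,000 trials.
rng = np.random.default_rng(0)
p_left, trials = 0.6, 10_000
food_left = rng.random(trials) < p_left

rat = np.ones(trials, dtype=bool)          # always picks the likelier side
student = rng.random(trials) < p_left      # matches the 60/40 frequencies

print("rat     hit rate:", (rat == food_left).mean())      # ~0.60
print("student hit rate:", (student == food_left).mean())  # ~0.52
```

Matching yields p² + (1-p)², i.e. about 0.52 here, versus a flat 0.60 for the boring strategy. Seeing patterns where there are none is expensive.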
3) Game Theory and the Economy
I am no economist, so in all honesty any solution I suggest won’t be better than that of any other armchair economist, but it could be that inflation is a way forward. With inflation, the value of having savings decreases and so does the payoff of defecting. Interestingly, in most economic crises inflation is one of the economic indicators that tends to go down.
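One way to formalize that intuition (my sketch, not the author’s) is the repeated prisoner’s dilemma: grim-trigger cooperation is stable when the discount factor δ satisfies δ ≥ (T - R) / (T - P), where T is the temptation to defect, R the reward for mutual cooperation, and P the punishment payoff. If inflation erodes the value of hoarded cash, it shrinks T and lowers the bar for cooperation:

```python
# Repeated prisoner's dilemma, grim-trigger condition:
# cooperation is stable when delta >= (T - R) / (T - P).
R, P = 3.0, 1.0   # reward for mutual cooperation, punishment payoff

def cooperation_threshold(T):
    """Minimum discount factor at which grim-trigger cooperation holds."""
    return (T - R) / (T - P)

for inflation in (0.0, 0.1, 0.2):
    T = 5.0 * (1 - inflation)  # temptation payoff eroded by inflation
    print(f"inflation {inflation:.0%}: cooperation needs delta >= "
          f"{cooperation_threshold(T):.2f}")
```

With these toy numbers, 20% inflation drops the required patience from δ ≥ 0.50 to δ ≥ 0.33, so a wider range of players find cooperation worthwhile.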
hat tips: Josh Paley and Daniel Horowitz
]]>1) I can force my eyes into a disconjugate gaze–looking in slightly different directions.
2) My wife can also force her eyes into a disconjugate gaze. This gives a whole new meaning to, “Love at first sight.” Our daughter inherited this superpower (among others).
3) I have large hands but can nevertheless fit my entire fist in my mouth. In fact, I held the record at my oral surgeon’s office for largest mouth opening.
4) I cannot stand the taste of even the smallest piece of lettuce. I contend this is a rare genetic trait.
5) I once ran a mile in 4:26. I also once bench pressed 225 lbs 26 times in a row. I was the same height but weighed 65lbs more when I accomplished the second feat.
6) The first thing that attracted me to my wife was the assurance with which she handled a full-sized Chevy Blazer.
7) I knew I was going to marry my wife one month after I met her. Moreover, we were only 20 years old at the time. I reject the hypothesis that this is a result of confirmation bias.
8) My wife and I lived with Steve and Laurel right after graduating from college.
9) I used to be a pretty good cook–until my wife became a professional.
10) My wife and I don’t celebrate Valentine’s Day (and we’ll be having our 18th anniversary in December).
11) I have a lot of trouble falling asleep in the presence of any stimulus (auditory, visual, tactile or even just internal mental dialog). Nevertheless, I require about 9 hours of sleep per night.
12) I really enjoy personal conversation with people I know. I really enjoy speaking publicly to strangers. It’s very hard for me to initiate personal conversations with strangers.
13) I feel that my short attention span is my greatest intellectual flaw. I would be a terrific software developer if only I could pay attention to a programming problem for more than a few hours.
14) I reflexively reject any group ideology (even when it has merit).
15) I’m an agnostic but I’m not viscerally afraid of dying, only intellectually.
16) I honestly think Fletch should have won an Oscar (for writing).
17) I’m pretty sure I was mostly a jerk in high school. I’m pretty sure I was still quite a bit of a jerk in college.
18) As a freshman at Stanford, I got a D in honors physics, thus crushing my hope of becoming a physicist.
19) As a sophomore at Stanford, I received an A+ in a Master’s level decision theory project course, thus solidifying my membership in the Bayesian Conspiracy.
20) I didn’t take a single computer science course at Stanford but I’ve spent most of my adult life in the software industry in a “technical capacity”.
21) I exercise about 14 hours per week.
22) I am absolutely convinced that I would have been a dramatically more successful athlete if I knew in high school everything I’ve learned about training over the years.
23) I think professional mixed martial arts (AKA “ultimate fighting”) is safer than professional boxing.
24) My best source of stress relief is having a professional MMA fighter trying to bash my brains in.
25) I have trained to the point where I am perfectly capable of seriously hurting another human being in dozens of ways. One of my biggest fears is ever having to use this skill.
]]>2) The more I learn, the less I feel that I know. But I am okay with that. Still, it’s unsettling because I don’t think I’ll ever stop learning.
3) I care more about what people think of themselves than what they think of me.
4) I prefer asymmetry to symmetry; it’s the root of all value.
5) I can speak Arp (and Op), so careful what you say around me.
6) Most people think I don’t work because there is no name for what I do, but I will be working til the day I die, however…
7) Barring an untimely death, there is a good chance I will reach escape velocity through singularity.
8) I have played many games in my life, but the hardest one by far is golf.
9) I don’t believe we will ever find a theory of everything; it’s turtles all the way down.
10) I’m baffled though why we don’t have a better theory of mind by now.
11) I met my wife on a park bench in a foreign country; a local man started talking to us after a few minutes, and when he left he said, “Invite me to the wedding.”
12) I’m glad I learned that you have to sometimes choose between being right and being happy.
13) I am the happiest and most fortunate person I know.
14) I’m right handed but left footed.
15) What ever it is that I am trying to communicate to you will undoubtedly be different that what you think I am saying. I sometimes wonder how it is we can communicate at all.
16) I have memories from before I was one year old.
17) Eu tenho saudades do Rio de Janeiro. (“I miss Rio de Janeiro.”)
18) I’ve been very competitive my whole life, and while competition is fun, these days I prefer cooperation. I’ve found it to be a whole lot more profitable, and infinitely more rewarding.
19) I don’t believe the universe has a mathematical structure, rather we invent math to gain insight into the true structure , which is fractally unknowable.
20) One of my super powers is being able to look at a dessert menu and instantly determine the best option.
21) My favorite koan is the one with the monks and flag and wind.
22) Never thought of myself as a creative person until I learned the true nature of creativity.
23) I once spent a whole year in an RV with my best friend going around the US to different sporting events.
24) To the best of my knowledge, I have never climbed Mt. Everest.
25) The secret to the good life is that it runs on a network effect; give generously and you will see increasing returns, no matter how you choose to measure.
]]>I love what they are doing, and encourage everyone with a stake in education (parents, teachers, students, administrators, policy-makers, etc) to learn more by going to their website: DecisionEducation.org
Poker players will be particularly excited about DEF because they fully embraced the utility and natural fit of poker as both a didactic tool for decision making, and as a fundraising vehicle. Not surprisingly, Annie Duke serves on the board, and she recruited some of her poker-playing friends to help out.
I have some questions for the readers of this blog:
1) Do you feel you are a good decision maker today, and if so, how did you become so?
2) Do you feel you were given the tools/training to make good decisions in school, and if so, at what point and how?
3) Do you think your decision making skills could be improved at this point?
4) Do you have any ideas for how DEF can best achieve their goals and mission?
]]>hat tip: Daniel Horowitz
]]>Click here to make your prediction on what year that will take place.
Comment below on why you think what you think.
]]>Reason has a short interview with Norman Borlaug that nicely sums up the tradeoffs required by organic farming. There is literally nobody who understands modern agriculture better. The bottom line is that if the US tried to produce today’s agriculture output with 1960s era technology, we would need on the order of 1 million square miles of additional farmland (assuming that the marginal productivity of the land decreases somewhat as you bring less productive areas into play). That’s a swath 1000 miles by 1000 miles. That’s about 1/3 the land area of the contiguous 48 states.
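Here’s the back-of-envelope version in code. The yield ratio and cropland figures are my own rough assumptions (in the right ballpark for corn yields and total US cropland), not numbers from the interview:

```python
# Back-of-envelope: extra land needed to produce today's US farm output
# with 1960s-era yields. Inputs are rough assumptions, not official figures.
CURRENT_CROPLAND_SQMI = 640_000   # assumed: ~410M acres of US cropland
YIELD_RATIO_1960 = 0.37           # assumed: 1960 yields ~37% of today's
LOWER_48_SQMI = 3_120_000         # land area of the contiguous US

land_needed = CURRENT_CROPLAND_SQMI / YIELD_RATIO_1960
additional = land_needed - CURRENT_CROPLAND_SQMI

print(f"Additional land: {additional:,.0f} sq mi")             # ~1.1M sq mi
print(f"Share of lower 48: {additional / LOWER_48_SQMI:.0%}")  # ~35%
```

And that’s before discounting the marginal land’s lower productivity, which only pushes the number up.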
Replicate this calculation all over the world and you’d have massive deforestation and habitat destruction. Remember the unintended slashing and burning of rainforests to plant oil palms for subsidized biodiesel? Now multiply that by 10. No thanks.
]]>But I want to focus here on something that’s rarely discussed: group innovation. Can true innovation (like scientific or technological breakthroughs) come through collaborative effort, or is it always a matter of a singular individual? This article in the New Yorker suggests that not only can innovation be done in groups, but the innovative process can be mechanized.
Nathan Myhrvold’s Intellectual Ventures is the existence proof. But as the article points out:
The unavoidable first response to Myhrvold and his crew is to think of them as a kind of dream team, but, of course, the fact that they invent as prodigiously and effortlessly as they do is evidence that they are not a dream team at all. You could put together an Intellectual Ventures in Los Angeles, if you wanted to, and Chicago, and New York and Baltimore, and anywhere you could find enough imagination, a fresh set of eyes, and a room full of [talented but not genius level thinkers].
We have a lot of problems in this world. And a lot of talented (but not genius level) thinkers. So, given all this, what can we do to unleash the innovative potential that is not being tapped?
]]>A relevant footnote near the end of the article though:
“The question is how much of it is the meat and how much is the extra calories,” Brooks said. “Calories per se are a strong determinant for death from cancer and heart disease. This should make us think about our calorie intake.”
Suffice it to say, the average U.S. diet contains too many calories, and too many of the calories we consume come from red meat. My favorite quote, to be filed under the heading of methinks-thou-dost-protest-too-much, is this one:
But the American Meat Institute objected to the conclusion, saying in a statement that the study relied on “notoriously unreliable self-reporting about what was eaten in the preceding five years….”
The article quickly debunks this of course since it’s not true.*
So the question is, how much red meat should we be eating, and does it make sense to simply replace red meat with poultry and fish but eat the same amount? The China Study tries to make the case for zero animal protein. The problem is, despite the compelling-sounding arguments and data, there are some serious flaws in their analysis. Plus, it doesn’t make evolutionary sense. Finally, I have heard of no compelling evidence which suggests that vegans live longer on average than those who eat a modest amount of animal protein, and some evidence that a totally vegan diet is problematic.**
Kevin has said that all the data he has points to 20% of your calories from protein, but it doesn’t matter what form. With the caveat to avoid a lot of casein (the main villain in The China Study), which is found in milk and milk products.
This gives us an upper bound on total protein (as a function of calories). But how much from animals and how much from plant matter? Given that you need to eat most of your calories from a variety of whole plant foods for a host of health reasons, and since some plant foods have protein (including but not limited to nuts, seeds and legumes), it makes sense that you need to get much less than 20% of your calories from animal protein for the numbers to work out.
How much less? Who knows. My guess is somewhere between 5% and 10%. And if you are looking to make a change in this regard and are worried about your friends laughing at you, just tell them you are a flexitarian, which happens to be the 2003 Word of the Year according to the American Dialect Society.
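To put rough numbers on it (my assumptions, not Kevin’s), here’s the arithmetic on a 2,000-calorie day:

```python
# Protein arithmetic on an assumed 2,000 kcal/day diet.
CALORIES_PER_DAY = 2000
KCAL_PER_GRAM_PROTEIN = 4

total_cap_g = 0.20 * CALORIES_PER_DAY / KCAL_PER_GRAM_PROTEIN   # Kevin's 20% cap
animal_lo_g = 0.05 * CALORIES_PER_DAY / KCAL_PER_GRAM_PROTEIN   # my 5% guess
animal_hi_g = 0.10 * CALORIES_PER_DAY / KCAL_PER_GRAM_PROTEIN   # my 10% guess

print(total_cap_g, animal_lo_g, animal_hi_g)   # 100.0 25.0 50.0 grams/day
```

So the 5–10% range works out to roughly 25–50 grams of animal protein a day — on the order of a single large chicken breast at the high end.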
Now, of course you are eliminating or severely reducing your refined carb and sugar intake, right? The good news is that if you are eating a majority of your calories from whole plant foods, and you are exercising regularly, you can pretty much eat til you’re full and not count calories.
Oh, and most important, don’t worry about the exact numbers or if you indulge now and then. Stressing out, now that shit will kill you.
hat tip: Daniel Horowitz
* Why do there have to be two sides to every story? Sometimes there’s only one side: the truth. The American Meat Institute is not dedicated to the pursuit of science; it’s a lobbying group paid for by the meat industry. Therefore, there’s no reason they need to weigh in on a news report.
** B-12 deficiency is the most well-known, but there are other issues too. Remember, you can’t just isolate one aspect of a complex system and hope to accomplish anything.
Kevin has referred more than once to the famous Dunbar number for limits on optimal human tribe size.
One of my favorite books recently is Seth Godin’s book on leadership, called — you guessed it — Tribes.
Yesterday I heard a great talk by David Logan, co-author of Tribal Leadership.
Logan’s talk presented one of those rare, insightful models of human behavior that had me nodding my head at every point. The model goes like this: there are five stages of development of human tribes that can be characterized by implicit cultural values:
- “Life sucks” (gangs, the disenfranchised)
- “My life sucks” (DMV, all bureaucracies)
- “I’m great” (entertainers, high-power professionals)
- “We’re great” (Zappos, Google, etc.)
- “Life is great” (peace and reconciliation process in South Africa)
One of the issues with tribes is that they can only comprehend the language and ethos of one level below and above their current stage. That is, tribes in stage 2 can’t really grasp the mentality of stage 4 or 5; it sounds like a bunch of hogwash to them. Good leaders, Logan argues, need to be able to speak the language of all five stages, and one of their main jobs is to nudge their tribe to the next level. Thus, Martin Luther King said “I have a dream”, speaking in stage 2 or 3 language, because that’s where his audience was. Gandhi, Mandela, and the rest all did the same thing.
We all belong to more than one tribe. So, thinking about the tribes you are a member or leader of, what stage are they at? What can you do to nudge them to the next stage of thinking?
]]>Q.2: Microeconomics holds that people do more than simply respond to incentives. Most animals do. But people maintain sets of “likes” and “dislikes”. They form “plans” for increasing the amount of “likes” and decreasing the amount of “dislikes” they experience. Part of these “plans” includes engaging in “transactions” with other people (or groups of people) whereby an exchange instrumentally furthers a plan or terminally completes a plan. So does a more extreme world change this?
The history is below the fold.
Q.1: The fundamental tenet of [micro]economics is that people respond to incentives. Everything else is derived from that. So let me start by asking whether you’re saying that the transition to a more extreme world changes this?
A.1: A more extreme world doesn’t change whether people respond to incentives (they do). But it does change their decision/incentive space, assuming they notice that the world has changed.
]]>Somehow, he managed to perfectly balance the economic and ecological package into a rapidly growing and self-sustaining system. You see, he had to figure out how much economic benefit the land could generate at each point in time and never have more than the corresponding number of people working the land. He had to figure out how to mesh psychological factors with incentive structures to get the locals to adopt the land both socially and economically. He also had to plot the path for an ecosystem in time and space.
Each of these three prongs represents an effort to control a dynamic system and he had to mesh all of them at once. He makes it sound obvious in retrospect, but make no mistake, this is a feat of sheer brilliance. I think there are some good general lessons to learn from this, but the real ongoing value is in the human capital he has built for managing this process. He should cycle through groups of apprentices that then go forth and attempt to replicate this miracle. I really hope this lasts and expands in the long term.
]]>What do you think of this decision?
]]>If you liked this talk (as I do), check out Ariely’s 3 irrational lessons from the Bernie Madoff scandal.
]]>Sleep conserves energy and keeps animals out of trouble. It takes members of each species a minimum time per day to make a living — that is, to secure their personal survival and take advantage of any reproductive opportunity. This challenge is met anew every day. On this view, how much of the day is needed to meet adaptive goals determines the duration of the default option of sleep.
The idea that sleep is the default state, and that being awake requires a lot of hard work and danger is one that never occurred to me. Now, it’s also true that it can be dangerous for tasty, defenseless animals to be asleep, but moving around looking for food and mates certainly attracts more attention. Assuming I’m able to get enough food in one hour a day and find a rock to hide under, maybe it makes evolutionary sense for me to sleep the other 23 hours.
Another essay in the same book (Amazon Search Inside on “Strogatz”) points to research which roughly correlates the number of hours an animal sleeps a day with brain size. The supposition is that whatever happens to change the brain during the day requires corresponding down time to maintain proper function.
But what if the real reason for the correlation is that animals with bigger brains tend to be able to defend themselves better against predators because they are smarter? Part of a good predatory defense is just not being caught in a position where you are likely to get eaten, which requires the ability to predict the future to some degree. Plus, if you are caught in a chase where you are slower than your opponent, you’ll need to use your clever brain to find or create shelter during the chase.
It would be foolish to assume that the brain size / sleep correlation is due to just one factor, but this “default state” hypothesis suggests an interesting experiment: see if sleep duration correlates with how defenseless an animal is. You’d have to be careful to define defensive ability a priori so as not to fall into a circular reasoning trap. Just off the cuff, I’d suggest it is at least a cross product of speed, innate defenses (such as shell, sharp spines, poisonous secretions, etc.), camouflage, scary markers (e.g. mimicking a predator), and sociality (the ability to use intraspecies teamwork to warn, fight, confuse, or survive through probabilities).
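If anyone wants to run the numbers, the analysis itself is trivial — the hard part is scoring the factors a priori. A sketch, where every score and sleep figure is an invented placeholder, not real data:

```python
# Sketch of the proposed test: does sleep duration track a composite
# defense score? All scores and sleep hours below are invented placeholders.
import numpy as np
from scipy.stats import spearmanr

# Per-species scores on [0, 1] for the a priori factors:
# speed, innate defenses, camouflage, scary markers, sociality.
factors = np.array([
    [0.9, 0.1, 0.3, 0.0, 0.8],   # hypothetical fast herd grazer
    [0.1, 0.9, 0.5, 0.2, 0.0],   # hypothetical armored loner
    [0.4, 0.2, 0.9, 0.7, 0.1],   # hypothetical camouflaged mimic
    [0.2, 0.1, 0.2, 0.1, 0.4],   # hypothetical mostly-defenseless prey
])
sleep_hours = np.array([4.0, 16.0, 13.0, 20.0])   # invented

defense_index = factors.mean(axis=1)   # equal weights; a real study would justify these
rho, p = spearmanr(defense_index, sleep_hours)
print(f"Spearman rho = {rho:.2f} (p = {p:.2f})")
# a negative rho would support the idea that more defenseless animals sleep more
```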
Anyone know of research along these lines?
* To read the essay online, go here and type “Kinsbourne” into the search field.
The former is an overview of all the logical mistakes the mind makes when trying to reach a decision. Kahneman and Tversky’s Choices, Values, and Frames is considered more seminal (and Tversky was one of my favorite professors in graduate school), but Hastie and Dawes is both more approachable and more complete in my view.
The latter is an overview of how the brain is put together and operates at the biological level. There are a couple of really, really dry chapters on the biochemistry of nerve signal propagation that you just have to get through. But the rest of it is pretty enjoyable.
If you read these books, you’ll understand why I’m very skeptical of “trusting my instincts” in any situation that isn’t a fairly close parallel to something encountered in the ancestral environment. However, this knowledge has also made me optimistic in a weird way. Given the micro-level capabilities of our brains, it seems like we shouldn’t be able to get very much done, but our civilization is actually quite remarkable. So the whole is substantially greater than the sum of its parts. There must be something in the dynamics of society that allows us to overcome, in some haphazard way, our individual cognitive limitations.
]]>The essential idea is that certain complex systems exhibit coupling dynamics where different aspects of the system gradually synchronize over time. Then when the synchronization achieves a certain level, the coupling dissipates and the system gets thrown into a new regime. Of course, the synchronization gradually reasserts itself and the pattern repeats.
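The canonical toy for this kind of behavior is the Kuramoto model of coupled oscillators. This little simulation (my illustration, not the paper’s method) shows a population of oscillators drifting into sync:

```python
# Minimal Kuramoto-style sketch of coupled oscillators synchronizing.
# Illustrative only -- not the decomposition used in the paper.
import numpy as np

rng = np.random.default_rng(0)
n, K, dt, steps = 50, 1.5, 0.05, 2000
omega = rng.normal(0, 0.5, n)        # natural frequencies
theta = rng.uniform(0, 2*np.pi, n)   # initial phases

for t in range(steps):
    # each oscillator is nudged toward the phases of the others
    coupling = np.sin(theta[None, :] - theta[:, None]).mean(axis=1)
    theta += (omega + K * coupling) * dt
    if t % 500 == 0:
        r = abs(np.exp(1j * theta).mean())   # order parameter: 0=incoherent, 1=synced
        print(f"step {t}: synchronization r = {r:.2f}")
```

In the climate paper the interesting part is what happens after synchronization — the coupling breaks down and a new regime begins — but the sync-up phase looks a lot like this.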
The authors show that, when decomposed in a particular way, the climate exhibits these coupling dynamics and they explain the shifts we’ve observed over the last century. Moreover, they show that these dynamics are probably intrinsic to the climate and not due to any external forcings (such as manmade CO2, though there may be a separate smaller CO2 warming signal of course).
The most intriguing bits are that their approach explains the shifts in the 1910s to warming regime (culminating in some of the warmest weather on record around 1940), the 1940s to a cooling regime (culminating in some of the coolest weather on record around 1970), and the 1970s to a warming regime (culminating in some of the warmest weather on record around 2000). Given the length of previous regimes, we would expect this one to have ended and a shift to a cooling regime to have begun. When applied to a state of the art climate model and run forward, their approach predicts additional shifts in ~2030 and ~2070.
It’s funny that just as things are at their most extreme, the system resets. Way to mess with us humans and our cognitive biases, Mother Nature. My prediction: the temperature gets gradually colder from now until 2025, there’s a major “global cooling” movement, and things start getting warmer again in the 2030s.
]]>What I loved was the list of seven differences between the “common sense” and “economic” worldviews. They are concrete examples of how economists differ in their causal reasoning from even very highly intelligent and educated non-economists. If you want to understand where I’m coming from, read the post. It’s pretty much one of the three primary planks of my own worldview. [In case you’re curious, the other two are that (2) the human brain is a woefully inadequate decision making substrate and (3) many of the outcomes we care about are produced by complex dynamic systems that are very difficult to characterize]
]]>Clearly the best benefit comes if you engage both, but if you are a skeptical person, you might be at a disadvantage. Here’s a thought though: just knowing that you will get some benefit from a known placebo should cause you to (logically) believe the placebo will work, so maybe this is enough to engage the belief component.
Here are some interesting tidbits buried at the end of the article which should be leveraged:
- “… a physician can maximize a placebo effect by radiating confidence or spending more time with the patient.”
- “A high price tag on the drug can apparently help, too.”
On this latter effect, I would bet that the mechanism isn’t limited to dollar value or to drugs, so for those that worry that this can be used to justify the high cost of pills, here are two alternatives:
- Push interventions that are not based on a pill but rather have positive benefits for anyone, such as more broccoli.
- Push interventions that don’t cost money but require some form of sacrifice or work on the part of the patient, such as exercise.
And remember, it’s unlucky to be superstitious, and just because you’re paranoid doesn’t mean they’re not out to get you.
]]>Given my previous post, you’d probably suspect that I think this is a good idea. And at first blush it is. But the article misses two important points which render the plan fairly impotent. The first is that since public disclosure only applies to public companies, the radical transparency plan further erodes the incentive for companies to go (or remain) public,* hence driving the creative accounting underground. And that’s assuming that such disclosure has the potential of making public companies more financially transparent.** Let’s assume for the moment that it does have such potential; the next question is, will it? I think not.
The issue is that it’s not data that produces transparency, but rather data plus analysis. Yes, you have to start with accurate data for the analysis to be effective, but good data alone is not sufficient to produce the kind of price-correcting (aka naive-investor-protecting) transparency that Roth believes will result. Or more to the point, good data by itself will enable sophisticated traders and quantitative arbitrageurs to make more of a profit, but that excess profit will be at the expense of the average investor. In theory — as Roth points out with his LendingClub example — good data levels the analysis playing field, but in practice the field will never be level as long as there is profit to be made by deeper analysis. The only way true financial transparency will be achieved is if there is incentive for the best possible analysis to be open-sourced.***
Maybe someone here will come up with a clever mechanism (like this one) that aligns the market incentives with the policy goals. If so, that would be great.
But I think we need an even more radical form of transparency. I read about the concept a while ago in John Allen Paulos’ book A Mathematician Plays the Stock Market. Paulos suggests that the concept of insider information should be abolished and that we should allow all employees and stakeholders to buy and sell shares of their own companies without penalty. A corollary is that there should also be no penalty for giving “inside” information to outsiders (except possibly in cases of trade secret or non-disclosure violation).
When I first thought about Paulos’ proposal, I dismissed it as unworkable and an idea with many bad unintended consequences. But the more I think about it and challenge my assumptions about what’s important for individual companies to thrive and for the markets to be more bubble-free, the more it makes sense to me. Thoughts?
—
* Let’s face it, the only real reason for a company to go public is to provide an exit for early stage investors and liquidity (read giant payday) for founders and employees. This is a great economic driver and not something I advocate doing away with. But now there are so many private and pseudo-public ways for stake-holders to get their paydays without the burden of public scrutiny and company accountability that it doesn’t make sense to undermine the incentive to IPO. If anything, we should be providing extra incentive for companies to be public and to take on the concomitant burdens.
** If it doesn’t, then what’s the point?
*** Note that the real transparency benefits claimed from the LendingClub example came when LendingClub itself decided to post not only the data but also good algorithms for analyzing that data.
Therefore, if you’re interested in issues of poverty and race in the US, here are two ethnographies you should read. Gang Leader for a Day by Sudhir Venkatesh and Cop in the Hood by Peter Moskos. As sociology PhD candidates, both went out and actually became actors in poor black neighborhoods. Venkatesh hung out with a crack gang in a Chicago housing project and Moskos became a police officer in Baltimore’s roughest neighborhood.
It’s really hard for me to oversell these books. The subject matter is compelling. The writing is good. The conclusions are insightful. Moreover, from a meta point of view, there are several key take home points.
First, poverty is complex. It may even be an emergent phenomenon. So simplistic solutions are unlikely to work. Second, there doesn’t appear to be any fundamental difference between the poor and the wealthy. Their social institutions are every bit as rich and textured. They’re just responses to a different set of external constraints. Third, humans seem to be wired for politics and economics. No matter what sort of top-down constraints you employ, you cannot force a community to fundamentally act against their political and economic interest. They will spawn informal political and economic institutions to achieve their ends.
I think the authors argue persuasively (either implicitly or explicitly) for two policy changes. First and most obvious, the fact that drugs are illegal is a big driver of what happens in poor neighborhoods. I don’t think you can increase the transaction cost of the illegal drug trade to the point where it reduces the supply by much more than a factor of 2. Humans are just too clever at adapting their institutions in the face of an enormous economic incentive. Second and most unobvious, having police officers patrol urban neighborhoods in vehicles quite likely serves as a tipping point from good to bad equilibria along a number of dimensions. It changes the dynamic from preventing crime and defusing disputes to simply appearing to respond in a timely manner.
So legalize drugs in some fashion and bring back beat cops on foot patrol.
]]>What do you think?
]]>Scott Sumner is an unconventional monetary economist. His idea is for the Fed to sponsor a nominal GDP futures market. Then the Fed buys and sells unlimited quantities of futures at a price corresponding to a 4% nominal growth rate. The abstract for his most recent formal paper defending the policy is here.
I believe the idea is that a combination of anchored expectation, prediction market, and market arbitrage effects will serve to make this a self-fulfilling prophecy. The money supply and composition thereof will adaptively adjust to whatever level is necessary to achieve a 4% nominal growth rate given the aggregate knowledge of all market participants about fine grained aspects of the economy.
Inflation, of course, will float in such a way that real GDP will still fluctuate. But due to various forms of price and wage stickiness, keeping nominal output growth stable will serve to modestly dampen business cycles. More importantly, it will prevent the Fed from exerting inflationary or contractionary pressure on the economy by misjudging the proper level of the money supply. The market will always be able to arbitrage what it sees as mistakes.
Think of this as crowdsourcing monetary policy.
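Here’s a toy version of the feedback loop as I understand it — entirely my own simplification, not Sumner’s specification. The Fed stands ready to trade at the 4% price and leans its money-supply stance against its net futures position:

```python
# Toy NGDP-futures targeting loop. My simplification, not Sumner's spec:
# the Fed takes the other side of all trades at the 4% price and adjusts
# base money in proportion to its net position until expectations converge.
TARGET = 0.04
expected_growth = 0.01     # assumed: market starts out pessimistic
base_money = 100.0
PASS_THROUGH = 0.5         # assumed: how money growth moves expected NGDP
FED_RESPONSE = 1.0         # assumed: money adjustment per unit net position

for step in range(8):
    net_position = expected_growth - TARGET      # traders' net bet vs. the peg
    delta_money = -FED_RESPONSE * net_position   # expectations low -> expand
    base_money *= (1 + delta_money)
    expected_growth += PASS_THROUGH * delta_money
    print(f"step {step}: expected NGDP growth {expected_growth:.2%}")
```

The point of the real proposal, of course, is that the market — not a crude linear rule like this — figures out how much money is needed.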
]]>
Like Johan Norberg I believe that government is largely responsible for creating the environment in which this financial crisis could happen (e.g. creating bad laws). It is therefore silly to wonder what government should do to solve the problem. They should stop messing up the system with all kinds of interventions. Just let the problems fade away by themselves.
It’s like management of Yellowstone park, where the biggest forest fires happened *after* government started trying to protect it. They prevented many small fires, and the system got unbalanced, which resulted in a few massive fires. Then they learned to leave the system alone.
I single this comment out not to pick a fight with the reader, but rather because it is an excellent summary of the contrasting viewpoint, and one which many very smart people subscribe to.
Prophetically enough, I picked up Seed Magazine yesterday and found several articles which question the premises of the laissez-faire viewpoint:
- Ecology of Finance – “A growing cadre of biologists argues that ecosystem analysis of the world economy might help stave off a repeat of 2008’s financial catastrophe.”
- Rethinking Growth – “Herman Daly applies a biophysical lens to the economy and finds that bigger isn’t necessarily better.”
- Is Economics a Science – with commentary from Frederic Mishkin, Robin Hanson, James Galbraith, Steven Levitt, Stephen Dubner, Jim Miller and Nassim Nicholas Taleb.
- Network Dynamics in a Shrinking World – “When shrinking networks are researched, what is discovered is extraordinarily strange.”
- Industrial Ecology and the Rights of Ecosystems
Interestingly, the first article specifically invokes the forest fire metaphor: “‘You can build in what amounts to firebreaks. How you would limit epidemic spread’ — or financial panic — ’depends on the topology of the interactions.'”
The popular trend towards “complexity economics” started with Eric Beinhocker’s The Origin of Wealth, a book I recommend to anyone who wants an excellent overview of the history of economics and the history of complex systems thinking. Whether we can positively impact system dynamics through better understanding and policy, or whether we will just make things worse, we will never actually know for sure. There is no counterfactual universe we can compare to as a control. But one way to get some indication is to simulate, as John Miller and Scott Page convincingly argue in Complex Adaptive Systems.
My own current belief is that while there’s no way even in principle to accurately predict the future in a system as complex as the global economy, using simulation and better models — which I believe complexity economic models to be — we can gain better understanding of the kinds of dynamics we can expect to see in general. More importantly, I believe that we can do better than we have up to this point by using policy (i.e. new rules and incentives) to guide the economic system into dynamics that are favorable to humanity and away from dynamics that are unfavorable. Not perfect, or even close to perfect. But better.
Whether you agree with my stance or not, one thing is for certain: the complexity economics meme is on the rise.
]]>I don’t know too much about it except that it’s an autoimmune disease and has a complex, multi-causal etiology and pathology. In my reading on autoimmune diseases in general there seems to be a direct link between latitude and incidence. Specifically, the farther from the equator you live the more likely you are to get Crohn’s, Type 1 diabetes, rheumatoid arthritis, and so on.
This being our wont in Western society, we try to isolate it down to a single cause: farther from the equator means less sunlight, which means vitamin D deficiency, so it must be vitamin D. So we try to feed people vitamin D, but this doesn’t cure the condition since it’s notoriously hard for your body to get utility from vitamin D supplements (and in fact it’s easily toxic), and it’s even hard to get enough in food. Plus there’s the issue that maybe it’s not just the vitamin D itself but some combination of biochemistry that happens when you expose yourself to sunlight (the main way humans have gotten a majority of their vitamin D throughout history). But wait, what about skin cancer? Let’s put on sunscreen and go outside for 15 minutes a day. Nope. Sunscreen blocks vitamin D production. Plus in northern climates it’s very hard to get enough sunlight to produce enough vitamin D, especially in the winter months. A number of studies suggest that over half of Americans are deficient.
My guess is that vitamin D is not the issue, but more generally sunlight is. Or more precisely, given your ethnic background, there’s a range for optimally healthy sunlight exposure, and if you go too far out of that range in one direction or the other, you end up with health problems. Autoimmune and other disease on one end, cancer and poor skin health on the other. But I doubt it’s even that simple because lifestyle in general can predispose you or provide resilience — diet, exercise, exposure to environmental insults, and patterns of activity that affect emotional and mental state.
One thing that I think is overblown is the portion of the equation that is genetically predetermined. The pendulum in science has swung too far towards genetics in terms of explanation in general, and this completely contradicts the evidence.
If I were diagnosed with Crohn’s or any other autoimmune condition, here’s what I would do personally. First, devote several hours a day to physical fitness and conditioning, as if I were a professional athlete. Second, experiment with diet like a mad scientist: try every supposed “good health” diet out there, but mixing it up and listening to my body and mental state. Third, I would experiment with daily sunlight exposure, using guidelines based on my natural skin tone (darker = need more sun). Next, I would examine my interpersonal relationships and eliminate/reduce contact with anyone who I even suspected of being a “net negative” emotionally in my life. Finally, if I didn’t see dramatic results, I would move closer to the equator and to a locale that’s very different from my current one (different culture, different daily patterns, etc), and change up my daily routine, esp. if I spent more than a few hours a time doing the same thing (like staring at a computer screen).
]]>
6. Systemic Causation and Systemic Risk
Conservatives tend to think in terms of direct causation. The overwhelming moral value of individual, not social, responsibility requires that causation be local and direct. For each individual to be entirely responsible for the consequences of his or her actions, those actions must be the direct causes of those consequences. If systemic causation is real, then the most fundamental of conservative moral—and economic—values is fallacious.
Global ecology and global economics are prime examples of systemic causation. Global warming is fundamentally a system phenomenon. That is why the very idea threatens conservative thinking. And the global economic collapse is also systemic in nature. That is at the heart of the death of the conservative principle of the laissez-faire free market, where individual short-term self-interest was supposed to be natural, moral, and the best for everybody. The reality of systemic causation has left conservatism without any real ideas to address global warming and the global economic crisis.
With systemic causation goes systemic risk. The old rational actor model taught in economics and political science ignored systemic risk. Risk was seen as local and governed by direct causation, that is, by short-term individual decisions. The investment banks acted on their own short-term risk, based on short-term assumptions, for example, that housing prices would continue to rise or that bundles of mortgages once secure for the short term would continue to be “secure” and could be traded as “securities.”
The systemic nature of ecological and economic causation and risk has resulted in the twin disasters of global warming and global economic breakdown. Both must be dealt with on a systematic, global, long-term basis. Regulating risk is global and long-term, and so what are required are world-wide institutions that carry out that regulation in a systematic way and that monitor causation and risk systemically, not just locally.
I had come to a similar conclusion in grad school during a political discussion with some conservative computer science colleagues. As befits a CS geek, I tried to go meta and explain our different stances using this individual vs. systemic causation dichotomy. But to my chagrin, they didn’t really buy it. I’m not sure why, and at this point the details of the conversation are too vague to try to analyze.
So I will appeal to anyone who considers themselves right of center to help me solve the mystery: Do you accept this broad characterization about individual vs systemic causation as being a key difference between conservative and liberal thinking respectively? If not, what’s wrong with the characterization (other than it being simply one of many differences)?
Just to frame this experiment correctly, if you would like to comment but don’t consider yourself “right of center”, please say how you would characterize your politics. I’m also curious if anyone who reads this considers themselves right of center, so if you do, please make some noise.
hat tip: Daniel Horowitz
]]>- “Correlation trading has spread through the psyche of the financial markets like a highly infectious thought virus.” (Tavakoli)
- “…the real danger was created not because any given trader adopted it but because every trader did. In financial markets, everybody doing the same thing is the classic recipe for a bubble and inevitable bust.” (Salmon)
- “Co-association between securities is not measurable using correlation…. Anything that relies on correlation is charlatanism.” (Taleb)
The take-away I get from this is that boom-bust cycles are inevitable. Forget about protecting against the last one happening again, it won’t. The next one is unpredictable, a black swan. The longer we go on without one, the bigger and more certain the next one. Taleb even argues that because of the increasing complexity, interdependence and information feedback in the global financial system, the frequency and magnitude are increasing.
The question becomes (in my mind), can we keep these cycles from having far-reaching collateral damage to “innocents”, and snowball effects as we are experiencing now?
Is there a way to let the air out of the tires every so often, sort of a controlled burn, possibly trading higher frequency for lower magnitude?
Psychologically, we are wired to fear change and desire stability and predictability. But it seems that the illusion of stability and predictability is what gets us into trouble.
I only have the vaguest notion of what an economic controlled burn policy would look like, but the idea is that every so often (perhaps unpredictably) we change the rules and incentives that govern the financial markets. It wouldn’t be so important how they are changed, but rather that they are changed. Keep the markets (and by this I mean the market participants) on their toes, always reacting to and trying to figure out how to game the new system, but never actually reaching that point.
]]>The clinical psychologist Oliver James has his reservations. “Twittering stems from a lack of identity. It’s a constant update of who you are, what you are, where you are. Nobody would Twitter if they had a strong sense of identity.”
“We are the most narcissistic age ever,” agrees Dr David Lewis, a cognitive neuropsychologist and director of research based at the University of Sussex. “Using Twitter suggests a level of insecurity whereby, unless people recognise you, you cease to exist. It may stave off insecurity in the short term, but it won’t cure it.”
For Alain de Botton, author of Status Anxiety and the forthcoming The Pleasures and Sorrows of Work, Twitter represents “a way of making sure you are permanently connected to somebody and somebody is permanently connected to you, proving that you are alive. It’s like when a parent goes into a child’s room to check the child is still breathing. It is a giant baby monitor.”
I’ll save the obvious rebuttals for others, but I think these commentaries are interesting in what they reveal about the evolution of Western culture and the field of psychology. This is a gross oversimplification, but here goes.
Up until recently, psychology has been the study of (a) the individual, and (b) disorder/pathology. Social psychology and “positive” psychology attempt to shift the lopsidedness, but the comments above clearly reflect the traditional psychoanalytic view. And while I would be the first to admit that narcissism and self-centeredness are a scourge on society and individual fulfillment alike, I believe that over the decades this aspect of human psychology has been giving way to more healthy, empathic and pro-social cognitive constructs and behaviors.
Just looking back through the generations in your own family, you may see the following evolution: Extreme narcissism (Depression era) –> Self-centeredness (the “me generation”) –> Entitlement (the “millennials”). This is a positive trend. And with the information revolution and globalization, we are starting to see what the future holds: Hyper-social behavior and holism.
Twitter is both an enabler and a manifestation of hyper-social activity. Everyone who actually uses it (and Facebook updates) immediately understands the ironic mistake of Oliver James, David Lewis and Alain de Botton. You don’t twitter to feed your ego, you twitter to hold up your end of the social contract, to create/shape the community you want to be a part of. To view twitter from the tweeter’s perspective is a laughably myopic — and dare I say — narcissistic stance.
hat tip: Daniel Horowitz
]]>However, I will summarize for those of you short on time. A fundamental problem in securitization is figuring out how different components of a security are related. Think of it as measuring how well the components are diversified. The more independent the components, the less risk embodied in the security. Thus AAA rated tranches of mortgage-backed securities are supposed to be very safe because the components are supposed to be highly independent.
A Chinese mathematician named David X. Li had an insight. You don’t have to analyze the dependencies directly, you just have to observe the correlations in the market prices of the components. Then you can compute these really tight-sounding confidence intervals on the correlations of various components because you have all this market data. Of course, the market can’t take into account what it doesn’t understand. So you see a bunch of 25-sigma events. At least, your model says they are 25-sigma. Oops!
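For the curious, here’s roughly what Li’s shortcut looks like in its simplest one-factor form; all the parameters are invented for illustration. The punch line is in the last two lines: a correlation the model treats as modest makes extreme pool-wide losses vastly more likely than independence would suggest.

```python
# One-factor Gaussian copula sketch (the structure behind Li's formula).
# All parameters are invented for illustration.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n_assets, n_sims = 100, 50_000
p_default = 0.02      # assumed 2% marginal default probability per asset
rho = 0.3             # assumed market-implied correlation

# latent variable: z_i = sqrt(rho)*M + sqrt(1-rho)*e_i, default when z_i is low
M = rng.standard_normal((n_sims, 1))         # shared market factor
e = rng.standard_normal((n_sims, n_assets))  # idiosyncratic noise
z = np.sqrt(rho) * M + np.sqrt(1 - rho) * e
defaults = z < norm.ppf(p_default)

frac = defaults.mean(axis=1)                 # per-scenario default rate
print("P(>10% of the pool defaults):", (frac > 0.10).mean())    # a few percent

indep = rng.random((n_sims, n_assets)) < p_default   # same marginals, no coupling
print("...assuming independence:", (indep.mean(axis=1) > 0.10).mean())  # ~0
```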
]]>Here are some questions I had for Kevin recently and his answers.
Rafe: Do the BPA results (such as they are) cause you concern? Do you still use your Nalgene bottle? Would you let your infant or child drink from a plastic bottle or sippy-cup?
Kevin: I am somewhat but not overly concerned about BPA. We have eliminated BPA-containing beverage containers from _daily_ use. However, I have not swapped out the emergency Nalgene bottles. I would let my infant or child drink from a BPA cup if we were over at a friend’s house (i.e., I wouldn’t grill the friend on their BPA posture), but I would throw away my own BPA containing cups.
Rafe: I just learned that Celebrex is supposed to have all the anti-inflammatory power and none of the gastrointestinal issues. True?
Kevin: All of the Cox-2 inhibitors make the claim about no gastrointestinal issues. When I looked at the data a couple of years ago, the difference in incidence was actually not that great in the general population. Moreover, the Cox-2’s have a higher incidence of cardiac side effects (while Cox-1’s are actually cardioprotective on average). Avoid the Cox-2’s unless you have known high gastrointestinal sensitivity to the Cox-1’s (or you have a history or unusual risk of gastric ulcers, of course). If you’re worried about taking high dose, post injury Cox-1’s, take bismuth subsalicylate (e.g., Pepto Bismol, but I prefer the generic chewables) at the same time. It has a prophylactic effect.
Rafe: When working out, do you increase your protein intake, either generally, or in relation to your workouts? If so, does it matter what form the protein takes?
Kevin: In general, I recommend getting 20% of your calories from protein, period, full stop. When you work out, your caloric intake goes up and takes care of additional protein requirements. It’s best to get protein from your regular diet, but if you need to supplement, here’s my view. Egg protein can increase the sulfurousness of your gas passing, but I seemed to tolerate it well back when I had to supplement (because my caloric requirements were so high). Soy is cardioprotective (though mildly estrogenic). So most people should go with soy.
]]>The ambiguity of this question is intentional.
For those reading this post who have read the entire text, would you please comment on it? What are the highlights, what are the important hidden details, do you think it will work, whatever you want to say.
On the more literal interpretation of my question, how many people do you think actually have read the entire text, particularly those in Congress and the Administration? My supposition is that very few have, and I even wonder whether Obama has read every single page. This would not be a surprising revelation, but rather a practical “necessity” of our current political process. Presumably each decision maker has a staff of people who do read and analyze every clause in a bill and who summarize for their boss.
Now, if it’s the case that no Member or Senator has actually read the entire final draft of the stimulus package — and even if some have but the majority have not — then this says something interesting about the crowdsourced nature of our body politic. But I’m not sure what :-)
Would legislation be different (and how) if everyone who proposed, voted on, or had veto power over legislation actually read every word themselves?
Would we, the People, be better off, worse off, or the same?
Assuming that crowdsourcing the legislative process is a practical necessity, how could that process be improved (practically speaking)?
]]>My other favorites were these:
- Tim Berners-Lee
- Bonnie Bassler
- Rosamund Zander
- Willie Smits
- Dan Ariely
- Liz Coleman
I’ll post their talks when they come out, but you can check them out from the program guide in the mean time.
What were your favorites?
]]>My suggestion is that evolution is the first theory — in the scientific tradition — based on the principle of emergence. That is, it looks at a system from the bottom up, starting with behavior at the micro level and yielding behavior at the macro level.
Regardless of the above, what gets your vote for the best idea ever?
]]>Our lives are so different. And the gap is widening all the time. The diversity of experience increases, as the world becomes more complex, as we create new ways of existing, physically, mentally, socially, virtually.
This is part of the paradox of “progress” and the global network effect; the possibility for common understanding increases, yet the difficulty of such a feat does too, as we branch farther and wider from our common experience.
]]>Call it “The Human Mind: A User’s Guide,” aimed at, say, seventh-graders. Instead of emphasizing facts, I’d expose students to the architecture of the mind, what it does well, and what it doesn’t. And most important, how to cope with its limitations, to consider evidence in a more balanced way, to be sensitive to biases in our reasoning, to make choices in ways that better suit our long-term goals.
What a brilliant and practical idea.
Anyone want to take a stab at a syllabus?
]]>- Society According to Kevin
- I May Have Been Wrong About Macroeconomics
- But I Was Probably Right About Climate Models
It occurred to me as I was reading this Huffington Post article that there is a reverse-emergent dynamic that occurs when countries (often through their leaders) send signals to other countries through word and action. That is, if the actions of a group can be seen as emerging from the sum total of actions of its constituents, then it’s also true that the actions of the constituents are influenced by the information received at the group level. Obama, speaking for the U.S., says to an Arab nation “we will treat you with dignity and respect if you treat us that way”, and this has an effect on individuals in that nation as to how they will behave individually.
In the past I’ve characterized the downward influence (from level 2 to level 1) as constraining autonomy, to which Kevin objects. While I’m not sure that we’ve come to consensus on this particular sticky wicket, there’s clearly a connection between the superfoo thread and the macro/micro threads.
]]>It shouldn’t be a surprise to any of you that I came to the conclusion that climate models are pretty much total bullshit. My problem with them is that they are incomplete, overfitted, and unproven. It turns out that one of the foremost experts on forecasting in general also thinks that these models have no predictive value. In fact, items (6) and (7) of their statement show that you can predict the future temperature really well simply by saying it will be the same as the current temperature.
You can read their more formal indictment of climate forecasting methods here.
Oh snap!
]]>Lately, Arnold Kling’s blog posts have been reinforcing this belief. However, we may both be wrong. Arnold studied and practiced macroeconomics in the late 1970s. Given the delay in propagating knowledge to the undergraduate level, that’s probably also what was taught in my late 1980s undergraduate textbook. However, Will Ambrosini observes that Arnold’s views are outdated and this is a problem with non-macro economists in general. He points to this essay and I find myself convinced that modern macroeconomics is a coherent study of a complex system.
I thought this might provide you some measure of comfort. If anyone wants me to summarize the particulars of why I changed my mind, let me know.
]]>hat tip: mom
]]>Here’s my theory: someone who drinks more than three cups of coffee a day can’t possibly sit still and actually gets their ass off the couch and does shit, thereby stimulating the body and brain, a known and powerful way to reduce dementia risk.
hat tip: Daniel Horowitz
]]>
The optimist proclaims that we live in the best of all possible worlds; and the pessimist fears this is true. (James Branch Cabell)
I am currently reading What Are You Optimistic About?, a collection of short essays by thought leaders in many different disciplines on the eponymous subject. I’m also reading True Enough, a compelling argument by Farhad Manjoo for how despite — nay, because of — the fire hose of information that permeates modern society and is available for the asking, the schism between what’s true and what we believe is widening; a polemic on polemics if you will. Taken together, these two books suggest to me that there is a case, not for being optimistic per se, but for why you should consciously, actively try hard to become an optimist if you aren’t already.
To understand why, you have to understand the central argument of True Enough, which is summed up nicely by the author himself:
…in a world of unprecedented media choice… we begin to select our reality according to our biases, and we interpret evidence (such as photos and videos) and solicit expertise in a way that pleases us.
In other words, our cognitive apparatus (so useful on the savanna) is woefully unprepared to navigate the complexity of our current world. With enough data points to draw from, we consciously and unconsciously create a model of reality that suits our tastes, and we mistake that model for reality itself. We cherry pick evidence that agrees with our preconceptions and ignore evidence in discord. What is most chilling about the book is that it shows how our very perception is biased by our beliefs; we could be watching the same football game on TV and have an entirely different view of the facts of what happened. Indeed, we do, all the time, about everything.
To Manjoo’s point, if you are reading this, it’s a pretty good guess that you would agree with this statement: the arrow of time points towards greater understanding and the closing of the gap between truth and belief as history marches on. But ironically, Manjoo shows why (at least in the short run) there is a gap between your belief and the reality. Hardly a case for optimism.
Yet I argue that because of the veracity of Manjoo’s argument, it is imperative that we take an optimistic stance in life. Simply put, we live in a world where both optimists and pessimists have more than enough evidence to “prove” that they are right. Right about specific events and right about their worldview in general. And because we live in a world of self-fulfilling prophecy, a world where future reality is shaped ever more by current mindset, it is important to our survival that we imagine the world is as benevolent and full of possibility as we’d like it to be.
]]>The book presents an elegant simultaneous solution to three questions:
- How can the strong force possibly get more powerful with distance?
- Why can’t we break protons into their component quarks?
- Where the heck does a proton’s mass really come from?
In fact, Wilczek won a Nobel Prize for the solution. First, you have to understand three basic facts of physics:
- Quantum mechanics says that short-lived particles and their anti-particles are constantly popping in and out of existence.
- Protons are made up of 2 up quarks and 1 down quark. These three quarks have different primary color charges (RGB) so that together, they are color neutral (white).
- Quantum mechanics says it takes a tremendous amount of energy to constrain the location of a particle (AKA the uncertainty principle).
With electric charge, the cloud of particles from (1) screens the EM force. Assume you have a proton out in space. A particle-antiparticle pair appears. There is no net energy or charge created. However, the negative particle will move a little bit towards the proton and the positive particle will move a little bit away from the proton before they annihilate each other. This absorbs a tiny bit of the EM force.
Wilczek and company had an idea. What if color charge works differently? What if it is anti-screened by the particle cloud from (1)? The ephemeral particles move so that the color charge gets relayed. Think of it as reinforcing a wave rather than canceling it. When they worked through the color field equations, they found a very few solutions where anti-screening was possible. This is good because it means the theory wasn’t arbitrary.
So that explains question (1) of how the strong force can get larger with distance. But if you do the integral of an ever increasing force over all points in the universe, you get a metric butt-load of energy bound up in a quark.
Here’s where (2) comes in. If you combine quarks with R, G, and B color charges, they cancel, leaving no net force. Problem solved. And it answers our question (2) of why we can’t break protons into quarks. The energy required is just too high.
But what about our question (3)–the mass of the proton? Enter the uncertainty principle. The three quarks in a proton can’t all be exactly on top of each other. That also requires too much energy. So at some point, the energy equation for the strong force increasing over distance and constraining the quantum location of the quarks balances. Plug that energy into m = E/c^2. That accounts for 95% of the mass of the proton. Most of the rest consists of the quarks’ masses themselves.
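To see how lopsided the ledger is, plug rounded textbook values into m = E/c² (today’s measured quark masses actually make the quark share look even smaller than the book’s 95% figure):

```latex
m_p c^2 \approx 938\,\mathrm{MeV},
\qquad
(2m_u + m_d)\,c^2 \approx (2 \times 2.2 + 4.7)\,\mathrm{MeV} \approx 9\,\mathrm{MeV}

\frac{2m_u + m_d}{m_p} \approx \frac{9}{938} \approx 1\%
\quad\Longrightarrow\quad
\text{nearly all of } m_p \text{ is confined field energy, } E/c^2
```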
Cool, huh?
]]>From the comments on my Introduction to this series, it appears I have discovered a controversial topic. Good. My first objective will be to illustrate why we cannot rely on moral compasses to guide society. After some thought, I have decided to break the topic of moral compasses into two posts: how they fail and why they fail.
During this series, I will use the term “society” to mean a group of people with a common set of explicit and implicit rules living in the same geographic region. Obviously, there is a loose hierarchy where a larger society may include smaller societies. As you move up the hierarchy, the number of common rules diminishes as the geographic area increases. Eventually, we reach the “Global Society”. The members of a society also share a significant number of resources and some kind of semi-stable identity.
The number of people in a leaf-level society varies with their economic interdependence and communication channels. Less advanced societies require fewer members to sustain coherence. In areas with high mobility and mass media, the smallest unit I would consider a coherent society contains on the order of a million people. So Palo Alto is not a society. Silicon Valley may be one. The San Francisco Bay Area definitely is.
I’ll start by outlining my position in contrast to three points brought up in the comments to the Introduction:
- People have moral compasses. I absolutely agree. They appear to be a combination of evolutionarily directed hardwired behaviors and childhood indoctrination into the social group.
- Moral compasses are useful. I absolutely agree. We wouldn’t have a civilization without them. However, they are useful in a limited set of situations, few of which apply at the society level.
- The moral compass point of view is as legitimate as the incentive structure point of view. I’m sorry, but… no. Rather, the moral compass is an extremely narrow and coarse approximation of incentive structure.
The moral compass is an internal voice that, when faced with a choice, answers the question, “What’s the right thing to do?” People might describe it as a “feeling”, “instinct”, or “belief”. The good things about moral compasses are that they are fast and cheap. If you need an answer in seconds or the amount of value in question is small, your moral compass is pretty much the only reasonable tool.
However, at the level of a modern society, where we have time to consider and the value in question is large, the moral compass breaks down. There are five major flaws with trying to apply moral compasses at this scale.
- They return mostly binary answers. Is “it” right or wrong, good or bad, safe or dangerous? Our brains want to categorize rather than measure. So we get discrete rather than continuous output.
- They vary significantly among people in a society. For example, in California, we have fairly even splits on important questions such as gay marriage, gun rights, death penalty, abortion, and euthanasia. Water rights, welfare, and environmental policy top the list of contentious economic issues.
- They are opaque to introspection. Most people have difficulty articulating any reasons behind their position. Those that do frankly end up sounding like they’re rationalizing. In fact, there’s evidence that people decide things before they have conscious reason to.
- They are sensitive to framing. “Undecided” people often respond differently to controversial questions depending on the framing. Is gay marriage a fairness or a moral issue? Are gun rights a safety or a freedom issue? Is the death penalty a life or a punishment issue?
- They are hard to change. My experience is that, on controversial issues, people are very unlikely to change their minds once they’ve firmly staked out a position. They will blatantly ignore evidence in favor of anecdotal data points that confirm their pre-existing belief. This experience is backed up by the research behind cognitive dissonance: your beliefs change to match your actions.
The only thing that allows us to overcome these barriers is trust. Humans are hardwired for cooperation. Unfortunately, this trust and willingness to cooperate usually only extends to a relatively small “in group” with whom we have tight social ties (for an excellent series of blog posts exploring this topic, go to the first one at Life with Alacrity).
The exact limit is debatable, but it is on the order of 100, so four orders of magnitude less than a modern society. That means it will be impossible to coordinate a modern society using moral compasses. Once you reach a certain number of people, the chances of reaching an impasse on any but the most fundamental issues approach certainty. Moreover, the number of people is too large to rely on social trust to overcome entrenched positions.
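To see how quickly moral consensus becomes hopeless at scale, here is a toy calculation (my own illustration, not from any study): assume each member of a group independently sides with the majority position on a contested issue with probability p, and ask how likely unanimity is.

```python
# Toy model: probability that n people unanimously agree on a binary
# issue, assuming each independently favors the majority position with
# probability p. All numbers here are illustrative.
def p_unanimity(n: int, p: float) -> float:
    return p ** n

for n in (10, 100, 1_000, 1_000_000):
    print(f"n={n:>9,}: P(unanimity) = {p_unanimity(n, 0.9):.3g}")
# Even with 90% individual agreement, a band of 10 agrees about a third
# of the time, while a million-person society effectively never does.
```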
In the next post, I’ll examine why moral compasses break down. As a result, I hope you will see that moral compasses are really an approximation of a more general approach that we can employ more directly.
]]>- 50 participants ante a pre-determined amount of money
- Each participant submits original work (of a pre-determined type)
- Each participant votes for one winner (other than themselves)
- Winner gets the money
This is similar to other contest models such as biz plan competitions and screenplay competitions; however, there are key differences: (1) you put up your own money, and (2) you judge the winner.
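For concreteness, here is a minimal sketch of the core mechanics described above, assuming simple plurality voting (the participant names are my own placeholders, and the rule variants below would change the details):

```python
from collections import Counter

def run_challenge(ante: int, votes: dict[str, str]) -> tuple[str, int]:
    """Plurality tally for The Challenge.

    `votes` maps each participant to the participant they voted for.
    Self-votes are against the rules, so they are rejected here.
    Returns (winner, pot); ties would need a rule variant (e.g. a runoff).
    """
    for voter, choice in votes.items():
        if voter == choice:
            raise ValueError(f"{voter} voted for themselves")
    tally = Counter(votes.values())
    winner, _ = tally.most_common(1)[0]
    return winner, ante * len(votes)

# Hypothetical three-person challenge with a $50 ante:
print(run_challenge(50, {"ann": "bo", "bo": "cy", "cy": "bo"}))  # ('bo', 150)
```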
Here are some optional rule variants:
- Allow public input (and originality vetting) before voting
- Money put up by: participants, organizer, or outside sponsor/investor
- Determining winners
  - Allow more than one winner
  - Use multiple rounds of voting to determine winner
  - Bracket tourney (random draw, or use voting round for seeding)
- Administrative fee for organizer (who should never be allowed to participate, BTW)
- Winner gets non-monetary benefits too
- Winner has obligations to fulfill
Depending on the subject matter of the challenge, and the rule variants chosen, you can incentivize a wide range of activity. For example:
Business Plan Competition: Ante $1000; Submissions are one-page exec summaries; Winner gets $50K of seed funding; 5% of equity in the winning startup goes to the other participants as a group.
Social Entrepreneurship: Similar to above except plans are judged on social good as well as profit potential, and participants don’t get equity in the winner.
Charitable Endeavor: Ante is $100, but instead of starting a new organization, the goal is to deploy the funds to existing efforts or individuals or groups in need. Variant: participants volunteer 10 hours of their time in addition to the cash ante.
“TED Prize Wish”: The magic of the TED Prize Wish is that in addition to getting a cash prize, the winner gets to express their wish and have the audience help make it come true. But why should such wishes be open only to a select few and judged by a select few?
Creative Endeavors: Here’s your chance to get paid for your brilliant {artwork, photography, original music, musical performance, short story, poetry, youtube video, etc}. Separate challenge for each discipline, $50 ante. Remember, you don’t know what piece your fellow participants will submit before you ante…
Organizational Improvement: Lots of companies ask their employees (who often know the ins and outs of the business better than top management) to submit ideas on how to improve, with the best ideas being financially rewarded. Here the company provides the prize pool, but the employees themselves vote on the winner(s).
What I like about The Challenge is that it’s flexible and can be organized spontaneously. In fact, I will organize one next week and announce it via this blog, so stay tuned!
]]>In the meantime, I finished a really good physics book: Lightness of Being by Nobel prize winner Frank Wilczek. It requires a basic knowledge of quantum mechanics (I suggest Al-Khalili’s Quantum) and particle physics (any recent popular book that spends more than one chapter on the Standard Model).
Given that, it does an awesome job of explaining three things that have always bothered me. First, how the strong force can possibly get more powerful the farther away you get. Second, why we can’t break protons and neutrons into their component quarks. Third, where the heck a proton’s mass really comes from. It turns out all three things are related and the explanation is quite elegant. I don’t know why the dozen other physics books I’ve read in the last five years omitted an explanation (or at least an explanation that stuck with me).
]]>Haidt posits a sobering reality that we will all have to overcome if Obama is to achieve his goal:
]]>Our Righteous Minds were “designed” to…
- unite us into teams
- divide us against other teams, and
- blind us to the truth

[Graph from Fortune Magazine article.]
It’s cancer’s inconvenient truth that despite the trillions (with a “t”) of dollars spent trying to cure it so far, there has been no statistical progress on that goal. And don’t let the recent headlines fool you: most of the “progress” can be attributed to the overall decline in smoking.
That’s one of the reasons why early detection has taken a front seat after many years of getting short shrift. This month’s Wired Magazine cover story is all about “Why Early Detection Is the Best Way to Beat Cancer”. There is another reason for the ascendancy of early detection, though: the explosion of technology that makes it possible, including medical imaging, gene sequencing and other biotech.
The Wired article highlights the paradox of early detection: by looking, you find all sorts of tumors and “growths” that may never become problematic. What’s more, you can do more harm than good in those cases by treating every irregularity you find as malignant (a toy calculation after the list below shows how stark that arithmetic can be). Still, I believe the potential for good with early detection far outweighs the negatives and we should strive to detect and understand as much of what’s going on in the body as possible, simply as a matter of course in healthcare. It is important, though, to keep in mind some basic truths as we ramp up our quest to peer into the nooks and crannies of our bodies in search of cancer:
- Just because you detect, doesn’t mean you have to intervene. Sometimes it’s better to simply keep vigilant watch, as hard as this may be for both doctors and patients, who understandably are inclined to “do something, do anything” rather than “do nothing”.
- Analysis and theory are critically important complements to detection. The amount of data we gather will be overwhelming, and has the potential to be worse than useless. That is, unless we can successfully organize it into a framework for understanding what’s going on, both in the individual patient, and in general.
- Scientific understanding is shaped by the tools we use to measure. This has two important implications: (1) we need to be careful in our analysis and in constructing theories not to be lulled into a false sense of understanding by our tools — i.e. not every problem is a nail just because you have a hammer; (2) in searching for deeper truths about cancer, we have choices as to what tools/technologies we utilize and develop — we can’t afford to allow our favorite toys, or lack of imagination, or lack of courage to dissuade us from building the ultimate tricorder.
- It’s not just about cancer. By looking without prejudice and developing new detection technology of all sorts, we will undoubtedly unlock mysteries other than cancer too. If we are going to get hung up on a giant mystery, let’s not make it cancer, let’s make it health in general.
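Here is the toy overdiagnosis calculation promised above. It is just Bayes’ rule with invented numbers (the prevalence, sensitivity, and specificity are illustrative, not from the Wired piece), but it shows why finding ever more growths does not automatically mean finding ever more cancer.

```python
# Toy illustration of the early-detection paradox: even a good test,
# applied to a population where dangerous tumors are rare, mostly flags
# growths that would never have become problematic. Numbers are invented.
def positive_predictive_value(prevalence, sensitivity, specificity):
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

ppv = positive_predictive_value(prevalence=0.005,  # 0.5% have a dangerous tumor
                                sensitivity=0.95,  # test catches 95% of them
                                specificity=0.90)  # 10% false-positive rate
print(f"Chance a flagged growth is actually dangerous: {ppv:.1%}")  # ~4.6%
```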
From my perspective, it seemed like he thinks that people’s behavior is governed primarily by an internal moral compass rather than incentives. So if you want to change their behavior, you should redirect their moral compass rather than adjust their incentives. People who don’t adjust their behavior are defecting from society and should be sanctioned.
I encounter this view quite often in my social circle and this instance inspired me to write a series of posts to explain how I think things actually work. You’re free to disagree with me, of course. In fact, I expect most people to disagree with me. But I’ve thought rather hard about this issue and I’ll put my model up against the moralistic view when it comes to predicting a population’s average behavior or choosing an effective policy prescription.
The concrete example from our conversation is instructive. This friend lives in a large city and attempts to commute by public transit whenever possible. He said he does this because he thinks it’s the right thing to do for the world we live in. I asked him what he would do if one of his neighbors were always driving a large SUV to run errands. His answer was to convince this neighbor of the error of his ways. If the neighbor failed to heed this advice, he would be doing something “wrong”. The implication I took away was that if enough people chose this “wrong” behavior, we should make a law to enforce the “right” behavior (though perhaps social shunning might be sufficient).
As you can probably guess from my series of posts on the Ascetic Meme (here, here, and here), I think this approach is misguided. If you read those posts, you can undoubtedly guess my policy prescription. Figure out how much “harm” consuming a gallon of gas does to society and set that as the gas tax. If there’s non-gasoline-consumption harm from the SUV, tax SUVs themselves. Then if this neighbor still decides to drive the SUV, the amount of benefit he gets outweighs the cost to society and he should be driving the SUV. Better yet, I don’t personally have to go around trying to convince a lot of people to change their minds. My friend objected that people with high disposable incomes wouldn’t respond to such a tax so it would be both ineffective and “unfair”.
As you can imagine, there’s quite a lot of data on people’s response to fuel prices. For the economically inclined, here is a good survey. The estimates for long run response to a 10% increase in fuel cost range from a 2.3% to an 8% decrease in demand in the US. Now, my friend might argue that these results only apply at the aggregate level and may not affect wealthy people making the “right” choice not to own an SUV. It turns out that economists have tackled this problem as well using what are called discrete choice models. This paper shows how raising gas taxes will shift wealthy households from owning an SUV and a car to two cars and how a tax on SUVs will reduce SUV ownership across the board.
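To make those survey numbers concrete, here is a quick constant-elasticity calculation. The elasticity range (roughly -0.23 to -0.8) is my reading of the survey’s 2.3%–8% long-run response to a 10% price increase; the size of the tax is an invented example.

```python
# Constant-elasticity demand: quantity ratio = (price ratio) ** elasticity.
# Elasticities of roughly -0.23 to -0.8 reproduce the survey's long-run
# 2.3%-8% demand drop for a 10% price rise; here we try a larger tax.
def demand_change(price_increase: float, elasticity: float) -> float:
    return (1 + price_increase) ** elasticity - 1

for eps in (-0.23, -0.8):
    print(f"elasticity {eps}: a 50% gas tax -> "
          f"{demand_change(0.50, eps):+.1%} long-run demand")
# -> roughly -8.9% and -27.7% respectively
```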
But let’s not get stuck down in the weeds. His larger point was that society doesn’t hold itself together because people do what’s in their self interest. Rather, the fundamental societal glue consists of most people making the right moral choices. I suggest that anyone who believes this run out and buy Tim Harford’s The Logic of Life for an engaging introduction to just how well people respond to incentives in a wide variety of contexts. For some reason, the moralistic view seems to seduce many intellectuals. I’m going to attempt to dispel this confusion. The first step is diving into my model of society’s foundations, which will be the first substantive post in this series. As a nice side effect, I think working through this topic will bear fruit in Rafe’s and my ongoing discussion of “autonomy” as it applies to emergence.
]]>As 2008 closes, it appears that momentum is picking up for the somatic evolution view of cancer. Here are three recently published papers of note:
- The Evolution of Cancer (Goymer, et al, Aug 2008, Nature)
- Cancer Research Meets Evolutionary Biology (Pepper, et al, in press, 2008 Evolutionary Applications; Santa Fe Institute working paper)
- Genome Based Cell Population Heterogeneity Promotes Tumorigenicity: The Evolutionary Mechanism of Cancer (Ye, et al, Dec 2008, Journal of Cellular Physiology)
The first paper is a short review of the major work to date, accessible to non-specialists. The summary suggests: “Cancer cells vary; they compete; the fittest survive. Patrick Goymer reports on how evolutionary biology can be applied to cancer — and what good it might do.” I will summarize the other two papers below, and then give my view of the implications and where this should be heading.
Cancer Research Meets Evolutionary Biology (my summary)*
- “Although the role of somatic evolution in cancer is rarely disputed, it has seldom been integrated into biomedical research.”
- “Not all pre-malignant neoplasms progress to cancer. It is therefore important to identify risk factors for progression as early as possible because, in many cancers, early detection and intervention improve survival.”
- “Because neoplastic progression is a process of somatic evolution, reducing evolutionary rates should decrease cancer incidence.”
- “One possibility is to reduce the mutation rate via therapeutic reduction in mutagen exposure.” For example, suppressing inflammation has been shown to be effective in this regard.
- “Decades ago, Nowell postulated that the emergence of drug resistance in cancer was driven by somatic evolution, an hypothesis for which there is now substantial empirical support.”
- “The Darwinian perspective suggests that interventions that ameliorate progression or virulence without directly killing neoplastic cells would delay the emergence of resistance.”
- When developing drugs, somatic evolution tells us that “…tumor cell toxicity does not invariably imply effective treatment” and “short-term therapeutic response may bear little relationship to the likelihood of effective longer-term treatment.” In other words, just because you shrink or remove the tumor, doesn’t mean you’ve stopped the cancer. In fact, “…the longer-term cost may well be an accelerated rate of resistance evolution.”
- “By targeting the cancer cell products that alter the micro-environment, it is possible to halt or reverse tumor growth without using cytotoxins to directly kill cancer cells.”
- Somatic evolution suggests that the cancer stem cell theory, while consistent with somatic evolution, is largely irrelevant. The same phenomena and clinical results can be explained more parsimoniously by somatic evolution, including group selection.
- “Direct observational studies of human neoplasms have provided insights into how somatic evolution leads to cancer outcomes and to therapeutic resistance.”
- By acknowledging somatic evolution as the primary mechanism in cancer progression, we are making and will continue to make actual clinical progress in preventing, detecting and treating cancer in humans.
Genome Based Cell Population Heterogeneity Promotes Tumorigenicity (my summary)
- “The impact of genetic variation at the genome level is much more profound than at the gene level, as the higher level of organization often constrains lower levels and displays more stable characteristics than lower levels.” Genome level in this paper refers to the chromosomal organization of genetic material. If you substitute “chromosome” wherever you see “genome,” you won’t be too far off in your understanding.
- “When the genome context changes, even when the gene state is the same, it often does not keep the same biological meaning.”
- Clinical support for focusing on the genome level and for somatic evolution theory is established in experiments and studies using a form of chromosomal imaging called spectral karyotyping (SKY):
- To a first approximation, genetic variation at the genome level can be measured using SKY to document non-clonal chromosome aberrations (NCCAs). An example of an NCCA can be seen above involving chromosomes 19 and 2 (simple, non-reciprocal translocation).
- Although gene-level mutations and molecular pathways are always implicated in cancer progression, nobody has ever been able to find a pattern that is predictable enough to effectively cure cancer. “Based on the concept of cancer evolution and the realization that cancer is a disease of probability, one can understand why elevated genome diversity will lead to the success of cancer evolution regardless of which molecular pathways or mechanisms are involved.”
- “Significantly, the only common link to tumorigenicity is increased levels of NCCAs!”
- Levels of observed NCCAs represent a measure of cell population diversity and “… population diversity provides the necessary pre-condition for cancer evolution to proceed….”
- “… [the] hidden link between population diversity and tumorigenicity can be easily found in cancer literature.”
- The genome level corresponds to the evolutionary mechanism while the gene level corresponds to particular molecular mechanisms. “Thus our current study offers a new direction that uses the degree of [genome level] heterogeneity to effectively monitor tumorigenicity.” (A toy sketch of one such heterogeneity index follows this list.)
- “… using a system approach to monitor [genome level] dynamics is not contradictory to studying the function of various cancer genes, similar to not seeing the forest for the trees, these two approaches focus on two levels of genetic organization, and try to address different mechanisms (evolutionary and molecular) of cancer formation.”
- “… it seems that the complexity of cancer is too high and that just tracing individual pathways will not lead to understanding the nature of cancer due to the highly dynamic (stochastic and less predictable) features of this disease. It is time to focus more on the system’s behavior and its patterns of evolution rather than mainly focusing on individual pathways alone.”
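To give a concrete, if entirely hypothetical, flavor of what monitoring genome-level heterogeneity could mean computationally, here is a toy diversity index over karyotypes observed in a cell sample. The karyotype counts are invented, and a real NCCA-based measure would be more involved; this is just the bare idea of “diversity as a risk signal”.

```python
import math

def shannon_diversity(counts: list[int]) -> float:
    """Shannon entropy (in bits) of karyotype frequencies in a cell sample.

    Higher values mean a more heterogeneous population; in the papers'
    framing, more raw material for somatic evolution to act on.
    """
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

clonal_sample = [97, 2, 1]             # one dominant karyotype, few NCCAs
diverse_sample = [40, 25, 15, 10, 10]  # many non-clonal aberrations
print(f"clonal:  {shannon_diversity(clonal_sample):.2f} bits")   # ~0.22
print(f"diverse: {shannon_diversity(diverse_sample):.2f} bits")  # ~2.10
```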
Implications
There now seems to be ample evidence that the somatic evolution theory of cancer not only parsimoniously describes the disease but also makes falsifiable predictions which are being verified in experimental and clinical settings. More money and attention should be applied to this area, but it’s hard to turn the aircraft carrier that is the cancer industry quickly enough. There are some very real and practical near term implications of this work in terms of saving lives.
First, we need to overhaul the methodologies used for detecting cancer. Instead of focusing on individual (or even collections of) biomarkers, we need to look at the patterns of evolutionary change in cell populations in the living organism. Currently our best bet is to look for patterns of chromosomal change, in particular overall genomic diversity within the body. Ultimately we need smart nanotechnology for this. In the near term we need to push the envelope of computational imaging technology (like SKY) and figure out ways to prophylactically monitor as many of the cells in the body as possible as a matter of course. Clearly, it makes sense to focus first on individuals with high congenital and environmental risk, and also to focus on parts of the body which are showing evidence of needing monitoring. At the very least, all biopsies from now on should include SKY analysis (and/or its more sophisticated successors).
The implications for treatment will come as more of a shock to the cancer industry. I’ve suggested before that somatic evolution contraindicates cytotoxic and non-targeted chemotherapy in many cases. The good news for the pharmaceuticals is that there is still a role for drug therapy. But if you take the evolutionary argument to its logical conclusion, even targeted cytotoxic therapies are likely to be thwarted by the cleverness of evolution. As Pepper et al. suggest above, it is possible to halt or reverse tumor growth by non-toxically altering the environment in which cells are proliferating. Let’s get the drug companies to shift gears here, and let’s think about ways to alter somatic evolution that are less costly and more effective than drug therapy.
Finally, we should be aware of the implications of somatic evolution when it comes to detecting tumors and how we react. The theory says that somatic evolution is occurring all the time in our bodies, just at a rate so low as to be undetectable most of the time. Furthermore, our bodies have (thanks to macro-evolution) incredibly intricate and redundant mechanisms to keep somatic evolution in check and as benign as possible. But this suggests that as long as somatic evolution is acting benignly, those defense mechanisms may not be triggered, allowing a plethora of pre-cancerous neoplasms to evolve within your body all the time, the vast majority of which remain indolent or are eventually eliminated by the body. Indeed, as imaging technologies are exploding in their usage, we are detecting these so-called incidentalomas in mass quantities like never before. This has (understandably) led to overreaction based on an outdated understanding of cancer: you see a tumor, and even if it’s currently benign you remove it just in case. This, as a recent Wired Magazine cover article points out, leads to the riddle of early detection:
Some cancers can be too easy to find. About 80 percent of prostate cancers are detected early. Yet most patients survive at least five years even if untreated. The problem: deciding whether medical intervention is necessary.
Other cancers are inherently elusive. Pancreatic cancer, for one, betrays almost no symptoms, making diagnosis a matter of pure luck. Only 3 percent of cases are found in the first, most curable stage.
The money goes where the cancer is. Some malignancies, notably lung cancer, are mostly detected only in late stages. As a result, that’s where most research is directed. Shifting those priorities won’t be easy.
And while we have no good solution to this riddle yet, somatic evolution theory does suggest an alternative to burying our heads in the sand and defiantly attempting to excise or poison every neoplasm we detect. Again it comes back to shaping the evolutionary process through altering the micro-environment. Instead of letting evolution run amok (or worse, fan the flames), let’s take control of somatic evolution, and maybe even work with it. After all, the goal isn’t to cure cancer, it’s to stop human death and suffering caused by cancer.
* Full disclosure: I was part of the SFI working group out of which this paper resulted. However, I did not write or edit the paper directly, and the commentary outside of quotes is my personal summary of the contents of the paper. All emphasis is mine.
Some friends and I watched the above talk together by Dan Gilbert on the various ways humans make logical errors in decision making. If you are a behavioral economist or are into psychology literature, you are probably all too familiar with the experiments on this subject, but it’s worth watching anyway.
There was some criticism of the talk in that it ignores the fact that, given limited resources for making decisions, the heuristics that we humans use (i.e. the rules of thumb, like price being a good indicator of quality) serve us very well most of the time. It’s only under specific circumstances that these heuristics lead to logical errors and bad decisions. Thus, the talk left some people thinking that the point Gilbert was making is that we’re all pretty bad decision makers and we should learn to transcend these error-prone heuristics. The critics further suggested that no, we’re not bad decision makers, we are in fact really good 95% of the time, and furthermore it’s not really logical to waste our time trying to be better because the cost is too steep. We’d waste every moment of our lives figuring out what a good price is for a bottle of wine.
My interpretation is slightly different. The import of Gilbert’s thesis is not the 95% of the time where our rules of thumb lead us to a good or reasonable decision (all things considered). Rather, it’s the 5% (or 1% or 0.1%) of the time where our bad decisions have a hugely negative impact. Consider for a moment the fact that those in positions of great power (government leaders, CEOs of large corporations, etc.) are working with the same faulty decision-making apparatus as the rest of us. And so unless there are meta-apparatuses in place for making sound decisions on, say, whether and how to spend $800 Billion tax-payer dollars, we can expect that the logical errors that Gilbert speaks about will translate into massive losses in real dollars that otherwise could be easily avoided.
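A toy expected-value calculation (all numbers invented) makes the point: when the rare failures are costly enough, the tail dominates, and going to great lengths on the decisions that really matter is the rational move.

```python
# Toy expected-loss comparison: frequent small wins from cheap heuristics
# versus rare, enormous losses when they misfire. All numbers invented.
p_good, gain = 0.95, 1.0   # routine decisions: small benefit each
p_bad, loss = 0.05, 80.0   # rare failures: huge cost each

baseline = p_good * gain - p_bad * loss
improved = 0.99 * gain - 0.01 * loss  # cut the rare-failure rate to 1%
print(f"expected value per decision: baseline {baseline:+.2f}, "
      f"improved {improved:+.2f}")  # -3.05 vs +0.19
# Cutting the rare-failure rate flips the expected value positive even
# though 95% of decisions were already "good enough".
```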
Gilbert’s example of the Homeland Security people asking him what to do about terrorism is a poignant reminder that the evolutionary legacy of our analytical minds flounders in ways today that it never could have on the savannas. And if you agree that the potential consequences of individual decisions get greater with each passing decade, then you should understand how vital it is for us to acknowledge the limitations of our analytic minds and to go to extraordinary lengths to make great decisions when it really matters.
]]>Apparently this PBS NOVA program aired last year, but somehow I missed it. Definitely worth watching (and looking at the examples), especially if you are mystified by all of this “emergence mumbo jumbo”:
Part 1
Part 2
hat tip: CAS-Group Blog
]]>

Dear Rafe,
We recently launched a new feature on Change.gov called Open for Questions. Thousands of you responded, asking 10,000 questions and voting nearly a million times on questions from others.
Now that we’ve answered some of the most popular ones from the last round, we are open for questions again. Ask whatever you like, and vote up or down on the other questions to let us know which ones you most want the Transition to answer.
Get started now at https://change.gov/openforquestions.
We’re looking forward to learning about what you want to know.
Thanks,
John
John D. Podesta
Co-Chair
Obama-Biden Transition Project
I’ve been trying to reconcile Rafe’s and my views on this topic. I actually think we agree on the broad themes related to our argument over “autonomy”. From my perspective, it seems like the only real disagreement is on the implications for humans.
As a higher level agent emerges, it must organize lower level agents. No argument from me here. Below the human level, the organizing principles resemble direct control. However, I assert that this is an artifact of the relatively low levels of complexity.
Atoms, small molecules, large molecules, cells, and plants all respond solely to what I might term “tropisms”: innate, unthinking tendencies. It’s therefore feasible for the higher level agent to employ direct control by simply providing the stimuli to which tropisms respond. However, I think we need to further generalize this dynamic if we want to apply it to higher levels of complexity.
Essentially, the higher level agent gives lower level agents what they “want” in return for their participation. An animal provides its cells energy and reproduction, which in turn provide them to proteins. Proteins stabilize amino acids, which in turn stabilize atoms.
Similarly, superfoos will likely organize humans by giving them what they want. But human wants are fairly complex. They involve a combination of (at least) safety, challenge, status, pleasure, and play. There is a fair degree of heterogeneity among humans in the relative amounts and amplitudes of these ingredients they prefer. Being free to pursue one’s preferred combination is what I think of as autonomy.
So my hypothesis is that superfoos will organize humans by providing them these elements. A single superfoo will have to provide lots of options to get a wide enough range of humans to participate. Different superfoos will pursue different meta-mixtures as they fill niches at the higher level of organization. This all seems like a pretty straightforward generalization of past emergence so I reject the assertion that my thinking on this isn’t broad enough. In fact, I would argue that my treatment of superfoo organizing principles is more general than Rafe’s given the history of matter.
From the human perspective, I think all this corresponds to more choice for people, not less.
]]>What I had in mind when I started thinking about this subject was how there is a structural deficiency in our duality of for-profit and non-profit organizations. Specifically, I feel we need to support social entrepreneurship, which is a new category that sits between these two endpoints. Social entrepreneurs (SEs) are not opposed to (and often strive to) make profit, but they also have the goal of doing good for society, the world or for a specific subset thereof. The very existence and growth of this category of organization and business model suggests the deficiency of which I speak.
The straw-man argument against explicitly supporting SEs is that they don’t need support. They are either for-profit and thus the market is the best (and only) support out there, or they are non-profit and already have all the proper incentives and mission to do good. I believe that the world (and the U.S.) would be a better place if there were an explicitly recognized and supported third type of organization suited to SEs.
I am not going to propose that I know what mechanisms are best to support SEs, but here are some thoughts on the subject. What if we explicitly incentivize the non-monetary goals (as defined in a strategic social mission document) using tax credits? And what if we used prediction markets to determine the amount of credit the organization receives?
For instance, let’s say that the One Laptop Per Child initiative, instead of being non-profit, was set up as suggested above. That is, a for-profit with an SE strategic mission document. Let’s pretend the document said that it had a goal of delivering 100 million laptops to children by the year 2015. It doesn’t matter whether some of those were bought and paid for, sponsored by a third party, donated by the company or whatever; the mission is simply to deliver. Then there could be a prediction market claim based on whether that goal will be reached by 2015. At the end of each tax year, the price of the claim (or perhaps the 52-week average) would be used to determine the percentage of tax credit the organization receives. For example, let’s say the company makes $5M in profit in 2009 and the price of the claim is $35, then they would get a 35% tax credit. Meaning that whatever their computed tax burden would be as a for-profit, they would get to retain 35% of that amount.
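The mechanics of the proposal reduce to a few lines of arithmetic. Here is a sketch using the thought experiment’s numbers; the 35% base corporate rate is my own assumption for illustration.

```python
def se_tax_credit(profit: float, base_tax_rate: float,
                  claim_price: float) -> float:
    """Tax credit under the prediction-market proposal.

    `claim_price` is the market price (0-100) of the claim that the
    organization will hit its strategic social mission goal; that
    percentage of the computed tax burden is returned as a credit.
    """
    tax_owed = profit * base_tax_rate
    return tax_owed * (claim_price / 100)

# The thought experiment: $5M profit in 2009, claim trading at $35.
credit = se_tax_credit(profit=5_000_000, base_tax_rate=0.35, claim_price=35)
print(f"Credit retained: ${credit:,.0f}")  # $612,500 of a $1.75M tax bill
```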
Thoughts? Alternatives?
]]>The reason I like this talk so much (besides that it’s well-presented) is that it introduces us to the idea of invisible etiology. Such a powerful concept, one that I feel can help us solve so many mysteries, once we take it seriously.
Something that I’ve been thinking about lately: does homelessness have an invisible etiology (or etiologies), and if so, what is it?
]]>
This is a picture of what Food on Foot did on Christmas. It’s a larger version of what they do every week, which is feed and clothe the homeless and give them an opportunity to get back on their feet themselves. If you graduate from the program, you end up with an apartment, a job and a bank account. Entirely volunteer run and funded.
Amongst the program members are accountants, lawyers, computer programmers. It includes mothers, grandmothers, and children. It includes families, and people without family. “The homeless” is a microcosm of the rest of society, not “them”, but “us”.
Yes, it’s true that many homeless people have addictions or mental disorders. But so do many people with homes. I don’t see homelessness as a problem. I see it as a symptom.
The problem is in the society, the culture. It’s a form of illness or dysfunction of the higher level agent. It’s not an inevitable part of an industrialized society. And certainly not one as wealthy as the United States today.
]]>But having just watched Frost/Nixon, I was left with the distinct feeling that Ford made a big mistake in giving Nixon an unconditional pardon, especially before all the facts were in. Wouldn’t it go against all that this country stands for to allow the most powerful people in the world to get away with high crimes and massive abuse of the public trust?
Is there a way that the Bush administration (or more specifically the culpable actors) could be held accountable without damaging the prospects for a better future? Does the world’s need for closure and the country’s need for moral redemption justify the damage that it would likely cause?
What should Obama do? And if nothing, who can and should address this issue and how?
]]>Rafe and I had a great chat on the phone today about Superfoos. I think we agreed that there will be multiple instances of agents emerging in the level immediately above humans but there is always a single top-level network in local space. I think we also agreed that the “awareness” at this level will be different from human awareness. It probably won’t subsume our awareness (at least without a technological singularity) but will exhibit properties such as self-preservation.
Where we got stuck was on the concept of autonomy. Stuck isn’t really the right word. We both greatly expanded our conceptual space around autonomy. But we didn’t come to agreement on a definition. However, it was a very productive conversation, so I thought I’d put my impressions down here.
I look at autonomy from a decidedly economic standpoint. How many different alternatives on how many different axes can I possibly choose? Rafe and I agree that increased economic complexity leads to increased choices because everyone has more resources. I think Rafe looks at autonomy from a decidedly psychological standpoint. How many different alternatives do people feel are reasonable for them to choose? Rafe and I agree that the average person probably feels more constrained in their behavior as societal complexity increases because they have more responsibilities.
The difference between these two perspectives raises some interesting questions of free will. Is it truly a free choice if social isolation is the consequence of one alternative? Now, here’s where I believe the multiplicity of superfoos becomes extremely important. If there are multiple superfoos, there’s likely to be a spectrum of the degree to which they impair psychological autonomy. Some may value and promote diversity. Then you’re free to choose a superfoo that gives you the degree of psychological autonomy you prefer. These superfoos then compete at the higher level for advantage. It’s an open question as to which superfoo strategy is better.
Even this analysis is incomplete. In economics, a major insight is that people choose how to allocate their time between labor and leisure. I would refine this to say that people choose how to allocate their time between transactional labor, social labor, and leisure. The difference between transactional and social labor is that the primary purpose of the former is to generate monetary capital and the primary purpose of the latter is to generate social capital. Think of the difference between going to work as a programmer and volunteering to be an officer in your school’s PTA. So different superfoos could potentially offer different bundles of transactional labor, social labor, and leisure autonomy.
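A toy way to picture different superfoos offering different bundles (everything here, from the utility form to the numbers, is my own invention): give each person preference weights over transactional-labor, social-labor, and leisure autonomy, and let them pick the superfoo whose bundle maximizes a Cobb-Douglas utility.

```python
# Toy model of heterogeneous humans choosing among superfoos that offer
# different bundles of (transactional labor, social labor, leisure)
# autonomy. Cobb-Douglas utility; every number here is invented.
def utility(bundle: tuple[float, float, float],
            prefs: tuple[float, float, float]) -> float:
    t, s, l = bundle   # autonomy offered on each axis
    a, b, c = prefs    # preference weights (sum to 1)
    return (t ** a) * (s ** b) * (l ** c)

superfoos = {
    "market-heavy":    (0.6, 0.1, 0.3),
    "community-heavy": (0.2, 0.5, 0.3),
    "leisure-heavy":   (0.2, 0.2, 0.6),
}
people = {"workaholic": (0.7, 0.1, 0.2), "volunteer": (0.1, 0.6, 0.3)}

for name, prefs in people.items():
    best = max(superfoos, key=lambda sf: utility(superfoos[sf], prefs))
    print(f"{name} joins the {best} superfoo")
```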
One thing to note here is that I am putting a premium on what I’ve described in the past as executive function, which includes the ability to choose your goals. I contend that humans (and potentially some birds and other mammals) are the first agents in local space to possess this capability. Therefore, the next level of agency will obviously build on it in how it organizes its human component. To me, that obviously means offering choice.
I’ll hope Rafe will expand on his thoughts.
]]>This year for his birthday, Eben decided to host this webinar and invited all his contacts to join him online in lieu of a party and gifts. What a brilliant concept and even more brilliant execution. Eben (and Scott Brandon Hoffman, founder of CharityWater.org) truly epitomize the new philanthropy. ]]>
I was actually about to post something about terminology, so I’m glad this came up. It’s just so difficult to choose words to describe concepts that have little precedent, without going to the extreme of overloading on the one end (e.g. “organism”) or the other extreme of being totally meaningless (e.g. “foo”). I have tried to use terms that are the closest in meaning to what I’m after but there’s no avoiding the misinterpretation. I can only hope by defining and redefining to an audience that is not quick to make snap judgments but rather considers the word usage in context, we can converge to at least a common understanding of what I am claiming. From there at least we have a shot at real communication of ideas and hopefully even agreement.
Organisms and Superorganisms
Regarding “organism”, I don’t particularly like it either because it has too narrow (and biological) of a connotation. I prefer agent or “system”. In my lexicon, all systems are agents to some degree, but I typically reserve “agent” for those systems that display behaviors that we would recognize as self-preserving or self-generating. Thus, given two different systems — one being a crowd of people on a Manhattan intersection, and the other being a corporation — I would be inclined to refer to the latter as an agent, but not the former. I’m not thrilled with “supercommunity” as it’s a little too soft and doesn’t imply any sort of agency (which it needs to). Superagent? Supersystem? Let’s just stick with superfoo for the moment.
Levels
Regarding “levels”, I refer you back to my original post on levels, especially the last section titled “Levels Aren’t Strict”. As I see it, levels are becoming less strict the higher we go up. Meaning that chemical systems are very distinct from the atomic systems they sit on top of, but social systems blend and bleed together with the human systems (i.e. human beings) they sit on top of. Just for example, individual humans interact as peers with (i.e. at the “same level as”) corporations under some circumstances (e.g. my contract with AT&T to provide service in return for money) and at different levels under other circumstances (e.g. my being an employee and thus a constituent part of the company I work for).
Networks
You make a good point about networks. Yes, everything is interconnected and one big network, and there are infinite ways to model systems as networks, and it depends on what interconnections you choose to model and what parts are “in” and what parts are “outside” of the network. However, I think we will agree that not all models are equally good and that the good models are the ones that come closest to fitting the underlying structure; they produce more insightful descriptions and more accurate predictions.
Awareness
As for “awareness”, I will agree it’s too loaded a term and should probably not be used in this discussion. I note that we both have avoided the even more loaded term, “intelligence” (thank Foo!) except as it applies to humans. My contention — and I will expound on this in its own post — is that the concept of “intelligence” is an example of the fallacy of misplaced concreteness. Meaning it should never have been reified. Intelligence is simply a description of how human agency manifests itself. I will go further and suggest that “awareness” is like “intelligence” in this regard, but not quite as egregious.
Autonomy
Which brings us to “autonomy”. Ah, autonomy… a true red herring if there ever was one. So laden with religious, philosophical and political overtones. There needs to be a term that means exactly what autonomy means, but isn’t so overloaded. Dictionaries and thesauri are no help here, yielding equally charged alternatives. All I can say is that it’s kind of like the situation with networks and how there are many and just one at the same time. To me (in my lexicon), I would say that autonomy is the degree to which a system is able to effect its own agency, which is to say to exist and persist through time. No man is an island, and no agent is truly autonomous.
Interdependence
I like how you bring in “interdependence”, but I will propose the following reconciliation: autonomy is the degree to which a system is able to effect its own agency modulo whatever the space of alternatives happens to be. That is, if your only alternatives are to hunt or die, you still may have a high degree of autonomy as long as you are maximally free to hunt (nobody is holding you down, there are actually things to hunt, you are healthy, etc). I will agree with you that the space of alternatives for agents gets larger as time goes on within a level (and also as we move up the levels). But the amount of interdependence also increases as time goes on within a level, which BTW corresponds to an increase in agency at the level above. And as interdependence goes up, unless the space of possible actions grows commensurately, there will be an overall decrease in autonomy.
Human Autonomy
So is human autonomy currently increasing or decreasing? Too hard to say. But taking the long view, the pattern that I see is that within any given level the space of alternatives for agents starts out small and rises over time in an S-curve. At the same time, interdependence also goes up along an S-curve. And so depending on the relative amplitudes of these two S-curves, and also on how their phase shifts align, autonomy can either be increasing, decreasing or flat. In all the levels heretofore it seems as though there has been a long period of increase in autonomy with an eventual peak and a subsequent decline followed by a leveling off and relative steady state. Exhibit A is single-celled organisms and what happens as they transition to a multicellular collective and eventually become part of a multicellular organism. But it’s also true of every other level that I can think of, including cooperative aggregations of humans. If you have a counterexample, let’s explore that. What’s in store for humans in the superfoo scenario that I see? Again, just because the pattern seems consistent over the course of 13+ Billion years as each new level has unfolded, this doesn’t mean we aren’t entering some new phase of history where the old pattern is broken. But I’m using the same inductive logic as those who argue for the inevitability of the technological singularity.
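The two-S-curve story is easy to make concrete. The sketch below is purely my own toy rendering of the verbal argument: alternatives and interdependence each follow a logistic curve, with interdependence rising later but ultimately higher, and “autonomy” is taken as their difference. The output shows the rise-peak-decline-plateau shape described above.

```python
import math

def logistic(t: float, amplitude: float, midpoint: float,
             rate: float = 1.0) -> float:
    return amplitude / (1 + math.exp(-rate * (t - midpoint)))

# Alternatives rise early; interdependence rises later but higher.
# Taking autonomy = alternatives - interdependence yields a rise,
# a peak, a decline, and then a lower steady state.
for t in range(0, 21, 2):
    alternatives = logistic(t, amplitude=10, midpoint=6)
    interdependence = logistic(t, amplitude=8, midpoint=12)
    print(f"t={t:2d}  autonomy ~ {alternatives - interdependence:5.2f}")
```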
Superfoo or Superfoos?
Now, onto singular superfoo or multiple superfoos. This is also a red herring. As agreed already in the network discussion, there are and will be superfoos, plural. But no matter how many of them and how many levels of hierarchy you presume, there will always be a network that represents the single highest level on Earth which by construction is fully connected, no islands. And it is the underlying system (singular) modeled by that network that I am calling the superfoo. It’s not a matter of prediction that gives me this confidence, it’s a matter of definition :-)
Superfoo
What can we conclude about the superfoo with any certainty? Nothing of course. But using the inductive logic that singularity arguments must rely on, here are some things that I personally feel are justified by the evidence:
- Superfoo’s agency will continue to increase.†
- Human autonomy at some point will decrease from its peak.‡
- Superfoo will increasingly exhibit self-* properties.
One point about this last bullet. Not every agent on Earth exhibits all self-* (pronounced “self-star”) properties to a high degree. Atoms exhibit self-organization, self-containment and self-stabilization, but not much else. Biological entities exhibit most of the properties in this diagram to some degree, and some properties more than others. Humans and multi-human complexes notably exhibit properties in the “Sense of Self” category to degrees that other agents don’t. The pattern over time in the universe as new levels of complexity emerge seems to be that new self-* properties emerge as well. Thus, it is not unreasonable to suggest that the levels above humans will eventually do two things: (1) exhibit the properties on the diagram to a greater and greater degree as time goes on, and (2) exhibit new self-* properties that have never been seen before and which we humans may or may not be able to predict or even grok.
I will conclude by noting that on the list of known self-* properties that are possible for superfoo, are self-awareness and self-consciousness. I will pause here to quickly throw up the flag to signal that we are not to anthropomorphize those terms, but just consider them as detached and scientific descriptions of a system’s behavior. To put a fine point on it, consider for a moment that corporations as actors in the world, do seem to exhibit a form of self-awareness. It may not be identical to the sort of self-awareness that you feel that you have, but it meets a clinical definition. To wit, all activity related to corporate branding would be hard to explain without referring at least implicitly to self-awareness.
Thus, and finally, it is with all this in mind that I suggest we admit the possibility (nay, likelihood) of superfoo’s self-awareness and self-consciousness, if not now, at some point in the future.
† Assuming we humans survive the various existential threats that are upon us including climatic catastrophe, weapons-related catastrophe, socio-economic catastrophe, technological catastrophe (bio-tech, nano-tech), etc. Have we reached the peak in human autonomy yet? Anyone’s guess. BTW, the peaking-of-autonomy argument goes for any “transhuman agents” that emerge as well (i.e. any technologically-based AIs, modified humans, and hybrids thereof). It also goes for agents at higher levels than humans or transhumans. Currently existing examples include corporations, governments, cultures, religions, military-industrial complexes, foo-bar complexes, etc.
‡ Just to reiterate why this happens: increasing interdependence of agents at the lower level and increasing agency at the higher level. Astute readers will understand that these two dynamics are actually two sides of the same coin.
Vodpod videos no longer available.
]]>
He makes so much sense! Kevin?
Via Volokh Conspiracy via Megan McArdle via ClusterStock.
]]>I liked Rafe’s responses to my questions. It revealed the heart of the matter. From the perspective of an interdisciplinarian such as myself, I believe the concept of a “superorganism” is purely a matter of terminology.
There are three terms of interest to me on this topic: “superorganism”, “awareness”, and “autonomy”. I think Rafe, and disciples of emergence in general, use them in ways that evoke misunderstanding from lay people. As he notes, they are somewhat anthropomorphic. By casting the situation in a first person perspective, they thus provoke a visceral reaction.
The term superorganism comes from biology, meaning a species with an essentially hive-like social structure (the precise term is “eusocial“). Emergent behaviorists have overloaded the term to mean the next level of agency above organisms. Obviously, this is accurate in terms of the etymology, but I feel the inevitable linkage to a hive-like social structure is counterproductive.
I contend that, conceptually, “superorganism” = “ecology” + “economy” + “society”. I don’t think any of these constituent terms evoke any negative connotations. But saying we’re all part of an “organism” naturally causes people to feel that you’re relegating them to a cog in the machine, which is not actually what you’re trying to convey. I think the term “supercommunity” might be better.
In any case, I feel that using an inherently singular term like “superorganism” has mis-directed Rafe’s thinking on whether there will be one or more instances of this higher level agency. He appeals to network theory, which thankfully is something I know more than a little bit about. If you’ve ever done any network modelling, you find that you can always turn the world into one big network. Everything is already interconnected.
If you draw a multi-level network diagram of a prehistoric tribe, it’s easy to say that there’s just one big network. It’s actually the same for all particles in the universe under the most advanced current conceptions of physics. There’s nothing new about the amount of interconnectedness in the world today, it just occurs on a higher order (which is the point of emergence, of course).
That doesn’t mean that there won’t be distinct subnetworks that one could call “individuals” at this new level of agency. If a fundamental precept of the emergent model is that you can’t fully comprehend the higher level of agency, how can you assert with any confidence whether or not there will be multiple individuals?
As Rafe and I have discussed in the past, it’s all a matter of how you want to construct your model. Reality simply is what it is. All I’m saying is that, down here on this level of agency, our models of the higher level will probably work better if we assume there will be distinguishable subnetworks–it allows us to more easily account for higher level competition and cooperation, for instance.
Now for “awareness” and “autonomy”. Unlike “superorganism”, which is at least etymologically accurate, these terms are just plain bad. As Rafe agrees, the superfoo or superfoos won’t have anything like our awareness. So let’s not use that term. When combined with “autonomy”, it sounds like the superfoo or superfoos will be stealing or subsuming our awareness. This is where I think the technological singularity is a necessary condition. For there to be a higher level agent with something similar to our awareness, you need the singularity.
Which brings us to autonomy, the worst term of the three. As used in normal speech, emergence won’t decrease our autonomy. Our autonomy, in this sense, has in fact been increasing over time as the next level of organization emerges. I’m talking about autonomy in terms of executive function–deciding what it is you’re going to do. When you’re out on the plains living hand to mouth, your possible alternatives are very constrained: hunt or die.
Emergence has resulted in an ever increasing number of alternatives on average. What has happened is that our interdependence has increased. We rely on the rest of society for the support necessary to have a lot of alternatives from which to choose. But don’t confuse dependence with a lack of alternatives. In this sense, the emergence of a superfoo or superfoos is good. Our lives will be better.
]]>No, seriously. Check out the various meetings they have upcoming and the comments sections that go with each. Some topics like Health Care have lots of comments. Others like the Humanitarian, Refugee, and Asylum Policy meeting currently have no comments, which means you could have quite a bit of influence by being the only one to spout your opinion…
So, what do you think? Will Obama policy be shaped by this promising open forum with unprecedented input from the average citizen, or will this end up as just good PR?
p.s. is that Stephen Colbert looking askance at the bald dude? :-)
]]>At long last, AI researchers are truly learning from human cognition (oh the irony!). Introducing Leo, the robot that learns to model and reason about the world like human babies do, via embodied experience and social interactions:
“It’s really through the body, and the dynamic coupling of neural systems for perception, action and introspection, that cognition emerges,” says developmental psychologist Linda Smith of Indiana University in Bloomington.
Smith goes even further in challenging the conventional wisdom on human intelligence:
“That’s all there is to cognition,” Smith somewhat defiantly told an audience at the cognitive science meeting. Symbolic representations of knowledge in the brain, cherished by many cognitive scientists, simply don’t exist, in her view.
Her view is supported not only by experimental results in infants, but also by vast amounts of cognitive science literature on the embodied nature of cognition. Lakoff and Johnson’s tour de force, Philosophy in the Flesh, summarizes these results and presents a theory of cognition based on the embodied mind. They contend that the primary mechanism by which conscious reasoning is done is via metaphor, primarily metaphor that is based on the five senses (e.g. “I see what you mean”, or “That was a bittersweet experience”).
What’s novel about Smith et al’s work is that they bring in the social dimension to cognition and how powerful the results appear to be. One way to extend their approach would be to endow Leo with the six social influence primitives catalogued by Cialdini: Reciprocation, Commitment & Consistency, Social Proof, Authority, Liking, and Scarcity. Currently Leo’s main cognitive trick seems to revolve around Social Proof.
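To be concrete about what a Social Proof primitive might look like, here is a loose sketch (nothing below comes from the actual Leo system; the update rule, weights, and scenario are all my own invention): the agent’s probability of performing an action drifts toward the fraction of observed peers performing it.

```python
import random

def social_proof_update(own_p: float, peer_choices: list[bool],
                        weight: float = 0.3) -> float:
    """Drift the agent's action probability toward the observed peer rate."""
    peer_rate = sum(peer_choices) / len(peer_choices)
    return (1 - weight) * own_p + weight * peer_rate

p = 0.2  # initially unlikely to perform the gesture
for _ in range(5):
    peers = [random.random() < 0.9 for _ in range(10)]  # most peers do it
    p = social_proof_update(p, peers)
print(f"after watching the group, P(act) ~ {p:.2f}")  # drifts toward ~0.9
```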
]]>An underlying theme of this thread is how reliance on reductionism causes us to miss the key invisible etiologies that are necessary to make progress on understanding, treating, detecting and preventing cancer.
In the first three parts of this series, I pointed out how the invisible etiology of somatic evolution has great explanatory and predictive power for oncology. A new paper by some researchers on the vanguard of complex systems thinking shows how adding a complementary ecological model leads us to the promising approach of ecological therapy.
A Nature Reviews article published a couple of years ago summarizes the case for cancer as an evolutionary and ecological process.
hat tip: David Basanta
]]>This month’s Wired Magazine Jargon Watch features a telling harbinger:
Sound Blast n. A supersize sound bite, blasted over the internet by a tech-savvy politician. Barack Obama’s campaign speeches, uploaded onto YouTube and viewed by millions, have defined the form. The average sound bite is 10.3 seconds; a typical sound blast is 10 minutes or more.
Additionally, the popularity and virality of TED and Pop!Tech videos should give us hope that it is possible to convey nuanced ideas and deep new insights to a large audience.
The one thing that’s not quite there yet is the closing of the informational feedback loop such that we have real, organic, meaningful and creative conversation in the public sphere. But I believe that is coming. The ability to comment (textually and in video form) certainly suggests the potential. What’s lacking is a way for true crowd wisdom and substantive individual voices to percolate up from the cacophonous babel.
When that happens, the superorganism will have achieved a critical developmental milestone akin to when a child acquires language and begins to exhibit conscious thought.
]]>Rafe makes an analogy to cells within a multicellular organism. How does this support the assertion that there will only be one superorganism and that we will need to subjugate our needs to its own? Obviously, there are many multicellular organisms. Certainly, there are many single-celled organisms that exist outside of multicellular control today. So where is the evidence that there will be only one and that people won’t be able to opt out in a meaningful sense?
There will be only one because of the amount of interconnectedness and interdependency of the constituent agents. At no time in the history of Earth have the actions of one agent had such an immediate and profound impact on others, both in potential terms and in actual terms.
In network theory you can identify subnets within a larger network which are islands: they connect the nodes within the subnet to each other, but are otherwise unconnected to the larger whole. By adding a link from one of the nodes in an island to a node in another island, you end up with one large island instead of two smaller ones. The Earth-system (what we are calling the superorganism) has no islands anymore. It’s all one system, whether we like it or not, and whether we intend it or not.
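Rafe’s island point is standard connected-components reasoning, and a few lines of code make it tangible. This is a minimal union-find sketch with invented node names, not a model of anything real: two two-node islands become one the moment a single cross-link is added.

```python
# Minimal union-find to show how one added link merges two "islands".
parent: dict[str, str] = {}

def find(x: str) -> str:
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def link(a: str, b: str) -> None:
    parent[find(a)] = find(b)

def islands(nodes: list[str]) -> int:
    return len({find(n) for n in nodes})

nodes = ["tribeA1", "tribeA2", "tribeB1", "tribeB2"]
link("tribeA1", "tribeA2")
link("tribeB1", "tribeB2")
print(islands(nodes))        # 2 separate islands
link("tribeA1", "tribeB1")   # a single new connection...
print(islands(nodes))        # ...and there is only 1
```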
Opting out is not really possible anymore due to interconnectedness and interdependency. It is a myth to think we humans are autonomous agents currently. And our level of autonomy is going down. This is not necessarily a bad thing, but it does challenge our myths and conceptions of who we are and who we can be in the future.
Now, I expect the answer to include the observation that this analogy is inadequately expressive. Exactly! So how can you predict anything? The major difference between humans and cells in this context is that humans possess their own executive function. They are capable of formulating and pursuing independent long range goals. They are capable of independently applying Bayes’ Theorem to predict changes in their environment.
I think this is a red herring to the points you make below, but I will suggest that there is nothing special about human executive function in the pantheon of mechanisms of agency. See this post, particularly the part about Prediction & Representation. As a somewhat relevant aside, I’ll point out that the higher levels can utilize the mechanisms of the lower levels, but not vice versa. This subsumption is relevant to the comprehensibility claim.
I think this a point that Rafe needs to address further to back up his assertion that the higher level will be incomprehensible to the lower.
Your brain might have the capacity to fully comprehend an individual neuron inside it, but there is a theoretical limit to what your brain can comprehend about itself due to recursion (cf. the halting problem, Gödel incompleteness, the liar’s paradox, et al.). To be clear, by “fully comprehend” I mean contain a representational model that has all relevant complexity to produce an accurate description and prediction of the system being “comprehended”.
Now when you add on top of that the complexity of an entire system of brains and “other stuff” (the complexity of which is staggering by itself), it is hopeless to think your brain could fully comprehend that entire system (of which your somewhat less incomprehensible brain is a subsystem). You may argue that an enhanced human intelligence of the form discussed by Kurzweil is not nearly as limited as your and my brain. But the argument still holds: no system can fully comprehend itself, let alone a supersystem of which it is a subsystem.
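Since I leaned on the halting problem above, here is a minimal Python sketch of the diagonal argument behind it; the `halts` predictor is hypothetical, and the point is precisely that no total version of it can exist:

```python
def make_counterexample(halts):
    """Given any claimed predictor halts(f) -> bool (True iff calling f()
    would eventually finish), build a function it must get wrong."""
    def g():
        if halts(g):      # if the predictor says "g halts"...
            while True:   # ...then g loops forever
                pass
        # ...otherwise g halts immediately
    return g

naive_halts = lambda f: True         # hypothetical predictor: "everything halts"
g = make_counterexample(naive_halts)
print(naive_halts(g))                # True, yet calling g() would never return
```

The same diagonal recursion is what blocks a system from containing a complete predictive model of itself.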
I contend that the superorganism has been gradually emerging for hundreds of years and that we have been gradually improving our understanding of it. My strawman superorganism is the economy, which invisibly coordinates the behavior of all participating actors. I’ll be the first to admit that our understanding of the macroeconomy leaves something to be desired, but we do understand a fair bit. Oh, and for those Greens out there who will ask, “But what about the global ecosystem?” I’m including “resource economics” in the definition of economics.
I agree about the superorganism gradually emerging for hundreds (actually billions) of years, and that we humans have been — individually and collectively — gradually improving our understanding. The economy is a good subset to focus on because it illustrates the point about limits. Let’s go simpler and just talk about “the market”.
Ever since Adam Smith reified the concept of the invisible hand, the market as a system began to reflect on itself. These days, with pervasive 24-hour financial news and speculation, diverse sorts of market participants, deep analysis tools, etc., our understanding of how markets work in general has increased. However, the feedback of that understanding into the micro decisions of the lower-level agents makes the emerging macro-level behavior more and more complex/chaotic. Now generalize to the global economy as a whole and you get the picture. Just look at how policy makers and Nobel laureates alike are floundering at fixing the global financial crisis: they don’t even agree on the fundamentals, let alone on what actions will have what effects.
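Here is a toy sketch of that feedback loop (emphatically not a market model, just the reflexive mechanism): every agent extrapolates the same public price signal, and acting on the shared forecast moves the price itself. The gain values are arbitrary.

```python
def simulate(gain, steps=60):
    """Reflexive toy market: agents extrapolate the last price move, and
    their collective trading pushes the price by `gain` times that trend."""
    prices = [100.0, 101.0]
    for _ in range(steps):
        trend = prices[-1] - prices[-2]           # the shared public signal
        prices.append(prices[-1] + gain * trend)  # acting on it moves the price
    return prices

for g in (0.5, 1.0, 1.1):  # weak, neutral, strong feedback
    print(g, round(simulate(g)[-1], 2))
# gain 0.5 settles down, 1.0 drifts steadily, 1.1 explodes: the more the
# agents' shared model feeds back into the system, the less stable it gets.
```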
In your own favorite complex system, the climate, it’s the exact same situation (as you have argued eloquently) despite the illusion of understanding and consensus. And it should not be lost on us that the climate and economy are increasingly one and the same system, part of the larger superorganism.
So, on the one hand, with these complex systems that we are a part of, we do increase our understanding day by day and year after year. But on the other hand the gap between our understanding and what there is to understand is widening.
Local economies are becoming more and more linked, but it’s hard for me to see how this leads to an event horizon in and of itself.
The aforementioned widening gap is the event horizon.
It’s also hard for me to see how “awareness” in the sense that we have it will exist.
Exactly. It won’t be awareness as you know it. You are, according to my argument, not capable of the sort of “awareness” that the superorganism will have. Awareness is a misleading (and anthropomorphic) word though.
I think that you must add something along the lines of a technological singularity to end up with these two properties.
Clearly, the technological aspect of the superorganism is necessary, not only for “superawareness” but for the whole shebang. It’s not possible to separate technology from the equation. Without it, the current (and future) economy, climate, culture, organization, etc., doesn’t exist.
So what is the concept of emergence adding here? In terms of understanding how the superorganism functions, what does it add beyond economics? What additional predictions does it make and why? Moreover, how can one claim that exponential technological development is not a necessary condition for the emergence of a higher level awareness? Is there some demonstration of the preconditions for emergence that excludes it?
I think you misunderstood my claim. I am not arguing against the likelihood of technological singularity. I am saying that its significance is anthropocentric. The larger event is the emergence of a new level of organization of matter, energy and information that goes beyond “simply” human immortality, human merger with technology, and an event horizon. The emergence of this new level (the superorganism) is an aspect of The Singularity that I haven’t heard discussed before, and I’m pointing out the conspicuousness of its absence.
As far as what emergence adds, it’s (a) the notion of superorganism at all, and (b) the consequences for autonomy of individual agents which comprise the superorganism, including humans, “pure” technology / AI, and mergers thereof.
]]>hat tip: Daniel Horowitz
]]>
This is from a recent Seed Magazine article. Click on the image to enlarge it.
]]>Here is the same system at different resolutions (lowest to highest):
Yes, it’s the structure of the universe once you add in various different kinds of information, including cosmic background radiation, dark matter, etc. The last image features individual galaxies. To see how these visualizations were constructed, watch George Smoot’s talk.
What strikes me about these images is how organic the universe seems. Indeed, if you watch the talk, you will get a glimpse of how the structure of the universe emerges via principles that are similar (if not identical) to how organic life forms emerge.
]]>
Without looking up the term Mpc/h, do you know what the above photo is? I’ll give you some hints.
It’s not a neural network like this:
And it’s not a mushroom like this:
Nor is it the internet which looks like this:
And it’s not even the relationships in the world of music which looks like this:
Take your guesses, the answer will be revealed in a day or so.
]]>Like does this mean you can cure heart disease?
She’s hesitant. Nobody wants to say they can defeat the industrialized world’s number one killer. Nobody wants to make promises about life, or quantify salvation. But she fervently believes she’s got a shot.
This is what the 2008 Genius edition of Esquire Magazine had to say about Hina Chaudhry. Her approach is to switch back on the mechanism that causes cells to divide in the heart, which doesn’t normally happen after birth in any mammal. This is not a stem-cell approach, despite what it might sound like.
Looking on the web there appears to be very little written about this work, so I’m wondering how Esquire found her or chose her work to highlight. I’d like to learn more if anyone has information they’d like to share.
]]>40 percent of terrorist groups are defeated by police and intelligence operations. 43 percent end because they give up violence and join the political process. Only 7 percent end as a result of military force.
This according to the research of Seth Jones, as reported in the 2008 Genius edition of Esquire Magazine.
]]>Rafe’s post covers a rather speculative topic. I think it’s worth stress testing so I am going to play skeptic without revealing my actual position. But you can be assured that it’s at least somewhat nuanced.
Rafe knows more about this topic than I. In general, I tend to trust his judgment in areas where he is expert. But I want to make sure he’s devoted enough due diligence before reaching his conclusions. Thus the stress test.
First, I’m going to try to demolish his assertions within the conceptual confines of his own analogy. Then I’ll undermine the analogy. Finally, I’ll attack emergence itself as a concept with no predictive skill in this case.
Rafe makes an analogy to cells within a multicellular organism. How does this support the assertion that there will only be one superorganism and that we will need to subjugate our needs to its own? Obviously, there are many multicellular organisms. Certainly, there are many single-celled organisms that exist outside of multicellular control today. So where is the evidence that there will be only one and that people won’t be able to opt out in a meaningful sense?
Now, I expect the answer to include the observation that this analogy is inadequately expressive. Exactly! So how can you predict anything? The major difference between humans and cells in this context is that humans possess their own executive function. They are capable of formulating and pursuing independent long-range goals. They are capable of independently applying Bayes’ theorem to predict changes in their environment. I think this is a point that Rafe needs to address further to back up his assertion that the higher level will be incomprehensible to the lower.
I contend that the superorganism has been gradually emerging for hundreds of years and that we have been gradually improving our understanding of it. My strawman superorganism is the economy, which invisibly coordinates the behavior of all participating actors. I’ll be the first to admit that our understanding of the macroeconomy leaves something to be desired, but we do understand a fair bit. Oh, and for those Greens out there who will ask, “But what about the global ecosystem?” I’m including “resource economics” in the definition of economics.
Local economies are becoming more and more linked, but it’s hard for me to see how this leads to an event horizon in and of itself. It’s also hard for me to see how “awareness” in the sense that we have it will exist. I think that you must add something along the lines of a technological singularity to end up with these two properties.
So what is the concept of emergence adding here? In terms of understanding how the superorganism functions, what does it add beyond economics? What additional predictions does it make and why? Moreover, how can one claim that exponential technological development is not a necessary condition for the emergence of a higher level awareness? Is there some demonstration of the preconditions for emergence that excludes it?
These are all questions to which I would like better answers.
]]>It’s possible that we all have cells that are cancerous and that grow a bit before being dumped by the body. ‘Hell bent for leather’ early detection research will lead to finding some of them. What will be the consequence? Prophylactic removal of organs in the masses? It’s really scary.
As we begin to gather empirical evidence that contradicts the standard models of what cancer is and what treatments work and don’t work, it’s important that we create and embrace models that fit the data better.
Please help with this by giving your thoughts here.
Study Suggests Some Cancers May Go Away
Cancer researchers have known for years that it was possible in rare cases for some cancers to go away on their own. There were occasional instances of melanomas and kidney cancers that just vanished. And neuroblastoma, a very rare childhood tumor, can go away without treatment.
But these were mostly seen as oddities — an unusual pediatric cancer that might not bear on common cancers of adults, a smattering of case reports of spontaneous cures. And since almost every cancer that is detected is treated, it seemed impossible even to ask what would happen if cancers were left alone.
Now, though, researchers say they have found a situation in Norway that has let them ask that question about breast cancer. And their new study, to be published Tuesday in The Archives of Internal Medicine, suggests that even invasive cancers may sometimes go away without treatment and in larger numbers than anyone ever believed.
At the moment, the finding has no practical applications because no one knows whether a detected cancer will disappear or continue to spread or kill.
And some experts remain unconvinced.
“Their simplification of a complicated issue is both overreaching and alarming,” said Robert A. Smith, director of breast cancer screening at the American Cancer Society.
But others, including Robert M. Kaplan, the chairman of the department of health services at the School of Public Health at the University of California, Los Angeles, are persuaded by the analysis. The implications are potentially enormous, Dr. Kaplan said.
If the results are replicated, he said, it could eventually be possible for some women to opt for so-called watchful waiting, monitoring a tumor in their breast to see whether it grows. “People have never thought that way about breast cancer,” he added.
Dr. Kaplan and his colleague, Dr. Franz Porzsolt, an oncologist at the University of Ulm, said in an editorial that accompanied the study, “If the spontaneous remission hypothesis is credible, it should cause a major re-evaluation in the approach to breast cancer research and treatment.”
The study was conducted by Dr. H. Gilbert Welch, a researcher at the VA Outcomes Group in White River Junction, Vt., and Dartmouth Medical School; Dr. Per-Henrik Zahl of the Norwegian Institute of Public Health; and Dr. Jan Maehlen of Ulleval University Hospital in Oslo. It compared two groups of women ages 50 to 64 in two consecutive six-year periods.
One group of 109,784 women was followed from 1992 to 1997. Mammography screening in Norway was initiated in 1996. In 1996 and 1997, all were offered mammograms, and nearly every woman accepted.
The second group of 119,472 women was followed from 1996 to 2001. All were offered regular mammograms, and nearly all accepted.
It might be expected that the two groups would have roughly the same number of breast cancers, either detected at the end or found along the way. Instead, the researchers report, the women who had regular routine screenings had 22 percent more cancers. For every 100,000 women who were screened regularly, 1,909 were diagnosed with invasive breast cancer over six years, compared with 1,564 women who did not have regular screening.
There are other explanations, but researchers say that they are less likely than the conclusion that the tumors disappeared.
The most likely explanation, Dr. Welch said, is that “there are some women who had cancer at one point and who later don’t have that cancer.”
The finding does not mean that mammograms caused breast cancer. Nor does it bear on whether women should continue to have mammograms, since so little is known about the progress of most cancers.
Mammograms save lives, Dr. Smith said. Even though they can have a downside — most notably the risk that a woman might have a biopsy to check on an abnormality that turns out not to be cancer — “the balance of benefits and harms is still considerably in favor of screening for breast cancer,” he said.
But Dr. Suzanne W. Fletcher, an emerita professor of ambulatory care and prevention at Harvard Medical School, said that it was also important for women and doctors to understand the entire picture of cancer screening. The new finding, she said, was “part of the picture.”
“The issue is the unintended consequences that can come with our screening,” Dr. Fletcher said, meaning biopsies for lumps that were not cancers or, it now appears, sometimes treating a cancer that might not have needed treatment. “In general we tend to underplay them.”
Dr. Welch said the cancers in question had broken through the milk ducts, where most breast cancers begin, and invaded the breast. Such cancers are not microscopic, often are palpable, and are bigger and look more ominous than those confined to milk ducts, so-called ductal carcinoma in situ, or DCIS, Dr. Welch said. Doctors surgically remove invasive cancers and, depending on the circumstances, may also treat women with radiation, chemotherapy or both.
The study’s design was not perfect, but researchers say the ideal study is not feasible. It would entail screening women, randomly assigning them to have their screen-detected cancers treated or not, and following them to see how many untreated cancers went away on their own.
But, they said, they were astonished by the results.
“I think everybody is surprised by this finding,” Dr. Kaplan said. He and Dr. Porzsolt spent a weekend reading and re-reading the paper.
“Our initial reaction was, ‘This is pretty weird,’ ” Dr. Kaplan said. “But the more we looked at it, the more we were persuaded.”
Dr. Barnett Kramer, director of the Office of Disease Prevention at the National Institutes of Health, had a similar reaction. “People who are familiar with the broad range of behaviors of a variety of cancers know spontaneous regression is possible,” he said. “But what is shocking is that it can occur so frequently.”
Although the researchers cannot completely rule out other explanations, Dr. Kramer said, “they do a good job of showing they are not highly likely.”
A leading alternative explanation for the results is that the women having regular scans used hormone therapy for menopause and the other women did not. But the researchers calculated that hormone use could account for no more than 3 percent of the effect.
Maybe mammography was more sensitive in the second six-year period, able to pick up more tumors. But, the authors report, mammography’s sensitivity did not appear to have changed.
Or perhaps the screened women had a higher cancer risk to begin with. But, the investigators say, the groups were remarkably similar in their risk factors.
Dr. Smith, however, said the study was flawed and the interpretation incorrect. Among other things, he said, one round of screening in the first group of women would never find all the cancers that regular screening had found in the second group. The reason, he said, is that mammography is not perfect, and cancers that are missed on one round of screening will be detected on another.
But Dr. Welch said that he and his colleagues considered that possibility, too. And, he said, their analysis found subsequent mammograms could not make up the difference.
Dr. Kaplan is already thinking of how to replicate the result. One possibility, he said, is to do the same sort of study in Mexico, where mammography screening is now being introduced.
Donald A. Berry, chairman of the department of biostatistics at M. D. Anderson Cancer Center in Houston, said the study increased his worries about screenings that find cancers earlier and earlier. Unless there is some understanding of the natural history of the cancers that are found — which are dangerous and which are not — the result can easily be more treatment of cancers that would not cause harm if left untreated, he said.
“There may be some benefit to very early detection, but the costs will be huge — and I don’t mean monetary costs,” Dr. Berry said. “It’s possible that we all have cells that are cancerous and that grow a bit before being dumped by the body. ‘Hell bent for leather’ early detection research will lead to finding some of them. What will be the consequence? Prophylactic removal of organs in the masses? It’s really scary.”
But Dr. Laura Esserman, professor of surgery and radiology at the University of California, San Francisco, sees a real opportunity to figure out why some cancers go away.
“I am a breast cancer surgeon; I run a breast cancer program,” she said. “I treat women every day, and I promise you it’s a problem. Every time you tell a person they have cancer, their whole life runs before their eyes.
“What if I could say, ‘It’s not a real cancer, it will go away, don’t worry about it,’ ” she added. “That’s such a different message. Imagine how you would feel.”
]]>
Anyone interested in how technology and policy can work together to form us a more perfect union should read Rebooting America. If your budget is tight right now, you can download the PDF version for free.
While you are at it, check out the Personal Democracy Forum which is the larger effort that Rebooting America is part of.
]]>“Never doubt that a small group of thoughtful, committed people can change the world. Indeed, it is the only thing that ever has.”
Margaret Mead
The Singularity represents an “event horizon” in the predictability of human technological development past which present models of the future may cease to give reliable answers, following the creation of strong AI or the enhancement of human intelligence. (Definition taken from The Singularity Summit website)
It may be hard to imagine anything more significant than humans and technology merging and the end of death as we know it. But that’s just because we humans are myopic and anthropomorphic. The definition above focuses on the individual agents at the current level of organization, namely humans and also their technological creations.
Much more significantly though, there is a new agent emerging. A new level of organization of matter, energy and information above the level of humans and technology, but also comprised of humans and technology. In fact, this new agent has been emerging for eons, and has been called many things, including Gaia and superorganism and the technium. What is noteworthy about this new entity is twofold: (1) there is (and will be) only one of these on Earth,* and (2) the entity itself is becoming gradually more aware of itself, which is to say its agency is becoming stronger, its interest in (and capability for) self-preservation is increasing.
The “singular” in Singularity should refer to the single Earth system. It is composed of the biosphere, of humans, technology (including AI) and hybrid systems thereof: multi-human organizations (like corporations, governments, cultures), crowdsourcing, markets, the scientific pursuit itself, and other socio-technical systems. As these coalesce, this singular global agency gets stronger and clearer.
The memes come in many forms: global awakening; cooperate or perish; collective intelligence; interconnectedness; and so on. We see birth pains and fragility of the superorganism all around us. We see the urgency. And we also see the struggle of the lower level agents to keep their autonomy despite the need of the superorganism to subjugate** the needs of its constituent agents to its own.
The reason that The Singularity involves an “event horizon” is not so much that we aren’t able to “see” into the future of humanity or technology due to exponential forces, though that may certainly be a limiting factor. The true event horizon is the one that makes the higher level mostly incomprehensible to the lower. The individual cells in your body understand your body only in the vaguest sense.
Will we be able to communicate with the superorganism? Probably not. Each level has its own language, irreducible to the languages of levels above and below. Will the superorganism care for us and keep us alive and healthy, or are we talking about some Terminator scenario? Here we can be certain, for our fates are linked: without us*** there is no superorganism. Yet, we will have to change ideas of who we are and what we can and cannot do. We are already seeing how certain ideals taken to an extreme — inalienable individual rights, unbridled market competition, zero obligation to the group — are cancerous to the superorganism, which is to say, our own survival.
Interesting times, indeed.
* This is in contrast to all other types of agents heretofore on Earth, namely all of the individual organisms we call living, all of our technological artifacts, cultural agents, memes, etc. Each of these types of agents has existed in populations of thousands, millions, billions or more.
** See the “Upward Bolstering & Downward Constraint” section of this post.
*** The “us” here refers to the expanded definition of humanity that Kurzweil suggests, which includes our soon-to-be-merged technology.
The lament I had was a common one: purchase an international data roaming plan, come home to find thousands of dollars in data roaming charges anyway. My “mistake” was that while France and Spain are part of the Roam Zone, Turkey is not. My beef was that AT&T didn’t let me know this even though I told them where I was going and asked what I needed to do to avoid overage charges.
After several hours on the phone with customer service, getting escalated up the chain, ultimately I got no relief. So I decided to write up my grievance in an email as a final attempt to get them to come to their senses. Well, it worked. A day later I got a response from AT&T Online Customer Care Professional, Brook Green, with a case number, an explanation that they were seriously looking into my case, a note that I needn’t pay the current bill just yet, and an apology for the bad experience, along with a two-week window in which I could expect a resolution.
In just one week, I got a call from Michelle Gallaway (sp?) letting me know that they had decided my data roaming charges were indeed unfair since I wasn’t given a reasonable warning of what to expect, and that they would be crediting my account with the full $4,747.05 in international data roaming that they had charged me. Mike Mc(?) called twice to confirm the credit was going through (though it would take a few days given the large amount and the number of manager approvals required). I have to say, it feels good when justice is done, and I give AT&T credit for attempting to turn over a new leaf in the customer service department.
Just as a heads-up and reminder to people with iPhones, if you travel internationally here are some good web pages to know about:
- International data roaming plans
- Which countries are part of the Roam Zone
- Tips for iPhone users (since it sucks data without you knowing if you are not careful)
For those looking for sympathy on their own cell phone customer service nightmares, you are not alone. Act Two of this episode of This American Life says it all: On Hold, No One Can Hear You Scream.
]]>Pop Quiz: what are the four ways that vegetables and fruits act as a superstar health shield? Find out here.
Click here for more Dr. Ann videos.
]]>- Restore Habeas Corpus
- Stop Illegal Spying
- Ban Torture, Really
I’m curious to know though, what do you think the priorities should be for Obama’s presidency?
]]>
JAMA. 2008;300(13):1580-1581.
The Conflict Between Complex Systems and Reductionism
Henry H. Q. Heng, PhD
Author Affiliations: Center for Molecular Medicine and Genetics, Wayne State University School of Medicine, Detroit, Michigan.
Descartes’ reductionist principle has had a profound influence on medicine. Similar to repairing a clock in which each broken part is fixed in order, investigators have attempted to discover causal relationships among key components of an individual and to treat those components accordingly. For example, if most of the morbidity in patients with diabetes is caused by high blood glucose levels, then control of those levels should return the system to normal and the patient’s health problems should disappear. However, in one recent study this strategy of more intensive glucose control resulted in increased risk of death (1). Likewise, chemotherapy often initially reduces tumor size but also produces severe adverse effects leading to other complications, including the promotion of secondary tumors. Most important, little evidence exists that more aggressive chemotherapies prolong life for many patients (2-4). In fact, chemotherapies may have overall negative effects for some patients.
Most medical treatments make sense based on research of specific molecular pathways, so why do unexpected consequences occur after years of treatment? More simply, does the treatment that addresses a specific disease-related component harm the individual as a whole?
To address these questions, the conflict between reductionism and complex systems must be analyzed. With increasing technological capabilities, these systems can be examined in continuously smaller components, from organs to cells, cells to chromosomes, and from chromosomes to genes. Paradoxically, the success of science also leads to blind spots in thinking as scientists become increasingly reductionist and determinist. The expectation is that as the resolution of the analysis increases, so too will the quantity and quality of information. High-resolution studies focusing on the building blocks of a biological system provide specific targets on which molecular cures can be based.
While the DNA sequence of the human gene set is known, the functions of these genes are not understood in the context of a dynamic network and the resultant functional relationship to human diseases. Mutations in many genes are known to contribute to cancers in experimental systems, but the common mutations that actually cause cancer cannot yet be determined (5-6).
Many therapies such as antibiotics, pacemakers, blood transfusions, and organ transplantation have worked well using classic approaches. In these cases, interventions were successful in treating a specific part of a complex system without triggering system chaos in many patients. However, even for these relatively safe interventions, unpredictable risk factors still exist. For every intervention that works well there are many others that do not, most of which involve complicated pathways and multiple levels of interaction. Even apparent major successes of the past have developed problems, such as the emergence and potential spread of super pathogens resistant to available antibiotic arrays.
One common feature of a complex system is its emergent properties—the collective result of distinct and interactive properties generated by the interaction of individual components. When parts change, the behavior of a system can sometimes be predicted—but often cannot be if the system exists on the “edge of chaos.” For example, a disconnect exists between the status of the parts (such as tumor response) and the system’s behavior (such as overall survival of the patient). Furthermore, nonlinear responses of a complex system can undergo sudden massive and stochastic changes in response to what may seem minor perturbations. This may occur despite the same system displaying regular and predictable behavior under other conditions (7-8). For example, patients can be harmed by an uncommon adverse effect of a commonly used treatment when the system displays chaotic behavior under some circumstances. This stochastic effect is what causes surprise. Given that any medical intervention is a stress to the system and that multiple system levels can respond differently, researchers must consider the stochastic response of the entire human system to drug therapy rather than focusing solely on the targeted organ or cell or one particular molecular pathway or specific gene. The same approach is necessary for monitoring the clinical safety of a drug.
Other challenging questions await consideration. Once an entire system is altered by disease progression, how should the system be restored following replacement of a defective part? If a system is altered, should it be brought back to the previous status, or is there a new standard defining a new stable system? The development of many diseases can take years, during which time the system has adapted to function in the altered environment. These changes are not restricted to a few clinically monitored factors but can involve the whole system, which now has adapted a new homeostasis with new dynamic interactions. Restoring only a few factors without considering the entire system can often result in further stress to the system, which might trigger a decline into system chaos. For many disease conditions resulting from years of adaptation, gradual medical improvement rather than drastic intervention might be the best way to correct the problem. In cancer research, system behavior has been monitored during cancer progression, demonstrating that cancer evolution is driven by multiple cycles of transition between genome system stability and instability (9-10). Chemotherapy, by and large, induces a relatively stable system to enter into a chaotic phase. This drastic treatment might be more harmful at the individual level than had been expected. Clearly, understanding the entire system response in the context of any specific treatment is key.
Another layer of complication affects the design of clinical trials evaluating the risk and benefit of a given medical intervention. Traditionally, many diseases have been thought to be caused by common factors including environmental insults and common genetic loci. It is thus logical to validate medical benefits vs risks using large patient populations. However, increasing numbers of recent reports illustrate that some highly penetrant and individually rare genetic alterations contribute to many common diseases, including autism, schizophrenia, and hypertension (11-12). These findings suggest that many common diseases are not caused by common shared genetic alterations. This challenges the common disease–common variant hypothesis as well as the strategy of validating common benefits or risks using a large heterogeneous patient population.
In a heterogeneous population, patients may display a variety of genetic variations that respond differently to a given medical intervention. The same treatment could be of benefit to some patients yet harmful to others. Thus, validation of risk and benefit using a large heterogeneous population will likely produce conflicting data. Based on recent findings that most patients with cancer display drastically different patterns of genetic aberrations rather than the long-assumed common genetic alterations (5-6, 9) and that heterogeneous genetic alterations also contribute to other types of common diseases (11-12), it is logical to predict that patients with variable genetic alterations will display different clinical profiles and have different responses to the same treatment. Therefore, it is essential to reconsider the current strategies of validation, diagnosis, and treatment.
Analyzing the common links behind failures in the treatment of diseases is of great importance. Such analyses will promote the important realization that the key obstacle to future medicine is the conflict between the reality of complexity and a reductionist approach. Despite previous approaches to address the issue of complexity (7-8, 13), limited medical research has been conducted within the context of complex systems. Clearly, only such realization will lead to the correct strategies that integrate information, approaches, and concepts from both low and high levels of a system.
Critical analysis of established medical concepts is needed, as is reinterpretation of the clinical significance of failed therapies from the perspective of complexity. In particular, two key features of a biological system, multilevel complexity and heterogeneity, need to be seriously considered when developing new medical interventions (6). When considering multilevel systems, the higher organizational level often dominates, suggesting that benefits at the higher level should be a priority—thus the need to focus more on an individual’s phenotype rather than on the molecular level. In the case of somatic cell evolution of cancer, higher-level genome alterations play a more dominant role than lower-level gene mutations (6, 10). This information is useful when considering diagnostic and treatment strategies in cancer.
Multilevel interactions also provide an opportunity for evolution of cooperation between levels so that game theory can be applied to assess and achieve medical benefits. For example, in cancer treatment, alternative strategies need to be developed that not only focus on destroying the cancer cells but also achieve the most possible cooperative and beneficial relationship to patients.
The unpredictable nature of heterogeneity will force the consideration of the significance of clinical exceptions, because complex disease results in highly diverse responses that include many exceptions to the general rules. Furthermore, heterogeneity is not simply “noise” but a key component of evolution directly related to human disease conditions and must also be considered when designing interventions such as cancer therapies (6,9,14).
Clinical therapies must be individualized, balancing the parts of the system and the response of the patient as a whole. Clinical research involving pharmaceutical agents needs to focus more on the differential responses within diverse patient populations. This philosophy should be extended to the public to encourage healthy lifestyles rather than depending on the quick fix of drugs as panaceas.
Corresponding Author: Henry H. Q. Heng, PhD, Center for Molecular Medicine and Genetics, Wayne State University School of Medicine, 3226 Scott Hall, 540 E Canfield, Detroit, MI 48201
Additional Contributions: I thank Gloria Heppner, PhD, for continuous encouragement and Steve Bremer, MD, Lesley Lawrenson, MS, Joshua Stevens, PhD, Markku Kurkinen, PhD, Christine Ye, MD, and Barbara Spyropoulos, PhD, for discussion and help in editing the manuscript. None of these individuals, all of Wayne State University, received compensation for their contributions.
References:
1. Gerstein HC, Miller ME, Byington RP; et al. Effects of intensive glucose lowering in type 2 diabetes. N Engl J Med. 2008;358(24):2545-2559.
2. Mittra I. The disconnection between tumor response and survival. Nat Clin Pract Oncol. 2007;4(4):203.
3. Savage L. High-intensity chemotherapy does not improve survival in small cell lung cancer. J Natl Cancer Inst. 2008;100:519.
4. Bear HD. Earlier chemotherapy for breast cancer: perhaps too late but still useful. Ann Surg Oncol. 2003;10(4):334-335.
5. Wood LD, Parsons DW, Jones S; et al. The genomic landscapes of human breast and colorectal cancers. Science. 2007;318(5853):1108-1113.
6. Heng HH. Cancer genome sequencing. Bioessays. 2007;29(8):783-794.
7. Coffey DS. Self-organization, complexity and chaos: the new biology for medicine. Nat Med. 1998;4(8):882-885.
8. Mazzocchi F. Complexity in biology. EMBO Rep. 2008;9(1):10-14.
9. Heng HH, Stevens J, Liu G; et al. Stochastic cancer progression driven by non-clonal chromosome aberrations. J Cell Physiol. 2006;208(2):461-472.
10. Ye CJ, Liu G, Bremer SW; et al. The dynamics of cancer chromosome and genome. Cytogenet Genome Res. 2007;118(2-4):237-246.
11. Walsh T, McClellan JM, McCarthy SE; et al. Rare structural variants disrupt multiple genes in neurodevelopmental pathways in schizophrenia. Science. 2008;320(5875):539-543.
12. Szatmari P, Paterson AD, Zwaigenbaum L; et al, Autism Genome Project Consortium. Mapping autism risk loci using genetic linkage and chromosomal rearrangements [published correction appears in Nat Genet. 2007;39(10):1285]. Nat Genet. 2007;39(3):319-328.
13. Goldberger AL. Nonlinear Dynamics, Fractals, and Chaos Theory: Implications for Neuroautonomic Heart Rate Control in Health and Disease. PhysioNet Web site. https://www.physionet.org/tutorials/ndc/.
14. Heppner GH. Tumor heterogeneity. Cancer Res. 1984;44(6):2259-2265.
]]>Okay, Kev, here’s your chance at affecting climate policy, go crazy!
]]>My grandma has one of those electronic picture frames that sits in her living room, is connected to a proprietary service via a phone line, and can be updated with new pictures remotely by her family members. She gets incredible delight in discovering new photos and watching old ones go by as she drinks her tea in the morning or before bed at night. The key to this whole product for her (and for many others) is that it works completely without her having to lift a finger. Her family set it up and they take responsibility for updating the photos. If grandma had to intervene somehow, her lack of any technological familiarity would be a show-stopper.
Every year these picture frames get better. Currently there are ones that connect to the internet via wi-fi instead of a phone line, and link up to open photo-sharing sites like Flickr. Clearly, it won’t be long before other functionality is added, like web-browsing, email and instant messaging. But I don’t think these extensions will catch on, mainly because the ergonomics for typing are bad, and if you had to add a physical keyboard you might as well just go use your laptop.
The extension that will catch on is a well-integrated video chat system. Imagine Apple’s iChat system with the following front-end tacked on. The frame gets a switch at the bottom that toggles between Picture Mode and Video Chat Mode. When in VC Mode, a set of onscreen buttons pops up, as follows: “Chat with Bobby”, “Chat with Dr. Rosen”, etc. Each option is pre-programmed on a central server so there is no typing for grandma.
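Here is a purely hypothetical sketch of that front-end, just to show how little logic the frame itself would need (every name below is invented; no real frame exposes this API):

```python
# Contacts are configured server-side by the family; grandma never types.
CONTACTS = [
    {"label": "Chat with Bobby", "address": "bobby@example.com"},
    {"label": "Chat with Dr. Rosen", "address": "rosen@example.com"},
]

def on_mode_switch(mode):
    """Handle the physical toggle between Picture Mode and Video Chat Mode."""
    if mode == "picture":
        return ["show slideshow"]
    # Video Chat Mode: one big pre-programmed button per contact, nothing else.
    return [f"render button: {c['label']}" for c in CONTACTS]

print(on_mode_switch("video_chat"))
```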
I can imagine such a system becoming an important way for extended families and distantly located friends to stay in touch. It also may be useful for emergency services. For instance there could be a physical panic button on the frame that dials 911 and activates the camera and microphone (one-way) so that the police can monitor what’s going on and decide how to respond.
Hey, while we are dreaming, why not extend the system eventually to become a secure and private voting machine for local and national elections? In the wake of voting machine fraud in 2000 and 2004 this may seem a long way off. But the problem with secure voting systems has never been due to a lack of actual technological solutions.
At first we will all use our home picture frames to respond to unofficial polls, like exit polls or American Idol voting. Next time around, it’s used for pseudo-official polling to let our elected representatives know how we feel on certain topics. After that, it shouldn’t be long before we are comfortable dipping our toes in the personal democracy waters for real, first with local referendums (like city and state ballot propositions). And once the training wheels are off, it will be hard to stop the groundswell of support for the whole enchilada.
]]>
Here is my actual valve as viewed using a transesophageal echocardiogram (TEE):
Click here to see video of my valve (looking outside in).*
If you watch the video a few times, you will see how two of my three leaflets are fused together, and don’t close entirely. This allows blood to flow back into the chamber as the heart is trying to pump it out, a dynamic known as aortic regurgitation, or aortic insufficiency (AI). My AI isn’t severe enough to warrant a valve replacement now, and the hope is that through management** and vigilance it never will.*** If at some point I do need a replacement, it’s better to wait until just before I “really need it” (which is a judgment call) because, as common as valve replacements are, any heart procedure carries risk of death. Plus the technology and procedures get better and better over time.
For instance, currently they have to open you up to do a valve replacement, but there are clinical trials going on right now to do the replacement endoscopically through your veins(!) Ultimately, I think we’ll be able to grow new valves and hearts from our own cells, either in the body itself or externally, via regenerative medicine:
Lest you think this is a far away pipe-dream, take note of the regenerated finger in that video: bone, vasculature, nerves, flesh, nail — everything was regrown, more or less intact in a matter of weeks. How far off can internal body parts be?
What is actually more amazing to me is not that heart disease (still the #1 killer in the world) is rapidly becoming a solvable problem from a medical standpoint, but rather that we have technology now that could (a) save millions of actual lives, (b) get millions of people off of cholesterol drugs (and stop unwarranted worrying about cholesterol for some), and (c) save us billions of dollars a year as a society. What’s not surprising to me is the difficulty the medical-pharmacological-insurance system has in embracing this.
What I’m talking about is imaging technology, like the CT and echo scans that I received during my diagnosis process. Unrelated to my bicuspid valve, my cardiologist showed me a virtual fly-through of my coronary arteries. He went through all of them with a keen eye for signs of disease and plaque buildup, but I only watched a few examples. On a statistical average basis my blood cholesterol numbers have been considered normal to mildly high over the years, so I was concerned about whether I needed to try to get my numbers down. Anyone who has had high cholesterol knows that it’s almost impossible to affect with diet, a little less so with exercise unless you are a maniac, and relatively easy for most people via statin drugs.
I am leery of being dependent on any drug for the rest of my life. What my doctor told me was, “Forget about cholesterol. It’s not an issue for you. Blood cholesterol levels are a proxy that we use to diagnose potential for coronary artery disease. Having bad cholesterol numbers in and of itself is not a problem; it’s plaque buildup that’s the problem. You have virginal arteries, as your fly-through just showed us. No build-up, nothing to worry about, you can forget about cholesterol being an issue for you at this time. Now, let’s work on taking care of your valve…”
I was kind of blown away by this. Why is it that we are not using imaging technology as a standard tool in diagnosis when a patient has cause for concern (either because of high cholesterol or a family history)? Before putting us all on expensive cholesterol-lowering pills for the rest of our lives — a practice which has become so commonplace as to be considered prophylactic as opposed to treatment — why don’t we find out exactly which of us actually need these drugs to lead healthier lives?
While the reasons are too numerous, mind-numbing and standard to rehash here, it’s clear that this is a microcosm of what’s wrong with our health care system, and until common sense is brought back into the equation, we will continue to spend more money for fewer health benefits than could be achieved today for want of a small policy change.
Big thanks to my cardiologist, Dr. Ron Karlsberg for taking care of me and also for sharing his knowledge and making me an integral part of my own medical team. Apologies to Ron in advance if I’ve misquoted or misremembered exactly what he said, but hopefully I got the spirit and main points right.
* More video here: valve from inside; valve from top via CT; heart from side via CT; false color CT of heart to see blood flow.
** Currently I am on a daily dose of a drug which is a vasodilator that reduces vascular pressure away from my heart so that less blood regurgitates.
*** The long term danger with my condition is that the heart, through having to work extra hard to keep my blood pressure as it should be, could enlarge over time and lose its elasticity, eventually leading to heart failure. Short of the heart enlarging past a certain point, there’s apparently little or no further danger from my condition.
The above is a self-replicating dynamic structure from a class of systems called cellular automata (click here to run the simulation). Below is a self-replicating dynamic structure from a class of systems called “life”:

The following video explores a new type of self-replicating dynamic structure that will emerge in some form or another in the coming years:
[Video embed no longer available: the RepRap video discussed below.]
What is common to all of these examples is a property called “autocatalysis”.* More accurately, these examples are instances of cross-catalytic systems:

In reality there are only ever cross-catalytic systems (never autocatalytic) since all real systems require input from the outside (energy, information, material resources) and produce output (waste, information, resources that feed into other systems). For physicists, this is similar to observing that there are no such things as thermodynamically closed systems in nature, and for the mathematically-inclined we can observe that we never find perfect circles in the actual universe (just in our abstract models).
One interesting idea that was postulated in the RepRap video is that “anything that copies itself inevitably comes under Darwin’s law of natural selection.” Strictly speaking, this is not true. Darwinian natural selection requires two additional properties besides autocatalysis: (1) heritable variation, and (2) differential replication rates over the aforementioned variation. But we can cut Dr. Bowyer some slack because, implicit in his slide on the subject, he covers both additional prerequisites for RepRap to “go Darwinian”:
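The slide itself isn’t reproduced here, but to make those two prerequisites concrete, here is a toy Python sketch of replicators “going Darwinian”: a heritable trait is copied with occasional mutation, and the trait itself biases replication rates (all numbers are arbitrary):

```python
import random

def evolve(pop_size=200, steps=100):
    """Toy Darwinian loop: heritable variation + differential replication."""
    pop = [1.0] * pop_size  # everyone starts with the same "copying speed"
    for _ in range(steps):
        # Differential replication: faster copiers leave more offspring.
        parents = random.choices(pop, weights=pop, k=pop_size)
        # Heritable variation: each copy mutates slightly.
        pop = [max(0.01, p + random.gauss(0, 0.02)) for p in parents]
    return sum(pop) / pop_size

print(evolve())  # mean copying speed drifts upward: selection in action
```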
Which brings me back full circle (self-referentially autocatalytic?) to the first example above of the cellular automaton (CA). There are an infinite number of types of CAs that are distinguishable based on their rules for how one generation of cells transforms into the next. The name of this particular CA is “Life.”**
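Life’s entire rule set fits in a few lines. Here is a minimal Python sketch (a standard implementation, nothing novel), using the famous glider to show a dynamic structure that persists and moves:

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Life; `live` is a set of (x, y) cells."""
    neighbor_counts = Counter((x + dx, y + dy)
                              for x, y in live
                              for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                              if (dx, dy) != (0, 0))
    # Birth on exactly 3 live neighbors; survival on 2 or 3.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):  # a glider's period is 4 generations
    cells = life_step(cells)
print(cells == {(x + 1, y + 1) for x, y in glider})  # True: same shape, shifted
```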
In Life it has been proven mathematically that self-replicators (i.e. autocatalytic systems) exist, but nobody has yet found one. [As an aside, this doesn’t contradict my previous point about all autocatalytic systems really being cross-catalytic, since the system that Life runs on (i.e. computer + software) really should be considered part of the system being replicated, strictly speaking.] Given the complexity of the search space, it seems likely that the best way to find such an autocatalytic Life pattern would be via evolution itself.
If anyone is looking for a doctoral project, finding a self-replicator in the Game of Life would be a worthy goal.
* When people who work in complexity speak about “self-organization”, autocatalysis is one important form, but not the only form. Cooperation represents a large class of self-organizing behavior as well. Both autocatalysis and cooperation lead to emergence of new levels of complexity.
** The full name is Conway’s Game of Life, named for its discoverer.
]]>In my post on invisible etiology, I challenged us all to be as open-minded as possible when dealing with our most complex problems, for this is the only way to make the invisible become visible. Here’s where I attempt to practice what I preach.
For a while on this blog I have been going on about cancer as somatic evolution. And while I do believe that this is part of the story, I know it is incomplete and there may be a more general, more important invisible etiology to be discovered. I want your opinion, no matter what your background and training. In fact, I am more inclined to listen to “non-experts” given that the experts have failed us for so long.
One of my goals is to create a feature-length documentary to catalyze a shift in thinking about the nature of cancer so that we, as a society, can start making real progress, so that people don’t die of cancer and don’t live lives of lower quality with cancer. I have been struggling with the enormity and hubris of this task for a year and a half now, making little progress on the goal beyond the conceptual phase. But recently inspired by some of the talks and personal interactions I experienced at Pop!Tech this year, I am simply going to put one foot in front of the other and ask you to tell me what the documentary is about. To that end, I’ve created a website devoted to answering a single question, and I’d like you to give your own answer to the question in one sentence or less:
What is cancer? Click here to express your opinion.
While I don’t know what this documentary will ultimately look like, I know that tapping into the collective wisdom is clearly better than me starting with my own answer to the question of what cancer is. So please help. Your opinion, ideas, thoughts and experience will not go to waste. On the contrary, whatever you say will have a material impact.
If you’d like to go beyond answering the simple question asked on the What is Cancer? website, I’d ask that you use the comment section below on this blog entry. If you have ideas on how to go about the documentary, resources to offer in making it, or just want to give words of encouragement and support, it is all welcome. I only ask that haters, pessimists and trolls find another outlet.
So, what do you think?
]]>The big takeaway from the talk in my view was the term that he used to convey the idea that many of the most important breakthroughs in science and society can be attributed to making a previously unseen dynamic or concept visible. The term was “invisible etiology” and it is what I was trying to get at on my posts about reification here and here.
For those who doubt that violence is a virus despite the evidence presented, Dr. Slutkin points out that the notion of a biological virus was at one point an invisible etiology itself. It would have been just as fantastical to a doctor in the Middle Ages to believe that the Black Plague was caused by an invisible microbial organism transmitted by fleas on the backs of rodents as it is for us today to consider violence as a virus. And in fact, even after it was discovered that the epidemiology of the Plague was consistent with an invisible but real, biological agent of transmission, it took another 400 years for us to actually see the first virus under a microscope. While technology eventually caught up with our quest to understand, the important shift from the perspective of understanding and preventing the disease happened much earlier. And it was that shift in thinking that first of all allowed for concepts like quarantine and sanitation to be used in the solution to the problem, and then later to suggest to us what we were looking for with our microscopes. Until that shift occurred, no understanding would be gleaned and no solution would be found.
In thinking about the history of scientific advances, we can tick off many such shifts where the invisible etiology is made visible: heliocentrism, evolution, relativity, and so on. What is striking is not so much the brilliant insight it takes for these shifts to occur, but rather how banal and obvious it all seems after the fact. Of course the Earth revolves around the Sun, how else would the data be explained? Yet it behooves us to remember — as we are quick to dismiss seemingly quaint concepts — that all breakthroughs like the ones mentioned were not only not obvious to most people, but heretical and in violation of prevailing sacrosanct ideals. How can the speed of light be constant if we experience time and space as being observer-independent? The answer of course is that our experience and beliefs were inaccurate and incomplete. Our intuitions are sometimes not to be trusted. The truth is only visible once we trust the data more than our graven images.
It is worthwhile to explore some of the reasons why invisible etiologies are invisible to us for so long before they are suddenly not. I think it has to do with deeply ingrained cognitive defense mechanisms which equate what we believe to who we are. These defense mechanisms tell us that changing our beliefs is tantamount to losing our identity and purpose. The more we stand to lose, the harder it is to be objective and let the evidence lead us to the truth. Evolution is threatening to many people because it makes them question their deepest held beliefs (namely about God), which means questioning their basic identity. Changing one’s mind, accepting a new theory, often requires us to change a lot more than we bargain for.
Here’s a personal challenge to you to illustrate the point. After hearing Dr. Slutkin’s argument and evidence for violence being a virus — not just being like a virus, but actually being the same thing in a different form — do you believe him? If not, what would be required for you to change your belief, meaning, what other beliefs about the world and who you are would also have to change?
Ultimately Slutkin is asking you to reify a new concept: virus not as a biological entity but rather as an abstract pattern and dynamic, independent of any particular manifestation, physical or otherwise. The microbial agent we are familiar with is just one form of virus, and violence is another, no more or less “real” than any other. As long as something follows the pattern and has the same dynamic, it is also a virus.
In the grand scheme of things, violence-as-virus is not a hard sell in our current times; computer viruses, viral marketing, and memes in general have been concepts we have had a while to get used to and whose merits we appreciate. Other invisible etiologies are tougher nuts to crack. As hard-won as the acceptance of evolution by natural selection has been within the scientific community, it is ironic and intellectually disingenuous for those who consider themselves scientists and free thinkers to deny that there are other forms of Darwinian evolution besides the biological.
The point is not to single out individuals or grandstand about particular theories, but rather to implore us all to stop for a moment and consider what invisible etiologies exist in the world, waiting to be uncovered and brought to the light of day. I contend that many of our biggest problems and most vexing paradoxes would dissolve seemingly overnight if we would all be as open-minded as we believe ourselves to be.
]]>True to my word, I voted absentee, which not only gave me an opportunity to photocopy my completed ballot, but also gave me some time to fill out each choice so that I could double-check and not make a mistake. I am revealing to you each of my ballot choices. My home state is Nevada; I’ll let you look up the details of the ballot choices if you care.
- U.S. President/VP: Obama/Biden
- U.S. Rep. in Congress: Shelley Berkley
- State Senate: David Parks
- State Assembly: Joe Hogan
- Justice of Supreme Court Seat B: Deborah Schumacher
- Justice of Supreme Court Seat D: Mark Gibbons
- District Court Judge, Dept. 6: Elissa Cadish
- District Court Judge, Dept. 7: Linda Marie Bell
- District Court Judge, Dept. 8: Doug Smith
- District Court Judge, Dept. 10: William D. Kephart
- District Court Judge, Dept. 12: Michelle Leavitt
- District Court Judge, Dept. 14: Donald M. Mosley
- District Court Judge, Dept. 17: Michael Villani
- District Court Judge, Dept. 22: Susan Johnson
- District Court Judge, Dept. 23: Stefany Miley
- District Court Judge, Dept. 25: Kathleen E. Delaney
- District Court Judge, Family Div. Dept. G: Cynthia “Dianne” Steel
- District Court Judge, Family Div. Dept. I: Greta Muirhead
- District Court Judge, Family Div. Dept. J: Kenneth Pollock
- District Court Judge, Family Div. Dept. K: Vincent Ochoa
- District Court Judge, Family Div. Dept. L: Jennifer Elliot
- District Court Judge, Family Div. Dept. N: Mathew Harter
- District Court Judge, Family Div. Dept. O: Frank P. Sullivan
- District Court Judge, Family Div. Dept. P: Jack Howard
- District Court Judge, Family Div. Dept. Q: Bryce Duckworth
- District Court Judge, Family Div. Dept. R: Chuck Hoskin
- State Question 1: Yes
- State Question 2: Yes
- State Question 3: Yes
- State Question 4: Yes
For the various judges I did not do in-depth research, but rather mostly relied on the recommendations of The Sun, which is the liberal paper in Nevada. Here are some noteworthy choices and reasoning:
David Parks: “Democratic Assemblyman David Parks wants more participation in the state’s health insurance program for children. He would also like to see the Mojave Generating Station in Laughlin, which closed in 2005, converted into a facility for producing solar power. The Sun endorses David Parks.”
Joe Hogan: “The Democratic incumbent, Joe Hogan, has earned good grades in his previous two terms. We like his support for developing Nevada’s renewable energy potential. The Sun endorses Joe Hogan.”
Deborah Schumacher: She’s been a family court judge for 15 years; her opponent is an attorney with no previous judicial experience; her opponent also endorsed McCain and Palin and criticized Obama at a rally, when judges are supposed to be politically neutral.
On the State Questions I went against the Sun’s recommendation on a couple of them:
State Question 1: I feel that state constitutions should not be in violation of the U.S. Constitution, and this corrects that issue.
State Question 2: The Sun says this is costly and unworkable, but the chance of a citizen being trampled on by the State via corrupt eminent domain proceedings is too high to compromise on.
State Question 3: The Sun says that lawmakers should be doing this anyway and that it doesn’t belong in the constitution. I’m not so trusting of lawmakers.
State Question 4: Lets lawmakers amend sales tax law without a vote from the people in order to conform with federal law, which sellers have to do anyway; this will make it easier for sellers to abide by tax laws.
hat tip: Jessa Forsythe-Crane for helping with the research
* As far as I know, my blog post had nothing to do with this. Coincidentally the site was created by the academic department which conferred my undergrad degree (Symbolic Systems). If anything, the causal arrow goes from them to me, but as you know by now I favor emergent causality.
I’m pretty sure I’ve figured out a way to supercharge what I call the “Innovation Economy”. My thinking was catalyzed by reading Nassim Taleb’s The Black Swan. The basic idea is simple: if we want more groundbreaking firms like Google to come out of the Innovation Economy, we have to shove more startup feedstock into it.
For almost a year, I (in concert with Rafe and other Friends of Rafe) have been working out the mechanics of precisely how one could do this. We have three core postulates. First, creating world-changing startups is a stochastic process whose outcome follows a Pareto distribution. Second, as with all members of this class of stochastic processes, there is no way to predict a priori the outcome of a single trial. Third, the process of getting capital to launch a new company is difficult enough that it inhibits startup formation.
If you believe these three postulates, then the form of the answer should be obvious. You would have to make the startup formation process dramatically simpler by doing away with the ritual of pretending that you can evaluate startups at the seed stage. By replacing the current artisan-like seed funding process with a factory-like seed funding process, you could fund companies much more rapidly and efficiently. More seed companies in would mean more game-changing winners out.
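To make the first postulate concrete, here is a minimal sketch in Python of why pushing more feedstock into a Pareto-distributed process yields more big winners. The shape parameter (chosen to approximate the 80/20 rule) and the “game changer” cutoff are my own invented assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_cohort(n_startups, alpha=1.16):
    """Draw outcomes (arbitrary 'exit value' units) from a Pareto
    distribution; alpha ~= 1.16 roughly matches the 80/20 rule."""
    return 1 + rng.pareto(alpha, n_startups)

for n in (100, 1_000, 10_000):
    outcomes = simulate_cohort(n)
    outliers = (outcomes > 100).sum()  # hypothetical "game changer" cutoff
    print(f"{n:6d} seeds -> {outliers:3d} game changers, "
          f"best exit = {outcomes.max():12.0f}")
```

Because the distribution is heavy-tailed, the expected number of outliers scales linearly with the number of trials, and the best outcome grows too, all without predicting any single trial.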
Business angels and venture capitalists (including ones that have “incubators”) are the artisans behind the current seed funding process (though angels actually account for about 90% of seed funding). Now, I’m not saying that their opinions on startups are useless in general. I’m saying their gut feelings are useless. There’s a lot of evidence from different fields that even the best experts are very poor at making wholly subjective evaluations in their fields of expertise. And the field of startups is much more complex and fluid than most.
At the seed stage, there’s just a group of founders, a slide deck, and maybe a spit-and-baling-wire prototype. There’s nothing objective to analyze. The best you can do is make sure the founders are trustworthy, intelligent, committed people. And the evidence suggests you can do that mechanistically at least as well as a person can. Having people in the middle just slows things down and injects biases.
Of course, no process improvement is viable if seed investments are losers in general. But the available evidence suggests that, when properly diversified, they are at least as good as Series A investments. So it seems perfectly feasible to create a process where founders fill out an application online, get a decision in a week, and then complete the funding documents in another week. Two weeks to funding.
To avoid adverse selection and moral hazard, we still need some investment criteria and oversight processes. But they can be much less capricious, time-consuming, and invasive than current practice.
[REDACTED 05/08/2009: see here]
]]>As I discussed in this previous post, I don’t think we’ll achieve AGI until we’re pretty far down the road. Based on what I heard from Vinge, Rattner, and Gershenfeld, I am reasonably convinced that our everyday environment will become increasingly electronic. Lots of everyday things will be connected to the network and imbued with significant amounts of computing power. After a while, these items will even be able to adapt their physical as well as virtual properties in response to remote instructions.
So far, nothing earth shattering or even controversial. However, I think people dramatically underestimate how big the draw of being able to richly interact with this environment will be. Let’s face it, our current user interface technology sucks. Using a computer today is pretty much the same as it was 20 years ago. Up until the iPhone, it was a total pain in the ass to interact with anything that wasn’t a full computer. Yet many people contorted their brains and digits to do whatever it took.
Even my beloved iPhone isn’t that great in an absolute sense. It just sucks much less from a relative standpoint. But it’s still pretty hard for me to do even relatively simple things, like tell my iPhone to instruct my TiVo to record a show. Imagine how frustrating it will be when you can interact electronically with 10%, 33%, or 50% of all the discrete objects in your life. You would wield magic-like powers if only you could make your wishes known!
Therefore, my first prediction is that you will see an extremely frothy market for enhanced user interface technology. People are going to try some crazy stuff. Most of it won’t work. But Bluetooth headsets are just the beginning of a trend towards adorning ourselves with electronic control devices.
Ultimately, of course, brain-computer interfaces (BCIs) are the solution. Before the summit, I thought we were a long way from BCIs. Too difficult, too brittle, too invasive. I saw several things that, when combined with my newfound appreciation for the pressure to interact, changed my mind. First, from Rattner and Gershenfeld, I got a gut-level appreciation of just how small, powerful, and flexible our electronics will be in the near future. This factor ameliorates invasiveness and brittleness.
On the difficulty front, I was impressed with what Kurzweil and Modha said about advances in simulating neurons. As I made clear in my AGI post, I don’t think we’re any closer to executive function [Note: via Robin Hanson, new report on whole brain emulation here], but we don’t need that for BCIs. The major difficulty with BCIs is interpreting what the signal coming out of neurons means. However, if you can simulate the neurons in that region of the brain, you can probably do a much better job of calibrating your sensing apparatus to the intentions of the user. Of course, the brain itself will adapt, and you’ll hopefully get a tightly converging loop of adaptation. Obviously, there’s a lot of handwaving going on here, but I believe the intuition that simulation will give us interpretation leverage is sound.
Now, we still need to find early adopters for the first BCIs. Even though I am a long time sci-fi buff, I’ve always had problems coming to grips with the idea of sticking electronics in my head. However, people a few years younger than I seem to have no problem permanently altering their bodies with tattoos and piercings, so their threshold to adoption may be much lower.
We’ve also, unfortunately, been creating a significant class of potential test subjects with forays into Iraq and Afghanistan. Young men and women are losing not just limbs but whole areas of brain function through traumatic brain injuries. Perhaps more than 30% of casualties have some sort of brain injury. My guess is that the prospect of restoring lost function will make them more than willing to try electronic cognitive prostheses. I know it would for me. And I’d certainly contribute money in a second to help. I can easily imagine this situation overcoming taboos against connecting electronics directly to the human brain.
I realize this was always on the technology roadmap. But now I’ve got it on my 10-year horizon instead of my 20-year horizon.
]]>I would love to use Google “In Quotes” to crowdsource measures of truth.
For instance, I just saw this:
“In a world of hostile and unstable suppliers of oil, this nation will achieve strategic independence by 2025,” said Mr. McCain during a campaign speech. [ Wed, 29 Oct 2008 Washington Times ]
I would like to be able to indicate on a scale from 0 (false) to 10 (true) whether I believed what McCain said is true (that we will achieve strategic independence by 2025). Everyone’s rating would yield an average number (let’s say it was 6.8). In addition, all other quotes that were attributed to McCain would have a truth index too, and the average of those numbers would be his dynamically updated “truthiness” rating. You could add a decay factor to allow for people’s reputations to change over time.
You could then sort quotes and people by their truthiness rating right alongside date and relevance when people do searches on news items.
Additionally, you could get a truthiness rating for the sources by averaging the truth index of each quote they publish. Some media like to quote trolls so as to drum up gratuitous controversy, and this would separate those sources from the true investigative journalists.
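Here is a minimal sketch of how such a decayed truthiness score might be computed. The half-life, the 0-10 scale, and the Quote record are my own assumptions for illustration, not anything Google’s “In Quotes” actually exposes:

```python
from dataclasses import dataclass
from datetime import datetime

HALF_LIFE_DAYS = 180  # assumed: a rating's weight halves every six months

@dataclass
class Quote:
    speaker: str
    source: str
    date: datetime
    ratings: list[float]  # crowd ratings, each 0 (false) .. 10 (true)

    @property
    def truth_index(self) -> float:
        return sum(self.ratings) / len(self.ratings)

def truthiness(quotes: list[Quote], now: datetime | None = None) -> float:
    """Decay-weighted average truth index over a speaker's (or source's) quotes."""
    now = now or datetime.now()
    num = den = 0.0
    for q in quotes:
        weight = 0.5 ** ((now - q.date).days / HALF_LIFE_DAYS)
        num += weight * q.truth_index
        den += weight
    return num / den if den else float("nan")

# e.g., a speaker's rating over all quotes attributed to them:
# mccain_quotes = [q for q in all_quotes if q.speaker == "John McCain"]
# print(truthiness(mccain_quotes))
```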
I can think of many problems with this approach, but I’m curious. What do you think is the most likely way this could fail to achieve the objective of measuring truth?
]]>Here are a few more favorites, hot off the presses…
Project M
Marian Bantjes
The Pirate’s Dilemma
]]>
To address these intense feelings and the demand for public discussion, a wiki was created, and you are invited to join in. This forum was designed as “a place for a rich, lively, respectful and facts-based dialog on what’s necessary to address the serious economic challenges confronting America today.” Hope to see you there.
Click here to go to the policy debate.
]]>Absence of evidence isn’t necessarily evidence of absence, but I believe that if anyone were making headway on this problem, the chances that someone at the summit would have alluded to it are high. Therefore, I predict that the first being with substantially higher g than current humans is much more likely to be an augmented human than an AGI [Edit: more thoughts on electronically enhancing humans here].
I was into AI about 15 years ago. I worked for a pretty successful startup whose product was based on narrow AI techniques. I then left to start a company with a couple of my colleagues focused on doing custom enterprise software development with IDEs that integrated narrow-AI techniques. I had a decent grasp of the literature and attended the occasional conference. Like many of my peers I forsook this field for distributed computing and the Internet.
Now, in every other area of information technology I can think of, if I were to come back to it after a 15-year absence, I would be blown away with the progress. Not only am I not blown away, I am a bit despondent over the complete lack of conceptual breakthroughs. Yes, we have much more powerful hardware. Yes, we can simulate quite large assemblies of neurons. Yes, we can process language and vision much better. But I haven’t seen anything new about building systems that choose which general goals to pursue, formulate plans to achieve them, and make mid-course corrections in execution.
The most impressive thing I’ve seen is from Eliezer Yudkowsky. He’s working on Friendly AI at the Singularity Institute (notably, he did not present at this Singularity Summit). I’ve been following his posts at Overcoming Bias. He’s a scary-smart autodidact who has integrated a bunch of different fields into some very nuanced thinking. But he’s trying to hit a smaller target than just AGI: the subset that we would consider “friendly”. Obviously, this is a harder problem, and he hasn’t even figured out (that he’s mentioned) how to define the boundary of the friendly subset of AGI. If anybody can do it, Eliezer can, but a lot of other smart guys have tackled the AGI problem over the years and we don’t have much to show for it.
Therefore, I see the path toward AGI through purposeful design as rather unlikely. Now, if you believe that executive function is an emergent property, there is still the path to AGI where a collection of programs “wakes up” one day, in which case successfully implementing lower-order cognitive functions counts as progress. I am maximally uncertain about this proposition.
What I think is likely is that humans will gradually augment their own intelligence. Eventually, we’ll either be smart enough or have enough instrumentation plugged directly into our brains that we’ll be able to determine what constitutes executive function. But by then, we’ll already be superhuman and AGI will be a smaller leap.
]]>
To set the stage, Vernor Vinge submitted to a wide-ranging interview. Unsurprisingly, given that I’m a fan of his science fiction, I found the interview entertaining. He made one particularly excellent point and one particularly glaring error that are worth mentioning. The excellent point was that embedded, networked processors are so useful that they will become ubiquitous (he mentioned that there will be trillions of them). Unfortunately, this popularity will also make them a critical point of failure. The glaring error was to assert that as humans outsource their cognition to machines, the number of jobs suitable for humans will narrow. Economic history contradicts this theory, but more on this topic in a bit.
Then Nova Spivack talked about collective intelligence. While it took a while to get there, I found his ultimate point insightful: in order to truly achieve collective intelligence, we need some sort of “meta-self” that maintains models of the internal state of the collective and how the collective relates to the external world, as well as structures the goals of the collective and tracks progress towards them.
Esther Dyson was caught wrong-footed. Apparently, she thought someone was going to interview her and had not prepared any material. She talked a little bit about genetics, but said nothing I found even remotely new.
James Miller was the hit of the morning. He spoke (somewhat tongue in cheek) about the economic implications of a significant portion of the population anticipating the singularity: more money spent on safer cars, construction workers becoming more expensive, people saving less for retirement, the market for office buildings crashing, students not wanting to study anything boring.
Justin Rattner gave a fascinating (to me) talk about the nuts and bolts of Intel’s approach to maintaining the inexorable march of computing power growth. I was blown away by the fact that Moore’s Law stopped last year and nobody noticed. The original formulation of Moore’s Law was about CMOS transistors. But CMOS reached its limits and Intel switched to HiK-MG without a blip in the rest of the supply line. They have technologies mapped out to maintain exponential growth for another 8 years, which is about how long they’ve historically had visibility into future production technology.
Eric Baum had some good points about what “understanding” really means. He emphasized the ability to rapidly assemble programs to solve problems and illustrated this point with a comparison between how evolution and engineers design limbs. Evolution has a general representation for a limb. Mutate this representation and you still have a limb. The design instructions for an arm, a wing, and a flipper don’t differ very much. Obviously, this isn’t currently the case for human-engineered prosthetics. I’m not sure I buy his conclusion that this implies we need some sort of hybrid programming tool that combines human-directed design with computer-generated programming. Seemed like a big inferential leap.
Like Rattner, Dharmendra Modha surprised me with some nuts and bolts. Apparently, Almaden Labs already has a simulation of a rat’s brain that runs at 1/10,000th real-time. Assuming the lower bound of the effective complexity of a neuron/synapse (there’s a lot of uncertainty about how much computation goes on here), they say they’ll have the infrastructure to simulate a human brain in real time by 2018. He noted that “software” to run on the brain is an open issue, but I’m still going to have to revise downward my estimation of the time until high quality brain uploading.
Ben Goertzel discussed OpenCog, an open platform for building AGI programs. I was impressed because he clearly understood the failures of past AGI projects and seemed like a smart guy. However, I’m not convinced this path will work, though this framework may accelerate the pace at which researchers narrow down the hard problems.
The only truly bad talk was by Marshall Brain. It’s not a good idea to discuss the economic implications of AI and robotics when you don’t understand anything about economics. He thinks the rise of interactive automatons will cause 50% unemployment. His use of economic statistics was worse than amateurish. He turned the aforementioned glaring error by Vinge into a painful 20-minute presentation.
Cynthia Breazeal cleansed the palate after the bad taste left by Marshall. She demonstrated how she’s working on imbuing computers with emotional intelligence. She showed some reasonably impressive videos of her emotive automaton. This avenue seems mostly like crank-turning to me: necessary but not groundbreaking, because we pretty much understand the cognitive psychology already. However, I was somewhat impressed that they’ve managed to architect their software so the automaton uses the model of its own relationship to the world to model the state of someone else. As a result, the automaton can operate on false beliefs in others, just like a human child.
After lunch, the afternoon session opened with a debate between John Horgan and Ray Kurzweil on whether the singularity would occur in the near future. John said that complexity is too high while Ray said that exponential growth would overcome the complexity. John was badly overmatched.
Pete Estep told us that knowledge was expanding too fast for meat brains to keep pace, so we need to augment our intelligence. Yeah, yeah. Preaching to the choir. He claimed that Innerspace was already working on a fully integrated memory prosthetic. Cool if true, but it appears to just be a prize at this point.
The most mind blowing presentation was by Neil Gershenfeld. I already thought the Fab Lab was pretty cool. But the long term stuff he’s working on is breathtaking. There’s a duality between computing and physics. For example, we use physics to build computers that we then use to model physics. The duality is much more fundamental than that (e.g., the equivalence of thermodynamic entropy and Shannon entropy). They have discovered/created a programming paradigm called asynchronous logic automata (ALA: so new there’s not a good reference on the Web; see also Conformal Computing: no good references on that either [edit 04/08/09: this term was coined by James Reynolds and Lenore Mullin in this paper]) that he says is based on fundamental physical properties. They can use ALA to PROGRAM MATTER. Such matter is made of identical cells that assemble themselves like proteins, based on the ALA instructions. He had some animations and it’s unclear from my notes whether these were merely simulations or visualizations of something they’d actually built. My memory is that they were actual, but at a large scale. Neil said they should be able to get exponential scaling and they don’t really rely on quantum effects. The bottom line was: 20 years to the Star Trek replicator. This is the number one thing on my list to keep track of now.
Peter Diamandis talked about the space tourism and the X Prize. Cool, but not that relevant.
Finally, Ray commented on all the talks. The most important comment was to dispel the notion that technology destroys jobs. He gave the example of gathering all the farmers and manufacturing workers in 1900 and telling them that farming and manufacturing jobs would be only a few percent of the total jobs in 2000. There’s just no way they could imagine all the new jobs like ASIC engineer, Web designer, and network programmer. Technology creates more opportunities than it destroys. Hallelujah, brother.
]]>- One shed of hydroponic barley = 200 acres of land
- No new tech required
- VerticalFarm.com
- an almost-completed Russian drilling project will cause a water geyser 3,000 meters high for 4 months, which will change the climate (but nobody knows how)
- Solar blanket for TB patients which also recharges a light for nighttime
- Project M
- Bhutan doesn’t have a single traffic light (600K people)
- Project 20 Twenty
- Games as great (crucial) learning environment
- and they are universally misunderstood by each successive generation
- Plato bashed the written word, Voltaire bashed the plethora of books, ___ bashed novels, etc.
- games for changing the world for the better
- two biggest problems in the world according to the secretary general
- inability to understand the other side’s perspective
- misunderstanding of the complex, interdependent
Chris Anderson – attention and reputation are also economic markets; Google is the world’s largest reputation market (via PageRank); Larry Page and Will Wright are central bankers, like Bernanke; so is Phil Rosedale of Second Life; check out Maple Story (Korean game coming to the US); games enable time/money fungibility
Clay Shirky – Grobanites for Charity started with the members before they had the mission, raised the funds before they had a cause; cognitive dissonance in the top-down approach can kill the magic and the whole effort; “design for generosity” (started with Napster); bad explanations: 1) kids today are criminals 2) digital Krishna consciousness; good explanations: 1) cost going to free 2) people respond to incentives to be generous 3) linking the two (like Napster); Howard Forums is a better technical forum than the professional ones; we’ve forgotten about intrinsic motivations; autonomy is essential (people need to choose to do it); the system can’t be totally freeform either: it must be the right mix of freedom and constraints; need to think of people as participants in social systems, not as aggregate averages.
Matt Mason – if you want to beat pirates, copy them; Matt used to be a pirate radio DJ, and at the same time the police were raiding them each week, advertisers were advertising with them! why? they had millions of listeners; the art of storytelling is changing because of abundance (going from broadcast to network); tale of the GuyzNite Die Hard movie remix video on YouTube (Die Hard lawyers asked them to take it down at the same time Die Hard marketers asked how much to use it for marketing Die Hard 4!); Novartis is giving away drugs in markets where its patents are being heavily infringed, and is reaping great benefits from this; in an economy based on abundance your business model needs to be a virtuous circle; e.g. Heroes is highly pirated, but they have revenue streams which capitalize on this (merch, publishing, nbc.com, licensing, iTunes, etc.)
Check out hub.poptech.org
]]>The mainstream narrative on why we need a bailout is that credit is “frozen”. We can’t just let the financial sector sort itself out because it provides the credit “grease” that lubricates the rest of the economy. The graphs in this paper make it pretty clear that the wheels of Main Street have plenty of grease. So it looks to me like the bailout is corporate welfare, plain and simple. It also means that Paulson and Bernanke talking about how bad things are to justify the bailout may have actually exacerbated any real recession by magnifying the psychological salience of the crisis.
]]>Theme of the conference is Scarcity and Abundance.
BarefootCollege.org (Bunker Roy)
- training poor, illiterate, older rural women from around the world to be engineers, who take the knowledge back to their villages and transform them
- decentralizing and spreading technical knowhow (women, no written word)
- rainwater collection
- solar electricity
- teaching done only by illiterates (don’t even speak same language) because literates can’t teach illiterates
- children’s parliament
Marketing knowledge and products in the developing world is best done via traditional/local media forms (troubadours, Bollywood-produced video, hand puppets, etc)
BlueOcean.org learn about sustainable use of ocean resources
WattzOn.com personal energy consumption calculator that gets better through crowdsourcing
On a personal level, extending life of products makes a huge difference globally. Check out Freecycle.org
PeaceGames.org 60% drop in violence, 75% increase in prosocial behavior, Youth Peace Prize being launched
Peter Whybrow
- evolution of our current neuropsychology happened during time of scarcity; now we are in a much more abundant time overall
- this dichotomy leads to great challenges, like addiction (to everything) and debt cycles (financial, sleep, and other)
- markets are not free, they only arise within social constraints, which is contrary to our mythology
- mimicry, empathy, mirror neurons
- US is actually less socially mobile than Europe right now
- sleeping less <–> weighing more
K. David Harrison: thousands of languages are going extinct
- this is a shame, but also bad for all of us
- traditional languages capture thousands of years of cultural knowledge that will enrich us all if they survive, but impoverish all of us if they don’t
- comment from audience member: there are many children (often autistic) who have incredible language learning ability (he knows one who’s up to 40 currently) and are hungry for projects like this…
Paul Polak (Out of Poverty)
- there is a place for non-profit and interest-free microloans, but…
- true revolution (and flood of capital and development) will happen only when real money is being made
- he’s done trying to convince big business to serve the “other 90%”; instead he’s building one they will want to compete with…
- multinational franchise of microentrepreneurs (Windhorse International)
- ruthless pursuit of affordability
- example from the NGO he built (IDE): treadle pump (rural stairmaster used for pumping water)
- 2.1 million poor families invested $50M and are earning $210M/yr indefinitely
- doing this for water purification systems, solar concentrators, remote-use medical lab equipment and other businesses that can be done locally by poor rural villagers
- he’s also creating a non-profit to complement the for-profit by fomenting a design revolution for the other 90%
- keys are:
- go to where action is
- listen (hardest part)
- learn everything there is to know about specific context
There are two paths to empathetic behavior, one innate and one constructed. The innate system is part of our biological heritage, based on emotion, and is shared to some degree with other animals. Neuroscientists believe that major players in this system are so-called mirror neurons, which take as input sensory information about what others are experiencing and produce emotional responses in us similar or identical to what we would have felt if we were actually experiencing the same thing ourselves. Mirror neurons are what allow us to put ourselves in another person’s shoes. This innate empathetic system interacts with other cognitive/emotional systems, and so even if we all have a similar capacity for empathy based on our mirror neurons, the end result can be quite different from human to human. One could reduce the plight of the narcissist to a lack of empathy: whereas a normal person will take subtle cues from those around them and feel emotion, the narcissist — whether it be a failing of their mirror neurons or interference from other mental systems — will systematically ignore (or not perceive) those same cues.
The second path to empathetic behavior is based on a more conscious logic or pattern of thought that does not require a visceral emotional response to function. To illustrate this point, consider a society with a well-functioning and highly sophisticated legal system dictating all manner of behavior, and whose citizens act in accordance with all these laws the vast majority of the time. Would it be possible to tell whether an individual in such a society was acting out of empathy for his fellow man, or because he was following the law? On the flip side, there are societies small and big in which civility may equally be the result of a strong set of rules, or an ethos (instilled from birth) of caring for one’s neighbors and strangers. So what is constructed empathy? It’s the logic of the mind (both conscious and unconscious) which allows us to see how it is in our own best interest to treat another as we would want to be treated ourselves. In other words, constructed empathy is “enlightened self-interest”.
In the real world, we are all a mix of both types of empathy. We each lie on a spectrum, where on one end are the bleeding hearts and on the other the sociopaths, with most of us falling naturally somewhere in between. And on a given day, or in certain circumstances, we can be acting more on one type than the other, as dictated by our individual dynamic range. What is interesting to observe are the different thought patterns that emerge and the different choices that people make when acting from innate empathy vs constructed empathy.
It is also interesting to observe how this empathetic dualism allows us to reconcile the argument between those who claim that there’s no such thing as altruistic behavior — that we’re all ultimately in it for ourselves — and those who claim that humans are naturally interested in doing good and helping one another. Like in most age-old debates, there is some truth to both sides, but each one frames the issue incorrectly, too simplistically. And with this new lexicon and set of concepts, it is easy to see that we are at once cutthroat and altruistic, and that there is no contradiction in that statement.
]]>Think of an experience from your childhood. Something you remember clearly, something you can see, feel, maybe even smell, as if you were really there. After all, you really were there at the time, weren’t you? How else would you remember it? But here is the bombshell: you weren’t there. Not a single atom that is in your body today was there when that took place…Matter flows from place to place and momentarily comes together to be you. Whatever you are, therefore, you are not the stuff of which you are made. If that doesn’t make your hair stand up on the back of your neck, read it again until it does, because it is important.
We humans have this “mundane preoccupation of matter”, as Dawkins points out, because we are evolved to live in a world primarily of matter (at a particular scale). This doesn’t mean that whirlpools and market forces are any less real than rocks, just that we are evolutionarily predisposed to grok rocks as real things because we can touch them with our hands, feel their solidity, and track them with our eyes as we throw them. But the reality (of rocks and whirlpools and you) is in the patterns of information content and flow. So when I say things like complex systems defend themselves, I don’t mean it metaphorically; I mean it really and literally.
On the short list of things to reify:
- memes and temes (fitting since Dawkins coined “meme”)
- group selection
- social contagion
- self-fulfilling prophecy
- emergent causality
- agency
On the flip side, there do exist reifications that need to be dispelled, such as God and the gene, one of which Dawkins would wholeheartedly agree with, and the other of which he may not.
]]>First they bail out Bear Stearns. Then they let Lehman go bankrupt. But AIG gets a lifeline. On to a $700B bailout intended to purchase toxic MBSs. And most recently forcing several probably healthy banks to absorb $250B in government investment. Along the way, there were a bunch of changes to FDIC regulations and a see-sawing stock market.
You might be asking yourself, what the heck is going on here? The reason for all the flailing is that the government is attempting to implement a command and control solution to an extremely distributed problem.
It all comes down to “liquidity”. Liquidity is a measure of the fungibility of an asset: how quickly you can turn it into something else. Cash is very liquid. Credit cards and checks are almost as good as cash. Investments in machinery, startups, and education are very illiquid.
Most people think liquidity is good. But it’s also bad. Illiquid assets tend to be the most productive. Cash just sits there. But a machine can make stuff. A startup can develop innovative products. A scientist can discover new things. So from this perspective, we actually want to allocate assets as much as possible to high productivity/low liquidity items.
If Company A keeps a lot of cash around while Company B invests in high productivity machinery, Company B will tend to out-compete Company A (at least over the short run where no negative Black Swans occur). Firms therefore demand the financial tools they need to be Company B. So the modern economy has evolved to support what we might call “just-in-time liquidity”. There’s no need for small firms to hoard cash because they can tap a credit line. There’s no need for their local banks to keep as much cash as would be necessary to cover all these credit lines, because they can borrow from bigger banks. And so on for every cash supply line you can think of. But just-in-time liquidity relies on trust. Everyone has to believe that they can get cash when they need it. There must be enough total liquidity in the system to meet these needs.
The problem we have now is that some large financial institutions were holding a bunch of MBSs that they thought were fairly liquid and worth X. Unfortunately, it turns out MBSs are no longer that liquid and are worth a fraction of X. A double liquidity whammy! So the banks that held them needed to sell more liquid assets so they could provide the total amount of liquidity their customers needed. But when almost everybody is selling, almost nobody is buying, so these other assets became less liquid, which requires selling still more assets.
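A toy model makes the vicious circle visible. All the numbers below are invented; the point is that when selling itself depresses the price, a firm can liquidate its entire book and still come up short:

```python
def fire_sale(assets=100.0, need=60.0, price=1.0, impact=0.02, chunk=10.0):
    """Sell in fixed chunks; each forced sale knocks the price down further."""
    raised, rnd = 0.0, 0
    while raised < need and assets > 0:
        rnd += 1
        sold = min(chunk, assets)
        raised += sold * price
        assets -= sold
        price *= 1 - impact * sold  # price impact of forced selling
        print(f"round {rnd:2d}: sold {sold:4.1f}, price now {price:4.2f}, "
              f"raised {raised:5.1f} of {need:.0f}")
    return raised

fire_sale()  # liquidates all 100 units yet raises only ~45 of the 60 needed
```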
Meanwhile, the rest of the economy is quickly trying to adjust by conserving cash because they don’t believe they can get liquidity from financial institutions. Households are cutting down on purchases too. Small businesses are cutting staff. Manufacturers are slowing production. Raw material suppliers are reducing investments.
Taking toxic MBSs off the books and restoring liquidity at the top won’t solve the problem now. Everyone downstream is already adjusting, and it will take time for people to begin trusting just-in-time liquidity again. It’s like removing a tumor after it’s metastasized. Sure it helps, but you’ve got to address all the low-level knock-on effects too. Unfortunately, the government is mostly set up to affect events top-down, not bottom-up. They can buy assets in markets and strong-arm big companies. But they can’t make everyone feel safe.
Add to the liquidity problem the need for real economic adjustments. At the end of the day, a bunch of people purchased homes they couldn’t really afford. A bunch of developers built homes for which there was actually no demand. Banks took risks that turned out badly for them. Other people made decisions assuming their homes were worth a certain amount and their homes are no longer worth that amount. No amount of wishful thinking or financial engineering will change these painful facts.
That’s why the government is flailing. First, they don’t really have the tools necessary to influence the distributed decisions of everyone in the economy. Second, with an election coming up, they would prefer that people not have to swallow all of the real economic pain now. But the system has gone chaotic and it’s hard to believe that the government can exercise any fine tuned control. I believe the system can heal itself and the government is almost as likely to make things worse as better. Best to just let things sort themselves out. However, I’m not terribly confident of these beliefs and am really glad I’m not the one in charge. The pressure to do something must be enormous.
]]>One of the baffling aspects of living systems is the relationship of the (relatively small) genome to the seemingly infinite variation and complexity that we witness within and between species. The idea that we share 99% or so of our DNA with mice means that our differences must somehow be accounted for in the remaining 1% (roughly 7 megabytes of information).
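That “7 megabytes” figure checks out on the back of an envelope, assuming 2 bits per base pair and a roughly 3.2-billion-base-pair haploid genome:

```python
base_pairs = 3.2e9            # approximate haploid human genome
bits = base_pairs * 2         # A/C/G/T -> 2 bits per base pair
mb = bits / 8 / 2**20         # bits -> bytes -> megabytes
print(f"whole genome ~ {mb:.0f} MB, so 1% ~ {0.01 * mb:.1f} MB")
# whole genome ~ 763 MB, so 1% ~ 7.6 MB
```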
The key insight needed to make sense of this mystery involves the aforementioned principles evidenced in the Spore universe, but it also requires the notion of the real world as an encoding device. By this I mean that the genome itself is not a complete, self-contained piece of code with all that is required to generate (for instance) an adult human. Rather, implicit in the genetic code is a model of the real-world environment that the code will operate in once activated (i.e. “expressed”), and this implicit model is absolutely crucial for life to have originated and continue to thrive. Imagine, for instance, if all of a sudden the laws of chemistry were altered and carbon could only form 2 bonds. Life as we know it would cease to exist; our DNA (and all DNA for that matter) relies implicitly and thoroughly on existing features of the world. And thus our DNA does not need to explicitly encode how to fold proteins, since protein folding is an automatic reaction given the structure and environment of the particular protein molecule.
This implicit encoding or reliance of the genetic code on its environment has been likened to scaffolding that is used in construction (genes being the blueprint of course). But the scaffolding analogy doesn’t do justice to the immensity of information (both in absolute terms and as a percentage of the total) that is implicitly encoded by the environment for use by the genome. Not that this is some giant happy coincidence mind you; the genome evolved in a world where physical and chemical principles pre-existed. And as lifeforms increased in complexity, each new level of organization was a pre-existing condition to be relied upon for the evolution/emergence of the next.
It is worth pointing out that by “genetic environment” I don’t just mean the environment that the whole organism finds itself in, but rather the extended and recursive environment that the genetic code will find itself in as it does its work. This includes increasing levels of complexity that are generated by, or on top of the DNA level: chromosomes, epigenetic markers, proteome, cellular structures, multicellular structures, and on up. One level begets the next, and your genetic code expects these levels to emerge in due course or it won’t function properly (or at all). Consider, for example, how useless the part of your DNA that describes brain structure would be if it were not for the encoding of how to make neurons and axons.
By grasping the significance of the extended, recursive genetic environment (ERGE) it becomes clearer why genetic fatalism is misguided and why the nature/nurture debate misses a large portion of the action. By intervening in the expression of the genome through the ERGE on its way to the mature human animal — for example, via early intervention in childhood — genetic “predispositions” become largely irrelevant in practice. By the same token, there’s no such thing as a purely natural or purely environmental effect: it’s all a matter of controlling the ERGE.
]]>In Act I, we saw how government meddling overheated the housing and mortgage markets. Now we’ll see how Wall Street took advantage of this opportunity and also apportion some blame to ourselves.
That’s right. I think regular people are also to blame. That’s because most of them don’t understand, deep down, that there ain’t no such thing as a free lunch (TANSTAAFL). When you put your money in a bank, you’re not storing it. Why do you think the bank pays you interest or bears all the expenses of processing your checks? Because they use the money you deposit to make loans to other people and engage in other profitable financial transactions.
“Hah!” you say, “They should be careful with my money.” Oh, really? Do you own stocks, even through a mutual fund? I bet you like to see a healthy return on those investments, right? Well, there are probably more than a few financial services firms (at least transitively) in your portfolio that did quite well for many years. The people that run those firms are under tremendous pressure to show a profit so they can please stockholders like you, which means squeezing as much potential out of the deposits of account holders like you.
A brief digression about money market accounts. A lot of people are up in arms that their money market accounts have or may “break the buck” and that they may not have full access to their funds. Why exactly did they think their money market accounts had a higher interest rate than vanilla savings accounts? Didn’t they realize they were being compensated for the additional risk they were taking? They shopped around for the money market fund with the best interest rate and then were surprised when those funds turned out to be the riskiest. Did they think those fund managers had a magic wand that created safe money? No, this extra interest was a risk premium. But don’t worry, as we shall see, even Wall Street “professionals” with advanced degrees forget that the first word in “risk premium” is “risk”.
Which brings us to the concepts of reserves and insurance. Financial institutions are supposed to make sure they have enough capital on hand to disburse funds when asked and weather potential losses on transactions. The government enforces these requirements to balance the desires of account holders for safety and stockholders for returns (even though in many cases these are the same people). Obviously, the more capital institutions need to keep on hand, the less they can “put to work” making money. Often, they try to buy various forms of insurance against losses to reduce the reserve requirements. They also try to coerce regulators into lowering the requirements, which is easy to do when they’re making billions of dollars a year.
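To see why the insurance route is so tempting, here is a hypothetical back-of-envelope calculation (every figure is invented, and real capital rules are far more intricate than a single risk weight):

```python
assets = 100e9          # hypothetical risky asset book
reserve_ratio = 0.10    # capital that must sit idle against it
insured_weight = 0.5    # assumed: insurance halves the effective risk weight

uninsured = assets * reserve_ratio
insured = assets * reserve_ratio * insured_weight
print(f"capital freed by insuring the book: ${(uninsured - insured) / 1e9:.0f}B")
# capital freed by insuring the book: $5B
```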
OK, now we’re ready to talk about how Wall Street is to blame. Because home ownership had come to be seen as a signal of stability and prosperity, consumers were desperate to own homes and politicians were desperate to oblige them. When there are two desperate parties that want a transaction to happen, financiers smell an opportunity.
The opportunity was big. Because subprime mortgages are inherently more risky, borrowers are willing to pay lenders a substantial risk premium. Or if you prefer the more populist narrative, lenders can trick borrowers into taking out high cost loans. In either case, by opening up the mortgage market to an entirely new class of customers that were willing to pay high prices, there was a huge stream of money for financiers to sink their teeth into. But they forgot about the “risk” in “risk premium.”
Now, they didn’t make the small mistakes of normal people who fundamentally don’t understand risk. They made the truly ginormous mistakes possible only for the well-educated, highly intelligent, and extremely ambitious. There are two types of risk in finance: idiosyncratic risk and systemic risk. You can diversify away idiosyncratic risk but not systemic risk. The problem is that it’s sometimes really hard to tell the difference. The details are incredibly complicated, but here’s the overview. The guys at a variety of Wall Street firms in charge of coming up with financial products related to MBSs thought they had discovered idiosyncratic risks that they could diversify away with complex financial instruments. However, these were really systemic risks in disguise, so the risk was just moved around to where no single firm could see it.
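A quick simulation shows the trap. With invented volatilities, averaging over more assets washes out the idiosyncratic noise, but portfolio risk floors out at the systemic component, which no amount of diversification can remove:

```python
import numpy as np

rng = np.random.default_rng(0)
scenarios = 10_000

for n_assets in (1, 10, 100, 500):
    systemic = rng.normal(0.0, 0.02, scenarios)          # shared market factor
    idio = rng.normal(0.0, 0.10, (scenarios, n_assets))  # per-asset noise
    portfolio = systemic + idio.mean(axis=1)             # equal-weight return
    print(f"{n_assets:4d} assets: return stdev = {portfolio.std():.4f}")
# stdev falls toward 0.02, the systemic floor, and stops there
```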
So Firm A takes on a bunch of mortgage related risk. They want to get rid of it to free up reserves for other profitable transactions so they trade some instruments designed to insure against the risk with Firm B. Now Firm B wants to get rid of the risk, so they trade some different instruments with Firm C. And so on… At this point, everybody’s ability to keep their reserves low is dependent on the value of trades they’ve made with a bunch of other firms who have in turn made a bunch of trades with still other firms. If something bad starts happening to more than a couple of firms (that’s the “systemic” in “systemic risk” BTW), the strength of everyone’s insurance will go down. So they’ll have to increase their cash on hand to cover their reserve requirements. The first step is selling MBS-related assets, which everyone else also wants to sell, which means they will be going down in value. The second step is selling other assets and stopping the purchase of new risky assets, which everyone else also wants to do. Can you say, “meltdown”? I knew you could.
Luckily, every firm has some geeks in the back room that work in Risk Management whose job it is to be on the lookout for this kind of thing. Let’s go back in time to before there was even a whiff of crisis. Most of the scenarios that these guys run on supercomputers with fancy software show that everything is cool. But there are some extreme cases that look very bad. The geeks go to the suits running their respective firms and say, “Uh, there are some scenarios that seem to indicate we’re underestimating the risks associated with MBSs.” But the technical details are very difficult to understand. So the suits have to choose whether they go to the tremendous amount of effort necessary to comprehend what the geeks are saying. If a suit does go to the effort and ends up believing what his geeks say, what will happen?
Well, he’ll have to dial back his firm’s participation in the MBS market. His firm will make less money. His bonus will go down. He’ll have to go on TV and explain why his firm is performing worse than competing firms. At the shareholder meeting, he’ll have a lot of explaining to do. So he doesn’t want to understand. Besides, what are the odds? His geeks say, “Oh, a few percent.” Most likely, this suit will have “earned” a ton of bonuses and moved on by then. And the geeks might be wrong anyway. Best not to rock the boat. Are any competing firms getting out of MBSs? No? Then he’ll stay in too. He doesn’t want to be the only one not making money here.
And so we now arrive at the reason for being pissed off. As we saw yesterday, politicians laid the foundation that made the crisis possible. So Wall Street suits moved in and built a house of cards. Fundamentally, the bailout consists of the two groups most responsible helping each other out, with our money. A friend of mine has described the “Nutsack Bailout Plan”: any Wall Street firm that wants a piece of the bailout has to let someone kick every CEO they’ve had for the last ten years in the testicles. Throw in the key members of Congress and the executive branch responsible for oversight and I’m in.
]]>Because of the complexity, I think we should be very careful to take baby steps. Going off half-cocked is much more likely to make things worse than better. I think we need to do three things. First, we need to understand the underlying causes of the mortgage meltdown that kicked off the cascade (not because I think fixing the cause will solve the problem, but because it will help us avoid making things worse). Second, we need to examine how the cascade was magnified so we can hopefully install some brakes going forward. Third, we need to agree on the outcomes we most want to prevent as a society (as individuals, I’m sure we all want to keep our houses, jobs, and savings).
One of my hot buttons is the narrative that the mortgage meltdown was caused by capitalism run amok. This… is… just… plain… wrong. The root cause of the problem is the government f***ing around with the housing and mortgage markets. This intervention caused a potential energy gradient on which capitalists at the aggressive end of the curve could feed. Don’t lay all the blame on the sharks when you go swimming off the Great Barrier Reef after slashing your arms and legs with a razor blade. The feeding frenzy was magnified by the risk structure of the resulting market, a structure that made it easy for some people to fool themselves and their overseers. Not everyone, mind you, but selection pressure caused the organizations these people ran to grow larger than those led by wiser heads. But we’ll get to the sharks in the next installment.
The government intervenes in the housing market per se in a number of ways. The most obvious is that mortgage interest and property taxes are both deductible on your federal and most state returns. As a homeowner, I obviously like this. As a wannabe economist, I know this is a wealth transfer to homeowners and therefore artificially inflates the demand for home ownership. But the feds aren’t the only ones with a finger on the scales here. State governments do their part. For example, California has a whole department dedicated to planning for housing. In general, these types of planning restrictions actually reduce the supply of housing. Moreover, we also have Proposition 13, which penalizes people for selling their houses because their property taxes then increase dramatically.
The net result is that, on average, the federal government stimulates demand and the state government restricts supply. As everyone knows from basic econ, this drives up prices. Then we have all the knock on wealth effects. With rising prices, homes become an artificially attractive form of investment so people that would otherwise rent, choose home ownership and those that would already buy, choose larger homes. As a result, the original distortions become magnified. With population growth, demand and supply gradually get more and more out of whack, causing housing prices to spiral.
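A toy linear market (coefficients invented) shows the mechanics: subsidizing demand shifts the demand curve out, restricting supply shifts the supply curve in, and each move pushes the equilibrium price higher:

```python
def equilibrium(a, b, c, d):
    """Demand: Q = a - b*P.  Supply: Q = c + d*P.  Returns (P*, Q*)."""
    p = (a - c) / (b + d)
    return p, a - b * p

print("baseline:           P=%5.2f Q=%5.2f" % equilibrium(100, 2, 10, 1))
print("demand subsidized:  P=%5.2f Q=%5.2f" % equilibrium(120, 2, 10, 1))
print("plus supply limits: P=%5.2f Q=%5.2f" % equilibrium(120, 2, -5, 1))
# price climbs from 30.00 to 36.67 to 41.67
```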
Of course, this upwards spiral can’t go on forever. Eventually, home prices increase so much more than wages that fewer and fewer people can afford homes using the traditional 20% down, 30-year fixed rate mortgage. But don’t worry, the federal government already has a huge bureaucracy in place to support the mortgage market. Just pull some levers and twist some dials. Hallelujah! Now people have access to more “affordable” mortgages.
The federal initiatives surrounding mortgages are truly staggering. Fannie Mae and Freddie Mac guarantee half of the $12T in mortgages in the US. But those guarantees aren’t (or weren’t) full government guarantees. So don’t forget Ginnie Mae, which specifically promotes affordable housing with a full government guarantee. Since its inception in 1968, it has guaranteed the mortgage backed securities (MBSs) for over $2.6T of loans to low and moderate income borrowers. In fact, I would argue that Ginnie was the seed around which the market for MBSs crystallized. Without someone offloading the risk from the least attractive mortgages, trading in MBSs would have been much less attractive. Then there’s the FHA, which insures 4.8M single family mortgages. Together, legislation and regulation aimed at these institutions effectively control the requirements to obtain a mortgage in the US. Yes, individual banks can loan on different terms, but if they do, they’ll never be able to sell those mortgages to anyone else. So they mostly fall in line.
Now we reach the climax of Act I. As home prices escalated, fewer people could afford homes. They saw this as a loss of status and became restive. The political response is covered quite well in this Washington Post article, but I will sum up. The politicians started to pull levers and twist dials. First, just a little adjustment. In 1995, the Department of Housing and Urban Development (HUD) gave Fannie and Freddie affordable housing tax credits for buying subprime MBSs. The combination of tax advantages and high fees proved irresistible. They increased their purchases of subprime MBSs tenfold from 1995 to 2004. But that wasn’t enough. Bush had an election coming up, so he increased HUD’s affordable housing goal from 50% to 56%. HUD relaxed its oversight of Fannie and Freddie, and they got tax credits for loans that would have been deemed “contrary to good lending practices.”
From a complex systems standpoint, we’d really like there to be either negative feedback or redundancy to blunt the impact of this initial problem. As we’ll see in the next episode, we unfortunately had positive feedback. But there’s already a question you can answer for yourself. Do you think any legislation package aimed at addressing the crisis should increase or decrease the government’s involvement in housing and mortage markets?
]]>Two big prediction markets (Betfair and Intrade) currently show Obama with around a 65% chance of winning. Not surprisingly (given that there’s nothing at stake for the people polled) CNN’s current poll calls it much closer with Obama leading 47% to 43% for McCain (with 10% of those polled not sure). [Update: I just realized I compared apples and oranges when I wrote this since CNN’s is a popular poll not a projection of who will win based on electoral college. But this won’t affect the punchline….]
There’s an interesting new twist in forecasting analysis though that I’d like people’s opinions on. It’s the simulation approach as embodied in FiveThirtyEight.com. This analysis has Obama currently at a whopping 83% to win!
Of course one’s first instinct might be to question the validity of the analysis. After all, the site is public information, and if the analysis is valid it should be priced into the prediction markets. But I’m not so sure we can dismiss it that easily. For one, the prediction markets themselves have shown long periods of “mispricing” where one could arbitrage between two markets. Secondly, it stands to reason that if investors don’t understand why this analysis is deeper than other publicly available information, they will discount it. Finally — and this gets into the heart of complex systems thinking — markets are averagers, whereas the simulation approach being used by FiveThirtyEight.com seems to model the system being predicted very well. This includes the various non-linear convergences and divergences, tipping points, info cascades, etc., that are beyond the ken of an averaging model (like a market).
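For intuition about why a simulation can differ so sharply from market odds, here is a stripped-down sketch of a simulation-style forecast. To be clear, this is not FiveThirtyEight’s actual model; the safe electoral votes, the battleground probabilities, and the shared national swing are all invented for illustration.

```python
import random

SAFE_EV, NEEDED = 200, 270   # hypothetical safe electoral votes; votes to win

# Hypothetical battleground states: (win probability, electoral votes).
battlegrounds = [(0.62, 21), (0.55, 27), (0.50, 20),
                 (0.45, 34), (0.40, 15), (0.35, 21)]

def simulate(trials=100_000):
    wins = 0
    for _ in range(trials):
        # A shared national swing correlates the state outcomes -- the kind
        # of non-linear cascade a simple averaging model can't represent.
        swing = random.gauss(0, 0.07)
        ev = SAFE_EV + sum(v for p, v in battlegrounds
                           if random.random() < min(1.0, max(0.0, p + swing)))
        wins += ev >= NEEDED
    return wins / trials

print(f"P(win) = {simulate():.2f}")
```

Because the swing moves all the states together, being slightly ahead in several correlated battlegrounds can translate into a win probability far higher than the individual state numbers suggest, which is exactly the kind of non-linearity an averaging mechanism washes out.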
Thoughts?
]]>reify: verb ( –fies, –fied) [ trans. ] formal
make (something abstract) more concrete or real
Imagine if an alien landed on Earth to study modern society and you were assigned the task of being its local guide. You get to the subject of money and the alien is perplexed. What is money? Is it paper currency? Clearly not, since you can exchange that paper for other forms of currency, such as coins, foreign bank notes, electronic funds, treasury bills, and all sorts of derivatives, assets (both tangible and intangible, liquid and illiquid), services, promises, and so on. After hearing all of the various aspects of money, the alien tells you that money doesn’t really exist.
Money is a fiction, a myth that we humans all believe in. We all act as if money is something real, and this fiction has incredible sway over all aspects of our society, both on the individual scale and in the grand scheme of things. But there’s nothing real about money, says the alien. You argue that the very “as if-ness” of money, the fact that we all act as though it were something specific and real, makes it so. Still, you have this nagging suspicion that on some level the alien is right…
One of the epistemological roadblocks of modern science as it (increasingly) studies complex systems is that there is no real place for reification of new concepts that are not already well understood. You can posit something new as a hypothesis, but then you must reduce that thing to its constituent parts, each of which must be well understood (i.e. the assumptions). But if the magic is not in the parts but rather in how they fit together, or in a process, then our current scientific method does poorly at explaining the system.
In complex, emergent systems we are tempted as scientists to say that what we observe colloquially — the invisible hand of the market, the Darwinian evolution of culture, consciousness itself — is either an artifact (not to be reasoned about) or that there exists a heretofore unresolved, better explanation rooted in familiar concepts. But by this logic, we would have to either admit that money doesn’t exist, or that it’s so complicated a thing that in thousands of years of its use, and the many Nobel prizes awarded to those who study it, we just haven’t come up with the right reduction to explain it.
It’s not that reductionism and the current scientific method are wrong; rather, they are incomplete. What’s missing is the emergent explanation, the “as if” part of the equation. The mark of a good scientific theory is not how many rigorous equations or categorical reductions it contains. It’s how good the theory is at (a) explaining the observed data and (b) making predictions about what will be observed if we look.
For instance, I would argue that technological change as a process of natural selection is a much better theory than one which tries to deconstruct a piece of technology (say a camera) and explain how it came to be that way or how successive versions might change in the future. But what you won’t get with the evolution of technology theory are specific predictions and particular designs. Rather we must be satisfied with overall trends based on the factors that natural selection works on. Namely, that the users of cameras like to carry cameras with them, so there is selective pressure for cameras to become smaller and lighter, and/or to become integrated into other objects that are being carried already (like phones). While a reductionist might reason about the underlying technology and manufacturing process, an evolutionist might see the camera itself as a somewhat arbitrary manifestation of selective pressures, namely that people want to access and share visual remembrances.
Darwin’s idea* has been hailed by many as the single biggest triumph in scientific thinking in the last 200 years. We sometimes forget that evolution is not a reductionist model but rather an emergent one. It says that given a population of organisms with heritable variation and a mechanism that selects certain organisms for reproduction over others, a picture emerges of how that population changes over time. Viewing an individual organism in isolation at a particular point in time or looking at the group via statistical averaging (i.e. the reductionist method) will never produce this picture. Yet we are so steeped in reductionism that we fail to see that evolution has nothing to do with organisms per se. It’s the process that results when the preconditions are satisfied. Technological and cultural evolution are just as real as biological. If you believe in the latter as something real, there is no rational argument for disqualifying the former.
We must be willing to reify models that have great explanatory and predictive power, regardless of whether there yet exist good formalisms with which to quantify and calculate. Sometimes (I would say quite often) formalism can be our biggest stumbling block to better understanding and hence better science. Either an existing formalism has been mistaken for the thing itself (and is thus needlessly sacrosanct), or we fear to tread uncharted territories without our trusty mathematical swords.** The first step in conquering the dragons of complexity is to reify. Practically speaking this means to first act as if a phenomenon is real, and then explore whether a world in which such a thing exists is more believable (explanation-wise and prediction-wise) than one in which it does not. If so, there will be time for formalism, and for comparison to (and reconciliation with) reductionist models.
In many systems that are studied scientifically the act of reification is a fait accompli before the science begins. Economics doesn’t bother with the question of whether markets exist, whether they are real entities that can be reasoned about and formalized. Everyone knows that markets are real, heck, many people shop on a daily basis. Does biological evolution exist though, does it deserve to be reified, reasoned about, formalized? What about technological evolution, or cultural?
As we attempt to explain and predict complex systems behavior, we must be willing to reify concepts and explore the consequences thereof. This is especially true in fields where existing models do a poor job. Einstein discovered/invented relativity by imagining something very simple: that the speed of light is constant and that time is therefore relative to an observer. The math and confirming observations followed from this simple exercise in reification. In the end what separated Einstein from his peers was his willingness to challenge an assumption so basic (absolute time) that nobody seriously considered that it might not be true.
Imagine what other deep discoveries could be made by throwing out old assumptions and reifying new ones.
—-
* Ironically, natural selection was an idea that was ripe or “in the air”; had Darwin not been alive someone else (probably Alfred Russel Wallace) would have gotten the credit. This suggests that great ideas are not created by singular geniuses, but rather are themselves emergent phenomena. See the New Yorker article on Intellectual Ventures for more evidence of this.
** If math is the talisman of reductionism, simulation is becoming that of emergent science.
As I’ve mentioned before, I am a fan of Dave Zetland. When I saw him propagate what I think is a fundamentally false dichotomy in this post, I knew I had to take on the concept of Knightian uncertainty. It crops up rather often in discussions of forecasting complex systems and I think a lot of people use it as a cop out.
Uncertainty is all in your mind. You don’t know what will happen in the future. If you have an important decision to make, you need an implicit or explicit model that projects your current state of knowledge onto the space of potential future outcomes. To make the best possible decision, you need the best possible model.
Knight wanted us to distinguish between risk, which is quantifiable, and uncertainty, which is not. If you prefer the characterization of Donald Rumsfeld, risk consists of “known unknowns” and uncertainty consists of “unknown unknowns”. This taxonomy turns two continuous, intersecting spectra into a binary categorization.
There are some random events where we feel very confident about our estimation of their likelihood. There are other random events where we have very little confidence. These points define the confidence spectrum. Moreover, there are some events that we can describe very precisely in terms of the set of conditions that constitute them and the resulting outcomes. Others, we can hardly describe at all. These points define the precision spectrum. There’s obviously some correlation between confident likelihood estimation and precise event definition, but it’s far from perfect. Unsurprisingly, trying to cram all this subtlety into two pigeon holes causes some serious analytic problems.
The biggest problem is that proponents of the Knightian taxonomy say that you can use probability when talking about risk but not when talking about uncertainty. Where exactly is this bright line? If we’re talking about a two-dimensional plane of confidence vs precision, drawing a line and saying that you can’t use probability on one side is hard to defend.
Now, the Knightians do have a point. As we get closer to the origin of the confidence vs precision plane, we enter a region where confidence and precision both become very low. If we’re looking at a decision with potentially tremendous consequences, being in this region should make us very nervous.
But that doesn’t mean we quit! “Knightian uncertainty” is not a semantic stopsign. We don’t just throw up our hands and stop analyzing. As I was writing this post, Arnold Kling pointed to a new essay by Nassim Taleb of The Black Swan fame. Funnily enough, Taleb has a chart very much like the confidence vs precision plane I propose. His lower right quadrant is similar to my origin region. Taleb says this area represents the limits of statistics and he’s right. But he still applies “probabilistic reasoning” to it. In fact, he has a highly technical statistical appendix where he does just that.
Before I saw Taleb’s essay, a draft of this post included a demonstration that for any probabilistic model M that ignored Knightian uncertainty, I could create a probabilistic model M’ that incorporated it. M’ wasn’t a “good” model, mind you; I merely wanted an existence proof to illustrate that we could apply probability to Knightian uncertainty. The problem, of course, was that the new random variables in M’ all reside in the danger zones of Taleb’s and my respective taxonomies. But Taleb’s a pro and he’s done a far better job than I ever could of showing how to apply probabilistic reasoning to Knightian uncertainty. So I won’t inflict my full M vs M’ discussion on you.
The key take home point here is that you can in fact apply probability to Knightian uncertainty. Of course, you have to be careful. As Taleb wisely notes in the essay, you shouldn’t put much faith in precise estimates of the underlying probability distributions. But this is actually good advice for all distributions, even well behaved ones.
Back when I was in graduate school, my concentration was in Decision Analysis, which included both theoretical underpinnings and real-world projects trying to construct probabilistic models. I dutifully got my first job applying this knowledge to electrical power grid planning. What I learned was that you should never rely on the final forecast having a lot of precision, even if you’re dealing with well behaved variables. Put a dozen or so well behaved variables together, and the system still often becomes extremely sensitive.
However, “doing the math” leads to a much deeper qualitative understanding of the problem. You can identify structural deficiencies in your model and get a feel for how assumptions flow through to the final result. Most importantly, you learn which variables matter most and which you can probably ignore. Often, the variation in a couple of variables will swamp everything else.
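As a flavor of what that looks like, here is a minimal sketch with a made-up multiplicative model: each input is individually well behaved, yet the combined forecast spreads widely, and a quick one-at-a-time check shows a couple of variables swamping the rest. Nothing here corresponds to a real power-grid model; the structure and the spreads are invented.

```python
import math
import random

random.seed(0)
TRIALS = 10_000
# A dozen "well behaved" lognormal inputs; two have somewhat wider spread.
sigmas = [0.05] * 10 + [0.3, 0.4]

def model(xs):
    return math.prod(xs)     # multiplicative structure amplifies variation

outputs = sorted(model([random.lognormvariate(0, s) for s in sigmas])
                 for _ in range(TRIALS))
print("P90/P10 ratio of the forecast:",
      round(outputs[9 * TRIALS // 10] / outputs[TRIALS // 10], 2))

# One-at-a-time sensitivity: swing each input from its ~P10 to its ~P90.
for i, s in enumerate(sigmas):
    swing = math.exp(1.28 * s) / math.exp(-1.28 * s)
    print(f"x{i:02d}: output swing factor = {swing:.2f}")
```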
For example, one of the insights Taleb identifies is that you should be long in startup investments (properly diversified, of course). That’s because the distribution of Knightian outcomes is asymmetric. Your losses are bounded by your investment but your gains are unbounded. Moreover, other people probably underestimate the gains because we don’t have enough data points to have seen the upper bounds on success. There’s a bunch of somewhat complicated math here having to do with the tendency to underestimate the exponential parameter in a power law distribution, but most numerate folks can understand the gist, and the gist is what counts. The potential of very early startups is systematically underestimated. Now, this isn’t just empty speculation. I’m actually taking this insight to heart and trying to create a financial vehicle that takes advantage of it.
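Here is one way a numerate reader might see the gist in code. This is a sketch under assumptions of my own: that outcomes follow a Pareto (power law) tail, and that the biggest wins are effectively censored out of the historical sample because we haven't lived long enough to see them. Under that reading, a naive fit makes the tail look thinner than it really is, which understates the upside.

```python
import math
import random

random.seed(1)
ALPHA_TRUE, N = 1.2, 5_000   # alpha < 2: a very heavy tail (illustrative)

# Pareto-tailed "startup outcomes" via inverse transform sampling:
# if U ~ Uniform(0,1), then U**(-1/alpha) is Pareto with x_min = 1.
outcomes = sorted(random.random() ** (-1 / ALPHA_TRUE) for _ in range(N))

def naive_alpha(xs, x_min=1.0):
    # Hill / maximum-likelihood tail-exponent estimate, ignoring censoring.
    return len(xs) / sum(math.log(x / x_min) for x in xs)

print(f"alpha, full sample:       {naive_alpha(outcomes):.2f}")
print(f"alpha, top 5% never seen: {naive_alpha(outcomes[:int(0.95 * N)]):.2f}")
# The censored fit yields a larger alpha (a thinner-looking tail), so the
# probability of enormous outcomes gets systematically understated.
```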
I’ll give you an example from another of my favorite topics, climate change. I would love for someone to try and apply this sort of analysis to climate change outcomes. We have a power law distribution on our expectations of climate sensitivity for both CO2 warming and aerosol cooling. We also have a power law distribution on our expectations of natural temperature variability. If someone really good at the math could build a model and run the calculations, there are some very interesting qualitative questions we might be able to answer.
First, is the anthropogenic effect on future temperatures roughly symmetric, i.e., could it make things colder or warmer? Second, and more importantly, is the anthropogenic contribution to variability significant compared to natural variability? If it isn’t, we should budget more for adaptation than mitigation. If it is, the reverse. But to get these answers, we need to be able to manipulate symbols and make calculations. Probability is the only way I know to do this. So saying you can’t use probability to tackle Knightian uncertainty seems like a cop out to me. How else are we supposed to make big decisions that allocate societal resources and affect billions of people?
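I can’t do the real version, but here is the skeleton of the calculation I have in mind. Every distribution below is an invented stand-in (real priors would have to come from the climate literature); the point is only that once you commit to distributions, the qualitative questions above reduce to runnable arithmetic.

```python
import random

random.seed(2)
TRIALS = 100_000

def heavy_tail(alpha, scale):
    # Pareto-style stand-in for a power-law expectation; not real priors.
    return scale * (random.random() ** (-1.0 / alpha) - 1.0)

anthro, natural = [], []
for _ in range(TRIALS):
    co2_warming  = heavy_tail(alpha=3.0, scale=1.0)    # degrees C, invented
    aerosol_cool = heavy_tail(alpha=3.0, scale=0.5)
    anthro.append(co2_warming - aerosol_cool)
    natural.append(random.choice([-1, 1]) * heavy_tail(alpha=4.0, scale=0.4))

def p05_p95_spread(xs):
    xs = sorted(xs)
    return xs[int(0.95 * len(xs))] - xs[int(0.05 * len(xs))]

frac_cooling = sum(x < 0 for x in anthro) / TRIALS
print(f"Q1 -- fraction of runs with net anthropogenic cooling: {frac_cooling:.2f}")
print(f"Q2 -- anthro 5-95% spread {p05_p95_spread(anthro):.2f} "
      f"vs natural {p05_p95_spread(natural):.2f}")
```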
]]>
From his comment, I think Jay has misinterpreted (or I have miscommunicated) three crucial points: (1) whether relying on expert opinion is misguided, (2) for whom I hold Ascetic Meme related hostility, and (3) whether the term “meme” has a pejorative connotation.
On the first point, Jay says, “I’m not a scientist, so on matters such as these, the best I can do is put faith in those who seem qualified and have reasonable motivations.” Actually, this is my general rule as well. But the whole point of these posts is that, due to a fortuitous personal situation, I had the time to test this rule by becoming something of an expert myself and was surprised at the results. I proposed the Ascetic Meme to explain what went wrong in this particular case. I am not condemning experts in general and I am not condemning the vast majority of climate experts. I am certainly not condemning regular people who believe in AGW because that’s what it appears the experts are saying.
Which brings us to the second point about where my hostility lies. That is reserved for the demagogues and a few subverted climate experts who are trying to turn environmentalism from a search for the optimal policy into a sanctimonious ideology. There are three specific actions to which I object: misrepresenting their degree of understanding about climatic processes, casting people who disagree with them as evil, and outright misrepresenting the facts. Now, there are usually a few people on each side of a debate who try these tactics. But they usually don’t work. I’m interested in understanding why it seems to be working in this case. My hypothesis is that there’s something about environmentalism that makes people susceptible to this manipulation.
Now, you could question my conclusions about what the science says. But then you have to argue the science. I assert that I reviewed the science in good faith, predisposed to the prevailing public sentiment. Feel free to disbelieve this assertion or believe that I am incompetent to review the science, but then you should simply just ignore me on this topic altogether.
Finally, we get to the third point about the connotation of the term “meme”. Here is the wikipedia definition. There’s nothing pejorative in there that I see. Personally, I think taking offense to an assertion that you run memes is like taking offense to the assertion that genes determine much of your physical makeup. It’s just how the world works. I’m certainly not saying I’m better than you. In this particular case, recall my statement in part 1 that I am a fan of the Frugality Meme and think it’s adaptive.
My warning is to be on the lookout for the Frugality Meme mutating into the Ascetic Meme. I believe that anyone who tells you that you should accept an environmentalist assertion purely on the basis of authority (e.g., “This is settled science.”) or attempts to castigate someone who disagrees with them as evil, crazy, or stupid is attempting to convert you to running the Ascetic Meme.
Now, it’s perfectly rational to say, “I think the experts are probably right on this, so I don’t think it justifies my time to review all the science myself.” I’ve got no problem with that. I only ask that you be on the lookout for future changes in the expert opinion and distinguish between what the experts actually say and what politicians/demagogues assert that the experts say.
I can tell you what annoys me personally. I try never to bring up AGW in normal conversation. Sometimes, someone else does and I can’t stop myself from offering a cautious opinion. If pressed, I will explain that I spent a lot of time reviewing the science. Then a true believer tries to tell me that I’m wrong and that I’m going to “kill the planet”–when they haven’t even looked at the science themselves!
]]>If you buy the somatic evolution (SE) argument, then there are all sorts of consequences that contraindicate chemo in most cases. Just for instance, there is good experimental evidence that SE is punctuated (as in “punctuated equilibrium”). If you introduce stress into the population at the wrong time, you tip the system into a regime that is bad for the human (good for the cancer cells). A more auspicious time to apply chemo (according to SE) might be before symptoms occur. But doctors and patients are perhaps more likely to be aggressive with chemo at precisely the wrong time.
Another consequence of SE is the “cleverness” of the disease in terms of routing around the brute-force obstacles we throw at it. Chemo is just the sort of unspecific and mild selective pressure that evolution is really good at adapting to. So instead of mere drug resistance, chemo is really creating a new system which thrives in the presence of chemo; if you don’t kill every potentially bad cell, you could be worse off than when you started. But comprehensiveness is hard to achieve because after all, the cancer cells are yours to begin with and there is strong selective pressure for them to learn to evade detection, not play nice in the cell-signaling game, etc. The only way we know how to tell whether a cell is cancerous or not is by observing its behavior in the presence of other cells in the body of the organism in question.
I would like to reiterate: I am not bashing chemo per se, just pointing out that it’s in a class of potential solutions based on a theory of what cancer is that is fundamentally incompatible with the evidence. Furthermore, one theory which is compatible with the evidence — somatic evolution — suggests that if you start with a chemo approach, you will always be patching a leaky boat with material that is corrosive to the boat itself.
A final piece of circumstantial evidence comes from an experiment you can do yourself. Find as many practicing oncologists as you like, get them to speak to you off the record, and ask them the following question: “If you or a family member was diagnosed with a common form of solid cancer (breast, colon, etc), would you administer chemo?” Previous such surveys suggest that less than 25% would choose chemo for themselves, but nearly all of them would for their patients (due to malpractice concerns).
BTW, somatic evolution is not the end-all-be-all in cancer theory. There are other dynamics at play too. Or more precisely, evolutionary theory as currently understood by most people in life sciences is not complete enough to account for these other important dynamics. One of these that I think needs to be reconciled and explored in much greater depth is what’s known as aneuploidy. Aneuploidy as it applies to cancer refers to any sort of damage done at the level of the chromosome (as opposed to the gene-level, which sits below the chromosome level). In cancerous systems, the gene-level dynamics can look extremely stochastic, while at the same time, genome-level (i.e. chromosome-level) dynamics show remarkable patterns. Why this isn’t addressed in most cancer research at this time is a larger discussion about how science advances in the real world.
]]>I had hoped that the article would be about the somatic evolution of cancer, and while it touches on this aspect briefly and tangentially, it mostly talks about the evolution of defenses against cancer within the human population as a whole. There is a critical distinction here: somatic evolution occurs among the cells within a single body in the course of a single human lifetime,* while human evolution happens in a population of many humans over millions of years.**
The reason it is important to clearly understand this distinction is that it has dire implications for how we understand cancer and how we prevent, detect and treat it. For instance, those who understand the implications of somatic evolution are very skeptical that chemotherapy or radiation is a reasonable approach. Why? Because just as virus populations evolve in response to selective pressures (i.e. things trying to kill them), so do cells in your body. In the case of chemo, it’s called drug resistance. The problem with non-targeted approaches like chemo and radiation is that they are also very toxic to normal, healthy cells and hence may kill you before eradicating the last cancer cell. The question becomes, can we get beyond drug resistance and make the chemo approach work? If you understand somatic evolution, the answer is probably not.***
The larger point is not to single out chemotherapy — though there certainly are some perverse incentives in the drug industry to push ineffective and harmful treatments — but rather that understanding somatic evolution, how it relates to human evolution, and the distinction thereof are all critical if we are going to lead longer, healthier lives that are not destroyed prematurely by cancer.
—
hat tip: Ali Tabibian for sending me the SciAm issue.
* With the caveat that in some cases (Tasmanian devils, dogs, human transplant patients), somatic evolution can span generations — in theory indefinitely — as the cancerous cells are transmitted from organism to organism.
** Kirschner and Gerhart in The Plausibility of Life shed some fascinating light on the intimate and complex relationship between somatic and organismic evolution; however, they use terms like adaptation instead of somatic evolution, presumably to avoid this very confusion.
*** Ironically, SciAm published an article in 1985 by John Cairns, professor of microbiology at Harvard University, which stated, “Aside from certain rare cancers, it is not possible to detect any sudden changes in the death rates for any of the major cancers that could be credited to chemotherapy. Whether any of the common cancers can be cured by chemotherapy has yet to be established.” Note that according to the National Cancer Institute itself, the percentage of Americans dying from cancer is the same now as it was in 1950 (and has remained flat the entire time). Thus it is not reasonable to dismiss Cairns’ proclamation as a lone or outdated voice.
Note that I am not saying anything about your personal consumption and conservation decisions. If it makes you feel better to buy a hybrid, install photovoltaic panels, or rip out your lawn, more power to you. Sure, we could have a detailed economic argument about whether any of these measures are cost-justified. But even if there happen to be theoretical economic losses, the practical psychological gains are likely to outweigh them.
No, what I’m saying is that we can’t trust our instincts when it comes to determining the laws and regulations that we impose on everyone. As I hope I’ve demonstrated, the Ascetic Meme can be a powerful psychological force. It operates below the threshold of conscious thought. Basing policy on our instincts creates a fertile ground for mistakes and abuse. There’s just too much emotional energy that can generate momentum in a negative direction or be harnessed for someone’s personal gain. An example of the former is clear cutting entire ecosystems to plant biofuel feedstock. An example of the latter is rich oil men lobbying for subsidies to fund renewable energy projects.
It’s not enough to recognize these problems and say, “Now that I know there may be side effects, I’ll just take them into account.” That isn’t the way human brains work. Back at Stanford, I took a class from Amos Tversky about the effect of cognitive biases in decision making. He had a great analogy between cognitive biases and perceptual biases. A classic perceptual bias is that objects appear closer on clear days and farther away on hazy days. However, knowing this fact doesn’t help people judge distance any more accurately. Similarly, simply understanding how cognitive biases affect your thinking doesn’t automatically give you better decision making ability. While the Ascetic Meme isn’t precisely a cognitive bias, it operates in a similar way: by subconsciously guiding your thinking. Even Tversky’s partner in seminal cognitive bias work, Daniel Kahneman, falls prey to these biases.
Even worse, the principle of cognitive dissonance means that your core beliefs will conform themselves to fit your actions and statements (this is a primary tool in interrogation and brainwashing). So if the Ascetic Meme tricks you into thinking that harsh conservation is necessary and then you start conserving more and exhorting your friends to do the same, your brain will modify your beliefs so that you believe even more strongly in conservation, regardless of any new evidence that comes your way! The logical fallacy here is that you start confusing instrumental values with terminal values. This explains how a previously professional scientist can, over time, come to believe that people who disagree with him are evil.
No, you can’t just wish away the Ascetic Meme. Like the influenza virus, it wouldn’t be so prevalent if it weren’t good at surviving. Rather, we need to employ tools and procedures in our environmental policy making that minimize the impact of the Ascetic Meme.
Unfortunately, unlike perceptual biases, there is no obvious way to improve accuracy by a few orders of magnitude. So you can’t just make the equivalent of a laser range finder and be done with it. Environmental policy is at the frothy frontier where two complex systems meet: natural environment and human politics. Therefore, any prescriptions are necessarily squishy. But I’ve got some ideas.
My cardinal rule of environmental policy is: show restraint. Government gives men power over other men. History teaches us that men do not always use altruistic judgment in the exercise of such power or relinquish it readily when it no longer serves the good of society. So I believe we should give the government only the minimum amount of power necessary. A good rule of thumb is to take the powers that you think your side should have and then imagine that the other side is given them instead. If you really don’t like the projected result, you’re asking for too much power.
Applying this general rule to the environment gives me a three-step escalation process in the types of policy mechanisms I think we should employ:
(1) Facilitate market transactions. This approach works best in cases where the underlying problem is the allocation of scarce resources such as water or land. I know I’ve mentioned David Zetland’s Aguanomics before, but it has some great examples of how to apply this rule for water, including a relatively new post on a specific mechanism design. You could imagine doing the same thing for land to promote habitat preservation. We could put in place worldwide treaties that create strong property rights for land in sensitive ecologies like rainforests. Then environmental groups can simply buy up the land. I think we should always try this route first because it gives the government no lasting power and lets the market sort out the efficient allocation.
(2) Targeted, rebated Pigouvian taxes. This approach works best in cases that I call “diffuse externalities”: a very small amount of expected harm is done to a lot of people. Carbon emissions are the primary example I have in mind. The way this works is that you set a Pigouvian tax on the behavior you want to discourage that is equal to the expected harm it does. The market will then figure out the most efficient level of that behavior. I think taxes are much better than cap-and-trade because they don’t create a valuable initial allocation of credits that everyone wants a piece of. However, I add two qualifications. First, I think the tax should be targeted as specifically as possible on the harm you want to avoid. In the case of carbon emissions, if you want to reduce the threat of anthropogenic global warming (AGW), you tie the carbon tax to the AGW warming “signature”, which is the tropical troposphere warming more than the Earth’s surface, aka the T3 Tax. If you’re also concerned about ocean acidification due to CO2 emissions, tax the pH of the ocean. The second qualification is that you rebate the tax to consumers. That’s right. At the end of the year, you give all the money collected back to the people according to either a per-head, pro-rata income, or other fair-seeming formula. This caveat reduces the temptation for the government to try and direct the proceeds for its own gain, while still reducing the targeted behavior to economically efficient levels. (A toy sketch of such a rebated, indexed tax follows this list.)
(3) Direct regulation. This approach works best in cases that I call “concentrated externalities”: a significant amount of harm is done to a small group of people. Good examples are toxins like heavy metals and PCBs. I don’t see any way around having something like the EPA directly regulate such toxins. I would add two qualifications. First, they should apply strict cost-benefit analysis using explicit value-of-a-statistical-life calculations. This is necessary to avoid actions like the worldwide banning of DDT, which is great for the environment but bad for all those people who die of malaria. Second, the department evaluating and enforcing such regulations should be as independent as possible. I actually think the EPA does a decent job, but it would do an even better job if it were free from political influence. It would be nice to make it more like the Federal Reserve than the Department of the Treasury. I’m not sure how to enforce the desire to make this option the last resort, though. Witness the attempt to get CO2 regulated as a pollutant.
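To make mechanism (2) concrete, here is the toy settlement sketch promised above. Every number is invented, and the indexing of the rate to an observed harm signature (the T3 idea) is reduced to a single multiplier.

```python
# Toy rebated, indexed Pigouvian tax. All numbers are invented.
BASE_RATE = 30.0            # $/ton at a reference warming signature
POPULATION = 300_000_000

def settle_year(emissions_tons_by_firm, observed_signature, reference=1.0):
    # Index the rate to the observed harm signature (the "T3" idea):
    # no signature, no tax; stronger signature, higher tax.
    rate = BASE_RATE * max(0.0, observed_signature / reference)
    revenue = sum(tons * rate for tons in emissions_tons_by_firm.values())
    # Rebate the entire take per head, so government keeps no kitty.
    return revenue, revenue / POPULATION

revenue, rebate = settle_year({"firm_a": 1_000_000, "firm_b": 250_000},
                              observed_signature=0.8)
print(f"collected ${revenue:,.0f}; rebated ${rebate:.4f} per person")
```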
Yes, I know that these tools aren’t perfect. There are almost certainly special cases where they may not work. They can almost certainly be abused in some fashion. But they’re better than going off half-cocked every time someone shouts that we should start depriving ourselves of civilization’s comforts because the sky may be falling.
]]>One of the main themes (but not the only one) in Cancer Complexity is the notion that cancer is an evolutionary process (as in Darwinian evolution), except that instead of populations of individual animals, the population of interest is the set of cells in the body of a single animal. David Basanta devotes his whole blog to exploring this concept, both in his own research as well as in others’.
The National Cancer Institute recently held a summit of physical scientists which concluded that cancer evolution is critical but too often swept under the rug. From their report:
Cancer is an evolutionary process. This has been a conversation that has waxed and waned in the field of cancer biology for a long time. However, data supporting any or all interpretations of what this might mean in cancer are sparse. From today’s discussion, it is obvious that the physical scientists believe this is a critical concept that needs careful examination in terms of its role in transformation to cancer and what follows from these original changes.
Coincidentally, I recently participated in a workshop at the Santa Fe Institute on Integrating Evolutionary Theory into Cancer Biology.
Clearly the “cancer as evolution” meme is on the rise…
]]>
As you’ll recall, I hypothesized that the Frugality Meme has its roots in evolutionary psychology. Given the conditions present on the savanna where we evolved, it’s easy to see how this meme might creep beyond the bounds of mere frugality. In the ancestral environment, tribes were almost certainly caught in a Malthusian Trap: any significant improvement in their material standard of living would quickly be countered by an increase in their population (see Chapter 2 of Gregory Clark’s excellent A Farewell to Alms for a thorough, yet accessible explanation of the Malthusian Trap).
Therefore, except for those few at the very top of the dominance hierarchy, everyone lived close to the subsistence level. There simply wasn’t much surplus that could be saved, so it would have been difficult to save too much without literally starving. Under these conditions, the Frugality Meme didn’t require any modulation; just save whatever you can. However, in modern industrialized societies, there is plenty of surplus. Sure, saving some is great. But the marginal benefit of saving decreases as you save more. At some point, the benefit of consumption today exceeds the benefit of saving for tomorrow. However, when the Frugality Meme degenerates into the Ascetic Meme, people seem to discard any notion that consumption today is desirable. Their intertemporal utility functions are their own business of course. But people in the grip of the Ascetic Meme typically also try to enjoin others from consuming today.
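The diminishing-returns point is easy to make concrete. Here is a minimal sketch using log utility; the functional form and the numbers are purely illustrative, not a claim about anyone's actual preferences.

```python
import math

# Diminishing marginal benefit of saving under log utility. A household at
# subsistence (wealth near 1) gains enormously from one more unit saved;
# a wealthy one gains almost nothing.
def benefit_of_one_more_unit(wealth):
    return math.log(wealth + 1) - math.log(wealth)

for w in [1, 10, 100, 1000]:
    print(f"wealth {w:>4}: utility gain from saving one more unit = "
          f"{benefit_of_one_more_unit(w):.4f}")
```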
I think one of the primary reasons people worry about the consumption of others is hierarchy. As most of you undoubtedly know, humans are wired for hierarchy. An interesting aspect of the Frugality Meme is that people at the top of the hierarchy seem to be exempt. There’s no clear direction of causality here. Does hierarchy override the Frugality Meme? Does hierarchy enforce the Frugality Meme? Does the Frugality Meme promote hierarchy? I think the answer is that we have two mutually reinforcing dynamics. Hierarchy and frugality both conferred some competitive advantages on our ancestors and they became intertwined.
Obviously, those at the very top are immune from attempts to enforce frugality. They already dominate. But those angling to make it to the top can use a lack of frugality to signal their perceived higher status. If they can then defend themselves against enforcement, they consolidate that status. Those higher up the chain probably try to enforce more frugality on those lower down the chain as a way of demonstrating their dominance and capturing more of the surplus for themselves. Those at the bottom also have a reason to maintain frugality among their ranks. They want to prevent too many people from being above them in the hierarchy. Obviously, those at the bottom who don’t want to suffer an enforcement sanction from either their superiors or their peers have a strong incentive to signal their frugality somehow.
So what does this have to do with environmentalists? Quite a bit, I think. Environmentalism creates a fertile ground for the Frugality Meme to transform into the Ascetic Meme. With environmentalism, there really is a problem that requires conservation. We can absolutely overconsume our environmental assets. But the environment is so complex, it’s hard to rationally determine what “overconsume” means. So it’s easy to recursively appeal to the Frugality Meme: The world could end unless you save! How much? A lot! Is that enough? Is the environment being degraded in any way? Yes. Are you close to starving yet? No. Then you can save more! Welcome to asceticism.
I’m not sure if my hypothesis is correct. But it explains a lot of strange behavior surrounding environmentalism. First, we have the people that want to use environmentalism as an excuse to live a primitive, subsistence life. Second, we have leaders at the top of the hierarchy who are exempt from saving even as they exhort others to. Third, we have the call for severe sanctions against those who fail to signal their willingness to save. Fourth, we have people trying to signal their environmental frugality by buying Priuses (BTW, I believe the reason the Prius outsells the Civic Hybrid is that it looks distinctive and thus sends a stronger frugality signal) and putting out lawn signs advertising how they’re buying green energy. Lastly, it explains some of my own internal dialogs where my knowledge of economics conflicts with an instinct to appear environmentally conscious.
So the next time you hear someone telling you about some grievous environmental harm and how you have to change your life to prevent it, you feel the need to impress people with your environmental consciousness, or you sneer at someone because they are doing something that appears environmentally unfriendly, ask yourself, “Did I really think that through or did the Ascetic Meme just take me for a ride?”
Next up, my thoughts on the implications of the Ascetic Meme for environmental policy.
]]>
- I see there being at least two types of emergence, autocatalytic and cooperative.
- Emergence is related closely to the concepts of agency, stability, and coherence.
- Cooperation (and autocatalysis) in populations under certain conditions leads to the emergence of new levels of organizational complexity, despite the presence of competition.
- Competition is the backbone of an evolutionary dynamic, which is orthogonal to the emergent dynamic.
- Alex Ryan’s diagram represents the best visual synthesis of these concepts that I’ve found.
- We should not get hung up on any of these terms; they are models (approximations of reality). Concepts like “cooperation” and “level” are not to be understood entirely by their standard English usage, but that is the starting point. Overconstraining their usage creates confusion and often obscures or denies basic truths.
- On the other hand, there comes a time when formalism leads to deeper understanding where our intuitions fail. As Alex points out, “level” is great for human understanding, but the more precise and less intuitive notion of “scope” is better for formal models of emergence. Using precise formalism like his allows us to tease out — and do calculations on — notions like “novel emergence” vs. “weak emergence”. In turn, these new concepts end up jibing with and extending our intuitive understanding.
- A reductionist-only mindset often blinds us to important concepts like emergent causality and cultural agency. By adding in complex systems thinking, which centers on emergence, many once paradoxical and obfuscated situations in the real world become clearer.
The question of whether we will “break through” to a superorganism or collapse through any number of spiraling cascades or catastrophic events is the subject of Ervin Laszlo’s book, The Chaos Point, which I highly recommend. In it, he gives a sweeping view of the complex evolutionary dynamic (focusing on human society), and makes a solid argument that we are at an inflection point in history right now, similar to the “saltation” that begat multicellularity.
As you point out however, even if we do emerge to a higher level of organization, this does not necessarily mean good things for the individual human. Historically it seems that there has always been a tension between group interests and the survival/vitality of the individuals that comprise the group. Cells within metazoa give up quite a bit of autonomy (and longevity?) for the greater good. Humans within corporations or other organizations sacrifice personal desires, wealth, health, etc to be part of the collective.
Given the complexity of the human mind and human society relative to the complexity of saltation precursors in the past, it does become a reasonable question as to whether we can have our cake and eat it too through a consciously engineered emergence that has “freedom and fulfillment as a foundation”.
Your idea is a good one, to “create a system — a culture — that rewards altruism, and altruistic individuals flourish; when they flourish, altruistic systems emerge.” One thing we know about engineering emergence in complex systems, though, is that it’s not entirely controllable or predictable. As I see it, the best we can do in reality is to create individual incentive structures which, when played out in the collective, have the desired result. Then, over time, hopefully the values implied by the system become inculcated and thus self-sustaining, even if the external incentives were to be removed.
]]>— John Wheeler
]]>
Arnold poses this question:
“1. Hanson says that people have a propensity to disagree, just to be contrary. Do you agree? How do we explain conformity?”
To which I comment:
“… In all seriousness, I think people have two different tendencies, which individuals are “blessed” with in different amounts (presumably on a Gaussian distribution).
Tendency one: in interactions framed as “one-to-one”, they tend to be contrary.
Tendency two: in interactions framed as “group”, they tend to conform.
My “just so” explanation using evolutionary psychology (and therefore probably suspect a priori): one-on-one interactions are opportunities to signal reproductive fitness over the opponent while group interactions are opportunities to signal cooperativeness with common goals. Personally, I seem to be blessed with a couple extra standard deviations of contrariness and at least one standard deviation less of conformity.”
So this leaves us with a four-cell matrix of high and low on contrariness and conformity. (Yes, I am too lazy to do the graphic.) As I noted above, I’m high contrariness and low conformity. This makes me an “asshole”–always arguing the smallest point and never just going along. My counterpart at low contrariness and high conformity would probably be termed a “wimp”–never speaking up for himself and always trying to please the group.
While it may seem impossible to be high contrariness and high conformity, I beg to differ. These are “douchebags”–dressed in the latest fashion and spouting the popular ideologies, but showing off by arguing the minor points of those ideologies. Then this makes low contrariness and low conformity people “loners”, as in Unabomber loners–always distancing themselves from the group and never disagreeing to your face, but plotting their revenge in secret.
The primary antagonists are assholes and douchebags. Assholes think douchebags are hypocrites and douchebags think assholes are thugs. If you see two guys arguing at a party, one of them is probably an asshole and the other one is probably a douchebag. You’ll often see several douchebags try to gang up on an asshole, but assholes have an inherent advantage because they don’t have to pay lip service to an ideology. Both assholes and douchebags can attract a posse of wimps, though the douchebags usually have a larger posse.
So which are you? Asshole, wimp, douchebag, or loner? Or if you prefer the late, great George Carlin’s taxonomy: asshole, jackoff, or scumbag? For a different take on being a member of the doucheoisie, see here.
]]>Apparently, some people are seeing potential in cloud computing not just as an aid to science but as a completely new approach to doing it. An article in Wired magazine argues precisely that. With the provocative title of The End of Theory, the article concludes that, with plenty of data and clever algorithms (like those developed by Google), it is possible to obtain patterns that could be used to predict outcomes… and all that without the need for scientific models.
If I told you I had a massively parallel device which takes in huge amounts of raw data and finds patterns via computation to make predictions, would you be able to tell if I was speaking of a computer cloud or a human brain?
Anderson’s postulate about the end of theory is as specious as the claim that all good science is reductionism only, but for opposite reasons. Statistical data-driven discovery of the kind Google is doing is in fact exactly a process of model-building (aka theory), no more, no less. And when the human brain “creates” theory, it is doing so in the context of empirical data, only a small fraction of which is acknowledged explicitly.
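A trivial illustration of the point: even the most “theory-free” pattern-finding produces parameters and a prediction rule, which is exactly what a model is. (The data values here are made up.)

```python
# Fitting a least-squares line to made-up data. The result -- two
# parameters plus a prediction rule -- is a model/theory, however humble.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum(x * y for x, y in zip(xs, ys)) - n * mean_x * mean_y) / (
    sum(x * x for x in xs) - n * mean_x ** 2)
intercept = mean_y - slope * mean_x

def predict(x):                     # the "theory" the data built
    return slope * x + intercept

print(f"y = {slope:.2f}x + {intercept:.2f}; prediction at x=6: {predict(6):.1f}")
```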
Understanding cannot be gleaned from brute-force data-mining alone. There are inherent complexity barriers that even an atoms-in-the-universe sized computational cloud could not crack. On the other hand, theory without strong correlation to the empirical is just math. In a tale of two sciences, theoretical physics has been negligent on the latter count while biology on the former. Both pendulums are swinging back toward the center now out of absolute necessity. String theory isn’t going to be a grand unification any more than DNA sequencing is going to cure cancer.
The interesting question that cloud computing raises is the limits of the (individual) human brain as a tool for scientific advancement. One could argue that the reason we are stuck on so many big problems is because we are running up against the limitations of the human brain from both a bottom-up computation perspective as well as a top-down analytical reasoning perspective. If this is so, then the importance of technology (like cloud computing) goes up and the likelihood of there ever being another Einstein goes down.
We should not forget though that human minds can be organized in coordinated activity to yield better results than any one individual. But there’s no such thing as truly automated discovery either. Not because of some special property of the human mind but rather because “truth” is only relevant within the context of a value system. Whether a cell is cancerous and needs to be killed depends on whether you are the cell or the multicellular organism. And even if you believe in absolute truth, a human value system matters from a practical standpoint to sort out relevant truths from the infinite possibilities. Ultimately, advancement in science will rely on computation beyond the ken of any one human, but it will be directed by humans who care about the discoveries and have a point of view on why they matter.
]]>First, I must warn you that these thoughts are pretty preliminary, highly speculative, and undoubtedly controversial. It will also likely require more than one post to flesh them out. So bear with me. The crux of my hypothesis is that the polarization we are currently seeing on environmental issues is an emergent phenomenon stemming from some deep evolutionary psychology. Developing workable policies will require getting past this psychology and rationally examining what is actually in the long-term best interests of the human species.
My insight began by observing a common internal dialog of mine: “That’s cool! I want one. Yeah, but it’s really too flashy for me.” There seem to be competing personalities in my head when it comes to luxuries. I see similar behavior in my family and friends. These well-off people forgo small luxuries that are pretty clearly a benefit to them with the rationalization that, “Oh, I don’t really need that.” Then for the big luxuries they do finally cave in to, they have to convince themselves that it’s alright to get them, which seems to require long, drawn-out conversations with me.
I see similar behavior among environmentalists. Take the “locavore” movement. The thinking here is that eating food grown close to you is better for the environment because it reduces the emissions from transporting food. Unfortunately, it turns out that the dirt-to-table emissions attributable to food are dominated by the production phase. So it’s better for the environment to produce foods in areas that require the least intensive methods and then ship them. For example, we should grow a wide variety of foods in New Zealand and ship them to England rather than grow them in England. But emotionally it seems more extravagant to eat food from far away. That’s why when you point out the efficiency argument to avowed locavores, they come up with all sorts of other reasons why you should eat local–not why they eat local, but why you should. If they’re willing to bear the costs of eating local, more power to them, but leave me out of it.
It’s this combination of internal pressure and the incidence of proselytizing that leads me to believe evolutionary psychology may be involved. Humans appear to be wired for cooperation and enforcing cooperation. It’s easy to see how a bias against luxuries could also be adaptive in the ancestral environment. There’s already pressure to accumulate luxuries to signal status. Without some countervailing factor, early groups of humans may have dissolved into a counterproductive escalation of selfish accumulation. Note that I’m not claiming strong evidence for this effect, just putting it forward as a hypothesis.
In and of itself, this bias is probably a good thing even in modern times. Max Weber described how frugality was part of a highly successful work ethic in The Protestant Ethic and the Spirit of Capitalism. I owe my existence to this work ethic (all four of my grandparents are/were paragons of Protestantism), so I’m a fan. Also, people are generally bad at predicting the future, so “saving for a rainy day” is still good advice.
Let’s call this adaptive bias the Frugality Meme. Where we get into trouble is when it mutates into an extreme form: the Ascetic Meme. In this version, the goal is not savings or modesty; it is suffering through deprivation, typically to achieve some sort of spiritual goal. Asceticism is a recurring theme in many world religions. More offensive is that it sometimes includes a compulsion to see other people join in this suffering. You can flog yourself, but please don’t flog me.
I see some environmentalists going down this extreme path. There are people dedicated to “sustainable living“, which appears to bear an uncanny resemblance to living like a monk. But hey, that’s their choice. What really worries me are environmentalists that don’t seem satisfied unless everyone suffers. You can find plenty of environmental extremist quotes on the Web, but here are my top three scariest:
“Phasing out the human race will solve every problem on earth, social and environmental.” — Dave Foreman, Founder of Earth First!
“If I were reincarnated, I would wish to be returned to Earth as a killer virus to lower human population levels.” — Prince Philip, World Wildlife Fund
“We, in the green movement, aspire to a cultural model in which killing a forest will be considered more contemptible and more criminal than the sale of 6-year-old children to Asian brothels.” — Carl Amery
Obviously, these guys are wackos. What worries me more is that regular people often seem to reflexively behave as if anything that protects the environment is good. I plan on exploring how the propagation of this meme stifles debate in future posts.
]]>
Here’s the architect’s website, and here’s a video:
Hat tip: Michael Johnson
]]>
The most common objection I hear when discussing this topic is usually given in the form of a quote: “Give a man a fish and you feed him for a day. Teach a man to fish and you feed him for a lifetime.” My stock retort is: “A starving man makes a poor student.”
Obviously, if someone is hemorrhaging to death, you’ve got to stop the bleeding before doing anything else. But in terms of interventions, this analogy applies mostly to the case of responding to disasters. Lack of micronutrients is a chronic, rather than an acute, condition. However, there’s the more subtle issue of trying to help people break out of a bad equilibrium.
Malnourished populations are caught in a vicious circle where they don’t have the biological resources to improve their human capital enough to accumulate the wealth (using wealth in the general sense that includes things like adequate shelter, water, capital, economic opportunities, educational opportunities, etc.) necessary to improve their level of nourishment.
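Here is a toy dynamical sketch of what a bad equilibrium looks like, and why a one-time “band-aid” can matter. The S-shaped returns function and all the numbers are invented; the point is just the two stable equilibria separated by a threshold.

```python
import math

# Toy poverty trap: wealth feeds health and skills, which feed wealth.
# The S-curve creates two stable equilibria with a threshold between them.
def next_wealth(w):
    return 0.9 * w + 2.0 / (1.0 + math.exp(-(w - 5.0)))

def run(w, years=50, one_time_aid=0.0):
    w += one_time_aid                 # e.g. supplements plus asset transfer
    for _ in range(years):
        w = next_wealth(w)
    return w

print("no aid:       ", round(run(3.0), 2))                    # stuck low
print("one-time aid: ", round(run(3.0, one_time_aid=3.0), 2))  # escapes
```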
So I think band-aid approaches for chronic conditions are good when they are part of a larger plan to break the target population out of a bad equilibrium. In this case, you would give them supplements today and help them ensure access to more nourishing food supplies over a longer term. To this end, #5 on the Copenhagen Consensus list is biofortification. If anybody out there has researched which biofortification outfit could make the best use of donations, let us know.
]]>Under the U.S. Constitution, the states have exclusive and plenary (complete) power to allocate their electoral votes, and may change their state laws concerning the awarding of their electoral votes at any time. Under the National Popular Vote bill, all of the state’s electoral votes would be awarded to the presidential candidate who receives the most popular votes in all 50 states and the District of Columbia. The bill would take effect only when enacted, in identical form, by states possessing a majority of the electoral votes—that is, enough electoral votes to elect a President (270 of 538).
As of this writing, the bill has been enacted by Hawaii, Illinois, New Jersey, and Maryland, which represent a combined 19% of the electoral votes necessary for the U.S. to have a de facto national popular vote for the presidency.
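(For reference: under the apportionment in force at the time, those four states held 4, 21, 15, and 10 electoral votes respectively, i.e. 50 of the 270 needed, which is where the roughly 19% figure comes from.)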
While National Popular Vote doesn’t solve the problem of voting fraud and irregularities at individual polling places, it does dampen the cascading and amplifying effect that the electoral system yields in close election years.
Given that the National Popular Vote can come about incrementally, state by state until the threshold is reached, and given the difficulty state legislators would have if they were to try to repeal the NPV bill they already passed, it seems inevitable that it will come to pass. Whether you agree or disagree, you can make your prediction here.
]]>Regardless of your political leanings or party affiliations, there’s hardly a more important issue if you believe in some form of democracy. As we run up to the 2008 elections, you should be very concerned that the problems have not been adequately addressed, despite all the various voting reform and voting rights initiatives.
While there may be nothing you can do to ensure that your vote is counted properly in 2008, there are some ways you can keep a verified record of your vote, which is a necessary precondition for fixing the broken system:
- Vote absentee. Not only does this eliminate a whole set of irregularities having to do with polling places, registration and voting machines, it also allows you to make copies of your ballot at your leisure. Use certified mail if allowed. Also, mail a postmarked copy to yourself before election day and keep it sealed.
- Take a photo of yourself voting. If you do go into the polling booth, bring a digital camera or cell phone camera and photograph your ballot with a government-issued photo ID in the frame.
- Send proof to the candidate you voted for. Ideally, do this for all candidates you vote for (e.g., House, Senate, state elections), not just for the presidency. The smaller the election, the more important each vote is and the more likely the candidate will be to want and appreciate the proof you send.
If you are not concerned about keeping your vote private, why not post your proof to various web sites, blogs and social networks (like your Facebook profile)? Once everyone’s votes are indexed by Google, it shouldn’t be hard at all to reconstruct the actual election results. Although you may have personal privacy concerns about revealing your votes, your rights as a citizen of a democracy (not just your right to privacy) are under siege. Exercise your choice to reveal now, so that you will maintain your right to conceal in the future.
]]>
]]>Apropos of Kevin’s post yesterday on the “Singularity”, we need to take cultural agency (which includes technological and socio-technological agency) more seriously.
]]>For those of you that don’t already know, a technological singularity is a point where technological advancement suddenly accelerates, making it impossible to make future predictions beyond that point. The term was originally coined by Vernor Vinge, a scientist and author. (I highly recommend all of his science fiction novels.) As Hanson argues in his paper, humanity has already experienced at least two previous singularities–the Agrarian and Industrial Revolutions–and we’re due for a third.
There’s a lot of both scientific and science fiction speculation about the nature of the next Singularity. (For example, see the Singularity Institute homepage.) Typically, the assumption is that some self-improving dynamic will take over, such as intelligence improvements that lead to further intelligence improvements or miniaturization that leads to further miniaturization.
But Rafe’s post reminded me that one of the forces that may be holding us back is the tendency of institutions created by humans to preserve themselves to the detriment of pursuing their original goals. As you already know, I think the UN IPCC is one example of this dynamic. But what if we figured out a way to keep our institutions on track?
This innovation could take many forms. A purely organizational solution might entail a set of organizational rules that help maintain the primacy of the original goal. A software solution might monitor electronic communications, detect counterproductive self-preservation, and initiate countermeasures. A neurobiological solution might alter our brain chemistry to damp down the primate political tendencies that emerge as institutional self-preservation. Or some combination of these three approaches, plus a bunch more that I haven’t thought of.
In any case, a superior organizational paradigm should allow such institutions to out-compete other institutions in achieving useful goals. Eventually, a large enough population of improved institutions could dramatically accelerate technological and economic development.
I realize this thought is only partially formed. But it’s yet another reason to try to understand complex systems.
]]>Recalling some of the ways systems defend themselves, it is interesting to see how the socio-technical agents discussed in the Wired piece do so:
- Extreme Ideologies: “Wired failed to see how a new generation of fanatical geeks would use the Internet in their effort to take over the world. Instead of ending, history looped back on itself, and we are now confronted by a recrudescent and particularly virulent religious ideology straight out of the Middle Ages.”
- Old Media: “…we underestimated how slowly Old Media would auger in — and how irresponsible it would become in its death throes…. Faced with fierce competition for those eyeballs, Old Media is hawking the apocalypse: The world is inundated by war, poverty, destruction, fascist Republicans! It’s about to be swept away by tidal waves unleashed by melting polar ice caps”
- Political Systems: “So instead of spending a decade rebuilding civil society — reinventing how we resolve conflicts and build consensus — we got MoveOn and Daily Kos and Soros-funded 527s that divert immense energy into the mud of politics, all in the naked pursuit of political power.”
One of the general principles I’ve observed is that the more head-on and forcefully you attack a complex agent, the more immediate and powerful its response will be. The fastest change comes from setting up an alternative system which is demonstrably superior, and recruiting resources from the old system at the margins where it is weakest and least self-aware. By the time the old system musters a response, it is substantially co-opted by the new, which now may be strong enough to defend itself and finish off the job. There are no shortcuts. The head-on attack either leaves the old system stronger and the new system dead, or it’s a path to mutually assured destruction.
For the would-be revolutionary, it may be frustrating to see no visible progress, but massive change can percolate slowly and invisibly before a speedy deathblow. It took nearly 30 years of no apparent change, but in the end, the Berlin Wall fell in the relative blink of an eye.
]]>
As you may remember from this post, I am an Anthropogenic Global Warming (AGW) skeptic. I wasn’t always this way; it was only after I’d started trying to research the highest cost-benefit ratio interventions that I became disillusioned with both the scientific evidence that AGW will be a significant contributor to climatic change and the expected reliability of predicting potential AGW effects on human welfare.*
Of course, when Rafe and I discussed my conclusions, he said something like, “Well, essentially you’re saying that the climate is a really variable complex system and we can’t forecast its behavior very well. It follows that climate change will happen and some of it will be bad. People will suffer. So what should we do about that?” Unsurprisingly given the focus of this blog, my answer was that we should invest in adaptation…
… bringing me back to my original problem of determining the highest cost-benefit ratio interventions, albeit starting from a somewhat different set of requirements. It didn’t take long for me to identify water as the top priority: you need it for a lot of things, ensuring an on-demand supply is difficult, and you die pretty quickly without it. Now, it turns out that a little Googling for data and analysis with Excel revealed that there is actually sufficient fresh water on the planet Earth to support everyone in a roughly first world lifestyle.
The problems are location, infrastructure, and allocation, which could potentially be solved by transport, investment, and markets respectively–all economic issues. Thus began my quest to understand water economics. I’m really just getting up the learning curve and the topic is rather complex (you might even say that it is a “complex system”). I’ll probably have several posts summarizing what I learn. Initially, I’ve found three good places to start if you are econo-literate:
- A book chapter by Hanemann from UC Berkeley that is a good overview of water economics.
- A PhD thesis by Zetland from UC Davis that illustrates water economics with a California case study.
- Aguanomics, Zetland’s blog. (Note that he’s a little more interventionist on carbon than I am.)
The biggest thing I’ve learned so far is to be very, very wary of big water projects. There is a lot of opportunity for them to create a cascade of perverse incentives that mess up water usage for decades. Read Zetland’s thesis if you want gory details.
Unfortunately, this lesson creates a bit of a quandary if you’re looking for useful interventions. You can’t just throw big dollars at the problem. So when Zetland posted about WaterAid on Aguanomics, I was quite pleased. First, his endorsement carries significant weight with me. Second, I really liked what I saw when I went to their Web site. They’re focused on modest, local projects and really adapt their water delivery strategy to local conditions. Most importantly, it looks like they get good bang for the buck.
Obviously, WaterAid doesn’t solve the problem of how the entire world will adapt to changing water conditions over large timescales. But it does address the issue of how to improve access to water for the poor today. This is a good thing in and of itself. It should also help us learn what sort of future adaptations will work best for poor regions–the ones who will most likely bear the greatest burden of changing water conditions.
* Before you flame me, two points. First, I won’t engage in any general AGW debate in the comments of this post. If you want to discuss the topic with me, submit a comment with a valid email address (remember, any information in the email address field when you post is supposed to be visible only to blog contributors) that says something like, “Please contact me to discuss AGW.” I’ll send you email to initiate the conversation. Second, I spent hundreds of hours going through the primary literature before I came to these conclusions. So don’t think you’re simply going to snow me with a bunch of assertions and references to lay sources. You’ll have to get into some serious scientific discussion.
]]>Over the years, it’s become clear that we share a lot of the same core beliefs and values (one of which is that play is very important–but that’s a topic for another time). More importantly, we mostly agree on how to reason through a problem. This shared context makes for productive conversations. It also makes it both possible and worthwhile to explore the areas where we disagree. While these areas of disagreement form a multidimensional space with a complex topology, I’ve identified one subspace that is fairly easy to characterize.
In general, I think I’m more optimistic about the current state of the world and Rafe is more optimistic about the potential for interventions to improve the future state of the world. Maybe it’s because my graduate degree is in engineering-economics and his is in computer science. I see a robust equilibrium where he sees potential for optimization. But this analysis isn’t simply idle navel-gazing. It means that when we agree that an intervention is warranted, my confidence in the positive expected value of the intervention is pretty high, an example of which will form the meat of my first substantive post.
Thanks for having me and I look forward to getting to know you all better.
]]>- dene – “Like the gene, our notion of dene is intended to capture the essence of genetic transmission, but, rather than being confined to denoting a discrete chunk of DNA, it is far richer and more expressive.”
- bene – “As with denes, our notion of bene will also be extremely rich, making it possible to express complex modal and temporal characteristics of the organism’s behavior over time, characteristics that go far beyond simple statements about, e.g., protein synthesis or transcription.”
- genitor (aka genetic functor) – “…a genitor relates a particular dene to a particular bene, stating that whenever the organism’s DNA is seen to satisfy the property expressed by the dene, its behavior satisfies the property expressed by the bene.” And later, “…A genitor, with its dene and bene, connects the static with the dynamic. It carries no expectation that its truth can be predicted on the basis of purely structural information.”
Putting this together,
…[a] genitor, G is defined as a triple G = (O, D, B), which groups together the organism O with a dene D and a bene B. The former is a statement about O’s DNA and the latter is a statement about O’s behavior. …semantically, the dene D is a truth-valued function of O’s DNA sequence and the bene B is a truth valued function of O’s temporal life-span. Thus, a dene can be viewed as relating to a snapshot, taken with a still camera, of the organism’s most profound inherited artifact, and B can be viewed as relating to a movie, taken with a video camera, of the way the organism dynamically develops, lives, behaves, etc. A dene thus captures something tangible about what the organism inherently is, and a bene captures something about what it does, or what it is capable of doing, always of course in the context of its internal and external environment.
While the authors admit that their proposed formalism is only conceptual at this point and needs to be extended with a true calculus, it is a vast improvement over the current conceptual system used in genetics. As other researchers have observed, “descriptions of proteins encoded in DNA know no borders — that each sequence reaches into the next and beyond” and “genomic architecture is not colinear, but is instead interleaved and modular, and that the same genomic sequences are multifunctional”.
The new formalism addresses these issues well by extending the concept of genetic information beyond just a linear sequence of the DNA. For instance, “a dene may comprise the specification for one or more proteins, or it may serve as template for the production (transcription) of an RNA molecule that has a purely regulative function. It also might designate a binding site for a protein or RNA molecule, or it may comprise sequences that influence (shape or inform) the 3D structure of the DNA, its mutability, the location of nucleosomes, or even certain aspects of post-transcriptional regulation.”
Benes are formally described as follows: “…a sequence consisting of events and actions, either internal to the organism or cell, or external to them; reflecting changes in state, structure, value, shape, potential, location, etc.” Thus, the phenotypic realm — as described by the bene — now has structure which can be analyzed and simulated with existing formalisms such as finite state machines, dynamically adaptive networks, etc.
An additional feature of the dene/bene/genitor model is that it can incorporate environment explicitly using familiar logic notation like so: “O⊧(D & E & M) →B, stating that the behavior B must be true in organism O if its DNA has property D, it is endowed with mechanisms M, and its environment has property E.” Working within a logical framework enables us to bring to bear well-understood computational techniques, including theorem provers.
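To make the formalism concrete, here is a minimal sketch of how a genitor might be rendered in code. This is my own illustrative translation, not the authors’; the names, the toy motif, and the example organism are all invented.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

# A dene is a truth-valued function of the DNA sequence (the "snapshot"),
# and a bene is a truth-valued function of the behavioral history (the "movie").
Dene = Callable[[str], bool]
Bene = Callable[[Sequence[str]], bool]

@dataclass
class Organism:
    dna: str
    history: Sequence[str]    # events/actions observed over the life-span
    mechanisms: bool = True   # M: endowed with the needed cellular machinery
    environment: bool = True  # E: the environment satisfies the relevant property

@dataclass
class Genitor:
    """G = (O, D, B): whenever D (and E and M) hold, B must hold."""
    dene: Dene
    bene: Bene

    def holds_for(self, o: Organism) -> bool:
        # O |= (D & E & M) -> B
        antecedent = self.dene(o.dna) and o.environment and o.mechanisms
        return (not antecedent) or self.bene(o.history)

# Toy usage: a dene that checks for a motif, a bene that checks for a behavior.
g = Genitor(dene=lambda dna: "GATTACA" in dna,
            bene=lambda hist: "expresses_trait" in hist)
mouse = Organism(dna="CCGATTACAGT", history=["divides", "expresses_trait"])
print(g.holds_for(mouse))  # True
```

Because denes and benes here are arbitrary predicates rather than sequence coordinates, a single dene can in principle cover regulatory sites, structural constraints, and overlapping products, as the quotes above describe.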
Importantly too, the model now is able to account for aspects of complex systems that the standard genetic framework cannot. Examples include virtual stability, epigenetic/Lamarckian inheritance, organismal adaptation (e.g. the Baldwin effect), and more.
There are some major gaps that I would like to see addressed, including multilevel evolution and emergence, as illustrated by Alex Ryan’s diagram. But unlike the standard genetic model, the new one can be extended to include these aspects of complex system dynamics. Which is another clue that the age of the gene is drawing to a close.
Hat tip: Carlo Maley
]]>According to Dr. Harold Freeman of the National Cancer Institute, poverty is the bigger factor today, but it hasn’t always been so:
During 250 years of slavery (1619-1865) and to a lesser extent during the ensuing 100 years of legalized segregation (ending in the mid 1960’s), I suggest that being black in America was a greater determinant of health disparities (including those related to cancer). For example, a slave’s poverty was overwhelmingly related to being a slave. But, in my opinion, as racial discrimination gradually diminished over this 400 year time period (particularly in the last 40 years since the Civil Rights Act) race has diminished as a relative determinant of health disparities compared to poverty.
Here are the principal distinctions I am making:
- In overview, health disparities are principally due to relatively lower economic status (which also correlates with level of education).
- In the past American racist society (slavery and legalized segregation) race — being black — was the principal determinant of who was poor. There are still some residual effects of this history with respect to the demographics of who is poor.
- However, in the past 40 years, as our society has moved toward a post overt racist society, poverty has now exceeded race as a determinant of cancer mortality.
- Even so, race still matters. But it is my view that today “unintended bias” is the principal form of “racialism”. Poverty, as a universal force affecting all who are poor, predominates as the cause of health disparities.
- Therefore the diminishment of the effects of poverty — providing resources to all, while in particular addressing disproportionate poverty in blacks — becomes the target for correction.
Comparing blacks and whites with respect to cancer, Freeman continues:
- Black Americans have an overall higher cancer death rate compared to whites. Also blacks have an overall 10-15% lower 5 year cancer survival compared to whites.
- Correcting for socioeconomic status (SES) factors (between white and black Americans) nearly eliminates the white and black differences in overall cancer survival. This suggests that most overall black and white cancer outcome differences are related to SES. (Blacks are disproportionately poor and uninsured compared to whites).
- Some reports have shown that blacks have a higher cancer mortality compared to whites even at the same stage of cancer diagnosis (breast, colon).
- However the weight of evidence indicates that when black and white patients receive the same treatment at the same stage of disease (breast, colon, cervix) the results are the same. This is an important conclusion.
- The Institute of Medicine (IOM) concluded that race is a determinant of poorer cancer outcome in blacks, even at the same SES and insurance status. This suggests that there is some degree of racial bias on the part of some cancer care providers. This is borne out by other studies such as: Blacks at the same SES and insurance as whites are less likely to receive renal transplantation, less likely to receive curative lung cancer surgery in early lung cancer, less likely to be completely worked up diagnostically for chest pain, less likely to be treated for severe pain.
In summary, most evidence suggests that racial differences in cancer outcome are primarily associated with SES and not innate racial factors. In addition there is significant evidence indicating that race in and of itself is to some extent a determinant of the quality and timeliness of receiving health care. Disproportionate poverty and uninsurance are the principal factors. To the extent that blacks disproportionately smoke cigarettes, have less healthy diets or are obese, these are factors that may cause racial disparities in cancer outcome.
Racial classifications are socially and politically determined, not based on biology. But to the extent that race is a lens through which people see, value and behave toward one another, race is an important factor.
For more information check out the following references:
- “Racial Injustice in Health Care” New England Journal of Medicine (2000)
- “Race, Poverty and Cancer” Journal of the National Cancer Institute (1991)
- “Cancer in the Socioeconomically Disadvantaged” CA: A Cancer Journal for Clinicians (1989)
- “Commentary on the Meaning of Race in Science and Society” Cancer Epidemiology, Biomarkers & Prevention (1998)

From my earlier post, it should be clear that pure exponential growth never happens in reality because the drivers of growth, including resources and incentives, decline (or fail to keep pace with the growth) as time goes on. This is true even if those resources are themselves growing but at a sub-exponential rate.
The bacteria example that the professor gives, while showing the stark differences between linear and exponential growth, is glaringly flawed. Specifically, bacteria cannot propagate without a rich substrate of biological material to draw from. If you seeded a sterile jar with a population of bacteria, not only would it not grow, it would not survive. And even if you pumped in nutrients and sunlight in the right quantities to induce population growth, the growth would decelerate long before the jar was full, due to overcrowding. In fact, the jar would never become totally full.
The point I am making is that the professor’s examples are all constructed or taken out of their larger context to illustrate the mathematics of exponential growth while ignoring the truth that exponential growth does not exist in real-world systems. At one point he says, “Now with all that history of growth they expected the growth would just go on forever. Fortunately, it stopped. Not because anyone understood the arithmetic, it stopped for other reasons, but let’s ask what if. Suppose that growth had continued….” But, there are no systems (other than mathematical ones) with the pure conditions required to sustain exponential growth, so it’s silly to ever suppose that growth continues unabated.
The real question is whether there are systems in the real world that exhibit super-linear growth for long enough that they will have a negative outcome for us before the growth becomes contained of its own accord. To this, the professor does make a very salient point around 7 minutes into part 2. Namely, that in such instances, we can either let the system “choose for us” the details of how growth will slow, or we can take a proactive stance and curb the growth artificially. A great example of this being done in practice occurred in China when laws and incentives were put in place to limit each family to a single child instead of letting growth be curbed by mass starvation and disease.
Sometimes the difference between true exponential growth and the pseudo-exponential growth period in the accelerating portion of the sigmoid curve is inconsequential. But sometimes it’s not. For instance, look at the divergence between the two in this comparison.

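If you want to reproduce that comparison yourself, here is a minimal sketch; the growth rate, carrying capacity, and starting value are arbitrary illustrative parameters, not figures from the lecture.

```python
import math

# Compare pure exponential growth with logistic (sigmoid) growth that shares
# the same early doubling behavior but saturates at a carrying capacity K.
r, K, x0 = 0.5, 1000.0, 1.0   # arbitrary illustrative parameters

def exponential(t: float) -> float:
    return x0 * math.exp(r * t)

def logistic(t: float) -> float:
    return K / (1 + ((K - x0) / x0) * math.exp(-r * t))

for t in range(0, 31, 5):
    e, l = exponential(t), logistic(t)
    print(f"t={t:2d}  exp={e:12.1f}  logistic={l:8.1f}  ratio={e / l:10.1f}")
# Early on the two curves track each other closely; past the inflection point
# the exponential keeps compounding while the logistic flattens toward K.
```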
Even more importantly, our expectation of the long-term consequences of not intervening can meaningfully color how we choose to intervene. Our choices of how to respond are rarely binary, and different choices come with different costs and expected outcomes. If you truly believe that you are on an exponential curve, the cost of getting off that curve is never a consideration because you have no choice: you have to get off or doom is inevitable. You might argue that becoming overly alarmed is better than not being alarmed enough. But I would argue that seeing the truth and future more clearly is better than either.
Hat tip: Kim Scheinberg
The controversy is not so much whether the atmosphere is heating up, but rather the cause and projected magnitude. As anyone familiar with modeling complex systems understands, the time horizon for accurate predictions is inherently short due to chaotic and otherwise complex feedback dynamics. So it shouldn’t really be a surprise to learn that climate predictions, even with the most detailed and best-crafted models, have a hard time with accuracy in predicting more than a year out. As a consequence, it should also not be a surprise to learn that the role CO2 plays in changing global temperature — and the extent to which it does — is highly uncertain. There are good reasons why we should seek to reduce carbon emissions, but whether global warming is one of them is unclear.

What struck home for me was learning that the uncertainty for a 50-year projection was plus or minus 55 degrees centigrade. This does not necessarily mean that it is possible for the atmosphere to heat up or cool down by that much, but rather that the models used in making temperature predictions are useless for long-range projections. And it matters not whether all the different models converge to the same prediction if the uncertainty factor (as measured by propagated error) is large for all of them. For a good overview of these arguments and the data, see Patrick Frank’s article.
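To make the short-horizon point concrete, here is a toy illustration (the textbook logistic map, emphatically not a climate model) of how a one-part-in-a-million measurement error swamps a forecast after a few dozen iterations of chaotic feedback.

```python
# Toy illustration: in a chaotic system, a tiny uncertainty in the initial
# state grows until the forecast carries no information at all.
def logistic_map(x: float, steps: int, r: float = 3.9) -> float:
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

x_true, x_measured = 0.500000, 0.500001   # differ by one part in a million
for n in (1, 5, 10, 20, 40):
    a, b = logistic_map(x_true, n), logistic_map(x_measured, n)
    print(f"after {n:2d} steps: |error| = {abs(a - b):.6f}")
# The error starts microscopic and, after a few dozen iterations, becomes
# as large as the state itself -- the forecast horizon is inherently short.
```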
Just because the prediction game is difficult and uncertain doesn’t mean we shouldn’t take action to protect against disaster scenarios. Even if we feel 99% certain that a catastrophe will not occur, we should be looking really hard at how to prevent or mitigate it in the 1%. The more dire the consequences, the harder we should try. The data is clear that global warming has been happening; nobody really refutes that anymore. Logic dictates that without further evidence to the contrary, we should assume that the atmosphere will continue to warm, at least for some period of time. The aggregate effects of global warming are also hard to predict, but at the very least we can be pretty sure that many people will die, through famine, lack of clean drinking water, disease, drowning, and a number of other factors. There are several schools of thought on how to react to this sobering situation. One is to try to reverse global warming and stabilize the temperature. Another is to mitigate the toll on life and suffering (both human and non-human). A third approach is to attempt both in some combination. Ultimately the debate over global warming is over which of these routes we should take. And the route that you advocate should depend on your belief in the certainty of the various predictions.
If you believe with high certainty that CO2 is the main culprit (this is called the “anthropogenic global warming” hypothesis or AGW for short) then it makes a lot of sense to put all your eggs into basket number one, assuming that you can have an impact in time. If you are highly certain that CO2 is not to blame, you might prefer to take the second approach, using the massive resources that would otherwise have been spent curbing emissions to instead address the more direct causes of death and suffering that will be greatly exacerbated by global warming. But given Frank’s arguments (and that of many other highly respected climatologists), it would be foolish to feel certain enough of either claim to put all your eggs into one basket. The only rational approach is to do both in some combination.
But how do we know how to spend our limited resources appropriately, especially when new data unfolds constantly which should feed into our approach and spending decisions? My skeptical friend wrote me the following email the other day, which put a smile on my face:
You know where I stand on the science here, but there are obviously strong feelings on both sides and I’m not more than 90% confident we won’t experience a catastrophic AGW outcome.
Ross McKitrick has come up with an absolutely brilliant scheme that allows each side to put its money where its mouth is. It turns out that the climate models predict a very specific AGW heat signature–with warming occurring first and foremost in the tropical troposphere. So he proposes to tie a carbon tax to temperature rises there. If AGW is true, then this tax will steadily increase. If it’s false, it will remain near zero (ignoring subsidies under global cooling) regardless of warming from other sources. Then we can all just shut up and let nature and the economy take their courses.
This is a plan I would strongly back. Hooray for clever people!
This blog has a good overview. This is Ross’ source material.
Presumably, tax revenues from this scheme could be put directly towards mitigating death and suffering, and voila, we have a self-tuning adaptive solution. The astute reader will realize though that in the event that carbon emissions are not a significant factor in global warming (but warming persists nonetheless) there is no way to pay for tragedy prevention/relief. While this is true under McKitrick’s model, there’s nothing that says we can’t as a society decide to throw carbon emissions under the gas guzzling bus (so to speak) and let the tax float with global average temperature.
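As a thought experiment, the floating version might look something like this. The rate constant and baseline below are placeholders I made up, not numbers from McKitrick’s proposal.

```python
# Hypothetical sketch of a temperature-indexed carbon tax in the spirit of
# McKitrick's scheme: the rate floats with a measured temperature anomaly.
# RATE_PER_DEGREE and BASELINE_TEMP are assumed placeholder values.
RATE_PER_DEGREE = 20.0   # $/tonne CO2 per degree C of anomaly (assumed)
BASELINE_TEMP = 0.0      # anomaly relative to an agreed reference period

def carbon_tax(observed_anomaly_c: float) -> float:
    """Tax in $/tonne; floors at zero rather than subsidizing under cooling."""
    return max(0.0, RATE_PER_DEGREE * (observed_anomaly_c - BASELINE_TEMP))

for anomaly in (-0.2, 0.0, 0.3, 0.8):
    print(f"anomaly {anomaly:+.1f} C  ->  tax ${carbon_tax(anomaly):.2f}/tonne")
# If warming materializes, measured anomalies ratchet the tax up; if not,
# the tax stays near zero and the market self-tunes without a forecast.
```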
Your thoughts welcome.
]]>The argument goes that all media is biased, and if you watch, listen or read more, it simply enhances that bias while giving you a false sense that you are getting more information and different perspectives. And since all of the one-to-many forms listed above have the same inherent biases and incentives, your confidence goes up without increasing your actual knowledge. A lot of biased information and misinformation is worse than none at all.
While The Black Swan makes this point somewhat tangentially, Jeff Cohen brings home the point from the perspective of a former main stream journalist who was forced out for raising these very issues.
Not that I needed any more excuse to opt out of following mainstream news, but the above examples and arguments have been enough to make me consciously avoid all network, cable and local television news, not read newspapers, and only listen to radio news as it comes up on NPR* between the shows that I like listening to.
The way I stay up on what’s going on in the world is via osmosis, internet news aggregators like Digg and Reddit, and a few selected shows such as 60 Minutes and The Daily Show (which actually does a great job of highlighting the bias, spin and manipulation coming from MSM). I suspect I’m not alone in this shift. I also realize that my mix is not perfect, especially when it comes to information about the world outside the U.S. So, I’m wondering, how do others get their information about current events, and what are some sources and methods for becoming better informed about the world as a whole?
—
* NPR news isn’t any better than CNN; it’s only a matter of convenience that I listen to it at all.
]]>I was concerned that it may look noble but that they might be profiting from the bid/ask spread, so I wrote and asked them. Here is their response:
No fees, except the 5% on top of any funds put into your account. That fee does little more than cover the credit card processing charge. For example, if you want to put $5 into your account, you will be charged $5.25, and you will have $5 in your account to trade with. After a while, maybe you’ll have grown your account to $50, all of which you can ask us to give away to the charity of your choice. No fees on donations. Another way of saying it is that 100% of the funds that are in trading accounts will eventually be given away to charities chosen by the winners.
Very cool, I hope it takes off.
]]>
Dear Gotham Prize Board,
Congratulations on your first set of awards, and congratulations to Drs. Varshavsky and Carol on their respective prizes. I am a big fan of the prize approach and applaud the Gotham Prize for its pioneering effort in this regard.
I’d like to express my concern, however, on the pre-qualification process.
As you know, whenever you pre-screen ideas, the backgrounds and biases of those doing the screening will inevitably bias the results and potentially eliminate some of the most innovative and groundbreaking approaches. If the decision process itself is good, there needn’t be a pre-qualification process.
Additionally, I find it hard to believe that anonymity (as is claimed in your FAQ) can be maintained in all cases, and by having an additional barrier for non-standard and “disruptive” approaches to reach the final evaluation process, the problem of bias becomes exacerbated. It would be nice if anonymity were truly possible, but oftentimes an idea or approach has a certain “signature” quality to it (either in its writing style, content or reference network) that betrays the identity of its creator(s). I fear that those who are on the fringes but who have been doing work and are somewhat known will be discriminated against, while those who have never published or spoken publicly about their ideas will gain unfair advantage via the pre-qualification process.
One benefit of allowing all comers is that ideas that have been vetted publicly and refined based on criticism have a higher likelihood of effectiveness than entirely new approaches that have not been vetted at all (and which, by virtue of that, may seem more pristine and effective than they really are). And just because an earlier incarnation of an idea/approach was flawed does not mean that it doesn’t deserve a thorough new look. Sadly, human nature biases us all against ideas we once passed judgment on, despite new refinements in either the idea or our own understanding.
I hope you will consider the elimination of the pre-qualification process and also consider a more open award decision process than currently exists, possibly guided by your expert Board, but not limited to it. At the very least, a transparent process is better than a closed one and will allow you to avoid certain inevitable criticisms and pressures. Most importantly such changes will give confidence to courageous and creative individuals that your methods are fair and open-minded, which will in turn yield you more and better submissions over time.
Respectfully,
–Rafe Furst
]]>Starvation may help cancer treatment. “As little as 48 hours of starvation afforded mice injected with brain cancer cells the ability to endure and benefit from extremely high doses of chemotherapy that non-starved mice could not survive.”
Cancer is evolution in action. This isn’t a surprise to most researchers these days, but it is to the general public. Very simply, the cells in your body are individual living organisms, and just like populations of humans and animals, populations of cells undergo Darwinian natural selection. The difference is that multicellular organisms (like humans) evolved stringent mechanisms to keep cellular evolution in check because it is bad for the multicellular organism. When evolutionary pressures find a crack in those mechanisms, the shackles come off, cell populations evolve more freely, and this is what we call cancer. Another difference is that once the host organism dies, the cancerous process stops, and it has to evolve anew within each multicellular organism. Or maybe that’s not entirely true…
Some cancers are transmissible. The article talks about the Tasmanian devil population, in which the cancer itself — not a cancer-causing virus — is transmitted via biting. Genetic diversity in the population is very low and thus the new host’s immune system does not react to the invasion of cancerous cells, thinking that they are part of their own body. Can cancer be transmitted from human to human? It is very rare, but under special conditions, yes. Mothers can transmit tumors to their unborn fetuses, and the article talks about a cancer patient who had a piece of her tumor transplanted to her (healthy) 85-year-old mother. The transplant was done with everyone’s knowledge and consent as a way to try to treat the daughter more effectively. Unfortunately the daughter died, and the mother contracted the cancer and died fifteen months later.
Lamarckian inheritance is real. Remember back to high school biology when you were taught that acquired characteristics are not heritable? Well, that turns out to be not quite true. Protection against (or susceptibility to) cancer, for instance, can be acquired through diet and then passed on to offspring. This study shows that acquired characteristics can be passed down more than one generation. It’s sobering to think that my choice of eating a Big Mac today instead of broccoli could impact my grandchildren’s chances of getting cancer.
Hat tips: Ann Kulze, John Pepper
]]>Themes
TED sessions have their own explicit themes, but I detected a few implicit themes based on the overlapping content of the talks.
Global Awakening: there is something palpable afoot that is more than a political or cultural movement. Al Gore and Samantha Power talked explicitly about this, referencing a “higher consciousness”. See levels of organization and cultural agency. Counterpoint theme: The Failure of the System to protect its individual constituents and serve their needs (see Sue Goldie).
Compassion / Cooperation: as juxtaposed to the mindset of the selfish gene, social Darwinism, “nature red in tooth and claw”, Libertarianism, Objectivism, free-market radicalism, etc. See cooperation.
Breaking the Spell: scientific results and arguments that challenge deeply held myths and cognitive illusions about who we are and the nature of the universe. See limits of knowledge.
Education Revolution: systematic, mandatory primary education is less than 200 years old, serves outdated needs and assumptions, adheres to outdated concepts, ignores mountains of data on effective and ineffective methods, and is thus in critical need of overhaul. See Ken Robinson, Neil Turok and also Teaching as a Subversive Activity. See also Larry and Sergey’s talk about how their educational background factors big into Google’s success.
Thoughts
• Whenever I start thinking pessimistically about the massive momentum in “the system”, I remind myself that we are only ever (at most) one generation away from potential total overhaul. The keys are (1) cultural change precedes systemic change and (2) systemic change comes from working outside of, and on the margins of, the system that needs changing. Neil Turok has effectively redesigned the modern university — out of necessity. His example has profound impact not only for Africa, but for education the world over.
• Sometimes developing societies (which is to say the simpler systems) can make change and innovate faster than more complex ones. See Clinton’s wish for world-class healthcare for Rwanda.
• Language shapes/is conscious thought. I’m increasingly aware of concepts I would like to express but the words I have at my disposal can only approximately and awkwardly convey the full meaning. Coining new terms or co-opting old ones is problematic and ends up as jargon, creating a barrier between those who accept and “get” the meaning and those who feel alienated by the jargon. How do we augment the lexicon with fidelity and without alienation?
• Visual processing/reasoning in brains is largely unconscious, and a ubiquitous part of human experience (even in the congenitally blind). The majority of our language hinges on visual metaphor. Every once in a while we will be shown examples and techniques for tapping into this incredibly powerful system of understanding and communication, like Hans Rosling’s data visualization and Chris Jordan’s art which gives unique insight into comparative scale. How do we tap into this system on a much more regular, efficient and mass communicative basis?
• Whenever there’s a seeming paradox, question the assumptions. You will always find that the statement of the paradox itself presupposes and creates it.
• The chatter in our brains, whether we are conscious of it or not, is incessant and integral to our sentience. Yet there is no scientific study of this realm beyond what is done in psychotherapy. Why is that?
• Terrorism = class of memes
• Memetic evolution is a concept that has been around for over 30 years and a subject that has been talked about several times at TED as a real and powerful force, as real as biological evolution. Yet, I get the sense that people don’t want to accept memetics as real and therefore ignore the implications. It is unsettling to think that we as individuals are not fully in charge. It’s why people don’t quite see organizational agency for what it is and fumble around in the dark when it comes to causality.
• Each level has its own form(s) of energy/information: ATP, money, fame, political power, will power, joules, pagerank, etc.
• Why does there have to be a Theory of Everything with universal laws, fundamental particles and cosmological constants? Might there just be an infinite/fractal regress with “laws” as emergent properties, and no true constants?
• The smaller we go in scale in exploring the universe and the farther back in time we look, the more homogeneity and symmetry we find. As structure unfolds in time and scale, the universe becomes more complex, symmetry is broken, heterogeneity arises, new levels of organization are created, new forms of value appear.
• How do we determine system boundaries, where one system ends and another begins? Of course, no systems are truly independent of one another; independence and interdependence lie on a continuum. In some sense, it’s all one system. OTOH, the amount of informational feedback within and between subnets can be empirically discovered to a certain extent. And in this way, we can talk about System A (me) as being a real thing, separate from System B (you), even though there is interdependence and information flowing between us. Boundaries of systems at adjacent levels have a unique character owing to the special relationship of emergence.
• The brain is a simulator (of potential futures). Computer simulations are extensions of the brain in this regard.
• It’s important to set priorities as a society, but tricky to add rationality to the mix in the face of media, financial incentives, political agendas. Perhaps the truth market concept can be tweaked to create “urgency markets”.
• We humans love panaceas. Assuming that all panaceas are (mostly) placebo effects, there is a danger in extrapolating too far. Can Sri Sri Ravi Shankar‘s breathing and meditation bring about world peace? It doesn’t hurt to try as long as we don’t exclude all of the other important work to be done because we are all too busy breathing. But what would happen if everyone in the world actually practiced meditation for an hour a day?
• Kaki King points out that music is one of the only acceptable “right brain” expansive activities left in our society. It’s true; most other such activities (most arts, spiritual experiences, sexual experiences, et al) are marginalized and deemed either worthless or morally destructive by many.
• Given the inescapable truths revealed by Nassim Nicholas Taleb, are we making a huge mistake by messing with fundamentally unpredictable complex systems, as Craig Venter is doing and as CERN is doing with the Large Hadron Collider? And by unpredictable complex systems I am referring not only to the systems of study, but the larger socio-technical systems in which the activity is embedded. Neither Venter’s nor Brian Cox’s answers to these sorts of criticisms were very reassuring.
• How about that mushroom guy?! Incredible. (Paul Stamets)
• We are in the midst of the 6th great extinction period on Earth, and the only one for which we can take (partial) credit. What would have happened had an omnipotent being intervened in the other extinction periods and saved all species? How sustainable would that state of affairs have been, and what downstream effects would it yield? The problem today is how do we make good, rational choices in the face of such massive unpredictability. Whether or not we are responsible for the extinction of [insert your favorite endangered species], what would happen if we propped that species up, and at what cost to the rest of the ecosystem?
• The sanctity of national sovereignty needs to be weakened and must play second fiddle to global priorities if we are to survive and get beyond the global challenges that face us. See Paul Collier.
• Jonathan Haidt had an eye-opening message for the overly Pollyanna amongst us: there is strong bias in representation at TED of viewpoints on social issues. Namely, social conservatives (“the other half” of America) were practically non-existent. This is problematic if you want to make real and lasting change. Everyone needs to participate in the discussion.
]]>The medium is just starting to come into its own, and what I really like is when the slideshow and the presenter work together to make a whole that is much greater than the sum of its parts. Larry Lessig delivered such a talk. And this one by Siegfried Woldhek on the true face of Leonardo da Vinci might blow you away. If so, you should also go here and search for past talks by Rives and talks by Ze Frank. Upcoming, look out for one by Amy Tan and one by Rives.
So, what are your favorite slideshow presentations, either speakerless or presented by a speaker?
]]>Debunking myths about the “third world”
Watch the end of poverty
]]>Stories of Africa
]]>
]]>The universe is queerer than we can suppose
]]>I will be blogging about things that piqued my interest at TED, but below are some cool links that I came away with:
- Johnny Lee’s Wii Remote Projects
- 1-800-GENOCIDE
- World Wide Telescope
- World Mapper
- Hyperscore
- Once Upon a School
- Your Morals
- A Vending Machine for Crows
- CurrentTV
- LiveScribe
- Robert Lang’s Origami
- Zero Footprint
- Dr. Helen Fisher on Chemistry.com
- UN Alliance of Civilization
- 19 • 20 • 21
- Interfaith Youth Core
- Campaign for Real Beauty
- CalCars
- Enitiatives
- Emersion Presents
- DIY Drones
- Chris Abani
- Do the Green Thing
- OmNeuron
- Kluster
- Death and the Powers
Feel free to post your own link suggestions that you feel fit the “theme” (such as it is).
]]>
Big Problems
There are some very big problems in the world that can be solved, but only if there is collective will to do so. Global warming, curing cancer, poverty traps, and so on. Free markets alone cannot get us there because of inherent externalities and insufficient market structure geared towards the problems at hand. One way this has been addressed is via internalizing externalities (e.g. pollution markets). But such an approach requires global political consensus for most big problems. Another approach that obviates this roadblock is to externalize incentives with large cash prizes, à la the X Prize Foundation. What I propose is to set up a self-organizing system for the X Prize approach, but for arbitrary problems of interest to a critical mass of philanthropic citizens.
A Proposal
The basic idea is a series of close-ended prize funds targeted at specific problems with specific fundraising goals. There are three stages: fundraising, pre-award, and post-award. In the first two stages the fund returns interest. Once the prize is awarded, the principal is a charitable donation and the fund is dissolved. Each individual investor (be it a person or an organization) receives one vote irrespective of their capital contribution. Each year during the pre-award stage the fund votes on whether the goal has been achieved. Once it has, all claims on the prize are vetted and a formal decision process is used to award the prize, possibly across multiple claimants in differing amounts.
Example: Cure Cancer Annuity Fund
- Opens in 2010 with the simple goal to “cure cancer”
- Fundraising target: $10B
- Minimum investment: $10K
- Interest rate during fundraising: 15%
- Fundraising goal is reached in 2013
- Fund consists of 9,347 investors
- Includes individuals, corporations, charities and trusts
- Guaranteed interest rate during pre-award: 5%
- Award is adjusted for inflation
- Excess returns beyond 5% are dividended out pro-rata
- Cure cancer goal vote achieves super-majority in 2020 after creeping up each year prior
- Prize is awarded in differing amounts to 34 different recipients
- Range is from $4B to $2M
- Recipients include individuals, corporations, academic institutions, non-profits and collectives thereof
- Decision process is done via percent allocation aggregation (see the sketch after this list):
- Each investor gets 100 units to allocate to claimants in any combination
- Allocations are added and normalized to 100%
- Depending on each investor’s local tax structure they may receive credit for:
- Capital contribution post-award
- Capital contribution at point of investment
- Exemption from tax on interest
- etc.
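Here is a minimal sketch of that percent-allocation step in code; the investor names, ballots, claimants, and amounts are invented for illustration.

```python
from collections import defaultdict

# Each investor spreads exactly 100 units across claimants (one vote per
# investor, not per dollar); the sums are normalized into award shares.
def aggregate_allocations(ballots: dict[str, dict[str, float]],
                          prize_pool: float) -> dict[str, float]:
    for investor, alloc in ballots.items():
        assert abs(sum(alloc.values()) - 100) < 1e-9, f"{investor}: must total 100"
    totals: dict[str, float] = defaultdict(float)
    for alloc in ballots.values():
        for claimant, units in alloc.items():
            totals[claimant] += units
    grand_total = sum(totals.values())
    return {c: prize_pool * u / grand_total for c, u in totals.items()}

ballots = {
    "investor_a": {"lab_x": 70, "biotech_y": 30},
    "investor_b": {"lab_x": 50, "consortium_z": 50},
    "investor_c": {"biotech_y": 100},
}
print(aggregate_allocations(ballots, prize_pool=10_000_000_000))
# lab_x gets 40% of the pool, biotech_y ~43.3%, consortium_z ~16.7%
```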
Please note that the above is a straw-man example, just a starting point. No doubt the model can be improved upon in many ways. If you have ideas on how to do so, please share below in the comments!
]]>
Mechanical Turk is a matchmaker between people who have spare time to do tasks that humans are good at and people (or organizations) that need such tasks done. These HITs (Human Intelligence Tasks) range from doing research to giving opinions for a survey to beta testing a website to giving advice on a travel destination to whatever you can dream up.
If you want a task done, you simply post a HIT description, determine how many different people you want to respond and how much you are willing to pay for each qualifying response (typically between 10 cents and a few dollars). Then you watch people from around the world respond. You accept or reject each response, and only accepted responses get paid the pre-specified amount.
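For the programmatically inclined, HIT posting can also be scripted. Below is a hypothetical sketch using Amazon’s boto3 Python client (which postdates this post); the URL, reward, counts, and durations are illustrative placeholders, not recommendations.

```python
import boto3

# Hypothetical sketch of posting a HIT via the boto3 MTurk client.
mturk = boto3.client("mturk", region_name="us-east-1")

# An ExternalQuestion points workers at a form you host yourself;
# the URL below is a placeholder.
question_xml = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.com/my-task-form</ExternalURL>
  <FrameHeight>400</FrameHeight>
</ExternalQuestion>
"""

hit = mturk.create_hit(
    Title="Find a data point for a small research question",
    Description="Light research task; please cite your source.",
    Reward="0.25",                     # dollars per accepted response
    MaxAssignments=20,                 # how many different people respond
    LifetimeInSeconds=3 * 24 * 3600,   # how long the HIT stays posted
    AssignmentDurationInSeconds=1800,  # time each worker has to finish
    Question=question_xml,
)
print(hit["HIT"]["HITId"])
```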
If you are someone with time on your hands and looking to make some cash, you can browse through the posted HITs and choose to respond to whichever ones you like (either because you have easy access to the answer, the cash looks attractive, or you just find the task interesting).
What is compelling about the marketplace dynamic is that HIT posters get their projects done extremely cheaply, certainly for less than they would have to spend doing it themselves or paying for it locally. It’s hard to compete with the combined intelligence and resources of thousands or millions of people on the internet for both cost and quality (after all, there is a self-selection of expertise when people choose to respond to your HIT). From the responder’s standpoint, they get to leverage — and get paid for — their knowledge, expertise and time, whereas without the Mechanical Turk marketplace, they would not have nearly the opportunity to do so on a regular basis. It’s hard to believe that someone would spend any time at all responding to a 10 cent HIT, but when most of the world lives on a few dollars per day, 10 cents here and 10 cents there makes a lot of sense.
As a case study, I recently used Mechanical Turk to gather some data that I was interested in. I knew that there was likely a lot of data out there, but my initial attempts at crafting Google and Wikipedia searches didn’t yield much, probably because I needed to invest more time and be more creative with my search terms.
Before I tried Mechanical Turk though, I tried two other methods: I asked friends and colleagues who I thought might happen to know a bit about the topic, and I also posted a question on Yahoo! Answers. The former yielded nothing due to the fact that nobody had the information at their fingertips and it was not worth their time to do my research for me for free. Yahoo! Answers I was more hopeful for since I have had good success in the past with it, but what I found in this case is that since few people had the answers I was looking for at their fingertips, nobody responded. Seems as though you have to pay people to do actual research, even if it’s not very hard research.
Proving this point, Mechanical Turk yielded 20 acceptable data points over the course of a few days, for which I paid a whopping sum of $20. And I’m convinced that I could have gotten the same response for much less money had I used a strategy of starting low and upping the price I was willing to pay per HIT as needed to fill my request. At a $1 price point per HIT, mine was on the high end; the average HIT seemed to be about 25 cents. Not bad considering that I probably would have been willing to spend a couple hundred dollars to not do it myself. Plus it got done a lot faster than if I were on the case.
Mechanical Turk, like Yahoo! Answers, Wikipedia, Rent-a-Coder and open-source software development, is an example of crowdsourcing. The book Wisdom of Crowds is a can’t-miss if you like this sort of thing (as I do). So is the work of Luis von Ahn, who invented (among other things) CAPTCHA, that security mechanism with distorted text that you have to type to prove you are human.
The beauty of Mechanical Turk and all other crowdsourcing systems is that the more people who use it, either on the demand side or the supply side, the more useful it is to everyone. In other words, value is being created where it existed in only latent form before (more on this subject in a later post).
Check out Mechanical Turk next time you have a task that you don’t want to do yourself, or if you are bored and looking for a way to make a few bucks in your underwear.
]]>At the very least, it would look much different than “a locatable region of genomic sequence, corresponding to a unit of inheritance” (call this the “standard definition”).
Gerstein’s definition, “a union of genomic sequences encoding a coherent set of potentially overlapping functional products,” while more accurate, is not really a useful definition. It just says there is structure to the information on a DNA sequence which corresponds to higher-level function in the cell or organism. But we knew that already; it doesn’t tell us anything about the structure or how it relates to function.
The standard definition is a stronger claim, but it’s harder to reconcile with the evidence. Simply read the rest of the Wikipedia page to see the contradictions. Locatable regions of sequential base pairs only partially correspond to identifiable function. And as far as I can tell — somebody please explain if I’m wrong here — these locatable regions are not units of inheritance. Inheritance involves a very accurate copying of the entire genomic sequence, and in the case of sexual reproduction a very preservative recombination operation called crossover. The heterogeneity conferred by mutation (and crossover) acts either on points in the sequence or sequential regions, but these are mostly random occurrences, not limited to supposed gene boundaries. So if during inheritance, a gene can be chopped in two, partially deleted, inserted into, etc, in what sense is a gene a unit of inheritance? You could argue that an individual base-pair is a unit of inheritance, and you could argue that the entire genomic sequence (modulo a few mutations and crossover) is a unit of inheritance. But not a gene.
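A toy simulation makes the point: the inheritance operators pick sequence positions blind to any annotation, so an annotated “gene” has no special integrity under mutation or crossover. The genomes and the gene coordinates below are deliberately simplistic stand-ins, not biology.

```python
import random

# Mutation and crossover operate on positions in the sequence, not on
# annotated "gene" boundaries, so a gene region can be split or corrupted
# during inheritance. Coordinates and sequences are invented.
random.seed(1)
BASES = "ACGT"
genome_a = "".join(random.choice(BASES) for _ in range(60))
genome_b = "".join(random.choice(BASES) for _ in range(60))
gene = (20, 35)  # an annotated "gene" spanning positions 20..34

def crossover(a: str, b: str) -> tuple[str, int]:
    point = random.randrange(1, len(a))   # chosen blind to annotations
    return a[:point] + b[point:], point

def point_mutate(g: str) -> tuple[str, int]:
    i = random.randrange(len(g))          # also blind to annotations
    return g[:i] + random.choice(BASES) + g[i + 1:], i

child, xpoint = crossover(genome_a, genome_b)
child, mpoint = point_mutate(child)
for label, pos in (("crossover", xpoint), ("mutation", mpoint)):
    inside = gene[0] <= pos < gene[1]
    print(f"{label} at position {pos}: {'inside' if inside else 'outside'} the gene")
```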
Then there’s the mystery of “junk” DNA, the portions of the genome which don’t directly code into identifiable products like proteins. In many species (including humans) the non-coding portion of DNA comprises over 98% of the genome. Wayt Gibbs, in a Scientific American article, points out that certain segments code for RNAs that don’t get turned into protein, but that are actively functional in a number of ways not previously appreciated:
To avoid confusion, says Claes Wahlestedt of the Karolinska Institute in Sweden, “we tend not to talk about ‘genes’ anymore; we just refer to any segment that is transcribed [to RNA] as a ‘transcriptional unit.’”
Still, these RNA-only genes are only likely to double the number of identified functional units in the genome, leaving in humans 96% or more of the genome unaccounted for. Recently, the ENCODE project made a startling revelation:
“The majority of the genome is copied, or transcribed, into RNA, which is the active molecule in our cells, relaying information from the archival DNA to the cellular machinery,” said Tim Hubbard of the Wellcome Trust’s Sanger Institute…. [From COSMOS article, emphasis mine]
The pilot project posits:
Perhaps the genome encodes a network of transcripts, many of which are linked to protein-coding transcripts and to the majority of which we cannot (yet) assign a biological role. Our perspective of transcription and genes may have to evolve and also poses some interesting mechanistic questions. [From Nature article]
In other words, most of the so-called non-coding DNA does code after all (just not directly into proteins), and there is good evidence that these genetic products are not junk after all, but rather constitute nodes in a multi-level genetic, epi-genetic, proteomic and metabolic network. Solé et al. outline both theoretical and empirical bases for this line of thinking, which not only agrees with the above findings but also gives a plausible explanation for one of the biggest questions about the gene model, having to do with robustness. More generally, Solé et al. represent a shift in thinking towards systems biology which is long overdue.
With all this in mind, we turn to a real genetic head-scratcher: recent experiments in which ultraconserved (and thus presumed critical) portions of mouse DNA were deleted, to no apparent effect. While the network theory could in principle solve the mystery, it’s worth going through the comments on the blog because (a) plausibility doesn’t imply veracity, and (b) there could easily be more than one relevant cause/dynamic. I’ve summarized what I feel are the main arguments and referenced the corresponding comments by number:
- The deleted genes serve functions that are latent (21,26,32, 35, 52, 55, 63, 78, 82)
- Gene-level information interacts with other types of information in a complex, indirect way (71, 86, 87) [ precursor to the network theory ]
- Genes are selfish and look out for their own preservation; the deletable sections could have been introduced by random mutation, viral or recombinatory injection (33, 59, 62, 72, 74, 76)
- They are useful in recombination, but not alone (4, 75, 76, 77)
- They play an important role in the physical structure of the chromosome (4, 54, 71)
- Redundancy is achieved by a form of “checksum”, possibly probabilistic, or other cryptographic mechanism (6, 12, 76)
- Certain genes if mutated properly could be harmful, but if deleted entirely have little effect (59, 76, 84)
- The ultraconserved segments piggyback on the critical genes they are next to (29, 33, 54)
- Some of the genome acts as a latent heterogeneity reserve which evolution thrives off of (53)
- They protect critical genes by reducing the chance such genes will be mutated (75, 76)
- The ultrapreservation has to do with negative selection (37) [ I have to admit to not understanding this argument (or whether it’s a counterargument for negative selection), perhaps someone can explain it ]
Notwithstanding the ultraconserved parts, it is worth pointing out that there doesn’t necessarily need to be any function for non-protein-coding DNA; it could just be junk after all. Alternatively, we are often just looking at an evolutionary snapshot (as suggested by some of the comments above), and it’s hard to say what is functional without considering the longer context and the evolutionary system as a whole. After all, nothing in biology makes sense except in the light of evolution.
So, why does all this matter, and why am I picking on the gene model even though we all know that it has its flaws? For one, because we don’t all know that it is so terribly flawed. I certainly didn’t until I looked into it. But more importantly, even if we admit at some level that a “gene” is a quaint concept — accurate to describe only a small portion of the genome — by continuing to use the term, we (a) propagate misunderstanding to the vast majority of the population, and (b) continually reinforce flawed thinking and logical fallacies in our own minds that block better understanding and insidiously undermine fruitful new ways of thinking about the problems. Ultimately we keep having to narrow the gene definition, add caveats to apologize for its poor explanatory power, and come up with post hoc explanations for why empirical results don’t fit the model.
Perhaps it is time to stop using the term “gene” entirely and come up with a lexicon for the elements and processes of the genome which incorporates and integrates models for the informational content beyond that of protein coding, including chromosomal structure, epi-genetic information, and biomolecular and cellular networks.
]]>Parrondo’s paradox is the well-known counterintuitive situation where individually losing strategies or deleterious effects can combine to win…. Over the past ten years, a number of authors have pointed to the generality of Parrondian behavior, and many examples ranging from physics to population genetics have been reported. In its most general form, Parrondo’s paradox can occur where there is a nonlinear interaction of random behavior with an asymmetry, and can be mathematically understood in terms of a convex linear combination.
One of my new favorite pastimes is identifying real world scenarios that I think are examples of Parrondo’s Paradox (PP). Here are some from the world of poker:
- Morton’s Theorem describes situations during the play of hands wherein a strategy that loses against each of two opponents individually is nonetheless a winning strategy against both simultaneously.
- There is a related scenario that exists in tournament poker known as implicit collusion, which is a correct strategy when there is a big jump in payouts from one place to the next.
- There are a host of small theoretical advantages to having a short stack (fewer chips than your opponents) in the modern table-stakes form of poker where players can be “all-in” and not risk more than they have in front of them. The advantage comes from playing against two larger-stacked opponents in a hand where you could be forced to fold a hand if you had more chips to risk, but because you are all-in you get a free draw to a possibly winning hand.
- In the rec.gambling newsgroup in the mid-90s it was discovered that you could “lose by winning” (a reverse PP). This relates to an age-old debate about whether money management was important for advantage gamblers (like good poker players), who in the interest of “maximizing returns” have a tendency to play as big as they can as long as they think they have an edge. Those who understood the issue realized that if you go broke at any point, you take yourself out of the game and can’t realize your (now purely theoretical) advantage. Thus the well-known Kelly criterion for maximizing log(returns) is also the right strategy for maximizing absolute returns in most practical applications. This is due to two facts: (1) there is a practical minimum bet size (i.e. you can’t bet arbitrarily small amounts, and if your bankroll drops below the minimum you are effectively broke); and (2) bankrolls are always finite (if you had an infinite bankroll, the PP would not exist). Note that the whole confusion and debate might have been obviated if it weren’t for the fact that PP wasn’t really studied as such or well known until 1996 (two years after that rec.gambling thread).
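For readers who want to see the paradox mechanically rather than anecdotally, here is a minimal Python sketch of the canonical coin-flipping version studied by Harmer and Abbott (the capital-dependent rules and the small bias are the standard textbook parameters; the round and trial counts are arbitrary choices of mine):

```python
import random

EPS = 0.005  # small bias that makes each game individually losing

def play_a(capital):
    # Game A: a nearly fair coin, tilted slightly against the player.
    return capital + 1 if random.random() < 0.5 - EPS else capital - 1

def play_b(capital):
    # Game B: which coin you flip depends on your current capital.
    win_prob = (0.10 - EPS) if capital % 3 == 0 else (0.75 - EPS)
    return capital + 1 if random.random() < win_prob else capital - 1

def mixed(capital):
    # Randomly alternate between the two losing games.
    return play_a(capital) if random.random() < 0.5 else play_b(capital)

def average_final_capital(strategy, rounds=1000, trials=2000):
    total = 0
    for _ in range(trials):
        capital = 0
        for _ in range(rounds):
            capital = strategy(capital)
        total += capital
    return total / trials

print("A only:", average_final_capital(play_a))  # negative drift
print("B only:", average_final_capital(play_b))  # negative drift
print("mixed :", average_final_capital(mixed))   # positive drift
```

Game B’s dependence on capital modulo 3 supplies the asymmetry, the coin flips supply the randomness, and mixing the two losing games ratchets capital upward.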
So, what are your favorite examples of Parrondo’s Paradox?
Hat tip to Alex Ryan, and thanks to Derek Abbott for discussion and allowing me to post his paper.
The problem with this state of affairs is that many phenomena in the world, especially in the biological, social, and technical realms, are non-linear. One basic reason that non-linearity crops up is that events and internal states of objects are often dependent on one another. Despite our desire to simplify and assume independence, the weather in Kansas is affected by the butterfly flapping its wings in China (and a multitude of other factors, some more significant, some less). Oftentimes these dependencies are weak enough that the independence assumption is a good approximation, and thus our linear predictions work out very well. It took thousands of years before we discovered that light does not travel in a straight line, but rather is affected by gravity. We would do well, therefore, to never assume independence, even if the available evidence suggests it. The pursuit of understanding complex systems could reasonably be defined as exploring what happens when we look at things anew after removing the independence assumption from our thinking.
The first thing we see when looking for these heretofore hidden dependencies is that what was linear now becomes exponential. A canonical representation of this effect occurs when you compare the growth of two different types of networks, both with the same exact nodes, but different ways of attaching (i.e. linking) one node to another. In network A, links are added randomly between pairs of nodes. But in network B, nodes that already have links are more likely to get new ones (the more existing links, the more likely a node will get a new one). This is known in the literature as “preferential attachment” and entire books have been devoted to exploring its consequences.
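To make the contrast concrete, here is a toy sketch in Python (the node and link counts are arbitrary choices of mine) that grows both networks from the same pool of nodes and compares the biggest hub each produces:

```python
import random

N_NODES, N_LINKS = 1000, 3000

def grow(preferential):
    degree = [0] * N_NODES
    endpoints = []  # one entry per link endpoint, so sampling it is degree-weighted
    for _ in range(N_LINKS):
        a = random.randrange(N_NODES)
        if preferential and endpoints:
            b = random.choice(endpoints)   # network B: the rich get richer
        else:
            b = random.randrange(N_NODES)  # network A: uniformly random attachment
        degree[a] += 1
        degree[b] += 1
        endpoints += [a, b]
    return degree

print("max degree, random attachment      :", max(grow(False)))
print("max degree, preferential attachment:", max(grow(True)))
```

Run it a few times: the random network’s busiest node stays near the average degree, while the preferential network reliably produces hubs several times larger, the seed of the heavy-tailed degree distributions the preferential-attachment literature dwells on.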
But just as linear growth is an approximation, so is exponential. Why? Because even though our mathematical models of the world work with infinities, the world we live in is finite. There is a finite amount of mass, energy, space, time, etc. in the universe, if not actually, then at least for all practical human experience. In the network example, we began with a fixed set of nodes and so while early on the number of possible new links is large, eventually the network becomes fully connected and there are zero new links to be created. In the real world, even if we allow for growth in the number of nodes, eventually this growth too will run out of steam.
While exponential curves are better than linear for modeling and extrapolating, better still are sigmoids, or S-shaped curves. Sigmoids look like exponentials initially, but instead of unrealistically appearing as unchecked growth that gobbles up all available resources, sigmoids mercifully abate due to the self-referential source of their growth. I say mercifully because we’ve all been subjected to terrifying predictions that are inherently wrong because they ignore the simple truth of the sigmoid: that there’s no such thing as unchecked growth. Growth checks itself, one way or another. Malthusian prophecies will never come true, for the very ability of humans to replicate is dependent on resources that are consumed increasingly by that self-same replication. Pandemics are self-limiting as well, their rate of spread being inversely proportional to their effectiveness at killing their hosts. And while we are seemingly on an exponential train ride to destroying our environment, that too is a misconception; we may very well be on track to wipe out humanity (and many other species with us), but we will never truly destroy our “environment” because as soon as the last of “we” bites the dust, the destructive force is gone too. Actually, long before then it will have petered out on the sigmoid.
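The distinction can be stated precisely: unchecked growth obeys dx/dt = rx, while the logistic sigmoid adds a self-limiting term, dx/dt = rx(1 − x/K), where K is the carrying capacity. A few lines of Python (with illustrative parameters of my own choosing) show the divergence:

```python
r, K, dt = 0.5, 1000.0, 0.01   # growth rate, carrying capacity, time step
exp_x = sig_x = 1.0

for _ in range(3000):          # 30 time units of Euler integration
    exp_x += r * exp_x * dt                    # growth that never checks itself
    sig_x += r * sig_x * (1 - sig_x / K) * dt  # growth that consumes its own fuel

print(f"exponential: {exp_x:,.0f}")  # in the millions and climbing
print(f"sigmoid:     {sig_x:,.0f}")  # leveled off just under K = 1000
```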
Although the mercy of the sigmoid as presented seems to offer little solace, we cannot forget that inherent in the self-referential nature of such tragedies is the means for salvation. It may appear at first that I am letting us off the hook by saying that it will all work out by virtue of the sigmoid, that we needn’t lift a finger. However, the truth is just the opposite. We don’t get a sigmoidal soft landing without recognizing the downstream effects of our actions and feeding them back into our decision-making process for future actions. If we want the softest landing possible, we must open our eyes to the faintest of connections, realizing they exist whether or not we can (yet) prove so beyond reasonable doubt.
By understanding the dynamics of complex, non-linear systems — the sigmoidal reality — we can appreciate even more how important our actions and inactions are to our own wellbeing. Whenever we see non-linearity, we should be thinking not of a Greek tragedy beyond our control but rather how this non-linear footprint must be the result of a feedback process, putting us not only in the audience but also on stage. If a linear reality is the mark of the simpleton, and an exponential reality the mark of the doomsayer, then a sigmoidal reality is the mark of optimism… but also of activism.
Like exponentiation before it, sigmoidality brings new understandings that were masked by the less nuanced model. One curious fiction that goes by the wayside in a sigmoidal world is the concept of a tipping point, made popular by Malcolm Gladwell’s eponymous book. While it makes for vivid imagery, the critical “point of no return” is rarely found in real systems. This is not to say that we can’t identify the straw that broke the camel’s back, but rather that the back’s breaking is not the end of the story.
An avalanche does not continue to gain momentum indefinitely; it eventually comes to a stop. A meme (such as tipping point itself) eventually slows down, once it has been passed to a significant portion of the population. In fact, the switchover point from speeding up to slowing down makes for a more meaningful use of a tipping point concept, albeit quite different from the original. Ultimately, though, what one gains from looking at the world with sigmoidal glasses is more accurate models, better prediction, and hence deeper truth.
When agents emerge, the dynamic processes involved in their emergence sharpen simultaneously via a feedback of information from the higher level to the lower.* There is no use in asking which happened first – dynamics at level 1 or emergence of level 2 – for they are dualisms, each reinforcing the other until at some point we recognize that something happened. This runs counter to the Western understanding of causality, which requires that we fix one or the other as the cause so that we might see how the effect came into being. But this fixture destroys comprehensibility just as holding onto a ball in the middle of a juggling act causes the whole procession to come crashing down.
Let’s look for a moment at the emergence of a particular social agent, a corporation. In the beginning there is no corporate agent. There may be an idea (in the mind of a person) to create it. Or there may be several people who in going about their lives – be they working for an existing corporation, exploring a new business concept, etc – come to the conclusion that a new entity should be formed. But this still does not qualify as the actual formation or creation of a new entity. Some of the first acts founders typically do include the creation of a written business plan, naming the company, creating an organizational chart, filing incorporation documents and much more. But none of these individually or collectively are essential for the emergence of the new agent, nor is the order of their happening. At these early stages, agency is weak; the outside world barely registers, let alone validates the company’s existence. Even the founders struggle to take seriously the validity of their enterprise. What if it never gets off the ground, never brings in revenue, never turns a profit? But at some point between the lightbulb being turned on and the largest company in the world, we all would agree that General Electric somehow “came into being” as a real thing. It emerged. It is unequivocally (now) an agent that acts to perpetuate itself, improve its wellbeing, protect itself against threats, set and achieve goals, communicate with other agents, sometimes cooperating with them, sometimes competing, selling to them, hiring them, merging, spinning off, etc.
The corporate agent is a cooperatively emerging one, dependent on its constituents somehow aligning their own goals and actions with one another. At first, when it’s just a few close friends or colleagues, this is easy. But as time goes on and new members are added, new functions, sub-goals and sub-plans are created, corporate agency develops and subtle conflicts naturally arise: conflict between individuals who have different self-interests and whose interests are not entirely aligned with those of the company. Functional components (departments, processes) and corporate goals, once singular, are now multiple and not always harmonious. So structure is created (hierarchy, policy, rules, procedures and decision-making processes) to mediate this conflict in service of one main thing: corporate agency. This is an example of information at the higher level (that of the corporate agent) being fed back into the lower-level dynamics, constraining constituent agents’ actions to those that promote corporate wellbeing. This is self-reinforcing too. Structures and dynamics which do lead to corporate wellbeing get solidified and amplified, pushing out those elements which do not, further reifying the higher-level agent, and so on. The family health coverage and equal opportunity policy each work to the benefit of certain members and at the expense of others, but both work to the benefit of the company as a whole. These are examples of dynamics which simultaneously promote corporate agency and which are a result of it. To look at the dynamics as prior to agency or vice versa misses a crucial understanding.
The chicken and egg are dual aspects of another type of emergence: autocatalysis. At first glance, autocatalysis and cooperation don’t seem to have much in common, but they share an important characteristic. Both are coherent dynamics that align the existence of two or more agents, just as waves in phase with one another are said to be coherent. Cooperation is about the coherence of agents that exist simultaneously (but at different points in space) whereas autocatalysis is about coherence over time. A begets B which begets C which… begets A. Note that an autocatalytic system might also translate (i.e. move) its constituent agents in space, but it doesn’t need to. In the Game of Life, gliders are autocatalytic systems that move their constituent agents (and thus themselves!) in space, while blinkers are ones that do not.
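For the curious, a bare-bones Life stepper (a Python sketch of mine) makes the claim checkable: after four generations the glider’s five cells reappear shifted diagonally by one unit, while the blinker’s three cells are back exactly where they started.

```python
from collections import Counter

def step(cells):
    # One Game of Life generation; cells is a set of live (x, y) coordinates.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}  # translates itself in space
blinker = {(10, 10), (10, 11), (10, 12)}           # oscillates in place

cells = glider | blinker
for _ in range(4):
    cells = step(cells)

# The glider's cells come back shifted by (+1, +1); the blinker's are unmoved.
print(sorted(cells))
```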
To see coherence more clearly, it helps to look at systems with decoherent and incoherent dynamics. In the former case, one agent’s existence is negatively correlated with another’s, for example matter and antimatter particles. As we all know, when electrons and positrons meet they annihilate one another. A system with incoherent dynamics is one in which agents coexist and possibly interact, but do not destroy one another or help one another survive. They are coherently neutral. A closed system of gas particles bouncing off one another gets the picture across. If coherence is represented by an upward pointing arrow, and decoherence a downward pointing arrow, then incoherence is one that points orthogonally sideways. Of course, real systems come in many flavors and the arrows more than likely are at non-right angles. Even “purely competitive” systems like marketplaces and predator-prey ecologies are far from decoherent. Market competitors agree to cooperate on the institution of trade itself, and predators survive the meeting, and eating, of their prey (quite nicely, thank you very much).
Emergence itself is tightly coupled with the concept of coherence. And these are also both coupled with the concepts of agency, system, complexity and existence. Alex Ryan introduces a formalism for emergence which, while not immediately intuitive, has great explanatory power, and therefore should be taken seriously. One fallout of his formalism is the ability to separate the “interesting stuff”, what he calls novel emergence (life and such), from the more mundane weak emergent results of averaging (e.g. temperature) and scaling (e.g. growth). In the arrows analogy above, perhaps there is a nascent formalism with additional explanatory power. One can imagine a vector analysis and field theory to talk about emergence, including agency. Perhaps we will find that emergent structures like ice and other ordinary “matter” don’t intrigue us as much as “complex systems” because the arrows tend to go sideways in the simpler systems. The interesting stuff happens where agents can interact and affect each other’s very existence.
Natural selection is interesting because the arrows are non-neutral; they point mildly downward. Novel emergence (which includes both the cooperative and autocatalytic types) points north of the horizon, and is also very interesting. Without it there would be nothing interesting for natural selection to shape. But this is stating the obvious. We already knew that chickens and eggs are both very interesting.
—
* See the arrow labeled “Self Referential” in this diagram.
I have spent a lot of time on this blog discussing evolution and emergence, the distinction between the two and the interplay thereof. All the while I have wished that I had a diagram like Alex Ryan‘s above (posted with permission), as it does much better than the proverbial thousand words.
]]>“Nature, red in tooth and claw”:
Richard Dawkins used this quote in his book, “The Selfish Gene,” to summarize the behavior of all living things which arises out of the survival of the fittest doctrine of evolutionary biology. His unsentimental view of behavioral biology was originally unpopular when the book was published in 1976, coming at a time when the prevailing worldview of human behavior was tabula rasa….
Dawkins used the quote as a corrective, reminding us that we humans are born into a world with pre-existent genetic imperatives that cause us to be competitive despite the best efforts of education and religion to suppress those imperatives.
From Everything2.com
The myth, which The Selfish Gene perpetuated, contends that where cooperative behavior is found in nature, it can be explained by kinship relations alone. That is, you share genes with your kin, therefore these shared genes have incentive to cause you to cooperate. To the extent that cooperation exists between organisms that are not close kin, the myth explains this away as co-option of general “social mechanisms” that were intended (by evolution) to facilitate kin cooperation.
Thanks to Robert Aumann and Robert Axelrod, the Iterated Prisoner’s Dilemma (IPD) showed that cooperation can emerge naturally without any notion of kinship or shared genes.
In fact cooperation can exist and thrive in the presence of completely “ruthless” populations of defectors who never cooperate. Martin Nowak’s 2006 text, Evolutionary Dynamics, explores the cooperative dynamic in detail. Here are some interesting findings Nowak notes from studying spatial Prisoner’s Dilemma games — a subset of IPD in which the population is arranged in a spatial configuration and individuals interact only with their neighbors:
- Cooperators and defectors co-exist
- “Cooperators survive in clusters”
- “Cooperators can invade defectors when starting from a small cluster”
- One interesting dynamic occurs when two self-sustaining “walker” sub-populations collide into a “big bang” of cooperation which largely takes over the population.
Cooperation can also emerge based on temporal asymmetries during the replacement of individuals in a reproducing population. If the choice of which individual will be replicated is made before randomly selecting an individual to be “killed off”, then selection favors defectors. It turns out, however, that if the choice is made after the randomly selected individual is removed from the population, then cooperation is favored. While generalizing from simple models does not always capture what’s going on in nature, this surprising outcome should give us some intuition that the central dogma is in need of further examination.
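To make the spatial findings above tangible, here is a minimal Nowak–May-style spatial Prisoner’s Dilemma in Python (the lattice size, temptation payoff B, and unconditional-imitation update are standard toy choices, not Nowak’s exact setup):

```python
import random

SIZE, B = 30, 1.65   # lattice side; payoffs are R = 1, T = B, S = P = 0
grid = [[random.random() < 0.9 for _ in range(SIZE)]
        for _ in range(SIZE)]          # True = cooperator, False = defector

def neighbors(i, j):
    # Moore neighborhood on a torus (wrap-around edges).
    return [((i + di) % SIZE, (j + dj) % SIZE)
            for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]

def scores():
    s = [[0.0] * SIZE for _ in range(SIZE)]
    for i in range(SIZE):
        for j in range(SIZE):
            for ni, nj in neighbors(i, j):
                if grid[ni][nj]:       # payoff flows only from cooperating partners
                    s[i][j] += 1.0 if grid[i][j] else B
    return s

def imitate(s):
    # Each cell copies the strategy of the best-scoring cell in its neighborhood.
    new = [[False] * SIZE for _ in range(SIZE)]
    for i in range(SIZE):
        for j in range(SIZE):
            bi, bj = max(neighbors(i, j) + [(i, j)], key=lambda c: s[c[0]][c[1]])
            new[i][j] = grid[bi][bj]
    return new

for _ in range(50):
    grid = imitate(scores())

frac = sum(map(sum, grid)) / SIZE ** 2
print(f"cooperator fraction after 50 generations: {frac:.2f}")
```

Despite the defectors’ per-interaction advantage, cooperators persist in shifting clusters rather than being driven to extinction, which is the co-existence result described above.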
Cooperation appears infectious under the right circumstances. Of course the same can be said of defection (aka competition) if you get to choose your circumstances. However, because Earth is a thermodynamically open system where more energy is continually added (from the sun, for instance), I would wager that on average, cooperation yields higher payoffs than competition. And because of this asymmetry, cooperation, not competition, is the more “natural” state of affairs. If you catalog all of the interactions between all of the agents in the world and divide them into the three categories of cooperative, competitive and neutral, I bet you will find more of the first than the second. If you start with this premise and look at the world anew, you may find (as I do) that the world is an inherently benevolent place.
]]>There are roughly three groups of people with regard to The Secret:
- Those who already live it and see its tenets as natural and obvious
- Those who could benefit quite a bit, but whose strong critical thinking skills get in the way of efficacy
- Those who could benefit the most, but are not capable of practically employing it for a variety of reasons, including aptitude and discipline
Category 1 folks don’t see what all the fuss is about. Category 2 folks see it as snake oil. Category 3 folks are the biggest fans of The Secret and the ones responsible for its viral commercial success, but for the most part won’t see any long-term benefits from it. Part of the reason for this last bit of irony is that there is no such thing as a free lunch. If you want real change in your life, you have to be willing to work at it, and a couple of viewings of a DVD may only be the first 1%.
So where do I stand on The Secret? After watching it more than once, I think it’s about 75% “truth” and 25% mumbo jumbo. The true part, contrary to what the DVD would lead you to think, is not mystical; it can be easily explained by well-understood social and psychological principles, and its truth value is not affected one way or another by the other 25%. In other words, you don’t need to invoke quantum physics, talk of all-pervasive vibrational frequencies, or a vast conspiracy through the ages to either explain or employ The Secret. However, the evidence suggests that for the majority of consumers, those in Category 3, the mumbo jumbo may be necessary to build the “faith” and will to get them to give it a chance. Certainly the material has been published and presented many times before by many different people using different lexicons, slightly different methodologies, and vastly different explanations for why it’s supposed to work. Thus it stands to reason that the commercial/viral/memetic success of The Secret can at least partially be attributed to the packaging, including the bad pseudo-science.
What about the 75%, how can that be explained in ordinary terms without invoking a universal “Law of Attraction”? I’ll start by pointing out that many phenomena that exhibit law-like behaviors can be explained as emergent properties of complex systems without depending on the existence of a universal law of nature or anything supernatural. In a later post, I will argue that even seemingly fundamental laws like gravitation can be seen as emergent rather than innate.
The basic prescription in The Secret is a three step process of “Ask, Believe, Receive”. In other words (as it is explained in more detail), you should consciously think about (“ask the universe”) what you want to occur in your life, then cause yourself to believe that it will come true (again with lots of visualization, talking about it as if it’s already true, etc), and finally wait patiently with an open mind and heart while the law of attraction does its magic to achieve your desire. The strong claim that the video makes is that if you do these three steps correctly, the law will work every single time. Of course, this is an unfalsifiable claim (and thus cannot ever be disproved scientifically) because if you don’t get what you want it implies you weren’t doing the steps right. Maybe you didn’t really believe, or maybe you just haven’t waited long enough.
It’s too easy to play devil’s advocate to the claim that The Secret works. Instead, I’ll attempt the opposite and tell you why it does work (but not all the time), if you either have complete faith, or you have gotten past the specious argument that, since it’s got some amount of bogosity and hype, it’s all just hogwash. For those who want the rational argument for why The Secret works, read on.
Imagining What You Want
At any given moment during the day, there are an effectively unlimited number of things you can think about and actions you can take. Due to conditioning, culture, and many other historical constraints, the actual set of thoughts and actions you are likely to engage in is a very small subset of the possible. When you imagine something and focus on it, that biases the set of likely future thoughts and actions towards possibilities consistent with what you are imagining. Additionally, imagining what you want opens your mind to perceiving things you would not otherwise have perceived or would otherwise have perceived differently. Achieving something imagined generally takes a decent number of path-dependent steps wherein any individual step is manageable and gets you in the general direction of the goal, but may require some course correction after the step is taken. Focused imagination is like a beacon that allows you to course correct both consciously and subconsciously, and without it, it’s very easy to get lost, easy for small directional errors to compound, and for you to lose the will to continue towards your goal.
Additionally, there is ample evidence from sports psychology that suggests that visualization is a powerful tool for peak performance. This is not surprising given that fMRIs show that largely the same activation patterns in the brain occur when you imagine doing something as when you actually do the thing.
Talking About What You Want
While it is not magical or the work of some mysterious force, people are attracted to people who are like themselves. When you talk to people about the things you want in your life, they can very easily make a determination as to whether you are like them or not. Surrounding yourself with people who have the same vision and want similar things as you do means that you will all have help in getting there. At the same time, you no longer have as much time or inclination to interact with people who want a different future or who by their own words and actions drag you down or away from your goals. Yes, this is groupthink or brainwashing, but it’s the good kind, as long as what you and your cronies are striving for isn’t something like war or trampling on the rights of your fellow citizens. The Secret still works in those cases, but now it’s The Evil Secret.
Success Breeds Further Success
Certainly with initial success comes confidence, which in turn leads to future success in a virtuous cycle. But more than that, when you are successful at something, others notice, and their reactions to that noticing will make it easier for you to succeed in the future. This can be plainly seen in Hollywood, where the biggest stars are continually showered with free gifts and enticements to do business. When was the last time you got a weekend vacation to Napa fully paid for by a production company wanting to pitch you a movie script that will make you more money and bring you more fame, so that the next time it’s your whole family being flown by private jet to Fiji to convince you to be the lead in a mega-blockbuster which brings more success, and so on? Whether you value things like money or fame is beside the point, because whatever it is that you value, as long as there are others who want some of what you have or what you are likely to achieve, they will fall in line with you and make it easier for you to get to the next level or goal.
Believing You Can Achieve
Belief in yourself and in the proposition that “things will work out” is important for achieving goals. We know that if the goal is to get well again, the placebo effect (i.e. belief that you will get well again) accounts for 20% – 40% of the effect of any treatment. When you believe, this reinforces the imagination part, the talking part, and the attraction part (the quiet confidence of a believer is naturally attractive). Belief also blocks out destructive internal dialogs that we all have from time to time, and that many of us have as our standard thought patterns. It also blocks the effects of naysayers. To these points, Postman & Weingartner ask us in Teaching as a Subversive Activity (1969)* to do a thought experiment:
Suppose you could convince yourself that your students are the smartest children in the school; or, if that seems unrealistic, that they have the greatest potential of any class in the school. (After all, who can say for certain how much potential anyone has?) What do you imagine would happen? What would you do differently if you acted as if your students were capable of great achievements? And if you acted differently, what are the chances that many of your students would begin to act as if they were great achievers? … There is… considerable evidence to indicate that people can become what others think they are. In fact, if you reflect on how anyone becomes anything, you are likely to conclude that becoming is almost always a product of expectations — one’s own or someone else’s. We are talking here about the concept of the “self-fulfilling prophecy.”
Self-Fulfilling Prophecy
Belief/faith is an autocatalytic cycle, but it can be delicate; once shaken it unravels quickly. Where once you were crediting all your successes to The Secret, now you see that it was all just bunk. That bicycle you wanted for Christmas didn’t show up, so it must not work, right? On the other hand, if you had failed to drop enough hints to your loved ones beforehand, you could have gone out the day after and bought yourself one on sale. Of course, now that you are a non-believer, you are in a different sort of self-fulfilling prophecy, one where you are looking for evidence to the contrary at every turn. Guess what you’re gonna find?
Examples of self-fulfilling prophecies can be found everywhere in social life, including the value of stocks, mutual trust between friends, and the safety of urban neighborhoods. When faith-based systems are stressed past a certain point, stock markets crash, friends become enemies, and neighborhoods turn.
For the computer geek, try to think back to the first time you really “got” recursion and could actually code a recursive routine that did something useful. If you were like me, you had to go through the thought process a number of times, get to the base case, and unwind the recursion before you convinced yourself that it works. There may even have been an “aha” moment before which you couldn’t write a recursive routine and after which you could write anything. For the less geeky, I liken the role of faith to the first time you were able to ride a bike without training wheels. You don’t need to invoke God or quantum physics to ride a bike or do recursion, but you do have to get out of your own way and choose to be a little out of control — just for a moment — for the virtuous cycle to achieve catalytic closure and become self-sustaining.
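For anyone who hasn’t had that moment yet, this is the kind of minimal routine I mean (Python; a toy list-flattener of my own choosing), where everything hinges on trusting the base case:

```python
def flatten(item):
    # Base case: anything that isn't a list is already flat; just wrap it.
    if not isinstance(item, list):
        return [item]
    # Recursive case: trust that flatten() already works on each smaller piece.
    result = []
    for sub in item:
        result += flatten(sub)
    return result

print(flatten([1, [2, [3, 4]], 5]))  # [1, 2, 3, 4, 5]
```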
Three Types of Secret Admirers
I started this post by dividing the world into three types of people relative to their stance on The Secret. I’ll end in similar fashion by suggesting that there are three kinds of people who find success with The Secret (or any of the numerous similar self-help philosophies):
- Those with unshakable faith in themselves
- Those with unshakable faith in the world, i.e. that things will just somehow “work out for the best”
- Those with both 1 and 2
To those who say The Secret doesn’t work, I say they are right. But to those who say it does, I would also say they are right.
* Hat tip to Paul Phillips who recommended Teaching as a Subversive Activity.
However, there is a glaring problem with this study. While it seemed comprehensive with respect to thimerosal exposure, it achieved that coverage by combing health plan records, not by attempting to measure levels of mercury in the actual bodies of the children or their mothers during pregnancy.
If thimerosal were the only possible source of mercury exposure, this might make sense. But we know that certain seafoods and dental fillings are suspect sources, not to mention potentially unrecognized ones that may exist. While the maximum amount of mercury obtained through thimerosal in vaccines may not by itself be enough to show statistical correlation with autism, it is still entirely possible that total exposure, including pre-natal exposure, could show such a correlation. Additionally, with a high base level of mercury in some children, the addition of thimerosal into the mix might lead to toxicity. As we know from the study of many complex systems, including biological ones, non-linearity is the rule rather than the exception, and in particular, toxicity effects are often non-linear.
This might partially explain why many parents of children with autism report noticing a marked acceleration of symptoms coincidental with vaccinations.
]]>Models
p. 35: We begin with a discussion of the basics of scientific modeling. This topic is so fundamental to the scientific enterprise that it is often assumed to be known by, rather than explicitly taught to, students (with the exception of a high school lecture or two on the “scientific method”). For whatever reasons, learning about modeling is a lot like learning about sex: despite its importance, most people do not want to discuss it, and no matter how much you read about it, it just doesn’t seem the same when you actually get around to doing it.
Being able to see the intellectual tools with which you are working as such, and especially understanding their limitations and biases, is critical for deep understanding. A lot of what I post here harks back to this theme:
- There is No Truth, Only Predictive Power
- Thought as Metaphor
- This Sentence is False
- Parts of the Elephant
- Emergent Causality
Much of what passes for scientific pursuit ignores the fundamental importance of the chosen model and the modeling process itself. And instead of scientific enlightenment we get mathematics in disguise:
p. 71: Many analytic methods provide exact answers that are guaranteed to be true. Alas, all models are approximations at some level, so the fact that, say, a mathematical model gives an exact answer to a set of previously specified approximations may not be all that important.
One underutilized model in particular is going to be very important in advancing our understanding of the universe:
p. 95: Networks may also be important in terms of view. Many models assume that agents are bunched together on the head of a pin, whereas the reality is that most agents exist within a topology of connections to other agents, and such connections may have important influence on behavior.
Emergence
p. 234: The path of the glider can be predicted without resorting to the microlevel rules. Thus, in a well-defined statistical sense, it requires less information to predict the path of the glider by thinking of it as a “thing” than it does to look at the underlying parts. In this sense, the glider has emerged (Crutchfield, 1994).
Yep.
p. 28: Another important question is how robust are social systems. Take a typical organization, whether it be a local bar or a multinational corporation. More often than not, the essential culture of that organization retains a remarkable amount of consistency over long periods of time, even though the underlying cast of characters is constantly changing and new outside forces are continually introduced. We see a similar effect in the human body: typical cells are replaced on scales of months, yet individuals retain a very consistent and coherent form across decades. Despite a wide variety of both internal and external forces, somehow the decentralized system controlling the trillions of ever changing cells in your body allows you to be easily recognized by someone you have not seen in twenty years. What is it that allows these systems to sustain such productive, aggregate patterns through so much change?
While the answer to this question is complex, my (overly) simplistic model is that agents at level 2 emerge from autocatalysis and cooperation of level 1 agents. Over time, natural selection “solidifies” level 2 agency (and constrains level 1 interactions), making these aggregate patterns clearer and more robust. Eventually level 2 agents yield level 3 agents, and so on. Natural selection and emergence go hand in hand, and one does not typically operate without the other in nature.
p. 200: Organizations are able to circumvent a variety of agent limitations. Some organizations are useful because they can aggregate existing characteristics of agents, such as when tug-of-war teams combine each member’s strength or schools of fish confuse predators by forming a much larger and more dynamically shaped “individual”. At other times, the value of an organization comes through internalizing external benefits, such as flocks of geese (or schools of fish for that matter) having an easier time moving by using vortices created by other members of the group. Organizations can also allow agents to exploit specialization and circumvent other innate limitations, such as the ability to acquire or access incoming information, or individual bounds on processing the information once it is acquired.
p. 95: …the most interesting results come about when the outcome of the model is, at some level, at odds with the induced motivations of the agents — to use Schelling’s terms, when the micromotives and macrobehavior fail to align. Thus, it is far more interesting to see cooperative behavior emerge when the agents are self-interested than when the agents are presumed to be altruistic, or to see agents aggregate into cities when their goal is to be left alone.
Indeed, it is most interesting when cooperation emerges in the presence of competition, as well as when tragedies of the commons emerge despite the best intentions of the lower-level agents. I am particularly concerned about what the latter implies about the long-term viability of our individual human well-being (however you might want to define that) as the vitality of organizational levels above us — cultures, governments, corporations, belief systems, et al — becomes misaligned with our own.
Communication
]]>p. 198: In general, communication is capable of productively altering the interactions in a social system for a few key reasons. First, communication expands the behavioral repertoire of agents, allowing new and potentially productive forms of interaction to prevail. With communication, agents can create new actions that allow them to escape the previous behavioral bounds. The greater the potential of communication, proxied in our discussion by processing ability and tokens, the more possibilities emerge. Second, communication emerges as a mechanism that allows an agent to differentiate “self” from “other”. In the worlds we have explored, agents would like to cooperate in the case of the Prisoner’s Dilemma and hunt stag in the case of the Stag Hunt, but the presence (and inherent incentives) of defectors is an ever-present danger to adopting such behavior. Communication emerges as a way either to signal a willingness to be nice or to detect meanness. In these systems, this occurs when a fortuitous mutation gives an agent the ability to “speak” and to respond positively to such communication while avoiding harm from those agents that say nothing. By detecting “self” in such a way, the agent can improve its performance even in nasty worlds.
- No matter how hard you try to keep your inbox clear, there is an equal and opposite force working to fill it up.
- If you do happen to clear it, soon it will just fill up again.
- If you stop answering emails entirely, eventually they will just stop coming in.
- Each person has their own equilibrium point where the incoming flow balances naturally with their desire for a clear inbox. (Mine is at about 20 emails.)
- The joy you receive from having a clean slate is always less than (and more fleeting than) the anxiety you feel trying to get there.
- When you die, your inbox won’t be empty. (Okay, so I stole that one from Don’t Sweat the Small Stuff)
I’m working on letting go of the anxiety about the situation and being happy with an average of 20. But then, by releasing the pressure won’t my average just go up, causing the anxiety to return?
]]>Explicit Cooperation
This is normally what we think of when we talk about cooperation: two (or more) agents using explicit communication to coordinate their behaviors. Examples are trivial to come by when thinking of human agents or computer technology. In non-human ecosystems, we see explicit cooperation too. Communication is achieved via basic tagging mechanisms such as pheromones, or more complex linguistic processes such as the famous bee dance.
Implicit Cooperation
Sometimes agents end up cooperating “by accident”, meaning that their actions are determined individually without any explicit communication, but because of the particular state of the environment (including their configuration with one another), they each happen to benefit from the actions of the other(s). The tit-for-tat strategies in the iterated prisoner’s dilemma illustrate the concept. Past actions of one agent are noted and factored into future actions of another. There is no explicit communication, and each agent is (by design) looking out for their own best interests and nothing else. When a tit-for-tat agent interacts with a “narrowly selfish” agent, cooperation does not arise, but when two tit-for-tat agents interact it does. The TfT agent does not get to choose its strategy (it is fixed by design); rather, the environment it finds itself in determines whether it will be in a cooperative situation or otherwise.
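A short sketch shows the asymmetry in action (Python, using the standard PD payoff values). Note that there is no message passing anywhere in the code; cooperation appears or fails to appear purely as a function of who meets whom:

```python
# Standard Prisoner's Dilemma payoffs: (my move, their move) -> my score
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def tit_for_tat(opponent_history):
    # Cooperate first; thereafter echo the opponent's previous move.
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    return 'D'

def play(strat1, strat2, rounds=100):
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = strat1(h2), strat2(h1)
        s1 += PAYOFF[(m1, m2)]
        s2 += PAYOFF[(m2, m1)]
        h1.append(m1)
        h2.append(m2)
    return s1, s2

print(play(tit_for_tat, tit_for_tat))    # (300, 300): sustained mutual cooperation
print(play(tit_for_tat, always_defect))  # (99, 104): TfT is exploited once, then defends
```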
Physical and biological systems can be seen as exhibiting implicit cooperation as well. Any structure in physical space that is “built” from more basic parts can be viewed as having emerged via cooperation between constituent agents. Bricks cooperate (i.e. mutually benefit) through their physical proximity and particular configuration in a wall, compared to, say, being randomly strewn around a construction site. In the random configuration, each brick is exposed on all sides to destructive forces, but in a wall they mutually protect one another. Additionally, bonds between two connected bricks mutually reinforce bonds with a third brick.
In social systems, implicit cooperation occurs all the time, though we tend to assume that all cooperation requires explicit communication. This fallacy helps explain the popularity of conspiracy theories.
Disruptive Cooperation
A third form of cooperation can be illustrated by the analogy, “the enemy of my enemy is my friend.” In a population of agents where a subset (C) are cooperating with one another to their mutual benefit over the general (non-cooperating) populace, the independent agents can gain relative benefit by simply disrupting the cooperation of agents within C. This month’s New Yorker has a cover story in which a highly diverse (and normally antithetical) collection of dissident political parties is presenting a unified front in its opposition to Putin’s all-powerful ruling party.
Oncologists note how cancerous cells exhibit extreme heterogeneity and competitive behaviors both with themselves and with normal cells. But in a sense — the disruptive sense — they are cooperating with one another to lower the fitness of normal cells, which are ultra-cooperative with one another. I have written several times here about the centrality of cooperation in the emergence of new agents. Tumors are new agents that emerge from the interactions of cancerous cells. Perhaps disruptive cooperation is a key factor. Tumors are different from “positive” agents in that they emerge from a substrate of high complexity, feed off of it (in a thermodynamic sense perhaps) and ultimately break down the complex system and hence the source of their own existence. Lest we dismiss this view out of hand, it should be noted that social/cultural agents exhibit this dynamic all the time.
]]>Dawkins makes a great case for looking at genes as agents undergoing evolution by natural selection (NS). But this does not mean that the “phenotypic” agents are not also doing the same. In multicellular organisms, there is not just one phenotype, but two: the single cell, and the whole organism. Actually, even this is a simplification. There are many phenotypes ranging from the chromosome level to expressed proteins to cells, organs, immune system, et al. Each of these systems can be looked at as phenotypic agents with all the others acting as the genotype.
There is an asymmetry between these systems in that they are organized hierarchically. Higher levels have been built on lower ones, historically speaking. But once catalytic closure is achieved, the question of which came first (chicken or egg) becomes a red herring.
]]>Part of the reason it is hard to see this basic truth is that we don’t accept that there is a continuum of behaviors between groups of agents which ranges from highly competitive on one end to highly cooperative on the other. When agents cooperate enough, we recognize a new level of agency (e.g. metazoa). But one of the thrusts of this blog is to look at group dynamics not as an all-or-none proposition. By admitting this continuum, it becomes clearer how a loose ecology of independent agents, taken together with all of their cooperative dynamics, can be seen as an agent which can be subject to selective pressure in the presence of other such “loose group agents.” To deny group selection is to deny NS entirely. ]]>
Metastable systems subsume an important and ubiquitous class of systems here on Earth: autocatalytic systems. Autocatalytic systems are those that pass through two or more distinct states in a cyclical way such that each state will eventually be repeated. Examples of autocatalytic systems include gliders, fire, metabolism, and organism reproduction. Due to the hierarchical organization of natural and artificial complex systems, each “state” is also a configuration of agents at the lower level (e.g. pixels, oxygen molecules, enzymes, organs, parent organisms, etc). Autocatalytic systems are heavily dependent on a rich environment of resource agents for the process to continue indefinitely, and without a source of renewal (i.e. exogenous energy), eventually the system will not be able to sustain itself. Thus, a thorough understanding of an autocatalytic system cannot be gleaned without a thorough understanding of the environment of potentially constituent agents. Of course one agent’s environment is another agent’s… self. Meaning that agents act as each other’s environment and thus can form cross-catalytic cycles. In metastable systems, agency arises (i.e. emerges) when the catalytic loop is closed and one or more cycles is formed. The tighter the loop — the stronger the probability that once in state A, the system will get back to state A — the more stable we say the system is, and the more likely we are to recognize the entire system as an agent at the higher level of organization. Agency is not a binary proposition, but rather a continuum where the metric is stability.
[Figure: evolution_v2.png]
From The Chaos Point. Reproduced with permission from the author.
Given the rich environment required to foment autocatalytic reactions from random ones, it is not surprising that where there is one emergence, there are many similar simultaneous emergences. Stuart Kauffman calls life on earth, including us sentient beings, “we the expected,” by which he is at least partially referring to the non-accidental, and fecund nature of autocatalysis under the right circumstances. And given the resource intensive kind of environment required for autocatalysis to emerge, it’s not surprising that autocatalytic agents would find themselves in competition for increasingly scarce resources. The sigmoid graph of the autocatalytic rate law gives a good intuition for how emergence of agents at a new level of organization quite naturally leads to increasing selective pressure.
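For reference, the rate law being invoked is that of the simplest autocatalytic reaction, A + B → 2B. With the total concentration a₀ = [A] + [B] conserved, its kinetics integrate to exactly the logistic sigmoid (a standard chemical-kinetics result):

$$\frac{d[B]}{dt} = k\,[A][B] = k\,(a_0 - [B])\,[B] \quad\Longrightarrow\quad [B](t) = \frac{a_0}{1 + \frac{a_0 - [B]_0}{[B]_0}\,e^{-a_0 k t}}$$

Growth is slow while the autocatalyst B is rare, accelerates as B catalyzes its own production, and abates as the substrate A is consumed.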
Once there is selective pressure exerted over a population of autocatalytic agents, we arrive at the familiar zone of Darwinian evolution. As the population moves up the sigmoid and resources become scarce (either through sustained competition or otherwise changing environment), cooperative behaviors begin to appear. The following narrative illustrates the general dynamic:
When the going gets tough, social amoebas get together. Most of the time, these unusual amoebas live in the soil as single-celled organisms, but when food runs short, tens of thousands of them band together to form a sluglike multicellular cluster, which slithers away in search of a more bountiful patch of dirt.
New research shows that, within this slug specialized cells rove around vacuuming up invading bacteria and toxins, thus forming a kind of rudimentary immune system. The discovery could provide a molecular link between the bacteria-eating behavior of single-celled amoebas and similar behavior by cells of animals’ immune systems.
Science News, August 25, 2007
Cooperative behavior between agents at one level (e.g. amoebas) can lead to the emergence of a new level of agent (e.g. slugs).** And just as with autocatalytic emergence, should a population of higher-level agents emerge and begin to compete, natural selection occurs at the higher level. This does not mean that selection at the lower level(s) ceases, though the very nature of cooperation implies a reduced differential in fitness between cooperating agents, and thus selective pressure is reduced relative to a competitive environment. Additionally, selection at the higher level can work to constrain destructive selection at the lower level. John Pepper et al. suggest that animal cell differentiation patterns are an adaptation to suppress evolution at the cellular level and hence stave off cancer.*** At the cultural levels, we find tons of examples of the higher level constraining destructive competition between constituent agents, including the entire legal and governmental enterprises. Not all selective pressures are destructive to higher levels, as evidenced by the adaptive immune system, which clearly makes the animal a more robust agent through increased flexibility in responding to threats.
This brings us back to Voorhees’ notion of a virtually stable system, which “maintains itself on the boundary between two or more attractor basins…. [Energy is expended] to maintain the system on an unstable trajectory, or in an unstable state. This energy expenditure purchases an increase in behavioral flexibility.” The adaptive immune system fits the model of virtual stability, as does Palombo’s theory of emergent ego (to name just one of several theories of mind which conjure notions of virtual stability). While in one sense “virtually stable” systems teeter on an unstable edge, in another sense they are more stable (virtually so) than a “one trick pony” such as the innate immune system, which can be represented by a single basin of attraction. Interestingly, when we find virtually stable systems in nature, they tend to be employing a selection mechanism over a population of agents (in other words, evolution) as the central mechanism of flexibility. But there are other mechanisms of self-monitoring and adaptive control besides natural selection, and we tend to observe these in man-made virtually stable systems such as robotic controllers, constitutional democracies, etc.
At this point, I will indulge in some speculation about the relationship and differences between autocatalytic emergence and cooperative emergence.
- Agents that emerge from either type can be found as constituents for a higher level emergence of either type.
- Autocatalysis is formed from heterogeneous agents which each play a unique role, and thus autocatalysis is more susceptible to disruption (less stable) than cooperative emergence.
- Cooperative emergence often relies on homogeneous agents which are functionally interchangeable and is thus more robust.****
- For a cooperatively emergent agent to persist through time, its constituent agents must also have a way to persist through time, generally in the face of deleterious forces. This means that constituent agents must either self-repair when compromised (virtual stability), or be replaced by a healthy, functionally equivalent agent. Conceptually speaking such replacement can happen via either replication or re-emergence, but practically speaking, the former is much more prevalent. This is because autocatalytic systems can and do produce by-products, including additional copies of the original autocatalytic set. This is the foundation of agent replication, which, due to geometric growth, yields exponentially greater population sizes than re-emergence alone.
- Autocatalytic systems, being less stable on the whole, exhibit a weaker form of agency than cooperative systems. That is, the agents don’t “look out” for their own well-being as forcefully, and so the degrees of freedom of interaction — and the velocity and multitude of interaction — are greater in autocatalysis than cooperative emergence.
- Cooperative behavior requires direct informational feedback, whereas a single catalytic reaction does not (though once the catalytic loop is closed, a feedback loop has been created).
- Virtual stability only arises in systems that are internally complex enough to contain models of not only their current environment (as all agents either explicitly or implicitly do), but also models of what Kauffman terms the “adjacent possible.” In other words, the system must be able to predict what the environment might be like in the future. Populations of agents undergoing selection have this feature (at the population level), as do neural networks and other local search mechanisms.
- One way to look at agency is as a “unit of survival”, or what persists over time. Replication is a form of meta-survival, but plain old survival counts too. Individual molecules exist for a long time, individual turtles less so.
Populations of agents are said to evolve via natural selection. (And populations are agents too). But agents evolve via mechanisms other than natural selection, for instance ontogenesis and senescence.
There is nothing particularly special about natural selection in the pantheon of complex systems dynamics. It is a model that has great explanatory power for a set of observations and phenomena that are readily available to the naked human eye combined with a cataloging mechanism. With the burgeoning interest in complex systems, and the more general trend towards multidisciplinary discourse, we will eventually have robust models of emergence (and other dynamics) as distinct from evolution.
—
* In an earlier post I classified metastability and various kinds of virtual stability (such as self-repair and environmental representation) as “mechanisms” of stability, and I see now that I probably should have used the word “aspect” instead of “mechanism”. Regardless of whether one views metastability and virtual stability as classes of a more general definition of stability or as distinct from one another, it should be clear that all three are intimately connected and lead to a more general notion of agency than any one alone would.
** I have claimed in a previous post that “emergence of higher levels of organization of complex systems happens via cooperation of agents at the lower level, and that without cooperation, the burgeoning of complexity would not occur.” I should have been less absolute in this claim because I believe now that autocatalytic emergence is importantly different from cooperative emergence.
*** Apoptosis (programmed cell death) is also believed to be such an adaptation.
**** Heterogeneous agents can and do form cooperative alliances, and some co-evolve to the point of strong symbiosis, wherein one cannot exist without the other. A common example is gut flora (e.g. E. coli), which are necessary for many mammals (including humans) to digest their food, and which have become specialized to the diet and general environment of their host’s innards.
The notion that the “network is the computer” – or at least that it could be – has been around for a while. But all actual implementations to date are either too specialized (e.g. SETI@home) or simplistic (e.g. p2p file-sharing, viruses, DDoS attacks) to be used for generalized computation, or are bound at some critical bottleneck of centralization. To this latter point, search engines hold promise, but the ones we are familiar with like Google are reliant on both central computational control (for web crawling and result retrieval) and central storage (for indexing and result caching). Lately social bookmarking/tagging has been used by those opting in to distribute the role of crawling, retrieval and indexing. It remains to be seen whether keyword tags and clusters thereof are semantically strong enough in practical terms to support general computation. Regardless, whatever heavy lifting is not supported by the representation level will end up falling on the protocol and computational levels. On the other end of the spectrum, the specialized and computationally intensive projects have the issue of how to divide the labor and coordinate results, and no efforts to date have yielded a way to generalize distributed computation without a high degree of specialized programming.
If we look at the various computational models that are theoretically strong enough (in the Turing sense) to do generalized work, there is a spectrum created by the tradeoff between more semantically rich representation and more powerful atomic computation. Using arbitrary web pages as the representational level requires too much processing to get much more than search results based on keywords, and it stretches the limits of our ability to bridge the semantic gap between humans and computers. The Connection Machine model places too much of the representation and programming problem on the shoulders of sophisticated programmers. The Cyc approach seems reasonable in terms of balance, but the problem is still one of centralization in the form of knowledge engineers and a very semantically rich representational language – one that is hard for the average person to understand. If we are going to break down all the central bottlenecks, the representation must be one that is easy for the average human to contribute to, at least declaratively and post hoc. The social network model (e.g. wikipedia) is the logical extreme wherein humans are self-organized as a network computer, not only for creating representation but also for computation itself. Open-source development, social bookmarking/tagging, and other forms of social networking work the same way. The only trouble with this model is that humans are a very limited resource compared to a network of traditional computing devices – even if you take all five billion of us working together as efficiently as possible.
So what if we combine the best of social networking and Cyc to achieve the “semantic web” outlined by Tim Berners-Lee et al. in the 2001 Scientific American article? Imagine that current web pages (possibly XHTML) can be annotated with just enough semantic structure that the average human can be a useful contributor via activities that they are already doing, namely wiki, tagging, browsing, searching, email, SMS, blogging, etc. Then define various meta-operations on top of search that effectively turn any search engine into a social theorem prover. Finally, create a p2p search protocol that is lightweight and extensible enough to harness the computational resources of any network enabled device via HTTP, SMTP, SMS, etc. The key to the search protocol is to remove all centralization, though certainly the Google API could be made to comply for some added horsepower and an initial boost.
Under the proposed model, the representation is handled as an emergent behavior of the social network, while the heavy computational lifting and storage can be truly and automatically distributed and handled by devices. The issues of coordination and control must be looked at differently than we normally think of them. Under the proposed model, the network becomes “smarter” and more useful for general computation based on the sophistication of the various meta-operators and the combined, continuous output of crawling, indexing and search. The way you “control” a computation is by creating a search whose explicit results and/or epiphenomena are what you want. If the necessary search operators don’t exist, then you can create a new one and extend the protocol for everyone’s benefit.
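To make the protocol idea a bit more tangible, here is a hypothetical sketch in Python; every field name, operator, and gossip scheme below is my invention for illustration, not a spec.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the kind of lightweight, extensible search message
# the proposed model imagines; fields and semantics are invented.
@dataclass
class SearchQuery:
    query: str                  # keywords or a tag expression
    operator: str = "match"     # extensible meta-operation, e.g. "match", "prove"
    ttl: int = 4                # hops remaining before the query dies
    results: list = field(default_factory=list)

def handle(msg, local_index, peers):
    """Answer from the local index, then gossip to peers while TTL remains."""
    msg.results.extend(hit for hit in local_index if msg.query in hit)
    if msg.ttl > 0:
        msg.ttl -= 1
        for peer in peers:      # in reality: HTTP/SMTP/SMS transports
            peer(msg)
    return msg

# Two toy "devices", each with its own index, wired as peers.
index_a = ["semantic web notes", "tag clusters"]
index_b = ["semantic search", "wiki dumps"]
node_b = lambda m: m.results.extend(h for h in index_b if m.query in h)
print(handle(SearchQuery("semantic"), index_a, [node_b]).results)
# -> ['semantic web notes', 'semantic search']
```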
]]>I am pleasantly surprised though that the media hasn’t latched onto this as a convenient explanation of the obesity epidemic of the past 30 years. There is an overwhelming avalanche of data supporting the argument that obesity in most cases comes from a combination of eating the wrong foods, eating too much of them, and not exercising enough. Yes, there is a genetic component that predisposes some people more than others, but as with the viral explanation, genetics is not determinative but rather just one factor.*
It turns out though that there is a major infectious component to obesity: it spreads through social networks via the mind. The July 26 issue of the New England Journal of Medicine documents findings that between two people who consider each other friends, if one of them becomes obese, the other’s chances of also becoming obese increase by 171%. The effect occurs whether you live next door to each other or 500 miles apart, so the viral explanation seems unlikely to account for the contagion. Also, neighbors who don’t consider themselves friends, and friends (and siblings) of opposite sex don’t affect one another. Same-sex friendship seems to be the only factor in social contagion of obesity. And lest you think it’s a matter of selecting friends who look like you (i.e. correlation vs causation), they controlled for that too.
Turns out obesity is a powerful meme, which suggests that it’s not enough to target individuals if you want to combat the epidemic effectively. On a cultural level there must be acknowledgment of the social component of obesity, and there must be campaigns to immunize our vast social network against its spread. The way you stop bad memes from propagating is by a combination of centralized propaganda (i.e. a traditional media blitz) and deeper one-on-one communication between individuals (i.e. creating a virulent anti-obesity meme).
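For the simulation-minded, here is a toy sketch of how such a contagion plays out on a friendship network; aside from the study’s 171% figure, every number below (network size, number of friends, baseline risk) is invented for illustration.

```python
import random

# Toy contagion sketch of the NEJM result; aside from the 171% figure,
# all parameters are invented.
random.seed(2)
N, YEARS, BASE = 500, 20, 0.02                 # people, years, baseline annual risk
friends = {i: random.sample(range(N), 3) for i in range(N)}
obese = set(random.sample(range(N), 10))       # small initial seed population

for year in range(YEARS):
    newly = set()
    for person in set(range(N)) - obese:
        exposed = any(f in obese for f in friends[person])
        risk = BASE * 2.71 if exposed else BASE  # +171% if a friend is obese
        if random.random() < risk:
            newly.add(person)
    obese |= newly

print(len(obese) / N)  # prevalence after 20 years; rerun with multiplier 1.0
                       # to see how much the social term alone adds
```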
Spread the word.
* Similarly, there are people who for various reasons can eat all the wrong foods or ridiculously large quantities and not gain weight. Some of the mitigating factors are genetics, adenovirus-36, the amount of exercise one gets, and the total diet one eats (e.g. acidic foods can slow digestion of simple carbs making them less likely to trigger insulin resistance, a major contributor to obesity). For anyone who wants to learn about how to eat healthy and enjoy your food as much or more than you do now, I highly recommend checking out Dr. Ann’s website and book.
But don’t the demands of rationality always compel us to seek the complete truth? Not necessarily. Rational agents often choose to be ignorant. They may decide not to be in a position where they can receive a threat or be exposed to a sensitive secret. They may choose to avoid being asked an incriminating question, where one answer is damaging, another is dishonest and a failure to answer is grounds for the questioner to assume the worst (hence the Fifth Amendment protection against being forced to testify against oneself). Scientists test drugs in double-blind studies in which they keep themselves from knowing who got the drug and who got the placebo, and they referee manuscripts anonymously for the same reason. Many people rationally choose not to know the gender of their unborn child, or whether they carry a gene for Huntington’s disease, or whether their nominal father is genetically related to them. Perhaps a similar logic would call for keeping socially harmful information out of the public sphere.
Like most people trained in a Western educational system (and especially like most scientifically minded people), I am biased toward the notion that knowledge, sharing of truth, communication, and openness to ideas are all good things for societies and individuals alike, and should therefore be fostered. I have even proposed a market system designed to be a globally trusted mechanism for assessing the truth value of claims and the trustworthiness of claimants. But I am not without my own doubts about the inherent “goodness” of knowledge and truth, having warned of the socially deleterious effects of dangerous media and therein suggesting a social/moral responsibility to willingly refrain from propagating it.
In thinking about the problem, I am reminded of the dilemmas and paradoxes for the rational agent when dealing with issues of mutual knowledge vs common knowledge. The distinction between these types of knowledge can be used to explain the existence of a whole host of phenomena involving social networks and information cascades. For instance, a stock market bubble can occur even when we all have mutual knowledge that a stock’s price is much higher than its “intrinsic value”. But as soon as this mutual knowledge becomes common knowledge (for instance some bad news is announced to the general public), the bubble is burst and we are in for a “correction”. As long as each of us believes there may be someone out there — the proverbial greater fool — who doesn’t know the stock price is inflated, we are motivated to buy or hold rather than sell, hence driving the price higher or keeping it inflated indefinitely. Once the bad news comes out, we all instantly know that our assumption can no longer hold and that there are no greater fools left, so we rush to sell, triggering a self-reinforcing downward spiral (aka a market crash).
One can easily argue that in institutions like markets, the more common knowledge the better off we are as a group: there’s less market volatility, fewer destructive bubble-crash cycles, less room for corruption, and generally a more fair playing field for everyone. Note that mutual knowledge is subsumed by common knowledge: not only do you and I both know X (mutual), but we also each know that the other knows X, and so on without end (common). Thus, in certain social institutions, we can argue that more information, more knowledge, more truth is better than less. The question on my mind is whether there are also cases in which more knowledge is actually worse, not for individuals as Pinker’s quote above suggests, but for society. This would suggest that exploring dangerous ideas may in fact be a dangerous idea after all.
The example of the Farmer’s Dilemma is a telling one. It’s a variant of the Prisoner’s Dilemma and a member of a very important class of social and economic problems loosely understood as the tragedy of the commons. In formulating the generalized notion of the tragedy of the commons, Hardin points out that there exist social, political, economic problems — very big ones, like over-population, nuclear proliferation, and pollution to name just a few — which have no “technical solution”. Which is to say, more knowledge and greater understanding of the problem won’t by itself lead to a solution. The only way out is to essentially change the rules of the game by agreement, to collude, to cooperate for the common good. In situations like the Farmer’s Dilemma, I would suggest that the common knowledge of each party’s preferences and abilities to reason “rationally” and recursively is what makes the situation tragic. If each farmer were limited to the mutual knowledge of an agreement they made to help one another, and were not allowed to delve into the higher-order logic of common knowledge, the tragedy could be averted. Common knowledge can be dangerous, as it tends to erode the foundation of cooperative behavior. The unfortunate corollary is that the “smarter” the agents in a social system, the deeper they can individually reason about common knowledge, the more dangerous that knowledge becomes.
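Here is a minimal sketch of that unraveling, with invented payoffs: farmer B moves last and prefers to free-ride, and farmer A, reasoning through B’s reasoning (the common-knowledge step), declines to help in the first place.

```python
# Sketch of the Farmer's Dilemma with invented payoffs. Farmer A can help B
# bring in this week's harvest; next week B can reciprocate (or not).
PAYOFFS = {
    # (A helps, B reciprocates): (A's payoff, B's payoff)
    (True, True):   (3, 3),    # both harvests come in
    (True, False):  (-1, 4),   # B free-rides on A's labor
    (False, None):  (0, 0),    # no deal; each farmer fends alone
}

# B moves last and, being "rational", picks the reply that pays B the most:
b_reply = max([True, False], key=lambda r: PAYOFFS[(True, r)][1])

# A knows B's preferences and that B reasons this way (common knowledge),
# so A compares helping-then-being-burned against not helping at all:
a_helps = PAYOFFS[(True, b_reply)][0] > PAYOFFS[(False, None)][0]

print(b_reply, a_helps)  # -> False False: cooperation never gets started
```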
So, back to the question at hand: are there ideas that are too dangerous for us as a society to explore? I believe that there are, and I believe that they cluster around the notion of common knowledge. Whether any of the putative dangerous ideas that Pinker lists at the beginning of his essay belong to this class is hard to say. But I think it’s at least good to explore this idea. Or maybe not….
]]>1. Crowdsourcing
With “The Wisdom of Crowds” in one hand and Wired’s “Crowdsourcing” issue in the other, businesses and even entire industries are being built on the backs of, well, you and me. But we like it that way. In a sense, the story of Web 2.0 is the story of crowdsourcing: open-source software development, creative commons and GNU licensing models, wikis, blogging, social networking, peer-to-peer networks, and scores of specific crowdsourcing niches like journalism (c.f. Digg, TMZ, et al). Google’s search algorithm is built on crowd wisdom (i.e. PageRank), and eBay’s entire business model is one giant crowdsource. They all work (and only work) because what we the crowd are being asked to do is compelling and of value to each of us individually. The trade is usually a no-cash affair, like when I correct an error I see in a Wikipedia entry. But sometimes crowd members get paid to contribute, and sometimes they even pay to become a member. Regardless, the real currency is Community and Goodwill, built on denominations of Trust and Fairness. Once this non-monetary economy is robust enough, conversion to dollars is manifest. Just be careful to control inflation.
2. User-Generated Content
If you do crowdsourcing right, what you end up with is user-generated content. The real beauty is that not only will your users create your content for you, but if you treat them fairly enough and add enough value yourself, you can turn around and sell them what they created.
3. ROWE Your Boat
ROWE stands for Results-Oriented Work Environment. For many job titles it makes perfect sense to let your employees make their own schedules: as long as they get their work done, what do you care what (or even how many) hours they work? But what about jobs that require frequent team meetings and ones that involve face-to-face customer interaction? ROWE advocates suggest in the former case, if you stop dictating from the top, teams will self-organize out of necessity; everyone has incentive to work out mutually agreeable meeting times, and those who are consistently uncooperative will be weeded out. Similarly, when customer face time is the issue, employees would much prefer to work out scheduling with their co-workers, trading favors where necessary. This yields a more globally optimal solution than could be achieved by centralized scheduling, which can’t possibly take into account each employee’s individual preferences and dynamically changing circumstances.
4. Failure is Good
Everyone knows that we learn more from failure than from success, but who wants to pay the immediate price when it could mean going out of business or getting fired? The key is to set up an environment in which experimentation is encouraged, consequence of failure is mitigated, and individual failure scenarios aren’t correlated. You are setting up a robust ecology of micro-businesses within your company, all competing for resources in terms of budget and talent. Initial success is backed up with additional investment, and failures are either quickly re-vamped or canned. Most importantly, the case study of failures (as in Harvard’s Case Method) must be analyzed and shared across the company, not as public flogging but rather as a way to learn and not repeat the same mistakes again. Failure in such an environment creates selective pressure for the entire company to evolve to the highest fitness peak in the business landscape.
5. Culture is Key
Jack Welch built GE into the single most successful for-profit corporation in the history of the world. His secret? Create the right company culture for your employees to achieve miracles. Welch’s greatest legacy at GE by his own reckoning was the “GE Way”, a cultural manifesto that he carried around and shared with his managers which at first visualized and ultimately created the GE corporate culture. For Welch, the rest of his job as manager was inconsequential to the bottom line and long-term success of the company. Warren Buffett claims in his 2006 Annual Report that one of his only roles is to “sculpt and harden our corporate culture”, which last year resulted in a valuation increase of $16.9B, the largest net-worth increase ever recorded for any American business, excluding those that resulted from mergers.
6. Everyone is Smart
Everyone in the company, from CEO to janitor, has big potential as a contributor. Yes, they may be vastly different in importance to the company’s success. But everyone is smart. American Airlines famously rewarded employees, no matter what their job titles, for suggesting ideas that helped the company. One favorite was the flight attendant who built a new house by recommending that the airline reduce their standard three olives per salad down to two (she’d noticed that a large portion of the olives were being left over). Whether you believe this story apocryphal or true, the key to harnessing the wisdom of your company’s internal crowd is to provide the right incentives for individuals and make sure those incentives are in perfect alignment with the company’s goals.
7. Let the Market Decide (NASDAQ:LTMD)
Conventional economic theory views markets as efficient allocators of global resources. But they are also great sources of information. Robin Hanson, Professor of Economics at GMU, came up with a great idea: a marketplace where the “products” being bought and sold are ideas, not physical entities or services. Such markets turn out to be amongst the most accurate predictors we have of uncertain future outcomes, and when employed properly, can be used to make smart decisions on anything from corporate strategy to hiring to product innovation. Knowing exactly how and where to employ decision markets inside your company — especially knowing where they should NOT be used — is more art than science at this point. But when companies like Google are using internal markets to make better project timeline forecasts, and when the Department of Defense approves the use of markets to predict the next terrorist attack, you can be sure that LTMD is a long term BUY.
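For a feel of the machinery, here is a minimal sketch of Hanson’s logarithmic market scoring rule (LMSR), one common way such idea markets are run; the liquidity parameter and share quantities below are arbitrary.

```python
from math import exp, log

# Minimal sketch of Hanson's logarithmic market scoring rule (LMSR).
def cost(q, b=100.0):
    """Market maker's cost function over outstanding shares per outcome."""
    return b * log(sum(exp(qi / b) for qi in q))

def price(q, i, b=100.0):
    """Instantaneous price of outcome i = the market's probability estimate."""
    z = sum(exp(qi / b) for qi in q)
    return exp(q[i] / b) / z

q = [0.0, 0.0]                 # "ships on time" vs "ships late", no trades yet
print(price(q, 0))             # -> 0.5, the market's prior
trade = [60.0, 0.0]            # a trader buys 60 "on time" shares
print(cost(trade) - cost(q))   # what the trader pays the market maker
print(price(trade, 0))         # updated probability that it ships on time
```

The price of an outcome doubles as the market’s current probability estimate, which is exactly the forecast a company reads off its internal market.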
8. Simple is Back
Remember when all the good web sites were portals that did everything and had cluttered home pages, chock full o’ features you never used, and domain names like auctions.com? Notice how Web 2.0 sites tend to have lots of white space above the fold, do exactly one thing very well and tell you clearly what that is on the home page, and have intriguing (yet information-less) domain names like joost.com? Now compare revenues and profitability at time of IPO for Web 1.0 companies vs Web 2.0 companies. Communicating your product and value proposition to the masses is way more difficult than entrepreneurs realize. Expecting 30 million people to absorb anything more than a soundbite, let alone something as complex as a three sentence paragraph, is a lost cause. Even among VCs (the entrepreneur’s greek chorus in lieu of customers), the 30-page MS Word business plan has been scrapped in favor of the 12-slide PowerPoint with lots of pretty pictures. In other words… White Papers? pwn3d. Elevator Pitches? l33t!
9. Lose Control, Embrace Chaos
It’s so simple: just give all of your product away for free and you will be rich! How? Well, we’re not sure. But it’s got to work; after all, YouTube is worth a billion after a year of operation and all they do is give away content and spew money down the bandwidth drain. Embracing chaos is antithetical to most people’s business sense and to everything we’re taught in school and by our parents. Must be something in our genes that demands we keep things close to the vest and have a firm hand on the wheel. Obviously you can’t just take your hand off and expect the car to find its way home (yet). But failing to grip tightly enough is not most people’s problem; just the opposite. If you love your wealth, set it free. Have some faith.
10. Kinder + Gentler = Crazy Profitalicious!
Who says you can’t save the environment, eliminate poverty, create community and have fun while you make money? Ayn Rand, Milton Friedman, and Gordon Gekko to name a few. Greed is no longer good. Doing good is good and you will be rewarded in the marketplace. Everyone is talking about you behind your back and in yo’ face, and reputation is important. If you build a better mousetrap, you may get customers in the short run, but if you aren’t nice they will trash you in their blogs, and as soon as someone builds an even better mousetrap (in three months) your customers will jump ship never to return. So the only thing a rational business person can do is to play nice, be good, and feel good about making the world a better place. Oh, and you’ll make a lot of money if you do.
—
Got a Management 2.0 Concept to add? Share it below.
]]>Evolutionary theorists have demonstrated by argument and with simulation that cooperation is not the exception, but rather is a natural consequence of evolutionary systems and arises spontaneously under the right conditions (c.f. Axelrod for instance). I contend that cooperation plays a more fundamental role in CAS and should be seen not merely as a consequence of evolution, but rather as the creator of agency itself. In other words, when two or more agents interact in a cooperative manner such that their individual survival/fitness increases compared to neutral or competitive behaviors, then those agents can be seen to form a new system — new agent — at a higher level of organization.
To be clear, I am claiming that emergence of higher levels of organization of complex systems happens via cooperation of agents at the lower level, and that without cooperation, the burgeoning of complexity would not occur. Consider the emergence of societies of humans and the growth of multi-person groups both in numbers and complexity, from small family groups to clans, tribes, city-states and beyond. While it is true that within these groups competition still exists, it is cooperation that enables growth in complexity. Division of labor (a form of cooperative behavior) is the basic mechanism of value creation in economies. Creation of mores and laws (“social contracts”) are the key enabler for a smooth running society whether it be the size of a family, or a multi-national treaty. Communication itself is a form of cooperative behavior. No communication is necessary to compete; in fact, letting one’s dinner know of one’s intentions on the savanna is a sure way to starve.
Even as agents have incentive to cooperate in certain ways, there will usually be incentive to compete in others. The degree to which a newly emergent agent will be recognized as such — for instance a society from individuals, or multicellular organism from single cells — is the degree to which the cooperation is the norm rather than competition. The reason we are quick to call an animal an individual agent distinct from its subsystems of organs, cells, etc, and the reason we do not generally acknowledge societal agency, is because within a society there is much more freedom of behavior of the agents at the lower level, and thus there is more competition to go along with (and to negate the effects of) cooperation. When single cells first banded together into colonies, the cohesion of the multicellular agent at the higher level was not as great as it is today in multicellular life forms. Over time, the benefits to all of subjugating individual interests to common interest took over and formed an agent that not only sustained its constituent parts better than a loose colony would, but that new agent began to be subjected to selective pressures at the higher level. Features of individual lower-level agents that were destructive to the higher level, such as unchecked motility, proliferation and invasiveness, were selected against by evolution at the higher level. Eventually, individual cells lost their ability to survive on their own and required the multifaceted, tight cooperative interactions of life as a constituent part of the higher-level agent to exist and procreate.
—
* It is worth noting that cooperative behavior need not be (and generally isn’t) “intentional”. For instance, the emergence and continued integrity of sand dunes from individual grains of sand does not depend on any intentional behavior on the part of the grains. Rather, the initial physical proximity of the grains to each other, combined with external and inter-grain forces like wind, gravity and friction — which act similarly on these physically proximate grains — is de facto a form of cooperation in this context. Coherence in the realm of physics is a form of cooperation amongst waves of all sorts (water, light, quantum).
- a POPULATION of individual agents
- a REPRODUCTION mechanism
- a MUTATION mechanism that yields differential fitness of agents
- a SELECTION mechanism which favors highly fit agents over others for reproduction
For example, at any given time in a culture, there is a large (but finite) population of memes which exist in one or more human minds. These memes reproduce by being transmitted from one mind to the next through standard communication (talking, writing, mass media, etc). Memes clearly mutate over time; for instance it was once thought that the biggest factor in obesity was consumption of fat molecules, but now it is generally thought that carbohydrates are the biggest culprit. Finally, individual human minds select which memes they transmit and which they do not based on personal criteria such as perceived truth, “compellingness” of content, personal self-interest, etc.
The generalized evolutionary argument goes that if all of these preconditions are met within a given system, natural selection (aka Darwinian evolution) occurs ipso facto, meaning that wherever you can identify these preconditions you will find a system undergoing evolution. Evolution (in the technical sense) is the dynamic process of a population of differentially fit agents bearing offspring which are similar but not identical to themselves. The type of agent is unimportant; it can be biological organisms, memes, computer programs, business plans, etc. Natural selection amongst biological agents has no primacy or special place in the pantheon; it is one of a class of dynamics that occur in complex systems.
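To see how domain-neutral the recipe is, here is a toy loop over bit-string agents; the target, rates, and sizes are all arbitrary choices of mine, and the agents could as easily be memes or business plans.

```python
import random

# Toy loop showing the four preconditions in a domain-neutral form.
random.seed(3)
TARGET = [1] * 20
fitness = lambda agent: sum(a == t for a, t in zip(agent, TARGET))

# POPULATION of agents
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]

for generation in range(50):
    # SELECTION: differentially favor the fitter agents as parents
    parents = sorted(population, key=fitness, reverse=True)[:10]
    # REPRODUCTION + MUTATION: offspring resemble, but don't copy, a parent
    population = [[bit ^ (random.random() < 0.01) for bit in random.choice(parents)]
                  for _ in range(30)]

print(max(fitness(agent) for agent in population))  # climbs toward 20
```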
One aspect of evolutionary systems that was not understood or appreciated until recently is the importance and prevalence of co-evolution. That is, populations evolve in the context of an environment (everything external to the agents themselves). But we cannot forget or ignore that in most cases, the environment consists of many other evolving populations, each of which makes up an important part of the respective environment of the others. Thus, when one population, say the antelope population, evolves faster and faster offspring, this changes the environment for the cheetah and other predators who need to be able to catch and eat the occasional antelope to survive. The two populations co-evolve such that traits like speed maintain a balance and neither population completely wipes out the other. Co-evolution can take the form of symbiotic relationships, parasitic relationships, competitive “arms races”, and a variety of other forms. As one population evolves it changes the fitness of the other populations in its environment, creating new selective pressures for these populations to evolve as well.
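A toy extension of the same kind of loop shows co-evolution’s signature arms race; every parameter below is invented for illustration.

```python
import random

# Toy co-evolution: each population's fitness is defined only relative to the
# other. An antelope "survives" if it outruns a randomly drawn cheetah, and
# vice versa; there is no fixed bar.
random.seed(4)
antelope = [random.gauss(50.0, 5.0) for _ in range(200)]  # running speeds
cheetah = [random.gauss(50.0, 5.0) for _ in range(200)]

def next_generation(pop, rivals):
    survivors = [speed for speed in pop if speed > random.choice(rivals)] or pop
    return [random.choice(survivors) + random.gauss(0.0, 1.0) for _ in range(200)]

for generation in range(200):
    antelope, cheetah = (next_generation(antelope, cheetah),
                         next_generation(cheetah, antelope))

# Both averages escalate together: an arms race in which relative speed, and
# hence the predator/prey balance, stays roughly constant.
print(sum(antelope) / 200, sum(cheetah) / 200)
```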
Several factors have led to the obfuscation of general evolutionary theory, much of it having to do with the language and model of standard biological evolution. The notion of distinct “populations” is an over-simplification, and a human construct to be sure. The problem with relying on the population model is that in naturally occurring systems, there is no boundary line that says this creature belongs to this population and that creature belongs to that one. By drawing lines, we ignore many important interactions between agents, and we also infer interactions that don’t necessarily exist. On the one hand, bacteria and viruses are in direct competition with their hosts for survival, but on the other hand, they are dependent on the host’s survival and health. Conversely, parents are partially competing with their own offspring, and in many species actively exterminate or eat a portion of their own. We may be tempted to say that each species is a distinct population, but any one agent mates with only a small fraction of the entire population. And geographically separated sub-populations never even have a chance to mix with one another. So, is the population of interest a geographically near sub-population, or the much smaller population of those agents that actually mate? The reality is that the term population is a construct that helps us frame the evolutionary dynamic, but we cannot lean on it too heavily lest we miss the (literal and figurative) forest for the trees.
Other limiting constructs include “selection” and “fitness”. The term selection implies an active, intentional action, which implies an outside actor with foresight and a will of its own. In evolutionary systems, no outside actor exists; selection happens by default as some agents die off before reproducing. Imagine the scene where a general is looking for volunteers to fight the enemy. His troops are lined shoulder to shoulder and the general says to them, “if you will serve, take one step forward”. As we know from the comic routines, if enough soldiers take a step backwards, the brave “volunteers” will be easy to spot. They’ve been selected for in the evolutionary sense, but not in the standard sense of the word. Fitness is also problematic because it implies a static, absolute measuring stick by which all are judged. Fitness in an evolutionary system is both dynamic and relative. The fitness of a giraffe’s ultra-long neck is high in the savanna where trees are a certain height and edible leaves are relatively scarce. But in another environment that same asset can easily be a liability. And we know that environments are continuously changing, due to migration, climate, co-evolution and a variety of other factors.
A subtly confusing factor in understanding evolution is the emergence of genotype. Richard Dawkins’ selfish gene argument made an important point: we can consider genes themselves as an evolving population of agents which use their expressed phenotypes (i.e. the organisms in which they reside) to propagate themselves. In other words, from the standpoint of the gene, the phenotype (the organism) is the mechanism for the gene’s reproduction. But the selfish gene argument replaces the arbitrary primacy of the organism (phenotype) with that of the gene. Clearly, genes can’t and don’t just exist in the wild naked of their organisms. It is more logical to think of genes and organisms (and other subsystems thereof) as co-evolving populations of agents. The inter-dependency of gene and organism is so tight though — a form of ultra symbiosis if you will — that we fail to recognize this inherent truth.
A final distraction from understanding complex systems evolution is the over-reliance on the model of reproduction, which is not general enough to encompass the variety of evolutionary phenomena observed in the world. As pointed out here, reproduction is one (of many) ways agents preserve their existence and exhibit system stability. The descent chain of parent, child, grandchild, etc is a system that is preserved and renewed through time via a reproductive mechanism. But individual organisms use reproduction and generation at the cellular, sub-cellular and super-cellular levels in much the same way. Lost in the discussion of evolutionary systems are more simple forms of stability such as stasis, movement, self-repair, etc. When viewed from afar over the course of many months, the Namib sand dunes are structures, dynamic systems, which evolve over time. They move in the direction of the prevailing winds, change shapes, converge, decompose, and ultimately co-evolve with one another and their environment. To focus exclusively on reproduction misses other important dimensions of evolution.
Darwin’s conception of evolution was “descent with modification”. This view is more general and more accurate than the description of evolution as a four-piece harmony of population, reproduction, mutation and selection. A truly general evolutionary theory goes something along the following lines. Agents — which is to say systems that preserve themselves through various stabilizing mechanisms — change through time, space and other dimensions, subject to selective pressures (which themselves change).
In other words, evolution is the balancing act that results from a system’s internal pressures to maintain integrity and external pressures that lead to disintegration.
Media cannot be divorced from culture; indeed, it is an integral part of it. It is at once the Greek chorus reflecting society’s values and, increasingly, the creator and amplifier of new and evolving values. Editorial media (such as TV and newspapers) have agents in charge of who gets what information. These agents take umbrage at, and often simply ignore, claims that they have an active role in shaping or creating social disasters like suicide, mass homicide, and under extreme circumstances even genocide. The reaction is natural, for what news anchor or columnist wants to admit to blood on their hands; after all, you can’t shoot the messenger. Or can you? Suicide rates famously increase after a highly publicized suicide, so perhaps we should hold editors at least partially accountable for violent or self-destructive copycats.
But what about distributed media (such as email and telephone) with no identifiable prime mover? We are all guilty — possibly many times over — of contributing to a gossip chain or urban legend. Most of the time such activity is harmless, and at times we think we are doing good by warning our loved ones against danger. Sometimes though, viral memes do serious damage, like the pyramid scheme that destroyed the economy of Albania in 1997 and led to the overthrow of the government. Less dramatically, when we don’t protect against computer viruses, we allow (by our negligence) the infection of hundreds of others. By participating in everyday societal activity, we are all guilty to some degree.
In the case of distributed media, wherein all members of society are potential agents of transmission, we have a fairly straightforward way to limit the potential destruction. We simply make it illegal or taboo to engage in activities that would lead to harm. Even if the activities in question are “only” informational, we do recognize their impact. Thus conspiracy to commit murder carries the same sentence as the act itself, and your sacred right of free speech is limited in some cases, like when you incite riots. Social pressures can be just as effective, e.g. blatant and gratuitous gossip is frowned upon and gossipers are avoided by those who would keep secrets.
In the case of editorial media though, we have yet as a society to grapple effectively with the conflict between our intuitive right to free expression, and the harm that such expression causes. The government goes overboard by stifling important disclosure and debate in the name of “national security”. On the other hand, editorial media does not own up nearly enough to its role in causing societal ills or its power to stop them. To wit, how can we justify the actions of the paparazzi in making public life for celebrities a living hell (or in the case of princesses, worse)? You may say that celebs have given up all privacy in their Faustian bargain for fame and fortune, but they are citizens first and public figures second. Even if you discount individual celebrities’ rights entirely, the cult of celebrity has taken us to a point that many people find disheartening at best. What do kids aspire to these days? Is it world peace, personal happiness or even making money? No, according to a recent survey, they want to become famous. In other words, they want to be in the cross-hairs of dangerous media.
So, what do we say if it turns out that shows like “To Catch a Predator” actually create more predators than they catch? Just as in the case of suicides, the mere display of certain behaviors will cause some other people to emulate them. How does this happen? In a population of hundreds of millions there is always going to be a distribution of psychopaths, sociopaths, depressives, and marginalized people. The larger the population the longer the tail of the distribution, meaning the larger the sub-population of ill people “at risk” for doing harm, either to themselves or to others. Certainly not all of those at risk will succumb to alluring imagery and memes, but depending on individual circumstances and current pressures, anyone at risk could become violent or suicidal.
It is hard for someone who is not at risk, who thinks somewhat rationally and dispassionately, to understand the thoughts and desires of those at risk. Needless to say they are thinking very differently. What disgusts, horrifies and outrages a “normal” person, might at times seem perfectly acceptable or even attractive to someone on the margins. We all tend to become attracted to images and ideas that we see repeated, and we take our cues of acceptable behavior from those around us. So is it any surprise that someone who is depressed would be increasingly receptive to the idea the more they hear of others who commit suicide? And would it really be much of a stretch to think that all of the closeted pedophiles lurking in chat rooms would become emboldened to act after seeing their “peers” show up on TV week after week? Hey, that person looks pretty normal, just like me!
The solution to the problem of dangerous media is not easy. To try to regulate or legislate would be a mistake because such actions would run counter to the values that our society holds dear. The downward slope towards Big Brother and totalitarianism is slippery indeed. But the alternative does not have to be complacency; we are not forced to choose between the two extremes.
Social issues are never black or white. Cultural values, conventions and mores are more powerful than any laws. The solution is for us, as individuals who “consume” media and as media agents, to realize what we have wrought, realize we have a personal choice, and realize that we can influence those around us by our example and our proclamations. We don’t have to watch or support TV shows that are gratuitous (however we each define that). As decision-makers in corporate media or members of the watercooler gossip gang, we don’t have to encourage or transmit harmful media. The refrain of “giving the public what they want” and “just doing our jobs” rings more hollow the more we understand the dynamics at play, particularly the costs to us as a society. Moreover, we have the power to ostracize, shame, coerce and cajole those who insist on societally destructive self-indulgence and turning of blind eyes. We also must realize that, as in the case of all negative self-perpetuating institutions, if we are not aware of and actively countering dangerous media, it has the potential to destroy the very freedoms that we use to justify the status quo.
We have seen the predator, and it is us. We dare not legislate away our freedoms. But as free individuals, we can choose to not participate, to not be part of the problem. And with our newly available time and attention, we can choose to focus on building a healthier society.
]]>Independently Wealthy
John Wood is a model example of someone who had accumulated massive resources and lived a full and busy life, but had some experiences that shifted his perspective to the point where he could no longer continue on his previous path. In the old days, independently wealthy philanthropists like Rockefeller saw their role as to “make as much money as possible, and then use it wisely to improve the lot of mankind.” John Wood and his ilk believe “what kind of man am I if I don’t go face this challenge directly”, and to their peers who say they are crazy or having a midlife crisis they respond “wouldn’t it be a crisis to not follow my heart… at age 35, I’m too young to not do that”. Bill Gates (Wood’s old boss, the man who made him rich) might be seen as old guard, but unlike his industrialist counterparts he lives in a globally connected world of mass communication (that he largely helped create). And when you convince people like Warren Buffett to let you give away all their money too, then you fall into the category of Amplifier.
Amplifiers
The patron saint of Amplifiers is Oprah. She and her brethren, like Bono, Brangelina, and Leonardo DiCaprio, leverage their personal capital, but more importantly they leverage their even more valuable social networks. They focus massive amounts of media attention to mobilize the masses to action. They trade on their capital of celebrity and political power for what they perceive is the greater good. Rather than sit imperially at the top of their thrones and decide which courtesans receive cash, they choose their causes, and when others who would take advantage of their sympathies come calling — and they inevitably do in droves — Amplifiers think about how these outside interests help or distract them from their own mission and act accordingly.
Average Citizens
The value of cash is dependent on how it is spent. The value of human capital (all the non-monetary activity that goes into a project) would be an order of magnitude greater than the cash involved, assuming one could accurately measure it. The potential amount of human capital that could be raised and spent on any given project is several orders of magnitude greater still. This potential human capital (PHC) is what the Average Citizens as a group bring to the table. An individual Average Citizen can make an immediate and enormous impact by (a) mobilizing their peers, (b) attracting Amplifiers to their cause, and (c) convincing the Independently Wealthy to adopt their cause. All three activities have the additional impact of catching discretionary capital from philanthropists* in their “dragnet”. Genevieve Piturro and Barbara Franklin are perfect examples of Average Citizens who are changing the world for the better right now.
So how do traditional charities fit into the New Philanthropy? It is a complex situation. On the one hand, we have all heard about how badly some charities squander their donations in administrative costs. Even grass-roots initiatives can be comically misguided, like the charitable parachutists who raise less money than it costs to care for their parachuting injuries. Then there are charities that have been operating for over 20 years with very low overhead rates, like the Cancer Research & Prevention Foundation.** Celebrity charitable foundations can act as effective Amplifiers, or they can just make the celebrities look charitable without raising much money at all, as is the case with most celebrity golf tournaments. Ultimately though, we must consider the non-monetary benefits of any charitable endeavor, given the difference in scale between dollars and PHC.
Here are some questions you can ask yourself if you are considering giving money or your time to an existing charitable cause (whether it’s a formal legal charitable organization or a grass-roots initiative):
- Do I believe in the stated Mission and its importance or relevance?
- What kind of PHC exists for this effort?
- How are they currently leveraging their cash to convert PHC to human capital?
- What opportunities exist for me to help contribute meaningfully in ways other than my cash?
- What are the opportunities for attracting Amplifiers to the cause?
- In the case of formal organizations, what do the various watchdogs say about them?
More generally, we should all be thinking about ways we can become Amplifiers for the New Philanthropy. It starts by sharing your thoughts below.
* non-activist givers as well as corporations
** conflict disclaimer: I serve on the CRPF Board of Directors. My choice to join them was in part because of their low overhead rate.
I’d like to make a case for Oprah being in the top 8. Though she’s currently #62 on Forbes’ World’s Most Powerful Women list, she’s one of only two people to make the Time 100 list four times (the other being Bill Gates). Four people have made it three times (GWB, Bill Clinton, Nelson Mandela, and Condoleezza Rice).
Despite the catchy opening, this isn’t a post about lists of powerful people. It’s about how to change the world, right now, and with long-ranging positive impact. The first step is to watch yesterday’s Oprah show about people rich and poor who have found ways to make huge differences. (I’m sure somebody out there will find an online version of the show and post it in the comments here, but if not, order the show). The second step is to read this book, written by one of the founding fathers of complex systems thinking, Ervin Laszlo (recently nominated for a Nobel Peace Prize). If you are familiar with complex systems, do yourself a favor and skip straight to Part Two of the book. The third step is to examine your life and find something that makes you happy while helping others. It can and should be a small step that does not feel intimidating and that you won’t put off until tomorrow. Only you know what that is.
Read the next post too.
]]>I should caution you against taking this model too far, since it’s (as always) an oversimplification. For one, in our constructed example, the higher levels did not emerge from lower levels. They were “designed” by us. The interface between designed levels is likely to be much less integrated than that between emergent levels. Still, some of the dynamics observed in designed multi-level systems appear reminiscent of emergent levels.
Stasis
The most trivial form of stability we can think of is an agent existing in the same place over time without change. This may only make sense as you read on, so don’t get caught up here.
Movement
Keeping time in the equation but allowing physical location to vary, we see that agents can move and continue to exist and be recognized as the “same”. This is obvious in the physical world we live in, but consider what is going on with gliders in the Game of Life. The analogy is more than loose since cellular automata are network topologies which mirror physical space in one or two dimensions. Contrast this to other network topologies, such as the brain, which has many more than two dimensions in its state space.
Metastability
Gliders also exhibit a form of metastability in addition to movement. What is meant by this is that the structure of the glider goes through a cycle of distinct states (4 to be exact) and arrives back at the state it started in, shifted one cell diagonally. Other structures in the Game of Life are like this too, including oscillators, which cycle like the glider but don’t move on the grid. Other examples of metastability include equilibrium in dynamic systems, e.g. population genetics, financial markets, electromagnetic fields, etc. In mathematical terms, metastability can be characterized by an attractor or basin of attraction. Quite literally, a mountain lake with an incoming stream and outgoing stream is a basin of attraction, with the lake itself being a meta-stable structure; the constituent water molecules continually flow into, around and out of the lake, yet we recognize the lake as existing separately from the water molecules, as an emergent structure which is stable as long as the rate of flow in matches the rate of flow out.
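The glider’s cycle is easy to verify directly. Here is a minimal sketch using the standard Life rules, with cells represented as (column, row) pairs:

```python
from itertools import product

# Standard Game of Life step over a set of live (column, row) cells.
def step(live):
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                key = (x + dx, y + dy)
                counts[key] = counts.get(key, 0) + 1
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)

# After 4 steps the glider is the same shape, shifted one cell diagonally.
print(cells == {(x + 1, y + 1) for (x, y) in glider})  # -> True
```

Metastability and movement at once: the shape cycles through four states and the whole structure travels across the grid.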
Depending on how broadly you define the “meta” part, everything we are talking about in this post is a form of metastability. For instance, “metastatic cancer” refers to the notion that phenomena seemingly distant and distinct from the primary tumor are actually part of the same cancerous process that is responsible for the primary tumor. Even though parts of the cancerous system are in motion, taken as a whole, the cancer is in a metastatic state, i.e. it continues its existence as an observable, distinct system.
Self-Repair
One of the most prevalent and important mechanisms for agent stability in biological systems is self-repair: wound-healing, immune systems, error-correction in DNA/RNA transcription and synthesis, and others. Autocatalytic chemical sets, when viewed as agents, exhibit a very pure form of self-repair in that they are continuously regenerating their own constituent parts through catalysis. Social agents such as the modern firm or governmental agencies have mechanisms built into their formation documents which call for and bring about the replacement of functional members that quit, get fired or end their term (e.g. president, janitor, HR manager, etc).
Self-Similarity
Fractals are “self-similar” structures, meaning that if you look at them at any level of magnification they look similar. Self-similarity is a form of agency, but it may not be intuitively obvious how so. Consider the classic Russian dolls, where inside one you find another identical one (except for size), and inside that another, and so on. By opening the outer layer and destroying it, you are left with a very similar system with all the same properties as before except smaller and with one fewer doll. Even though you destroyed a part of the agent, in one sense the agent still continues to exist. Self-similarity is found everywhere in nature from galaxies to solar systems, plant structures, nautilus shells, crystals, and so on.
Reproduction
At first blush it may not seem obvious how reproduction (both asexual and sexual) can be thought of as yielding agency. However, consider a single-cell organism such as a bacterium. It has a particular structure, including a unique genetic code.* The cell divides, creating a “daughter” cell, and now suppose that shortly thereafter the parent cell dies. Now imagine a continued progression of this process creating a chain of descent from parent to child, to grandchild, and so on. Now consider the chain itself as a system, an agent if you will. That agent survives, in a remarkably stable form, through the mechanism of reproduction.
You may object: the chain isn’t an agent, its constituent parts are. Yet this statement is a critical fallacy, one that betrays the limitations of a reductionist-only model. The chain is a system with inputs, outputs and internal structure, just like the cells themselves are (but at a lower level). Agency is a model, a subset of a similar model, that of a system. The notion of the cell itself is just a model that helps us simplify the description and our understanding of a very complex set of structures and dynamics that occur between biochemical molecules in a repeatable and partially predictable way.
Looking at things from this perspective, we see that populations of individual organisms (species, phyla, sub-populations within species, etc) are agents which use reproduction as an essential stability mechanism.
Representation & Prediction
Laszlo points out (EIPS, p.65): “Any given system tends to map its environment, including the environing systems, into its own structure.” Daniel Dennett claims that the fundamental purpose of the brain — to choose a clear example — is to “produce future”. Which is to say the brain allows the organism qua agent it resides in to predict the future state of the world, both independent of the organism and also taking into account all of the actions and plans of the organism itself. For instance, a cat takes in visual and olfactory information and over time forms a mental map of your house and where the food is. When hungry, the cat’s brain predicts that by moving to the location where the food is on the map that it will find actual food. Sometimes this prediction comes true, and other times it does not (as when the food bowl is empty, or has been moved to another location).
It is critically unimportant whether we consider the prediction and mapping to be a conscious or even intentional activity. We may consider systems where there is clearly “just” a stimulus-response process going on, such as in the immune system.** Yet it is clear that the immune system works by creating a mapping of the environment of pathogens and “good” entities (i.e. your own cells). It can also be considered to predict future states of its environment; namely, it works on the premise that where there is one pathogen there are likely many more of the same kind. Don’t get hung up on the intentionality-laden subtext of words like “predict” and “should”. Instead, consider them from a purely functional perspective. There needn’t be a controlling consciousness or designer involved for representation or prediction to occur in agents.
An even more subtle point is the irrelevance of whether mapping and prediction happen “on-the-fly” as the agent interacts with the environment, or whether these features are “baked into the system” by an evolutionary process or a design process. In designed systems, such as computers, it is easy to see that both representation and prediction can occur by being baked into the system via hardware or software. In certain computer systems, mapping and prediction happen on-the-fly as well, such as in the Roomba robot vacuum that cleans your house. We’ve discussed on-the-fly mapping and prediction w.r.t. two evolutionary systems, the brain and the (active) immune system.
There is of course baked-in mapping and prediction in most evolutionarily produced agents. DNA is just one example. Encoded in DNA is an implicit map of the environment in which proteins will get synthesized, as well as a mapping of the environment in which the organism (once formed) will find itself. In general it is not easy (nor is it relevant) to distinguish between representation and prediction; they are two sides of the same coin, and one does not make complete sense without the other.
Informational Feedback
Feedback loops of information are ubiquitous in complex adaptive systems. It is thought that the complexity in “complex” systems derives from informational feedback. However, not all feedback leads to increased complexity. For instance, in so-called “negative feedback” loops, the result is often equilibrium (a form of stability) or the destruction of the system itself. In “positive feedback” loops, growth often occurs, but runaway growth can lead to system instability and destruction as well. With subtly positive and subtly negative feedback, we often see complex (and chaotic) systems behaviors, including systems in states described as “far from equilibrium”. Such systems can be quite stable and resilient, or they can exist on the “edge of chaos” and easily be tipped into self-destruction or what I call an autocatalytic unwinding.
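A minimal sketch of all three regimes in one toy system, the logistic map (a standard textbook model borrowed here for illustration, not anything specific to the systems above; the parameter values are mine): the update x' = r*x*(1-x) mixes positive feedback (the growth term) with negative feedback (the crowding term), and tuning r carries the system from equilibrium through oscillation to chaos.

```python
# The logistic map: x' = r * x * (1 - x). The r*x factor is positive
# feedback (growth); the (1 - x) factor is negative feedback (crowding).
def tail_of_trajectory(r, x=0.2, warmup=200, keep=6):
    for _ in range(warmup):          # let transients die out
        x = r * x * (1 - x)
    tail = []
    for _ in range(keep):
        x = r * x * (1 - x)
        tail.append(round(x, 4))
    return tail

print("r=2.5 equilibrium:", tail_of_trajectory(2.5))  # settles to one fixed point
print("r=3.2 oscillation:", tail_of_trajectory(3.2))  # flips between two values
print("r=3.9 chaos:      ", tail_of_trajectory(3.9))  # never settles down
```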
Planning & Intentionality
These stability mechanisms are the ones most accessible and understandable to us humans, as we employ them on a regular basis and can most easily reflect on them. I don’t need to say much here except to point out that consciousness (in the traditional sense) does not necessarily need to be involved. All mammals and reptiles do some sort of planning in their daily activity, whether it be to catch prey, build nests, or move from one location to another. The neurological mechanisms that achieve planning are unimportant, except to say that they involve more than simple stimulus response; they require some form of working memory as well. Intentionality is just another way of distinguishing the activity of planning from more automatic-seeming activities.
Self-Consistency
As information feeds back within agents, potential always exists for internal conflict. Psychological phenomena such as cognitive dissonance and many forms of pathology are usually modeled as internal conflict and the attempted resolution thereof. Self-consistency is the opposite of internal conflict, a form of harmony or coherence. At the physical level, waves (in the ocean, or electromagnetic) can harmonize and thereby combine energies, preserving structure, or they can conflict and damp each other out. Coherence at the quantum level refers to the same sort of self-consistency. Belief systems, as a set of memes, can be more or less self-consistent (from a logical perspective anyway), and thus be more or less resilient in the face of logical attack. One may note that Catholicism, and other ancient religions, as practiced today are often attacked not on the question of their a priori credibility but rather on their seemingly contradictory tenets (memes). Absent an external arbiter or definition, “truth” is simply the self-consistency of the logical consequences of a set of assumptions/memes. Once inconsistency has been found, truth-value (at least in formal systems) is destroyed. Even in informal systems, such as the scientific community, inconsistency tends to lead to system breakdown, though not as fast or thoroughly as most scientists would like to think.
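That last formal point is easy to make concrete. A sketch with invented stand-in beliefs (no claim about any real doctrine): treat each belief as a constraint over a few atomic propositions, and call the set self-consistent if at least one assignment of truth values satisfies all of them at once.

```python
from itertools import product

# Each belief is a constraint over three atomic propositions p, q, r.
beliefs = [
    lambda p, q, r: p or q,        # "p or q holds"
    lambda p, q, r: (not p) or r,  # "if p then r"
]
# Add one meme that is directly at odds with "if p then r":
conflicted = beliefs + [lambda p, q, r: p and not r]

def self_consistent(belief_set):
    # Consistent iff some truth assignment satisfies every belief.
    return any(all(b(p, q, r) for b in belief_set)
               for p, q, r in product([True, False], repeat=3))

print(self_consistent(beliefs))     # True: a coherent interpretation exists
print(self_consistent(conflicted))  # False: one bad meme destroys truth-value
```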
From the standpoint of agent/system stability, individual cultures persist and thrive through interlocking, self-consistent and self-reinforcing shared beliefs and values (i.e. a set of memes). In America, some of the more prominent memes are “Independence is a virtue”, “Individual freedoms and rights are paramount”, “There exists a single, omnipotent, omniscient God”, “Truth is knowable by us humans”, “Justice will prevail in this lifetime”, and so on. Contrast this with cultures where individualism is less important than group consensus, personal honor is paramount, monotheism is not the norm, mysticism is valued, judgment can only be expected in the afterlife, etc. It’s clear that some sets of memes are more self-consistent or self-reinforcing than others, and also that individual memes are not all treated equally within a culture. Cultures evolve over time, new memes are created to resolve memetic conflicts, old memes are subjugated or reviled. Culture is passed down from generation to generation, with modification/mutation. Cultures interact with one another, they clash (e.g. the so-called “war on terror”) and they become hybrid (as in the “melting pot” or “salad bowl”). There are also sub-cultures just as there are sub-populations.
Competition
Agents compete with one another when there are fewer resources available than the population of agents needs as a whole. Resources are defined as anything material (such as food) or immaterial (such as attention from others) that has an impact on future existence or stability. Sometimes resources are not limited, or are renewable, and in those cases competition does not benefit the original agent (at least not directly). To engage in competition under such circumstances is a waste of energy that could be deployed elsewhere. On the other hand, competing for and winning resources that are not currently scarce can lead to a future in which the agent does not have to compete for other resources. The most obvious example: if one agent destroys (i.e. kills) another, then all future competition is obviated (for instance, competition for limited food).
Cooperation
Cooperation comes in a number of forms (e.g. symbiosis, parasitism, tacit agreements, explicit agreements, altruism, etc.). In cooperation, two or more agents interact with one another in such a way as to make at least one of them “better off” than before, if not all. In altruistic behavior, an agent makes itself worse off so that another agent may do even better; over multiple iterations of altruism, however, individual agents can do better than if they had competed. Much has been written about cooperation and competition in the context of game theory and the Prisoner’s Dilemma, so I won’t belabor it here beyond the small sketch that follows.
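A minimal sketch of the iterated Prisoner’s Dilemma, using the standard payoff values (3 each for mutual cooperation, 1 each for mutual defection, 5/0 for exploitation); the strategy choices and round count are mine:

```python
# Payoffs: (my points, their points) for each pair of moves,
# where "C" = cooperate and "D" = defect.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_defect(my_history, their_history):
    return "D"

def tit_for_tat(my_history, their_history):
    # Cooperate first, then mirror the other agent's last move.
    return their_history[-1] if their_history else "C"

def play(strategy_a, strategy_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa; score_b += pb
        hist_a.append(a); hist_b.append(b)
    return score_a, score_b

print(play(always_defect, always_defect))  # (100, 100): mutual defection is poor
print(play(tit_for_tat, tit_for_tat))      # (300, 300): sustained cooperation pays
print(play(tit_for_tat, always_defect))    # (99, 104): the defector gains little
```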
Consciousness
Like porn, it’s hard to define but we know it when we see it. I will develop a theory of mind more fully in this blog as time goes on, but for now I will just point out that whatever consciousness is, we recognize it as something different than other activities of the brain and nervous system. In the language of this blog, consciousness is an emergent phenomenon, a level or so above activities like planning, pattern matching, and autonomic response. Laszlo suggests that consciousness is a “limiting case” of an informational feedback process between progressively higher levels, in other words it’s “the final output of the internal analysis of internal analyses.” (EIPS, p.67)
One hallmark of consciousness is an awareness of “self” as an entity that is aware of many things, including being aware of being aware, etc. When combined with prediction and planning we can see how this self-referential structure adds value from the perspective of system stability and continuity. Not only can humans predict the future of their environment, but they can also predict the future “self”, which of course enables more accurate and stronger predictions of the environment with which the self is interacting. I know that if I eat this whole cake, it will taste yummy and I will feel good for a short while. But I also know that I will feel really bad later because of the massive sugar content and its effect on my digestion, and also because I will deny my loved ones the pleasure of cake. On the other hand, my dog is in the corner with a guilty look on his face, clearly scheming (aka planning) on how to approach and eat the entire cake without my noticing and punishing him.
Whether you view my dog in this scenario as exhibiting consciousness or not is beside the point, as is our anthropocentric need to qualify human consciousness as entirely distinct from other phenomena in the world. The point is that consciousness as a mechanism for agent stability is distinct from the less complex, lower-level mechanisms from which it emerges.
Upward Bolstering & Downward Constraint
Agents emerge from lower-level agents interacting with one another. So it logically follows that if all the lower-level agents are destroyed or their interactive dynamics modified sufficiently, the higher-level agent(s) would cease to exist in the former case or become unstable in the latter. Thus agent stability is a function of (though not completely dependent on) the lower level from which the agent emerges. In the example of the lake given above, evaporation of water would lead (if unchecked) to the lake’s own destruction. Similarly, cancer is (in part) instability of the cellular structure, which is (in part) due to corresponding instabilities at lower levels (e.g. genetic, genomic, and more).

From The Chaos Point. Reproduced with permission from the author. Handwritten notes added by me.
On the flip side, agents are constrained by the levels which emerge above them. For instance, when multi-cellular life emerged from colonies of symbiotic single-celled organisms, some of the mechanisms that lead to stability of the single cells (such as reproduction and motility) were destabilizing to the colony itself. In order for the higher-level agent to survive and become more stable it “found” (through evolution) mechanisms to curb or offset destructive amounts of reproduction (c.f. apoptosis) and destructive motility (c.f. cellular lattice tissue structures). A simpler example which everyone can appreciate is the motion-stabilizing effect that ice — an agent which emerges under certain temperature/pressure conditions — has on its lower-level agents, water molecules.
CANCER My current view of cancer is that it involves an unshackling of the stabilizing influences from both below the cellular tissue level (as in genetic and genomic instability), and perhaps as importantly from above (as in exposure to mutagens, compromised or inefficient immune response). This somewhat heretical view*** has logical implications not appreciated fully even by those who understand the core concepts. Chiefly, the vast majority of approaches to curing cancer are misguided at best, and actually accelerate mortality in some cases. Additionally, as argued by Henry Heng, the scientific community and reductionist philosophy in particular have been (understandably) completely and utterly blind to an obvious conclusion. Which is that the levels above cancer — and the agents at the same level — constitute a “cancer environment” that is extremely important if we are ever going to “cure” cancer. In other words, curing cancer can in part be accomplished by preventing its outbreak/emergence in the first place. And finally, approaches which do not acknowledge the downward constraints imposed from level to level may be wholesale doomed to failure. Much more will be said about this in future posts.
SOCIO-TECHNOLOGY That technology stabilizes itself and helps stabilize socio-technical systems is a claim that many would take exception to, arguing for the entirely new existential threats posed by the advent of nuclear and biological weapons, to name just two. However, I will argue that such a view takes too narrow a definition of stability. My claim is that despite the new “variance” in socio-technical stability, the tendency (aka “expected value”) is towards stability. As socio-technical systems evolve, if they don’t destroy themselves, they become more stable, buttressed by the human level below and whatever emergent levels are to come above. More on this in future posts as well.
* Remember, point-mutations are the rule, not the exception, so it is very likely that any two bacterial cells chosen at random have slightly different DNA sequences. This heterogeneity is a precondition for natural selection to occur.
** Note that there are two basic types of immune systems, adaptive (found only in jawed vertebrates) and innate (found in nearly all forms of life). The differences do not matter for the argument at hand.
*** Though I should note there is a substantial, if unorganized group of researchers who share this view, including Arny Glazier, Henry Heng, Richard Somiari, Albert Kovatich, Carlo Maley and many others versed in complexity theory.

From The Chaos Point. Reproduced with permission from the author.
If we restrict ourselves to considering a sub-class of systems at the “cultural level”, we can take lessons and generalize more clearly from there. Richard Dawkins introduced the concept of a meme as a cultural unit of heredity analogous to the gene, and since that time there have been many attempts to expound on memetic theory, with varying degrees of rigor and predictive/descriptive success. The most complete and compelling thesis I have come across is Durham’s Coevolution, which develops and gives strong supporting evidence to the notion that culture (as represented by the total population of memes in society) co-evolves with the population of genes in the “gene pool”. While this may seem like a somewhat trivial finding, it is in fact rather profound, and it forces us to update our enduring precepts of Darwinian evolution.**
One interesting point about gene-meme co-evolution is the sheer distance (in terms of numbers of levels) between the two different units of heredity involved. Of course, by focusing just on genes and memes, we oversimplify and ignore the interconnected dynamics and evolution occurring at all the levels in between. For instance, it is often overlooked that genotype — the level(s) of/near DNA — and phenotype — the level(s) of/near the living organism — co-evolve, albeit within much tighter bounds. In fact, there is such a close (almost isomorphic) relationship in genotype-phenotype co-evolution that we often identify the two as one and the same process, but they are not. We need only look at genetically identical twins — who exhibit vastly different behaviors — to see the difference between genetic evolution and phenotypic evolution. Still, the distance between genotype and phenotype evolution is not nearly as great as that between genotype and memotype evolution, as evidenced by the fact that prior to Durham it was assumed by most that they were completely independent phenomena, and prior to Dawkins that they were not even in the same class of phenomena (i.e. evolution).
It is important to bear in mind that while culture emerges from (and co-evolves with) the beliefs and values contained in individual human minds, the two are not one and the same. The relationship is that of genotype (individual beliefs/values) and phenotype (cultural memes). But individuals die, and culture persists, thrives and evolves regardless of — and beyond the control of — any one individual.
Socio-Technological Agency
The “blurriness” of the lines between levels is never so apparent as here. Socio-technology simply refers to a system which contains both cultural and technological information. As yet, technology does not generally create itself (à la “artificial intelligence”), and is always embedded in a social/cultural context. Thus we need not consider technology in isolation, since much of the richness of the model is derived from the social context under which technology is created and used. Similarly, ever since the invention of writing, visual arts and toolmaking, culture cannot truly be considered in the absence of technology. Memes, for instance, are transmitted most often in modern society through mass media like television, movies, the printed word, and the internet. Yes, memes were transmitted long before technology arose, and have been observed to exist in primates and other highly evolved species. But the proliferation and evolution of memes has accelerated in lock-step with technological innovation. To my mind, social and technological systems co-evolve at such similar rates and with such interactivity as to behave quite like symbiotic species, in which one does not thrive without the other.
* Actually more than one, and as always, levels are not totally discrete and monotonically hierarchical, but rather partially overlapping and partially ordered.
** Ironically, Darwin himself was much less restrictive about the applicability of evolutionary theory than we are today.
Very commonly in Life we start to see the emergence of structures dubbed Gliders, which appear to move diagonally across the grid, indefinitely if nothing stands in their path. At the basic level, the individual cells in the grid are simply responding to input around them — if so many neighbors are on then turn off, if so many are off, turn on, etc. At the level of the cell, you cannot “see” a Glider, nor does it even make sense to speak in terms of Gliders. The only things cells know about are on, off and the state of neighboring cells. From our higher vantage point above the grid, we see the Glider and we say “that’s a real thing, and it’s moving”. In fact, the Glider isn’t really one particular configuration of on/off cells, but rather a repeating cycle of different patterns.
The Glider is an agent. It emerged from the structure of Life, and is commonplace, meaning that many different starting configurations (random or ordered) will yield Gliders after several generations. There are many other common structures (i.e. agents) which emerge in Life, some of which are very stable, others of which oscillate between two or more states, and still others which exhibit looser forms of stability. Compared to real life, the agents we have observed in Life seem very fragile. Gliders, for instance, can be destroyed quite easily by coming into contact with just one errant “on” cell in their path. The reason Gliders and their kin are fragile is that they are highly reactive to external stimuli (the errant “on” cell), and they lack defenses and strategies which lead to stability, such as self-repair.
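Both claims are easy to verify in a few lines. Here is a bare-bones sketch of Life (my own minimal implementation, not any particular library): watch a glider translate diagonally, then touch it with one stray “on” cell and watch the agent die.

```python
from collections import Counter

def step(cells):
    # cells is a set of (x, y) coordinates of "on" cells.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth with exactly 3 neighbors; survival with 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

cells = set(glider)
for _ in range(4):                   # a glider repeats its shape every 4 steps,
    cells = step(cells)              # displaced one cell down and one cell right
print(cells == {(x + 1, y + 1) for (x, y) in glider})  # True: the agent persisted

cells = glider | {(3, 2)}            # one errant "on" cell touching its edge
for _ in range(4):
    cells = step(cells)
print(cells == {(x + 1, y + 1) for (x, y) in glider})  # False: no glider remains
```

No self-repair, so a single perturbation dissolves the pattern into unrelated debris.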
The hallmark of agency is a pantheon of mechanisms which keep the system structure stable, or relatively so. On one end there is the simplest mechanism of pure stability: no change in structure, such as exhibited by a typical rock during the course of a day. On the other end there are more complex mechanisms such as consciousness, culture, and socio-technology.* In a very real sense, Darwinian evolution selects for mechanisms which are good at achieving stability, or in other words, agency. This dualism is the fundamental relationship between evolution and emergence in complex adaptive systems. Selection cannot happen without agents to select; agents cannot emerge without selective pressure to create distinctive self-preserving structures. Stuart Kauffman first pointed out this missing link in evolutionary theory, what he calls self-organized criticality, and what others call emergence or agency.
We may be tempted to try to establish the primacy of one or the other, evolution or agency. Evolutionary biologists could claim that agency appeared first with auto-catalytic sets of chemicals in the primordial soup that pre-dated life on Earth. But this would be a fundamental mistake, because every system that we have studied exhibits aspects of evolution and agency to varying degrees. Water molecules under the right selective pressure (which turns out to consist partially of literal pressure) organize into higher-level structures like steam, ice, rivers, laminar flows, turbulent flows, etc. These in turn organize under the right selective pressures into snowflakes, avalanches, water fountains, tributary river systems, snow men, ice sculptures, and so on.
* Socio-technology is a generalized notion of technology embedded in a co-evolutionary context with the society that produces it.
]]>
From The Chaos Point. Reproduced with permission from the author.
When we talk of evolution we are usually talking about it in the Darwinian sense: natural selection on a population of agents from the same “species”. In this sense, we think of the population or species itself as being the thing that evolves; after all, the individual agents are born and die, but don’t adapt over their own lifetime in the way the species does. But this is a somewhat arbitrary distinction, for surely individual agents can and do “evolve” in that they change over time. A most striking example of agent evolution is of course the development of an embryo into a mature adult. The mechanisms of change in evolutionary processes are varied, but when we talk of Darwinian evolution we are focusing on a particular dynamic of selection, heredity and mutation.
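That particular dynamic fits in a dozen lines. A minimal sketch (the genome length, population size, mutation rate and the “count the 1-bits” fitness function are all arbitrary stand-ins for real selective pressure):

```python
import random

random.seed(0)
LENGTH, POP, GENERATIONS, MUTATION_RATE = 20, 30, 40, 0.02

def fitness(genome):
    return sum(genome)  # stand-in: more 1-bits = better adapted

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENERATIONS):
    # Selection: only the fitter half gets to reproduce.
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]
    # Heredity with mutation: each child is a noisy copy of some parent.
    population = [[bit ^ 1 if random.random() < MUTATION_RATE else bit
                   for bit in random.choice(parents)]
                  for _ in range(POP)]

print(max(fitness(g) for g in population))  # climbs toward LENGTH over generations
```

Note that the individuals here are disposable; what persists and adapts is the population, which is exactly the higher-level agent described next.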
When we identify a species, we are in fact acknowledging emergence. The organisms (i.e. agents) simply exist: they each have a unique phenotype (i.e. body) and unique genotype (thanks to mutation and sexual reproduction).* Different “species” then become agents at a higher level of organization. We can talk about them as having agent-like properties such as stability and self-repair (more on this in a later post). A species evolves (as an agent) over the course of reproductive generations of its constituent parts, the organisms.
Other examples of the emergence of agents at a higher level include: multi-cellular organisms, culture, ice, separation of genotype/phenotype, emotion, protons, self, computers, and just about every “type” of thing you can think of. This is a strong, broad claim that is the subtext of this entire blog. If you are not convinced, reserve judgment and continue reading with an open mind.
* Even when we talk of single celled organisms and say one is an exact clone of another, the reality is that there are many point mutations which make their DNA non-identical. We observe that between certain sub-populations the agents interact in such a way that causes reproduction and the creation of more, similar agents. The species boundary is fuzzy though, as evidenced by the existence of some inter-species mating and also the phenomenon of genetic drift. The concept of same or different “species” is just a model. But it is a good model because it has descriptive and prescriptive power.
POSTSCRIPT: An excellent summary of emergence can be found at Wikipedia (which, BTW, is itself an emergent agent).
]]>We all know on a rational level, though, that most of the time there is more than one cause, sometimes uncountably many. “Shit happens” is a way of acknowledging the inherent complexity in the universe and the impropriety of trying to find someone or something to blame in all cases. Understanding emergent phenomena helps us to comprehend and to put to rest the nagging feeling that someone or something is really responsible, even if we can’t put our finger on it. Conspiracy theorists are those people who refuse to believe that any occurrence could lack a human agent behind it as a root cause. And because they can’t identify one person in particular, they conclude that there must be a conspiracy of multiple agents working in concert.
But for every erstwhile conspiracy, there is another explanation that doesn’t involve intentional agents. For a political leader to come into power in a democracy, many coordinated efforts have to take place, not the least of which is that millions of individual agents must cast a favorable vote. No one person “caused” the leader to be elected. At times there will be a single agent that is the proximate cause, such as the lone gunman who shoots the president. But in order for the assassin to be able to actually get close enough to carry out the deed, many other things must fall into place. Does this mean that there must be a conspiracy working intentionally towards giving the assassin his shot? No. All it requires is a small amount of apathy (or decreased attention or occasional blind eye) from a number of uncoordinated, unknowing, unintending individuals for the assassin’s path to clear. So who caused the president’s death? The assassin was the single biggest individual factor, but there were also many smaller ones as well, without which the event could not have occurred.
In Hollywood, nobody sets out to make a bad movie, nor is it the intention of anybody in the large web of value-providers and puppet masters required to get a movie completed. Yet bad movies get made all the time, and if you ask any individual working on a stinker they can tell you that it is going to be bad. They can see the train wreck approaching, but they are helpless to do anything about it. Other tragedies of the commons are structurally assured, as illustrated by the Prisoner’s Dilemma. In such cases, the individual incentives are such that an undesirable outcome is inevitable for everyone. But even in the case where a win-win is readily achievable (as it is in the case of movies), small decisions by many different agents, combined with small bits of randomness from the outside, often can lead to a conspiracy-like effect. An intentional story would fit the evidence, at least on the surface, but so would an unintentional one. And it is the unintentional explanations that tend to fit the data better when we look below the surface and ask good questions.
* See future post on simplicity bias
]]>Sure, we’ve known for years that regions of the brain are correlated to mental functions like language, vision, controlling distinct parts of the body, et al. And we observe that gross damage to these areas correlates to loss of function. But the observations show many exceptions and edge cases, such as functional compensation during brain damage. An illuminating aspect of brain damage is the continuous (as opposed to discrete) loss of function, which contrasts sharply with damage to human-engineered systems like cars and computers. With technology, generally speaking if a physical region gets damaged, the function it was serving is totally gone. With biological systems, and especially the brain, function degrades “gracefully”, which is to say, you may be dsylxeic or a pour speeler, but y0u still by g3t qui find 99% of the time.
This month’s Wired Magazine cover story is titled “What We Don’t Know” and it goes on to briefly discuss 42 conundrums that have eluded satisfactory understanding forever. The writeup of “Why Do Placebos Work?” says that nobody knows how the well-documented effect actually works. It talks about a “groundbreaking” experiment using functional MRI:
“When a person knew a painful stimulus was imminent, the brain lit up in the prefrontal cortex, the region used for high-level thinking. When the researchers applied the placebo cream, the prefrontal cortex lit up even brighter….”
My point isn’t to take issue with either the research or even the author, but rather to suggest that if the question is “why do placebos work?” we must first acknowledge that the placebo effect may not be limited to the (rather pedestrian) regulation of pain sensation and directly observable bio-chemical pathways. Then we have to acknowledge that knowing isolated properties — such as the “lighting up” of physical space areas of the brain — tells us very little about what’s really going on. More generally, we need to acknowledge that “placebo” is used to describe any mind-body connection that is not otherwise explained by known physiological mechanisms or experimental control.* The article poignantly points out, for instance, how “studies show that empathy from an authoritative yet caring physician can be deeply therapeutic.” Might reported cases of spontaneous remission of advanced metastatic cancer be due to a grand placebo effect? And if so, what would that mean for our understanding of human physiology and the mind/body connection?
Those familiar with the cognitive sciences might be objecting at this point that I am ignoring all the work in the “neural net” literature, which takes the view of the mind/brain as essentially a binary network of neurons connected by axons transmitting electrical impulses modulated by thresholds. In this model — and to be sure, there are many variants — functions like memory, language, visual pattern matching and muscular control emerge from the collective network dynamics. While the connectionist paradigm clearly is a step in the right direction (as based on the predictive and descriptive power of artificial neural net models), it is not the magic bullet for understanding the mind** that it was once heralded as being. What about deductive logic, grammar, situational reasoning, personality, consciousness and a whole host of other observed brain functions that have not been adequately explained with any single model, be it functional, connectionist or otherwise? And what of other models that have good — albeit limited — prescriptive and descriptive power, such as Minsky’s “society of mind”, case-based reasoning, evolutionary memetics, and even such passé models as behaviorism? Should we ignore the good parts of these just so we can have a pure, elegant, simple unified theory of mind?***
Later in the Wired story the question is asked “how does human language evolve?” Glossing over the long-raging debate about how to define language, and ignoring for the moment the (unexamined) premise that it’s somehow evolutionary, the writeup concludes with a telling observation: “The parts of the brain thought to be responsible for language are as well-understood as the rest of the brain, which is to say: not so much.” However, there is some daylight in the form of the computer science researcher Luc Steels, who argues that “language was a cultural breakthrough, like writing.” To give credence to this hypothesis, Steels purports to have built robots without any explicit language module which nonetheless developed grammar and syntax systems on their own. In different research, neural network computer models have produced overgeneralization errors (followed by self-correction) when learning language constructs that are eerily similar to those exhibited by human children (e.g. “I bringed the toy to Mommy”).
These sorts of emergent property models strike me as incredibly compelling lines of inquiry, if only for the fact that they tend to do much better on the predictive power index than reductionist models, at least for the kinds of tough problems we are talking about here. Emergent property models do have the drawback of seeming “magical” and somewhat impenetrable, and thus not as good on the descriptive power index. But I believe that this is because of the hegemony of reductionist methodology in Western analytic thought, and particularly math and science, up until quite recently. It will take a while before we are comfortable with the thought processes and tools that will allow us to reason and build better intuitions about complex adaptive systems. We are currently like the wise men in the dark, and in order to really grok the elephant, we need to start sliding up the dimmer switch.
* Which reminds me of my favorite definition of artificial intelligence: an AI problem is any problem which has not yet been solved; once it’s solved it’s considered an engineering issue.
** Many people, myself included, view that deeply understanding the human mind, and creating true “artificial intelligence” are flip-sides of the same coin.
*** I will argue in a later post that our bias towards simple, single-cause explanations (c.f. Occam’s Razor) sometimes blocks us from acknowledging the inherent complexity of the world and achieving better understanding.
Why Political Parties Exist, Why They Are Bad, and How to Eliminate Them
Voting blocs are an emergent property of representative democracies wherein each new voting issue carries with it an automatic right for each representative to vote. In other words, when votes are treated like a continually renewable resource, there is an incentive for each representative to give away votes on issues they care less about in exchange for something of greater value. When that thing of greater value is money, we call it corruption. When the thing of greater value is a promise of future support from an outside agency, we call it lobbying. And when groups of representatives agree on an ongoing basis to trade away votes in exchange for membership, we call it a party.
Once parties exist, they are self-perpetuating. Even if all representatives from all parties were to agree individually that everyone would be better off long-term without parties, there are always short-term political gains to be made by utilizing the party system. Each representative reasons, logically enough, that they are better off first using the party to achieve their immediate goals, and then later pushing for their elimination. Ironically enough, the one issue that should inspire a unanimous voting bloc — namely the elimination of voting blocs themselves — is the one political ideal that parties are not capable of achieving, despite a will to do so. Parties are a true tragedy of the commons.
An alternative basis for representative democracy arises when one considers the possibility of turning votes into scarce and hence valuable resources. When you own a scarce resource, you are loath to squander it. For instance, suppose a representative body expects to vote on 100 issues during its term in office. We can agree for the common good that each representative only receives 20 votes, to be used however desired.* Given such scarcity, it is not hard to see that to swap even a single vote would be foolish. Furthermore, representatives under this system are forced to plan the allocation of their votes strategically over the course of their term, lest they run out before they can vote on an issue they care about deeply.
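A toy sketch of what that strategic planning buys (the uniform random “stakes” and the 20/100 numbers come straight from the thought experiment above; nothing here models real legislatures):

```python
import random

random.seed(7)
stakes = [random.random() for _ in range(100)]  # how much each issue matters to her

budget = 20
triaged = sorted(stakes, reverse=True)[:budget]  # strategic: save votes for top issues
unplanned = stakes[:budget]                      # reckless: spend on whatever comes first

print(f"strategic triage covers {sum(triaged) / sum(stakes):.0%} of her total stakes")
print(f"spend-as-they-come covers {sum(unplanned) / sum(stakes):.0%}")
```

On a typical run the strategic allocator covers roughly twice the stakes of the reckless one, which is the whole incentive to plan.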
Undoubtedly there will be issues that arise as the term progresses that nobody could have predicted. And undoubtedly there will be representatives who plan poorly or recklessly and who then must sit idly by while their colleagues decide such issues. But this is just one instance of a larger class of transgressions representatives commit against their constituency after getting elected. Whether you break an explicit campaign promise, or fail to represent your constituents’ interests through mismanagement of public resources, your fate should be the same come re-election time. Voting track records speak for themselves, and the public knows when its trust has been breached.
In a democracy we acknowledge the right of all citizens to participate equally in their own governance. In a representative democracy, we acknowledge further that we are all better off when there is division of labor: a small group represent the interests others in governance so that those others are freed up to work on providing daily bread for everyone. In an election process consisting of one vote per person, votes are inherently limited and thereby valuable. Where democracy gets derailed is when we take that precious commodity — the will of the people — and we devalue it and dishonor it by allowing our representatives to create new votes each and every time a new issue comes up or piece of legislation is suggested. To achieve the abundance yielded by division of labor while honoring the democratic process, it is imperative that we prevent the counterfeiting of our ideals by our elected representatives.
In a democracy the motto is “one person, one vote.” In a representative democracy the motto should be “one vote, use it wisely.”
* For the purposes of this discussion, it does not matter whether representatives are allowed to cast more than one vote on a single issue or not; the incentive to swap, sell or otherwise broker votes goes away.
]]>Typically when we model a system using networks we are modeling a static structure, meaning we are modeling not the actual dynamics of message passing but rather the potential for message passing. This is an important distinction that often gets lost and leads to confusion. Modeling the actual network dynamics — how information flows through the network over time — is left for another post. Suffice it to say, without understanding the network dynamics, we only get a partial understanding of the system.**
Depending on the actual system of study, more or less can be gleaned from a static network representation of the system. In the case of molecular lattices (such as ice), most of what we care about can be understood simply by looking at the network structure and ignoring the dynamics. The links in a network that models ice define/model a physical neighborhood in space wherein water molecules bond with each other; if you are a node, then you are linked to (aka bonded with) your nearest neighbors. Not only does the lattice-shaped network give us great explanatory power (it’s easy to “see” what’s going on), it also gives us good predictive power. For instance, we can analyze the structural integrity of ice by examining the network structure, and we can predict where it is weak and most likely to cleave.
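A sketch of that last prediction, with a cartoon two-dimensional “ice sheet” (the grid size, defect count and random seed are arbitrary): knock out some bonds at random and look for the vertical cut crossed by the fewest intact bonds; that is where the sheet should cleave.

```python
import random

random.seed(3)
WIDTH, HEIGHT = 12, 8

# Nodes are molecules on a grid; links are bonds to nearest neighbors.
bonds = set()
for x in range(WIDTH):
    for y in range(HEIGHT):
        if x + 1 < WIDTH:
            bonds.add(((x, y), (x + 1, y)))  # horizontal bond
        if y + 1 < HEIGHT:
            bonds.add(((x, y), (x, y + 1)))  # vertical bond

defects = set(random.sample(sorted(bonds), 30))  # structural flaws
intact = bonds - defects

# For each vertical cut, count the horizontal bonds holding it together.
crossing = {x: sum(1 for (a, b) in intact if a[1] == b[1] and a[0] == x)
            for x in range(WIDTH - 1)}
weakest = min(crossing, key=crossing.get)
print(f"likely cleavage plane: between columns {weakest} and {weakest + 1} "
      f"({crossing[weakest]} of {HEIGHT} bonds intact)")
```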
But what happens when ice turns into water? All of a sudden we see that the network structure itself can transform quite rapidly as molecules move from one location to another, and what was once your neighbor in “network space” is now a distant relative, and vice versa. Even if we were able to somehow model liquid water by keeping track of each molecule’s changing location relative to one another, we are still stuck with the problem of how to define the information flow. When it was ice, it was easy: each molecule could be thought of as being in a small set of precise configurations (tightly packed) with its neighbors on the lattice. But with liquid water, the distances in space can vary significantly for each nearest neighbor due to the non-spherical structure of water molecules, and depending on how you define distance. Is it measured from center of mass? From the nucleus of the lone oxygen atom?
The difficulty in applying the network model, which works well for ice, to a (literally) more fluid environment is not just an isolated problem; it’s endemic to all scientific pursuit and ultimately to our understanding of the world. When models break down so thoroughly in explanatory and predictive power — as they do when we try to apply network theory to liquids — we have to find a different model to gain any kind of understanding of what’s really going on. Hence fluid dynamics for liquids, Brownian motion for gases, etc. Where we lose our way is when our models continue to have some (or worse, a great deal of) explanatory or predictive power, and we are loath to throw the baby out with the bathwater. Einstein famously threw out the baby of constant time with the bathwater of mutable space to arrive at a deeper understanding (better explanation, better prediction) with a model that held the speed of light constant and allowed space and time to change as needed. Darwin did the same by positing that species are not necessarily distinct forms of life and that they evolve more continuously through a process of heredity, mutation and selection. We need to always keep in mind these lessons in breakthrough thinking: just because our favorite tool is a hammer doesn’t mean that every problem is a nail.
Notwithstanding the above caveat, there has been a lot of interest*** and progress recently in looking at complex adaptive systems through the lens of the network. Not surprising given that the internet provides us an ever present and fecund playground with which to model not only itself but other systems. So for instance, we know some general equations that help describe “naturally forming”**** networks that characterize the number of links between nodes as a power law; small numbers of nodes accumulate large numbers of links and vice versa in a process termed “preferential attachment”. This model is extremely useful in helping explain the ubiquitous phenomenon that can be summed up as “the rich get richer”, as well as other related insights like the famous “80/20 rule”.
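Preferential attachment itself fits in a few lines. A minimal sketch using the classic “repeated endpoints” trick (each new node links to an existing node with probability proportional to its degree; the node count and seed are arbitrary):

```python
import random
from collections import Counter

random.seed(42)
endpoints = [0, 1]  # start with a single link between nodes 0 and 1
for new_node in range(2, 10_000):
    # Picking a uniformly random endpoint is the same as picking a node
    # with probability proportional to its current degree.
    target = random.choice(endpoints)
    endpoints += [new_node, target]

degree = Counter(endpoints)
ranked = sorted(degree, key=degree.get, reverse=True)
top_fifth = ranked[: len(ranked) // 5]
share = sum(degree[n] for n in top_fifth) / sum(degree.values())
print(f"top 20% of nodes hold {share:.0%} of all link endpoints")
```

On a typical run the top fifth of nodes holds well over half of all link endpoints: the rich get richer.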
Unfortunately, most work to date has focused on network structure and formation, to the exclusion of the informational dynamics as well as the transformational dynamics — how network structures change over time, not just in the “always getting bigger” sense, but also including contraction, stasis, equilibrium, oscillation, meta-stability, basins of attraction, chaotic behavior, sub-structural formation, and so on. Only focusing on static networks or networks with limited dynamics strikes me as akin to trying to understand cancer from the standpoint of genetics and drug therapy alone. I predict that we are on the verge of an explosion of research and results in the application of network theory to a wide variety of social, political, biological, chemical and physical systems/processes, BUT ONLY once we’ve successfully applied and refined better models of network dynamics (both informational and transformational).
For example, what happens when we treat the nodes in a network as also being a population of creatures with heritable traits that replicate and mutate and are thus subject to selection and evolutionary dynamics? We’ve already noticed that networks in various realms exhibit something akin to punctuated equilibrium, in which long periods of seemingly incremental change are punctuated by shorter, cataclysmic periods of instability and stochastic behavior. The point is not to focus on one model to the exclusion of another, but rather to draw liberally from many different models (ecology, biology, computer simulation, evolution, fluid dynamics, thermodynamics, etc.) and rigorously test working hypotheses that seem to fit the data better than current models. What we should be left with is a new understanding and a new model (or set of models or “mash-up” of models) that actually has real predictive power beyond current best practices.
—————–
*We shall leave aside for the moment what kind of information gets passed along the links, but at minimum we can think of binary messages — zero or one.
**We are also forgetting about information that comes into the system from the outside and that which leaves the system and is passed on to the outside. See a future post on “Open vs Closed Systems”.
***See the popular science books, Linked and Nexus.
****as opposed to networks that are engineered top-down, such as a military hierarchy.

From The Chaos Point. Reproduced with permission from the author.
The theme of levels will come up over and over again and be refined. Here are some concepts, none of which are truly unique, but often we forget to take them together and follow them to logical conclusions. This too is just a model.
Emergence of Agents
As Kauffman argues very convincingly, self-organized criticality (aka auto-catalysis), is the central mechanism by which a higher level emerges from a lower. Laszlo writes:
The formation of higher-level “suprasystems” through the interlinking of previously relatively autonomous systems (which are thereafter subsystems) is a familiar notion in systems theory. Suprasystems emerge through the creation of “hypercycles” in which the subsystems are linked by cycles that mutually catalyze each other: so-called cross-catalytic cycles. The result is that the subsystems become increasingly interdependent, while the suprasystem jointly constituted by them takes on structure and autonomy.
A didactic concept is that of “agency”, namely that we should think of the subsystems (atom, molecule, cell, person, whatever) at any level of organization as agents which: (a) interact with one another (i.e. pass messages), and (b) have a strong self-interest in preservation. Often times that preservation comes about trivially, by continuing to exist in the current form (as opposed to, say, breaking apart or disappearing). Other times, and especially at the organic levels and above, preservation happens through a Darwinian process of “descent with possible modification”; namely, we can look at the child (or clone, in the case of clonal cells) as a form of preservation (with only slight modification).
Higher Levels Constrain Lower
The organizational structure and dynamics at the higher level constrain the interactions of the elements/agents at the lower level. For instance, when multicellular organisms emerged and started being selected for, their cellular constituents stopped evolving (in the Darwinian sense) for all intents and purposes.* Darwinian evolution is just one type of interaction that can occur within a level, so it is not inconsistent to say that a lot of interaction happens between cells within a multicellular organism (hormonal, neuronal, etc). We must keep in mind that the higher level may itself change over time, in which case the type and amount of constraint applied to the lower level may change accordingly.
Emergence of Properties
Agents emerge with the ability to communicate with one another, even if that communication seems trivial, such as the gravitational force one mass applies to another. The types and amounts of intra-level communication vary quite drastically from one level to another. From the collective interactions of agents — which is constrained by the higher level — various properties emerge that we recognize only at the higher level. For instance, water molecules interact differently with one another depending on temperature and pressure constraints (among others). At the higher level, we recognize the properties of hardness, wetness, volume, etc., which are emergent properties not present at the level of the water molecules themselves.** In the end, things we take for granted as being solid concepts are really emergent properties: “price” of a stock, “memory” in the brain, “government”, “kidney”, “epidemic”, “computer”, “cancer”, “atom”, “time”, “truth”.
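A sketch of the idea in its simplest form: “temperature” as a property of an ensemble that no single constituent has (the Gaussian speeds and the units-free “temperature” are deliberate cartoons of kinetic theory, not a physical simulation):

```python
import random

random.seed(5)
# One velocity component for each of 100,000 "molecules".
speeds = [random.gauss(0, 1) for _ in range(100_000)]

# The ensemble has a stable, well-defined mean kinetic energy...
temperature = sum(v * v for v in speeds) / len(speeds)
print(f"ensemble 'temperature': {temperature:.3f}")  # close to 1.0, run after run

# ...but any individual molecule tells you almost nothing about it.
print(f"one molecule's contribution: {speeds[0] ** 2:.3f}")  # wildly variable
```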
Levels Aren’t Strict
We tend to think of levels as being a strict ladder, one level on top of the next and so on. But in reality one level may sit on top of two or more which may be at different “heights” themselves:
   System A         System E
     /    \             |
    /      \            |
System B    \           |
    |        \          |
System C      System D--+
Additionally, the concept of a level is just a model. It is a useful construct to remind us that the type of information that flows between systems at different levels — namely constraints and emergence — is of a different sort than the intra-level information that flows at either of the levels themselves. In other words, we can’t understand what’s going on in System B by looking at System C alone (the reductionist paradigm), System A alone, or even System B alone. Often times we mistake properties of System A (e.g. hardness of ice) for properties of System B (i.e. the water molecules and their interactions), but they are really properties of the agents of System A rather than anything meaningful within System B.
————
* This is not strictly true, however selective pressures at the higher levels have both reduced selective pressures at the cellular level and put shackles on the evolutionary process via mechanisms such as apoptosis (programmed cell death). The “evolutionary” view of cancer says that cancer is evolution at the cellular level. While I agree with this view, I don’t think it’s the whole picture, and some important consequences are often brushed under the table or not considered.
** Notice that temperature and pressure are also emergent properties and are no more special than other properties like hardness or wetness; we just know techniques for manipulating those properties more directly than others.
While the range of metaphors we can and do employ may be vast, seemingly arbitrary, and potentially limitless, we are in fact extremely limited by our embodied nature. Meaning our brain is embedded in our bodies, which have a very particular form, and (critically) a very specific sensory nervous system. And it is our basic sensory perceptions that are the building blocks of metaphorical thought. It should not be surprising that the richest and most widely used metaphors have to do with vision, hearing and feel: “I see the future”, “I hear you”, “Her story was very touching”. One need only carefully read a single page of a book, or take special note of a conversation, to uncover how central our use of metaphor in language is. And while many have proposed that language is the cornerstone of higher thinking, it would be more precise to say that metaphor is the cornerstone of both language and analytical thought.
It is noted that much controversy exists as to what constitutes and distinguishes thought, language, consciousness, intelligence and a host of related terms. One of the main truths about complex adaptive systems is that universal truths and grand unified models do not exist. Therefore to say that all thought/intelligence/language/etc. is based on metaphor is foolish. We know of many cognitive processes (even those attributed to “higher thinking”) which are, for instance, based on stimulus-response mechanisms in the brain and nervous system. Many human thought processes can be explained best via stimulus-response mechanisms, often to the delight of salespeople and other “compliance professionals” (as Robert Cialdini calls people whose job it is to convince others to do their bidding). Why do we automatically think something is more valuable and desire it more upon learning that it is more scarce than we originally thought? It is doubtful that metaphor (or logic) has much to say on this. Speaking of logic, it is a very common but false conception that most (or even a fraction) of our daily lives are governed by logical or analytical thinking. It is somewhat unfortunate that the metaphor Brain as Computer has become the principal Western analytical method for understanding human thought processes themselves. Some future posts on cognitive psychology will discuss this more thoroughly, including the notion recently put forth in the literature that without emotions, humans cannot think logically.
Notwithstanding the impossibility of claiming metaphor as the only important mechanism for human analytical thought, Lakoff and Johnson’s contribution to our understanding not only of cognition, but of the nature of the universe, can hardly be overestimated. By painstakingly unraveling the metaphorical process with specific, in-depth analyses and examples, the authors lead us to the inevitable conclusion that even our most seemingly fundamental understandings of the world around us, including galaxies, atoms and even time itself, are founded principally on metaphor. We can’t really “see” electrons orbiting an atom’s nucleus, but based on experimental results and measurements, we infer an image (read: metaphor) of an “electron cloud” (i.e. the Electron as Cloud metaphor) to describe the probabilistic nature of an electron’s location at any given point in time. This metaphor has logical consequences based on our understanding of what a cloud is and does, but these consequences may or may not hold true when applied to electrons. Some of the most puzzling mysteries in all of science have only been solved once we let go of the limitations of the entrenched metaphors we hardly ever notice we are using. To wit, the famous example of light behaving sometimes like particles and sometimes like waves. We can’t help but think that “deep down” light must “really” be one or the other, since particles and waves are very different beasts. But our current best understanding is that it is a little like both. We need the Light as Particle and the Light as Wave metaphors in order to best understand what is going on. What we once thought was reality turned out to be false. In reality, our metaphors née models (Particles and Waves) are simply insufficient by themselves to explain the “underlying reality”.
]]>One of the most subtle yet profound things I learned (am still learning) is that the universe is not the same thing as the models we use to describe and analyze it. By “universe” I mean everything in it as well, everything that we can hope to “know” something about, including humans beings, societies, time, space, knowledge, truth — everything. By models I mean “facts,” “universal truths,” scientific theories, hypotheses, common sense, mental models, anything we refer to either explicitly or implicitly when we say we know or believe something. That the map and the terrain it describes are not one and the same may seem obvious, but many of the greatest misunderstandings, paradoxes and scientific or philosophical debates over the centuries can be explained by realizing that the people involved are trying to say something important about the universe, while the best they can ever do is say something important about their model of the universe. And by their very nature, models are either not entirely accurate or they are incomplete. Often times, in the hard sciences, the models are extremely good and rarely, if ever, fail to predict what they are trying to describe. Which makes it all the more difficult for us to accept evidence suggesting that these models are in need of revision. In addition, we humans seem intent on over-generalizing, or applying a model created to describe one realm to trying to describe another (seemingly) similar realm. The failure to apply Newtonian physics to the atomic and subatomic realms is just one example in a history replete with misapplications and over-generalizations of scientific models.
These thoughts didn’t really click until I took a class with an emeritus professor of mechanical engineering named Stephen Kline. It’s difficult to find a more cut-and-dried “hard” science than M.E., yet this very well-respected leader in his field was suggesting that the only way to make breakthroughs in understanding in his own field was to start looking at things from a multidisciplinary perspective, by which he meant the judicious and critical application of new models to old domains. Consequently, if I believe one thing and you believe another, yet your beliefs (aka model) have better predictive power than mine, I’m going to adopt your beliefs over my own. But those new beliefs are bound to change when even better predictive models come along.
]]>In my experience there are very few discussions I have had with believers (in anything) that have led to much more than frustration on both sides and a re-entrenchment of existing beliefs. Invariably I have learned the most when I have allowed myself to let go of attachment to my current beliefs, and tried to take in what is coming at me with only low-level critical thinking. As a very analytical person (scientifically and academically trained) who tends to bristle at “fuzzy thinking”, emotionality, and especially anything that smacks of pseudo-science or new age philosophy, it is often difficult for me to have any breakthroughs in understanding. For similar reasons, those who claim to have unshakable faith in one thing or another — be it religion, science, self, love, whatever — tend to have even more difficulty learning past a certain point in their development.
I do not hold scientists or academics to any different standards than I do religious believers. Often times it is the former that are least willing to question their assumptions and open their mind to unseen possibilities. But because of the type of “faith” that scientists and academics have, in which they are trained to question assumptions, they are more likely than those with religious convictions to be open to discovering something new.
Some religions and belief systems are less dogmatic than others, such as Buddhism. Others, like Judaism, do not require faith, only practice. But in the end, all belief systems have some atomic, indivisible core, which if successfully challenged, destroys their ultimate truth. To put a fine point on it, there is at least one core belief that Science holds, which it cannot do without: Truth exists.
The paradox of what I have said and what I will say should not escape anyone. If there is no such thing as Truth, everything becomes pointless.* Descartes wrestled with a similar conundrum when he allowed himself to challenge all of his assumptions and beliefs, including the existence of God and of himself. Ultimately he wondered if ANYTHING really exists. He eventually bottomed out with “I think, therefore I exist“, which everyone recognizes as a very clever (perhaps fundamental) self-fulfilling prophecy. But Descartes failed to define “Existence”, “I” and “Thought”, leaving open the possibility of an epiphenomenal universe without any underlying physical reality. Study of the nature of language and cognition, as well as results in symbolic logic, suggest that it may ultimately be futile and intractable to “bottom out” into something fundamental as Descartes tried to do. How would you define Existence or Truth? Think hard before you answer.
———————
*Contrary to what some people may conclude from the above, I am not a Nihilist or even a nihilist (with a small “n”). And I am one of the happiest people I know.
“The cure for cancer is within reach” →
- The nature of “cancer” is more complex than I originally imagined.
- Cancer is both real and fictional at the same time, just like every other model.*
- To make any progress on the fundamental understanding of cancer, we need to admit that whatever definition of “cancer” we use — and there are many proposed and competing ones — constrains the set of possible models we have to work with.
- Ultimately in order to make progress on what you thought was the goal (e.g. “cure cancer”), you have to be willing to change the definition (of “cancer” itself).
- But changing the definition then changes the goal, because the “underlying reality” you thought you were studying is ultimately just the model you created in the first place.
- For instance, if “cancer” turns out to be a normal consequence of life, then perhaps what we are really after is extending life by looking at cancer as — and treating it as if it is — a chronic condition which flares up from time to time.
- Ultimately, the new models you use to understand the nature of cancer, and what it would practically mean to “cure” it, are just new models and will inevitably be shown to be wrong at some point.
- The new models we create will undoubtedly have explanatory power (by which I mean they are as true as anything is) in other domains — for instance, we may find “cultural cancers” just as we find “computer viruses.”
“Cancer is an evolutionary process” →
- Yes, but it is also a process that involves other dynamics like metastasis, self-organized criticality, ecological dynamics, etc.; “evolution”** alone cannot sufficiently describe cancer to give us the understanding we seek.
- And these are all just models, imperfect to the bone. Any model that adds predictive power to the current best hypothesis is worth exploring.
“Curing cancer is mostly an engineering problem now that we have the right model” →
- We don’t have the “right” model and never will; we only have an ever more refined model as we learn.
- Do we have a refined enough model to cure cancer? No, because our conceptions of “cancer” and “cure” have shifted and will continue to shift as we learn more.
- Can we re-frame the question of curing cancer and be more precise about what we mean? Yes; for instance, we can re-frame our notion of “cure” as “extending life by detecting and halting tissue-level metastatic processes in tumors indefinitely.”
- But something similar to cancer can crop up at a different level of organization, so be careful about using “indefinitely” and confining yourself to the tissue level.
- Can we say we don’t care if we die because we didn’t understand cancer enough to stop those same cancer-like behaviors at other levels? Yes, but we still end up dead.
- Can we say that we don’t care if we die “eventually” as long as we’ve staved off death from well-understood processes? Yes, as long as we are happy with the improvement in life-span and don’t mind dying of other causes.
- In other words, if we say we want to increase life expectancy by a specific length of time (say 15 years) by controlling the class of maladies commonly referred to as cancer, then I believe it’s an engineering problem (but a very large one).
“What can we do collectively to work towards and achieve the goal of a cure?” →
With a re-framing of cancer and a re-framing of cure, we can do the following in parallel:
- Work on engineering related to achieving the re-framed goal.
- Continue the “meta-science” of cancer, by which I mean engage in a process of continually challenging our assumptions (no matter how basic) about all levels of organization, including levels above the human body (such as mind, society, computation) and levels below that of DNA (such as molecules, atoms, subatomic particles, etc.).
- We need to develop new intuitions about how to think about complex adaptive systems (which is to say, everything) because our intuitions are what drive and constrain the models we build and use to understand the world.
—————
* Which is to say, everything we can ever know as humans about reality is based on models, which are by definition convenient fictions. A cheeky way I have heard this put is “all models are wrong, but some are more wrong than others.”
** Most people think of evolution roughly as Darwin suggested: selection on populations of creatures with heritable traits. But you get more predictive power when you add an evolving environment (as in “co-evolution”), punctuation (as in “punctuated equilibrium”), meta-stability (as in a more dynamic notion of equilibrium), phase shifts, strange attractors, and so on.
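A toy sketch of that claim, entirely my own construction: a population hill-climbs on a fitness landscape whose optimum both drifts slowly and occasionally jumps. Selection alone would track a fixed peak smoothly; once the environment evolves too, you see long stable stretches punctuated by rapid regime shifts.

```python
import random

random.seed(1)

POP, GENS = 100, 300
optimum = 0.0  # the environment: where fitness currently peaks
pop = [random.gauss(0, 1) for _ in range(POP)]

def fitness(trait, opt):
    # Closer to the current optimum means fitter (negative squared distance).
    return -(trait - opt) ** 2

for gen in range(GENS):
    # The environment co-evolves: slow drift plus rare abrupt regime shifts.
    optimum += random.gauss(0, 0.02)
    if random.random() < 0.02:
        optimum += random.choice([-3, 3])  # punctuation event

    # Selection: keep the fitter half, refill with mutated offspring.
    pop.sort(key=lambda t: fitness(t, optimum), reverse=True)
    survivors = pop[: POP // 2]
    pop = survivors + [t + random.gauss(0, 0.1) for t in survivors]

    if gen % 30 == 0:
        mean = sum(pop) / POP
        print(f"gen {gen:3d}  optimum {optimum:+.2f}  population mean {mean:+.2f}")
```

Run it and watch the population mean lag behind each jump before re-converging; even this crude model exhibits the punctuated, meta-stable behavior the footnote describes.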
As both a practical and self-indulgent matter, I am launching this blog, which is essentially my personal journey to understand better. I intuitively know that there will be a limit to how much understanding I can gain unless I connect with people through a continuous dialog that challenges assumptions at every turn. If interested, I invite you to join me, with one caveat: you must be willing to challenge everything; nothing is sacrosanct, not God, Truth, Self, or even Existence. I heard a quote this week, from one scientist to another, that sums up my feeling on the matter: “That statement is so wrong, not even its opposite is true.”
As a philanthropic and self-indulgent matter, I will also try my best to connect with others who are interested in studying and “curing” “cancer” and are willing to challenge their assumptions, starting with the group I met with over the last two days, and helping that community grow to become self-sustaining. You will be able to learn more about that shortly on this blog, which will lead you to a different forum.
A warning for the faint of heart and the blissfully ignorant: the rabbit hole runs deep…