Wednesday, January 26, 2011

Don't let the dumb money lead

When I don't invest in a company it's because (a) I didn't like the company, (b) I didn't know enough about the company's business to actually add any value, or (c) I didn't like the terms of the deal.  But I recently said no to a company where I liked all three of those things. What I didn't like was the lead investor.

Because of the way this company's process unfolded, they ended up with a wealthy financier as the lead investor. And even when a professional venture investor later offered to lead, they decided to stick with the original lead.   I think this is a big mistake for both the company and the investor.

Throughout my startup-related career I have repeatedly run into very successful hedge-fund operators, investment bankers, real estate magnates and big company executives who wanted to invest in startups.  I think supporting startups is an excellent use of money and I have always encouraged them... to follow experienced venture investors in deals.  None of them have ever taken my advice*.  They all thought that since they were so good at doing the exceedingly complicated deals they had made their mark with, doing a simple startup financing would be a snap.

Finance is a set of disciplines separated by a common language. What is complicated in your public market/LBO/distressed debt/M&A deal is not what is complicated in a startup deal (and vice-versa, I should think.) Just because you can navigate a DCF, a shareholders' agreement, and Delaware law doesn't mean you can do successful venture capital deals.

For several years an acquaintance who was one of the top people at one of the most successful private equity shops in... I don't know... the world, ever** would call me up about some startup or other he planned to back. He's sharp and invariably had very good reasons for backing the company. But these conversations always made me uneasy. He didn't know what competitors were doing or planning (or even who the competitors were, other than what the entrepreneur told him), he didn't know what valuations were and why, he had zero idea what customers wanted, and the terms he asked for were far more onerous than standard venture capital terms.

This last one is what bothered me the most, even though onerous terms may seem as if they are in the investor's favor***. The problem is that he had dictated these onerous terms because he intended to use them. And it wasn't even the terms themselves that bothered me, it was the attitude that ownership is a zero-sum game, that the pre-investment negotiating tension between investor and entrepreneur continued unabated after the investment. This dynamic should be very different in a startup than in, say, an LBO. In much non-startup finance, for instance, missing your annual plan means a renegotiation of control and ownership. In a startup a plan is just that, a plan. Things never go according to plan, and good entrepreneurs anticipate that and adjust. Your investors need to know this. And more than know it, be comfortable with it****. When investors and company executives start to fight, value is destroyed. Someone who sets up and expects this dynamic even before making the investment is poison.

Smart people who know nothing about the startup world besides what they've read in the Wall Street Journal should not be your lead investor or a control person on your board of directors: there's an excellent chance that they will not only not help your company but will actually harm it.

So who should lead your deal? Founders of venture backed startups know both startups and venture capital; they are great. People who have led deals for many years and seen the cycle from startup to exit a few times are ideal, of course. People who have learned the trade by following professional VCs in many deals and being very involved from startup to exit can fit the bill. And, finally, non-venture backed entrepreneurs can be valuable, so long as their journey wasn't too easy: having empathy for the founder when things don't go according to plan is critical in remaining constructive.

Now I've been all of the first three at some point or another (and vicissitudes, I've had a few) so you could accuse me of self-promotion here. But the beautiful thing about being me is that I'm not a professional venture investor right now, so I have no particular reason to be self-serving; I invest because I want to see a particular startup succeed, not because anyone pays me to do it. So here's my advice: if you have no other choice, take the money; but if you have a choice between non-venture investor money and venture money, take the latter.

-----
* There is obviously a selection bias here.  If they were willing to take this advice, they probably would not have ended up talking to me in the first place.
** My not knowing the pecking order of PE shops or hedge funds is kind of part of the point. Your typical PE guy probably has no idea of the differences between, say, RRE and Venrock. And knowing these things is pretty important when you go to do your next round.
*** I don't believe this, but that's another post.
**** This isn't license to go missing your plan. If you miss your plan, you need to know why and what you're going to do about it. That's a big part of the point of a plan, after all.

Wednesday, January 19, 2011

The bubble this time

What is a bubble anyway? A positive feedback loop where the governor is on a time-delay. It's not necessarily a money thing. It's just that financial bubbles are easy to spot because price is easy to measure and a graph of the exponential positive feedback followed by the screeching halt (and ensuing positive feedback on the decline) of the governor kicking in is easy to read.
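That definition can be sketched in a few lines of code. This is a toy simulation of my own devising, with made-up coefficients and no claim to model any real market: growth compounds on itself, while the governor reacts to data that is ten steps stale, so it first lets the boom run and then overcorrects into a crash.

```python
# Toy positive-feedback loop with a time-delayed governor (illustrative only).
delay = 10                 # how many steps late the governor reacts
level, history = 1.0, []
for t in range(60):
    history.append(level)
    growth = 0.15 * level                       # positive feedback: more begets more
    lagged = history[t - delay] if t >= delay else 0.0
    brake = 0.02 * lagged * lagged              # the governor, working from stale data
    level = max(level + growth - brake, 0.01)   # floor keeps the toy model positive
```

Plot `history` and you get the familiar shape: an exponential run-up, a screeching halt once the delayed brake finally bites, and a collapse driven by the same lag on the way down.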

But I don't think we're in a financial bubble. Some prices seem pretty high, but nothing like the willing-suspension-of-disbelief levels I saw in 1999. The positive feedback loop is in the innovation industry.

Let me say, first, that I'm in favor of innovation. I believe long-run economic growth per capita is driven by innovation. I've dedicated my last fifteen years to trying to create or nurture innovative ideas; I'm part of this bubble.

But innovation is pretty constant. Look at this graph of US GDP per capita.


If the primary determinant of the slope is the level of innovation, then the level of innovation is remarkably constant over the time period shown*. This means that while those of us in the innovation industry are doing our jobs, grinding it out year after year, there's not much innovation in the innovation department. Also, it means that all the things we do to cheerlead innovation don't really have much of an effect. Innovation is a system we don't fully understand, one where we do not know how to change the level of output.
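To make "the slope is the level of innovation" concrete: on a log scale a constant growth rate is a straight line, and a least-squares fit of log GDP per capita against year recovers that rate. The numbers below are synthetic (a pretend 2%-a-year series, not the actual data), purely to illustrate the mechanics.

```python
# Fit an exponential growth rate to a (synthetic) GDP-per-capita series.
import math

years = list(range(1900, 2011, 10))
gdp = [4000 * (1.02 ** (y - 1900)) for y in years]   # pretend 2%/yr growth

# Ordinary least squares on log(GDP) vs. year
n = len(years)
xbar = sum(years) / n
ybar = sum(math.log(g) for g in gdp) / n
slope = (sum((x - xbar) * (math.log(g) - ybar) for x, g in zip(years, gdp))
         / sum((x - xbar) ** 2 for x in years))
growth_rate = math.exp(slope) - 1   # annual growth rate implied by the slope
```

On the real series, a fit like this over long stretches of US history comes out remarkably stable, which is the point of the graph.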

But over the past six months I have been inundated with talk of innovation. Incubators, summer programs, new venture funds, university initiatives, think-tank initiatives, government initiatives, innovation consultancies, etc. etc. The level of innovation stays the same, but the industry built around it is growing exponentially. As it will continue to do, until reality kicks in: a classic bubble.

Again, don't get me wrong. I support this. In fact, despite the carnage that bubbles cause, I don't believe they're all bad. Here's a quote I've used before:
"Reckless, booming anarchy," in short, produced fundamental progress. It was not a stable system, racked as it was by bank failures and collapsed business ventures, outrageous speculation and defaulted loans. Yet it was also energetic and inventive, creating permanent economic growth that endured after the froth was blown away... Those who gambled on the future rise of the public lands in the West... were madmen only in the short-run business sense--only in thinking that future prospects could be realized all at once by means of an infinitely expansible credit system--and not in their basic sense of direction.
This is Greider describing the 1830's. Bubbles are the reckless booming anarchy that create permanent economic growth once the froth blows away. And, especially if you're an entrepreneur, more people trying to fund you, more people trying to give you below-market rent, more people trying to introduce you to more other people, more talented engineers willing to forgo big-company salaries for the chance to build something meaningful, it's all good.

But here's my worry. I've been investing in NYC tech startups for 15 years now. That means I've lived through 2002-2003 and 2007-2008. Those were hard times, times when a lot of people decided that starting tech companies was a bad idea, when people who had been gung-ho up and disappeared. Most of the people who started companies in the late 1990s stopped trying to start companies after the bubble burst: having learned a hard lesson, they decided not to put that valuable learning to use. The same positive feedback that creates exponential growth creates exponential decline.

I can't complain. My big breaks have come by being steadfast when others were fleeing. If I hadn't persisted in 2003, I wouldn't have the wherewithal to make investments today. In late 2007 and early 2008, I got the chance to invest in some amazing companies--even though I had no track record as an angel--because so few others were willing to write checks. But I'm just a born contrarian**. My worry is that when things smooth out, when people calm down a bit, this new build-up in the innovation industry suddenly disappears, leaving a whole new generation of entrepreneurs high and dry and with a distaste for the rhetoric of the venture capitalists and others who encouraged them to take a risk and do something meaningful.

There's no known way to recognize or gradually deflate a bubble. But this bubble, like all bubbles, is just froth around the constant innovation that occurs, bubble or not. It's possible to focus on the reality of the underlying innovation and not on the froth. Some VCs--USV, First Round Capital, Chris Dixon/Founder Collective, Roger Ehrenberg/IA Ventures were the ones I ran into--continued investing in 2007-2008, when things looked grim. HackNY was running hackathons and NYC Seed was trying to support the NYC tech startup culture when others were backing away. There are many others who continued to build the ecosystem here then, and that gives me reason to believe they will continue to build it when the carpetbaggers have left.

I'm not saying that entrepreneurs should think about these things when raising money or that engineers should when looking for a job. But those of us who have the breathing room to make choices now, and who care about building a NYC tech ecosystem for the long run, should try to support the entities and people we think will be here whether it's rain or shine.

[Edit: I want to make sure this is clear: I'm not saying we should support people who were here, I'm saying we should support people who will be here. Having been here when it was hard is just a good indicator that they will be here when it's hard again, as it inevitably will be.]

-----
* For a longer run look at economic growth, which I think supports this thesis by showing that the level of innovation does change occasionally, read A Farewell to Alms.
** A loved one tells me that I am not a contrarian, I am just contrary.

Wednesday, January 12, 2011

Getting Gold When Buying Iron

"The theory of Induction is the despair of philosophy--and yet all our activities are based upon it."
-- Alfred North Whitehead, Science and the Modern World

Jonah Lehrer writes, in the New Yorker,
[A]ll sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology. In the field of medicine, the phenomenon seems extremely widespread, affecting not only antipsychotics but also therapies ranging from cardiac stents to Vitamin E and antidepressants: Davis has a forthcoming analysis demonstrating that the efficacy of antidepressants has gone down as much as threefold in recent decades.
For many scientists, the effect is especially troubling because of what it exposes about the scientific process. If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe?
The answers Lehrer finds for this phenomenon range from regression to the mean to publication bias.  But he shies away from asking the obvious question that rears its ugly head: what rational basis do we have to expect the scientific method to work at all?  Have we been bred to believe the universe owes us something that we have no logical reason to believe?

*****

Let's step back.  Science has two main stages: discovery and justification.  Scientists come up with hypotheses and then they test them.  The former is a creative act, something that defies description.  The latter is what we think of as the scientific method: the gathering of empirical data to test hypotheses.

In most cases not all of the data can be collected, only a sample.  Einstein postulated as part of his theory of relativity that gravity would bend light rays. The empirical observation during the solar eclipse of May 29, 1919 that light rays were indeed bent by gravity is seen as powerful evidence that his theory is correct.  But not all light rays have been observed, only a few.  Making a few observations and then generalizing these few measurements to cover all the measurements that have not been made is called induction.  Induction is the centerpiece of scientific reasoning.  If the sun rose yesterday, and the day before and the day before that and it also rose today, we have good reason to think it will rise tomorrow.  Induction.

But do we really have good reason?  On what grounds should we believe that induction is a valid way to reason?  Why should we believe that just because something keeps happening, it will continue to happen?  Why should we believe that just because some light rays are bent by gravity, all light rays are bent by gravity?

The answer is, obviously, we should believe this because it works.  This is what has always happened.  Every time we have correctly used induction in the past, it has led to valid conclusions.

But wait, do you see the problem?  We have just proved induction by... using induction.  There is, in fact, no other way we know of to support it.

*****
Frank Ramsey draws the distinction between deductive reasoning--which is supported by formal logic--and inductive reasoning--which is not--in his Truth and Probability (1926, pdf):
The conclusion of a formally valid argument is contained in its premisses; that to deny the conclusion while accepting the premisses would be self-contradictory; that a formal deduction does not increase our knowledge, but only brings out clearly what we already know in another form; and that we are bound to accept its validity on pain of being inconsistent with ourselves. The logical relation which justifies the inference is that the sense or import of the conclusion is contained in that of the premisses.
But in the case of an inductive argument this does not happen in the least; it is impossible to represent it as resembling a deductive argument and merely weaker in degree; it is absurd to say that the sense of the conclusion is partially contained in that of the premisses. We could accept the premisses and utterly reject the conclusion without any sort of inconsistency or contradiction.
This echoes an earlier argument by David Hume, in his Treatise on Human Nature (I.III.VI)*:
[T]here can be no demonstrative arguments to prove, that those instances of which we have had no experience resemble those of which we have had experience. We can at least conceive a change in the course of nature; which sufficiently proves that such a change is not absolutely impossible. To form a clear idea of any thing is an undeniable argument for its possibility, and is alone a refutation of any pretended demonstration against it.
Hume says there can be no proof using induction.  And, if you think about it, there can't even always be belief: probabilistic reasoning, though it seems to support some limited subset of inductions, is not always--or even usually--good enough.  Of the total population of light rays, what would the sample size need to be to give you statistical comfort in the theory of relativity?
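To put a rough number on that rhetorical question, here is a back-of-the-envelope calculation of my own (not from Hume or Lehrer): if every one of n observations confirms a hypothesis, the one-sided 95% upper confidence bound on the rate of contrary cases solves (1 - p)^n = 0.05, so the number of clean observations needed shrinks only as fast as the bound you demand.

```python
# How many all-confirming observations do you need before statistics
# lets you bound the rate of contrary cases? (Illustrative function name.)
import math

def observations_needed(failure_bound, confidence=0.95):
    # If all n observations confirm, the one-sided upper confidence bound
    # on the true contrary-case rate p solves (1 - p)^n = 1 - confidence,
    # so n = ln(1 - confidence) / ln(1 - p).
    return math.ceil(math.log(1 - confidence) / math.log(1 - failure_bound))
```

To be 95% confident that fewer than one light ray in a million disobeys the theory, you would need roughly three million clean observations; for the kind of exceptionless certainty we attribute to physical law, no feasible sample suffices. The comfort we feel after a handful of eclipses is induction, not statistics.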

Hume goes further.  His rejection of induction as rationally supported is just the inevitable fallout from his rejection of causation altogether (I.III.XIV):
I am sensible that of all the paradoxes which I have had or shall hereafter have occasion to advance in the course of this treatise, the present one is the most violent... any two objects or actions, however related, can never give us any idea of power or of a connexion betwixt them: that this idea arises from the repetition of their union: that the repetition neither discovers nor causes any thing in the objects, but has an influence only on the mind by that customary transition it produces... 
A cause and an effect are two things that we associate simply because the two events always seem to happen together; but there is no way we can discover a connection between the two that guarantees this "causation" to continue.

Our belief in induction and causation comes from our belief in the uniformity of nature.  If two events always happen together, or if an event happens the same way over and over, then we believe there must be some unchanging underlying mechanism.  If this light ray does this, then that one does too.  If the earth is turning, it will keep turning.  Why wouldn't it?  Why would anything have changed?

Well, on the contrary, why would it keep turning?  Why would anything stay the same?  The actual unanswerability of these very simple questions makes our entire quest for knowledge seem very fragile.

*****

But in the end Hume was a skeptic who realized that skepticism was not a realistic option, nor a productive one.  Even if the idea of causation itself is suspect--and thus induction--their use goes constructively on.  The scientific method, statistics, these things seem to work.  People who deny induction get hit by cars, their children catch diseases other people's children have been vaccinated for.  We have been formed by evolution to believe in induction: babies do not reason about cause and effect, they simply recognize it.  We have been bred to believe that the universe is uniform and unchanging.

Back to Lehrer's problem.  The subtitle of his article is "Is there something wrong with the scientific method?"  He may just as well have asked "Is there something wrong with reality?"  Induction is central to all of our thinking; we do not have any alternative way to reason.  Ramsey again:
We are all convinced by inductive arguments, and our conviction is reasonable because the world is so constituted that inductive arguments lead on the whole to true opinions. We are not, therefore, able to help trusting induction, nor if we could help it do we see any reason why we should, because we believe it to be a reliable process. It is true that if any one has not the habit of induction, we cannot prove to him that he is wrong; but there is nothing peculiar in that. If a man doubts his memory or his perception we cannot prove to him that they are trustworthy; to ask for such a thing to be proved is to cry for the moon, and the same is true of induction. It is one of the ultimate sources of knowledge just as memory is: no one regards it as a scandal to philosophy that there is no proof that the world did not begin two minutes ago and that all our memories are not illusory.

We all agree that a man who did not make inductions would be unreasonable: the question is only what this means. In my view it does not mean that the man would in any way sin against formal logic or formal probability; but that he had not got a very useful habit, without which he would be very much worse off, in the sense of being much less likely to have true opinions.

This is a kind of pragmatism: we judge mental habits by whether they work, i.e. whether the opinions they lead to are for the most part true, or more often true than those which alternative habits would lead to.
Just as with Gödel's incompleteness, or the stack of turtles holding up the universe, we continue ignoring the logical problem of relying on the things they show are inconsistent.  We relegate to the back of our minds the question of why the heck anything seems to work at all.  Because how would we exist in a world where that concern was ever-present?

The philosopher Nāgārjuna said something that Hume would have understood:
Not from itself, not from another, not from both, not without cause,
Never in any way is there any existing thing that has arisen.
This, in the inimitable method of Indian philosophers, is not meant negatively, it is meant as an aid to understanding the ineffable. The Roman Skeptic Sextus Empiricus came to the same conclusion, even if it does not seem that way.  He said that without any way to be sure in our knowledge, we need to suspend judgement over the truth or falsity of our beliefs and simply exist in the present evidence of our senses.  He thought the only truly rational way to live was to rely simply on what is, not on what we think will be.

*****

I've always been the kind of person who wants to know.  After learning how to program, I needed to know how the compiler worked, then the operating system.  Then I needed to know how the microprocessor worked, then integrated circuits, then the logic gate, then the transistor.  How transistors were made, then where silicon comes from.  It is hard for me to accept, in trying to understand, that there may be levels below which I can't pass.

In Hui-k'ai's Gateless Gate, Wumen (Mumon) tells this story:
The temple flag was flapping in the wind.  Two monks argued: one said the flag was moving, the other said the wind was moving.  They argued, but could not agree.  The Sixth Patriarch said "It is not the wind that moves, it is not the flag that moves, it is your mind that moves."
Mumon retorts "It is not the wind that moves, it is not the flag that moves, it is not the mind that moves."  For him.  For me, it is the mind that moves.  Perhaps this is the easier problem to solve.

------
* I had to take out most of the freakin commas from Hume, they were very distracting.

Wednesday, December 15, 2010

OpenRTB and Architectural Innovation

I sent out a good number of emails on Sunday asking for opinions on OpenRTB.  People were mainly dismissive, partly because of the secretive way in which it was concocted, partly because some think Google is better accommodated than challenged, but mainly because the current goals of the spec are slight.  After reading the spec, I was a bit underwhelmed myself.  But I've changed my mind.

In a post in April, I talked about architectural considerations in ecosystem design, taking the engineering concept of End-to-End as an analogy.   In that post I was pushing for architectural change because innovation is highly dependent on a layered and modular architecture*.  Architecture is important.

But architectural change is also interesting in determining winners and losers, and that was my deeper motivation.  From a (technically astute) businessperson's point of view, the article to read is Rebecca Henderson and Kim Clark's Architectural Innovation: the Reconfiguration of Existing Product Technologies and the Failure of Established Firms.  They point out that "architectural" change favors innovators over incumbents.

In laying out their thesis, the authors talk about the failures of incumbent firms in adapting to relatively minor market changes, despite their deep expertise in the core components of the new products being built**.  They also make the distinction between incremental change, radical change and architectural change to draw attention to the fact that seemingly minor changes in the architecture of a system are actually more likely to cause incumbent dislocation than radical changes in the underlying technology.  This is an extremely important point.

The essence of architectural innovation is the reconfiguration of an established system to link together existing components in a new way... Architectural innovation is often triggered by a change in a component... that creates new interactions and new linkages with other components in the established product...
Established firms often have a surprising degree of difficulty in adapting to architectural innovation. Incremental innovation tends to reinforce the competitive positions of established firms, since it builds on their core competencies... In contrast, radical innovation creates unmistakable challenges for established firms, since it destroys the usefulness of their existing capabilities...
Architectural innovation presents established firms with a more subtle challenge... established organizations require significant time (and resources) to identify a particular innovation as architectural, since architectural innovations can often initially be accommodated within old frameworks.  Radical innovation tends to be obviously radical--the need for new modes of learning and new skills becomes quickly apparent... the introduction of new linkages is much harder to spot.  Since the core concepts of the design remain untouched, the organization may mistakenly believe that it understands the new technology***.
Dismissing OpenRTB as nothing very interesting at all misses the point.  OpenRTB is not radical change; it barely qualifies as incremental change.  But it is architectural change.  It is the reconfiguration of existing linkages, or the beginning of it.

Architectural change is subtle.  It is ignorable, for the time being.  But it may--may--be enough to change the existing architecture, the architecture that is more and more contained within Google.  Those that realize this and adapt to it will prosper.  Those that dismiss it--either because it is too subtle or because they are a large incumbent and ignore those who profess to compete with them--might find themselves Xeroxed.

There's a subtle belief in our VC-backed community that technological innovation is the sine qua non, the ne plus ultra.  Build a better mousetrap and the world will beat a path to your door.  Unfortunately, we're wrong.  Build a better technology and the incumbent will copy your innovation: they will notice a better technology pretty quickly.  What we need to do is change the competitive dynamic by shaping the architecture of the ecosystem.  This is something we can do without the permission of the Incumbent, and something the Incumbent will have a hard time responding to.

OpenRTB is just a start.  But the more new linkages we can create in the ecosystem, the better chance we have to compete based on merit.

-----
* An overview of this argument and its consequences was brought to my attention by Brad Burnham somewhat after I wrote about it: Internet Architecture and Innovation.  I want to say that this book is really excellent, because it looks like it, but I haven't had the time to do anything but flip through it yet.
** I am not in love with their chosen examples.  In both cases the lower-cost/lower-functionality disruptive (as in Innovator's Dilemma disruptive) element is also present, so as natural experiments they leave something to be desired.  Christensen, in fact, draws heavily from Henderson's work in his book--IMHO, more heavily than he seems to admit.  His description of this paper, in Dilemma, mentions their thesis primarily as a study in organizational structure.  Christensen then talks about "value networks" instead of architecture--and Dilemma is an essential read--but I think Henderson and Clark's work is more directly to the point here.
*** Read the article, really.  The case studies alone are worth it (and might give some comfort to those of us who despair at the ever-incipient chaos in our little sub-industry by highlighting that this is not something we managed to invent ourselves but is, in fact, normal.)

Wednesday, December 8, 2010

Old Style vs. New Style VC

Wrote a comment over on David Lerner's blog on his "Are Super-Angels Extinct" post. He said

What I am saying is that some of these superangel funds may structurally resemble traditional VC funds, but they are something altogether different--and more akin to an angel group.
My comment was that structure matters.  If you had set up the Red Cross just like a bank, with the same incentives, it would have caused the Panic of 2007.

Maybe it's just two different world-views.  I've thought about this and written about it for a long time.  My take is that if you want to create a better venture capital system, you need to change the system, not just the people.  Calling yourself a super-angel and saying you are different does not change the investing world.  Changing the processes and structures of the investing world will change the investing world.  There are firms (and super-angels) doing this.

In the spirit of changing the game, some "old-style" and "new-style" comparisons:

Old-Style                                          New Style
Money managers                                     Company builders
Organized like a law firm                          Organized like a start-up
Generalists                                        'T-shaped'
Managing $1bn                                      Managing $25mm
Living on management fees                          Living on expected future carry
Scaling by hiring more partners                    Scaling by being more efficient
"Loose lips sink ships"                            "You should read my blog"
Going to NVCA meetings                             Going to the R Meetup
Joining Angel Investor groups                      Writing $25k checks
Starting an owned seed fund/incubator/hackathon    Supporting a grass roots effort
Sports jackets                                     Kicks
Networkers                                         Community builders
Lawyers                                            Series Seed
Participating preferred                            Straight preferred
51%                                                15%
LPs measure success by: IRR                        LPs measure success by: IRR

Saturday, December 4, 2010

Coinvestor Graph Code

I've had a bunch of people asking for the data behind the coinvestor network map and a few asking for the code.  It's actually pretty easy to generate the basic graph from the Crunchbase API.  The hard part was cleaning the Crunchbase data, augmenting it with some other data sources and then making sense of the resulting graph.

But even so, the skeleton code is below. It will get you started. Then take a look at the raw output of Crunchbase's API. Play a bit with NetworkX and note that you can tag any node or any edge with whatever data you want. Then go create your own! If you find anything interesting, let me know.


# Crunchbase to NetworkX network builder
#
# Builds a network from the Crunchbase database and outputs it in GraphML format.
#
# Required modules:
#    networkx (https://networkx.org/)
# (json and urllib.request are in the Python standard library)

import json
import urllib.request
from itertools import combinations

import networkx as nx

def getCBinfo(namespace, permalink):
    api_url = "http://api.crunchbase.com/v/1/%s/%s.js" % (namespace, permalink)
    with urllib.request.urlopen(api_url) as resp:
        return json.loads(resp.read())

def add_clique(G, investors):
    # Take a set of investors and add them to the graph, along with edges
    # between them all. Where an edge already exists, increment its weight.
    if len(investors) > 1:
        # add nodes
        for inv, typ in investors:
            G.add_node(inv, inv_type=typ)
        # add an edge between every pair of coinvestors
        for (inv_i, _), (inv_j, _) in combinations(investors, 2):
            if G.has_edge(inv_i, inv_j):
                G[inv_i][inv_j]['weight'] += 1
            else:
                G.add_edge(inv_i, inv_j, weight=1)
    return G


# Main.
# Get the list of companies Crunchbase has data on
with urllib.request.urlopen("http://api.crunchbase.com/v/1/companies.js") as resp:
    company_names = json.loads(resp.read())

# initialize Graph
G = nx.Graph()

# Iterate through companies, getting CB data on each
for company in company_names:
    try:
        co_info = getCBinfo('company', company['permalink'])
    except Exception:
        continue

    # For each company make a set of all investors
    investors = set()
    for iround in co_info.get('funding_rounds') or []:
        for investment in iround['investments']:
            for i_type in ['financial_org', 'person', 'company']:
                if investment.get(i_type):
                    investors.add((investment[i_type]['permalink'], i_type))

    # Add investors and edges between them to the graph
    G = add_clique(G, list(investors))

# Write the network to a GraphML file /projects/cb_graph.graphml
# NetworkX supports many other formats as well; check the docs.
nx.write_graphml(G, "/projects/cb_graph.graphml")
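Once you have the graph, slicing it is easy. Here's a minimal sketch of the kind of thing you can do: it builds a toy version of the coinvestor network (the investor names are made up for illustration, but the node and edge attributes match the script above) and ranks investors by weighted degree, i.e., total number of coinvestments.

```python
import networkx as nx

# Toy coinvestor network with the same attributes the script above produces.
# The investor names here are hypothetical examples, not real data.
G = nx.Graph()
G.add_node("example-fund", inv_type="financial_org")
G.add_node("example-angel", inv_type="person")
G.add_node("example-corp", inv_type="company")
G.add_edge("example-fund", "example-angel", weight=3)
G.add_edge("example-fund", "example-corp", weight=1)

# Weighted degree: sum of edge weights, i.e. total coinvestments per investor
ranked = sorted(G.degree(weight="weight"), key=lambda kv: kv[1], reverse=True)
for investor, total in ranked:
    print(investor, G.nodes[investor]["inv_type"], total)
```

On the real graph the same few lines will tell you who sits at the center of the coinvestment network.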

Thursday, December 2, 2010

Co-evolution and other housekeeping

A couple of weeks ago I wrote something for AdExchanger's 'Predictions for 2011' series. It needed to be brief, and I wanted to talk about what entrepreneurs will be beta testing two years from now.  I made a couple of observations that are pretty obviously true and then ventured this prediction:

Towards the end of [2011], the smartest entrepreneurs will start thinking about how to reinvent the core platforms to better support the needs of the best emerging applications. A dynamic similar to the software/hardware co-evolution of the '70s and '80s will begin, creating similar strategic opportunities.
I actually think this is a pretty tame thought, but once John published it people started asking me what I meant and how it's actionable.

***** 

I should talk about what I'm thinking generally, though.  I've been pretty quiet since the Summer and where I used to be tightly focused on the data-driven web-based display-ad exchange ecosystem, I'm now a bit broader*.

1. In web-based display I've continued my 2010 push on the publisher side of things.  In addition to my investment in MetaMarkets--who are doing some pretty astounding things--and advising PubGears, I've got another pub side investment that should be announced anon.  I'm still looking for others doing something unique here, but to some extent I feel like these three companies are building 80% of what's undone in getting publishers back their negotiating leverage in a data-driven world.

2. The only other pieces of the web-based ROI feedback loop that I articulated in February that remain unaddressed are the piece at the marketers and the piece at the individuals.  I am still actively looking for companies in the former.  I've met a couple of really exciting ones and am working with them to get to launch-readiness, but want to meet more; this is a big area.  I'm trying to figure out what, if anything, could work at the latter.  I would have invested in Hunch, had I the opportunity back then, but I think there are other ways to address the consumer side of the people-product matching problem.

3.  Social may be one.  The social loop will share superficial characteristics with the display loop, but it's really completely different.  A softer, more subjective approach is needed. IMHO, the area with the most near-term leverage will be tools that help communicators understand the impact of how they are communicating and then help them make better decisions.  I've made an investment here that will probably also be announced anon.  I'm treading carefully in other areas of social because I've seen a lot of ideas imported from the display ad world rise and plateau: social is different and harder to scale.

4. Mobile.  In addition to my long-ago investment in Pinch Media (now Flurry) I've made an investment in a geo-data startup.  This one has not announced publicly, but it's a big idea.  Other than that, again I'm treading carefully.  Steve Jobs' reaction to Flurry earlier this year is an example of why: no matter where you invest in mobile, you're at the mercy of some pretty ruthless gatekeepers.  That said, it's exciting to think about how much value startups could add through the mobile platform.

5. Data is a big problem.  And, to be clear, it's been a big problem for a long time so it has some big and expensive solutions sold by big and slow corporations.  But the companies that are bringing data-munging solutions into the reach of smaller companies operating without teams of specialists are pretty interesting.  I've been trying to get a feel for the process, on a small scale, as my last couple of posts show.  But I've got a lot to learn before I even begin to feel competent.  (If you have something to teach me, I'll buy lunch.)  I want to find companies that can democratize data science.

6. Respect for the individual.  I think about this a lot.  I don't talk about it a lot.  It's a difficult area, and using a rational thought process on something that's pre-rational in our makeup is tough.  I haven't been able to organize my thoughts, so I have no idea how to support the value-creation process here.  But it's always there in the back of my mind.  I am open to suggestions.

So that's what I've been doing and thinking.  Points 2 through 5 will probably continue to be my focus through most of 2011.

*****

In terms of the AdExchanger bit, here's what I was getting at.

At IBM in 1988, in meetings to discuss moving an instruction's execution from the microcode engine to the hard-coded execution engine, I realized that our design team was at the tail-end of a long chain of product-market fit interactions. Because of the way end-users were using applications, applications had changed which parts of the operating system were critical paths, and the operating system designers had come back to us hardware folk asking that certain instructions be optimized for performance.

This was part of a long-standing and ongoing co-evolution between the hardware and the software.  Innovations on the hardware side changed what software was viable.  Innovations on the software side changed what was needed of the hardware.  Read Melinda Varian's intensely interesting VM and the VM Community, where she talks about the development of IBM's time sharing operating system, and think about it as a co-evolutionary process.  IBM in 1964 opened an office in Cambridge as a liaison to the MIT software engineers developing the first time-sharing operating systems.  Because of its proximity to the users, this office was instrumental in pushing IBM to change its mindset from building machines optimized for batch processing to building machines optimized for time-sharing.

During the next few years, both sides--the operating system writers and the hardware designers--pushed the other side to optimize around what they thought was needed or what they could deliver.  The software engineers pushed for address relocation capability and other features needed for time-sharing. The hardware engineers spec-ed a new processor to meet those requirements.  The hardware engineers floated the idea of virtual machines.  The software engineers built an operating system to take advantage of them**. 

Calling this co-evolutionary may be oversimplifying.  In fact, each layer of the stack co-evolved with the adjacent layers.  Sometimes the tension between different layers evolving differently led to entirely new organisms forking off, like Multics (which then evolved into Unix and its descendants.)  This process can happen in any multi-layer ecosystem that has different actors in different layers.

The data-driven display ecosystem is like that.  The tensions between co-evolving layers are evident (AppNexus/Google, anyone?)  And forks are emerging.  I expect that within three years there will be a major fork away from the owned exchanges into a crossing platform that can better support the demands of the adjacent layers: the data exchanges, analytics, the DSPs and the SSPs.  Right now none of these are especially happy with what the exchanges are offering.  Not to say that they're unhappy, but they each have a laundry list of things they would improve or change.  The best of them are creating ways to avoid the exchanges, but only because there is no good alternative.  A platform would still be most efficient.

The exchanges, on the other hand, seem to be pretty content with the way things are.  Or, at least, they feel that they should be controlling the pace and direction of the evolution.  This opens the way for entrepreneurs to build something disruptive.  If you're an entrepreneur with the chops to build something here, you should start thinking about it soon.  2012 will be too late.

-----
* No Thanksgiving jokes, please.
** One great anecdote talks about how the early CP/CMS OS could only keep one virtual machine in memory at a time.  So when a user logged on, the OS would reserve space on the paging drum for the copy of their virtual machine while somebody else's was running.  When the paging drum was full, space would be reserved on disk.  Since the paging drum was so much faster than disk, people started showing up to work earlier and earlier so they could get a slot on the drum.  Finally the OS designers made the page slot allocation mechanisms dynamic.  They just weren't morning people.